Thunderbolt saves me money

Thunderbolt is a technology that is misunderstood by almost everyone. Even tech commentators regularly say things like, "Why would I pay more for Thunderbolt?" or "USB is cheaper, so it will eventually kill off Thunderbolt." I understand why this is the prevailing wisdom. Thunderbolt is expensive, and its predecessor FireWire (or IEEE 1394, if you had a PC) was discontinued a few years back because USB became fast enough to do everything FireWire could do. Like I said, almost no one understands the tech involved. Thunderbolt is not just a faster port. It has capabilities that USB will never have.

The tech

USB was originally developed to replace the various legacy computer ports (serial, parallel, PS/2, etc.) with one standard port. This enabled computer manufacturers to remove extraneous components and simplify their motherboard designs. For example, it allowed the motherboard of the original iMac to be simpler and smaller than those of many laptops of the day.

iMac ports - image attribution: Fletcher at English Wikipedia


While it was originally somewhat slow (12 Mbps) back in the '90s, it's now pushing 20 Gbps in the latest spec. Also, the new USB Type-C connector has some fantastic properties: it's reversible, can send up to 100W of power in either direction, and is small enough to be used on even the thinnest mobile devices. Fundamentally, though, it's still just a peripheral port replacement. Thunderbolt, on the other hand, is designed to replace expansion buses like PCI Express.

Internal PCI-Express & PCI slots - image attribution: English Wikipedia


Thunderbolt is a channel that allows the PCI Express bus to be extended outside the motherboard. It encapsulates PCI Express and DisplayPort signals into a format that can be transmitted over a relatively inexpensive cable. However, the two features that really make Thunderbolt a true expansion bus are DMA and daisy-chaining. DMA, or direct memory access, is the ability to store data in or retrieve data from the computer's memory without relying on the processor to be the intermediary. This allows any device in the Thunderbolt chain to act as a co-processor, essentially as if it were installed directly on the host computer's motherboard.

Thunderbolt 2 functional diagram - image attribution: Shigeru23 at English Wikipedia


Daisy-chaining means that a device with two Thunderbolt ports can act as both an endpoint and a repeater for other devices plugged into its second port. You can have up to six devices in a chain (not counting a display at the end), and every device in the chain has full-bandwidth access to the host computer. If your computer has multiple Thunderbolt ports (like the new Mac mini shown below), you can host a ton of devices that all have the same abilities as an internal expansion card.
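If you're curious what your own chain looks like, macOS will show it to you: the built-in system_profiler tool has an SPThunderboltDataType report that lists every device hanging off each Thunderbolt port. Here's a minimal Swift sketch that simply shells out to that tool and prints the report; the tool and report name are standard macOS, while the wrapper itself is just illustrative.

```swift
import Foundation

// Minimal sketch: ask macOS for its Thunderbolt report and print it.
// Every device in the daisy chain (docks, PCIe chassis, drives) appears
// under the port it is ultimately connected to.
let profiler = Process()
profiler.executableURL = URL(fileURLWithPath: "/usr/sbin/system_profiler")
profiler.arguments = ["SPThunderboltDataType"]

let output = Pipe()
profiler.standardOutput = output

do {
    try profiler.run()
    profiler.waitUntilExit()
    let data = output.fileHandleForReading.readDataToEndOfFile()
    if let report = String(data: data, encoding: .utf8) {
        print(report)
    }
} catch {
    print("Could not launch system_profiler: \(error)")
}
```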

2018 Mac mini, (note the four Thunderbolt 3 ports) - image attribution Apple press archive


The only downside of Thunderbolt is its cost, which is admittedly high. Unlike USB, Intel charges a license fee for every Thunderbolt device. USB was originally designed by a consortium of companies led by Intel, and today almost every tech company in the world is a member of the group. USB ports can be added to any product without licensing fees, which means every consumer electronics device can include a USB port, and as long as it conforms to the USB specification, it should work just fine. Thunderbolt, however, requires some very specific hardware to function. Every device has to have an Intel-made controller chip that allows communication on the Thunderbolt bus. Each device needs the same class of controller as the host computer because each device in the chain is essentially a peer: it has to forward data as well as receive it. This makes the tech somewhat pricey to include in each device, the exception being the endpoint of the chain, which can use a cheaper single-port version of the controller. The license fee is unlikely to go away entirely anytime soon, as Intel needs to test and qualify each Thunderbolt device before it can be sold. Thunderbolt cables also cost more than USB cables because they have to be special "active" cables with transceivers built into each end.

Thunderbolt 3 “active” cable - image attribution: Amin at English Wikipedia


Despite the high price, Thunderbolt isn't going away anytime soon because the increased functionality is worth the extra cost. Also, Intel built the latest revision, Thunderbolt 3, into the USB-C connector. This means that while not every USB-C port has Thunderbolt 3, every Thunderbolt 3 port is also a USB-C port. This allows computer manufacturers to add Thunderbolt without the confusion of extra ports on your computer. As a bonus, each Thunderbolt 3-enabled USB-C port is its own dedicated bus, unlike standard USB, which shares its bandwidth among all of the ports.

USB “Type-C” plug used by Thunderbolt 3 - image attribution: Hibiskus100 at English Wikipedia


My setup

Thunderbolt has enabled me to switch from having a large desktop computer and a laptop to just using a laptop for everything. Thunderbolt extends the capabilities of my laptop with equipment that traditionally could only be installed inside a powerful desktop computer. Here is some of the Thunderbolt equipment I use:

  • OWC Thunderbolt 2 Dock

  • OWC Mercury Helios 3 PCI-express expansion chassis with ATTO H680 SAS expansion card attached to:

    • 8TB G-technology G-SPEED ESpro

    • HP LTO6 tape drive

  • 6TB OWC Mercury Elite Pro Dual RAID

  • OWC Mercury Helios 2 PCI-express expansion chassis with AJA Kona LHi expansion card

  • Blackmagic Design Ultrastudio mini recorder

  • 3x LaCie rugged Thunderbolt/USB 3.0 drives

I have the two PCI-express cards, six hard drives and the LTO data tape drive plugged into my laptop when it’s at my desk. When I take my laptop on the road, I use the Blackmagic recorder and the LaCie rugged drives. That way I have high-speed storage and a video interface for my desk and for use in the field. Even with the older version of Thunderbolt in my laptop, I’ve never run out of interface speed.

More importantly, because I am able to get by with just one computer in my life, I'm actually saving more money than I'm spending on "pricier" Thunderbolt devices.

Trailer for AVA: a new dramatic series coming soon

I'm so excited to finally be able to show off the trailer for AVA, a dramatic series I've been editing for the past few months. Created by actress/writer/director Catherane Skillen, the series follows Ava, a woman who has it all… until she loses everything. The series is coming soon to YouTube! Make sure you subscribe and check out the official Facebook page for more info.

ARM Processors for Macintosh

The rumor just broke this week that Apple will be replacing the Intel processors in its Macintosh computers with ARM-based processors of its own design by 2020. Most consumers don't really know what that means, so inevitably there will be a ton of scary-sounding articles written to prey on that ignorance. To counter this, let's examine what the actual consequences of such a change would be.

 

History of change

Motorola 68000 processor


The Macintosh has gone through two previous processor architecture transitions in its 30+ year history. Originally built to run on Motorola's 68000 series of processors, the Mac first transitioned to the PowerPC chips built by IBM (and Motorola) in the early '90s. The second transition was, of course, to the Intel x86 architecture in the mid-2000s. I was using Macintosh computers throughout this time, and both of these transitions were accomplished with very few issues. The reason is that Apple has put themselves in a very good position to make these kinds of sweeping changes.

IBM PowerPC 601 Processor


In my last article, I explained how in the '80s and '90s software had to be written specifically for the processor it was going to run on to get good performance. This is no longer strictly true. In fact, Apple's current software platforms are all based on Steve Jobs' NeXTstep OS (later OpenStep), which was created to run on multiple different architectures. It's rumored that Apple bought NeXT instead of its competitors because of a demo of the cross-platform capabilities in its development tools. Provided that a developer used only NeXT's frameworks to build their applications, only a re-compile was necessary to target a different architecture. After purchasing NeXT, Apple set about the difficult task of transitioning their OS 9 developers over to the new environment. The problem was that while the new development environment was much easier and more modern, porting applications with old code bases would take a lot of re-writing. Developers of large professional application packages (Adobe, Microsoft, etc.) didn't want to undertake that task because their code bases were very large and very old. Apple gave in and wrote a compatibility environment for them (called Carbon) that only required them to re-write a portion of the old OS 9 code (estimates were around 10%) to gain access to the new operating system.

Intel Core i5-2500 Processor


In the mid-2000s, when the PowerPC architecture was running out of steam, Apple decided that the Macintosh would be better off on the Intel platform. The new Intel Core architecture (built from the Pentium III line, not the Pentium 4) was performing very well for the wattage it was drawing. In contrast, the PowerPC was struggling to keep up in the performance-per-watt race. The G5 tower systems of the era required liquid cooling to even get close to Intel performance. But the bigger problem was that there were no mobile PowerPC G5 chips at all; the chip was simply too power-hungry. With laptop and mobile systems selling far more than desktops, Apple needed to make the switch to compete. Once again Apple created a translator that would run old code on the new processors, but this time there was a catch: the translation layer would not be extended to 64-bit code. This allowed old applications to continue to run on new Intel systems, but if developers wanted to take advantage of the full speed of 64-bit computing, they would have to bite the bullet and rewrite their programs. The transition away from the Carbon API started in 2012 with OS X 10.8 and is finally being completed now in 2018 with macOS 10.13 High Sierra, which is the final version of the Macintosh operating system to support legacy OS 9 code (EDIT: this was delayed, and macOS 10.14 Mojave is the last Mac OS to support legacy code). Apple is finally drawing a line in the sand by requiring developers to move their code from the old Carbon API to the Cocoa API that came from NeXTstep. This gives Apple the ability to change the underlying architecture of the Macintosh far more easily than ever before.

Apple A4 Processor from iPad 1


In 2007, Apple released the iPhone, and one of the most interesting details about the device was the announcement that it would run a version of the desktop Mac OS that originally came from NeXTstep. This was a first in mobile computing and showed not only the confidence Apple had in its code but also its ability to port to other platforms. The iPhone, iPad, and Apple Watch have all used a version of the Mac OS X operating environment and developer tools built for the ARM processor architecture. Originally Apple sourced those processors from other manufacturers, but they have since built a world-class processor team over 1,000 engineers strong. The current A-series processors are the fastest and most full-featured ARM chips in the world; they are very close to the ultimate performance of Intel's mobile chips while soundly beating them in performance per watt. With Intel's current difficulties in shrinking their fabrication process and the recent security flaws in their designs, you can't blame Apple for thinking this is the right time to make a change.

 

ARM Macintoshes

Apple A11 processor in iPhone X


So we have seen that Apple has the ability to make this transition to ARM processors, but what will the result be for consumers? Let's look at the pros and cons of switching to an ARM architecture:

 

PROS:

 

• More secure

Because Apple is designing the entire platform from the ground up, they can build security into the chip like they have on the iPhone. This would allow for systems that are far harder to hack than the ones we have today.

• Tighter integration with Apple software

Apple already writes the drivers for the hardware that goes into their Macintosh systems. However, building the entire machine from the ground up will allow them to create specific integrations that enable their systems to outperform competing solutions. A good example is the iPad Pro, which can play three simultaneous 4K video streams in iMovie where an Intel Core M would struggle to play one. Or it could allow tight integration with user-facing features like the Apple Pencil on the iPad Pro. The possibilities are endless.

• More performance/watt

Because they were designed for mobile devices from the beginning, ARM chips are the kings of low-power, high-performance computing. Imagine if your MacBook could go 24 hours on a single charge, or your iMac could have 16 cores and no fan. ARM is really good at getting the most performance out of the least power.

• Less costly

This is a big one, because Apple is really starting to have trouble justifying the price of their Mac systems. At first glance, generic PCs seem so much cheaper, and a big part of that is the cost of the Intel-branded processor. Top-end Intel chips can account for as much as $1,000-$2,000 of the purchase price. Apple could reduce that significantly by making the chips themselves.

• Specific application performance

Apple has built special processing hardware into their iOS chips that does some things extremely quickly; image processing is a good example. The camera on the latest iPhone does magic, and it does it without seeming to break a sweat. The kind of image processing that would take hours or days on a desktop happens almost instantly. Apple could bring that kind of specific hardware to the desktop, and applications written to take full advantage of it would see a huge boost.

 

CONS:

 

• Some applications will need optimizing.

Even though Apple will likely create a translation layer so that Intel code can run, developers will still need to update their programs to see full performance. This will be especially apparent with professional video and image-processing applications: they have the heaviest workloads, and their code is the oldest. Without analyzing the actual code, it's not really possible to know how much work would have to be done. If developers are using only Apple's APIs, then it's as simple as a re-compile, but if they have built their own low-level frameworks, it could be a big undertaking (a small sketch after this list shows the idea).

• Loss of native Windows support

This one might not be too big of a deal as more business applications move online, but it is definitely nice to know that you can boot straight into Windows if you need to. Windows emulators will still exist, but native support is always a plus.

• Potentially less top-end performance

We have to remember that this is a transition. Apple makes multiple lines of Macintosh computers for a wide variety of users, and it would probably take several years to transition all of those lines to ARM. They will likely start with the mobile systems. The current MacBook has an Intel Core M processor that is already bested by the A9X in Apple's iPads. Building an 18-core server chip like the Intel processor in the iMac Pro takes a lot more work. Professional-grade processors aren't just fast; they also enable professional workflows through massive I/O (input/output) performance. Today's Intel chips have up to 40 PCIe lanes, each of which can move roughly 1,000 MB/s of data to and from peripherals. Apple's current ARM chips have only just added USB 3 support, so they are definitely behind in this area. You can expect Intel chips to remain in the professional-grade machines for years, and maybe indefinitely. Apple has maintained multiple platforms for years in the past and may choose to do so again.
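To make the first of these cons concrete, here's a tiny, hypothetical Swift sketch of the recompile-versus-rewrite divide. Code written purely against Apple's frameworks is architecture-neutral and simply recompiles; anything hand-tuned for a specific processor needs an explicit port, and that's where the real transition work lives. The function below isn't from any real app, it just shows the shape of the problem.

```swift
// Hypothetical sketch: Swift's build-time architecture checks let one codebase
// carry separate hand-tuned paths. Framework-only code needs none of this and
// just recompiles for the new processor.
func describeBuildTarget() -> String {
    #if arch(arm64)
    return "Built for ARM64 - an optimized path here might lean on NEON or Apple's accelerators."
    #elseif arch(x86_64)
    return "Built for x86_64 - an optimized path here might lean on SSE/AVX."
    #else
    return "Unknown architecture - fall back to plain, portable code."
    #endif
}

print(describeBuildTarget())
```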

 

Final thoughts

Apple MacBook (2017)


One of the reasons tech pundits are skeptical about ARM chips in desktop computers is that Microsoft tried to port Windows to ARM a few years back and failed. They were trying to get ahead of the iPad by releasing a competing ARM tablet, but their rushed implementation of Windows for ARM didn't run any legacy (Win32) applications. Windows was never designed to be a cross-architecture operating system the way Mac OS was. Apple will succeed in making this transition, and they will do it in their own time and on their own terms.

Microsoft Surface RT


As for consumers, the majority of them don't know what processor is in their system. All they care about is that it runs the applications they want to use and doesn't slow down or crash. The reality is that the software you run is much more important than the hardware it runs on. The real question is whether professional users will be able to continue using Macs for the heavy workflows that require more advanced hardware. The answer is yes. Apple isn't "abandoning" professional users, even though every few months somebody starts that whisper campaign all over again. Apple just created the iMac Pro, the world's fastest all-in-one desktop, and this year they are creating a brand-new Mac Pro. These actions show that they are committed to their professional customers. This news also means that they are not discontinuing the Macintosh. I've heard a lot of chatter from tech pundits worried that Apple will drop the Macintosh in favor of iOS. Instead, they have just committed to bringing it into a new decade with brand-new hardware designs. That should be encouraging to Apple's most loyal customers.

Finally, it's important to remember that you don't have to upgrade your computer every couple of years. The hardware Apple makes works flawlessly for years, and in some cases decades. There are people still using and upgrading their 10-year-old Mac Pro towers because Apple's current systems don't meet all of their needs. My main system is still a top-end MacBook Pro from 2013. I've stuck with it because it's stable and runs all of the software I need to use. Apple systems bought today will still be going strong in 2028 and beyond, and professional users will continue to have a home on the Mac platform, no matter what processor it uses.

Software is the bottleneck

 

In my last article, I made the case that Apple's supposed problem with professional users has nothing to do with the kind of hardware they are making and everything to do with price. The reason Apple owned the creative professional market ten years ago was that the total cost of ownership (hardware and software) of their systems was significantly lower than that of PCs. Now that cost advantage has evaporated, and many creatives are looking at switching to one of the many high-powered Windows systems being advertised for content creation. However, amid all of the synthetic benchmarks and chest-thumping, no one is asking the most important question: does buying faster hardware actually speed up creative workflows?

Throughout the short history of computing, nerds have lusted after bigger and faster hardware. It's an easy obsession to understand. As physical beings, we are drawn to physical objects - things that we can see and touch. Hardware manufacturers have played on this basic psychology by designing computers as beautiful objects, and Apple has mastered this. We could easily fill a glossy, expensive coffee table book with pictures of their hardware (oh look, they already have).

This is the part where technophiles inevitably say something along the lines of, "That's why Apple makes so much money… marketing, form over function, blah blah blah." This is a common refrain, but it's dead wrong. Yes, Apple makes beautiful designs, but that just gets you in the door. It's software that actually sells systems, because that's what we interact with. Apple understands that consumers don't consciously place any value on software - some don't even understand that there is a difference between software and hardware - and yet if Apple sold the same hardware but shipped Windows (or Android) as the software interface, they wouldn't be in business.

The right software actually sells hardware all on its own. The tech press refers to this as a "killer app" - as in, "the killer app for the PC is Microsoft Office." It's an annoying phrase, but the sentiment is right. Great software enables us to accomplish more, be more creative, and communicate faster. At the same time, bad or out-of-date software causes more problems than it fixes. This is the biggest challenge creative professionals in every field are facing, as the software they use is buckling under increasingly complex workloads. My contention is that software, not hardware, is the biggest bottleneck in content creation today. There is a solution - but before I get to that, here is a little history.

 

Math is hard

 

Early version of Adobe Premiere running on a Macintosh Quadra


 

Not so long ago, computers were extremely slow. You may have missed the dark ages of computing, so I'll try to put things in perspective. The U.S. government had a law forbidding the export of supercomputers to various hostile nations. About twenty years ago, a supercomputer was defined as any system with at least 1 teraflop of computational power (one trillion floating-point operations per second). We now have pocket-sized devices with processing speed measured in multiple teraflops. My first computer, on the other hand - an Apple Performa 550 - didn't even have a floating-point execution unit! It could process floating-point operations only by emulating them with the integer unit, which was extremely slow. Doing complex tasks like image manipulation wasn't easy, and doing it in real time (i.e., while playing video) was impossible without many thousands of dollars in specialty hardware. This was the time when the kind of hardware you ran really did matter, because basic computer systems were unequal to the task.

Computer graphics applications process video in up to a 32-bit floating-point colorspace, meaning they use 32 bits to describe each color channel of each pixel. Video compression usually uses 8 to 12 bits per channel, so even a 16-bit colorspace should be enough to process video in. Having an extra 16 bits of overhead means there is more than enough precision to transform the colors accurately, without the rounding errors that can cause visual artifacts. However, even with today's very fast CPUs, there are still far too many pixels in 4K (or 8K!) video to process using the CPU's floating-point unit alone. Special image-processing hardware had to be devised to ensure that real-time image processing and effects don't slow the computer to a crawl. Here is a list of these special processing units, presented from slowest to fastest:

  1. Integer Emulation (software)
  2. Dedicated FPU (hardware)
  3. Specialized execution units (hardware vector)
  4. GPU Compute (massively parallel hardware vector)
  5. FPGA (field-programmable gate array)
  6. ASIC (application-specific integrated circuit)

A full explanation of these specialty execution units is beyond the scope of this article, but we need some background to understand the problem. Everything in the list above past number 2 has to be specifically supported in software. Today that is accomplished through high-level APIs that abstract the hardware details from the application layer, making it trivial to support new hardware. Twenty years ago the hardware was so slow that the software had to eke out every drop of speed. You couldn't use APIs (even if any had been developed at that early stage), so developers had to support specific dedicated processing hardware in their applications to get workable speeds out of their code. This meant that every new add-on card or updated CPU required the application to be re-written to support its hardware.
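Apple's Accelerate framework is a real-world example of that kind of high-level API: you hand it buffers of floats and it quietly routes the math to whatever vector hardware the machine has (NEON on ARM, SSE/AVX on Intel). The snippet below is only a sketch - a simple gain adjustment on float pixel values - but it shows the general pattern modern image-processing code follows instead of hand-writing per-CPU assembly.

```swift
import Accelerate

// Sketch: scale a buffer of float pixel values with vDSP, Accelerate's vector math API.
// The call dispatches to the best SIMD hardware available without the app knowing which.
// Working in Float also avoids the rounding errors that accumulate in 8-bit integer math.
let pixels: [Float] = [16, 64, 128, 235]   // illustrative 8-bit video code values, promoted to Float
var gain: Float = 1.1                       // a simple exposure adjustment
var result = [Float](repeating: 0, count: pixels.count)

// vDSP_vsmul: multiply every element of a vector by a scalar.
vDSP_vsmul(pixels, 1, &gain, &result, 1, vDSP_Length(pixels.count))

print(result)   // the scaled pixel values
```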

To make matters worse, high-level languages weren't generally used because the code they produced ran slower than code from low-level languages. For code that needed to be extra responsive, even C was too slow, so programmers turned to assembly languages specific to the processor the code was intended to run on. This produced the fastest code but made the programs extremely difficult to port to other platforms. Steve Jobs' NeXT Inc. was trying to solve exactly this problem in the '90s with their NeXTstep/OpenStep operating system and the Objective-C programming language. The goal was to abstract the code enough that the only thing required to run an application on different hardware was a re-compile. Sun's Java programming language took the idea even further: by compiling at runtime, the same code could run on multiple types of hardware with no extra work from the developer. The problem is that the more you abstract the code, the less efficient it becomes (this is a generalization, but it mostly holds true) and the more processing power is lost to the abstraction layer. In the end, any code that had to be fast or real-time was written at the lowest level possible and re-written when ported between different hardware systems.

Re-writing basic functions for every new piece of hardware is the perfect recipe for a completely unmanageable code base. There are more opportunities for bugs to crop up, and it takes up valuable resources that could be spent creating new features. With every new advance in computing power, the code has to be revised. In the long run this becomes untenable and leads to a bloated, complex codebase that is next to impossible to bring up to date. The other way to make old applications compatible with new hardware is to use the extra speed of the new hardware to run a translation or compatibility layer. This solves the problem of having to rewrite core code for every new platform, but the drawback is that it leads to even less new code development, because all the coding effort goes into the compatibility layer instead of the application itself. Imagine the difficulty of having a conversation, through an interpreter, with someone who only speaks a foreign language: you can make yourself understood, but it takes a lot longer. In older software there can be multiple translation layers on top of the normal driver-OS-API layers present in every modern system. When you see a major difference in processing time between different software packages, either a lot more work is being done or the code is massively less efficient.

Ultimately, the proper solution to aging code is to throw it out and start over from scratch. Building on a modern, high-level language allows the new code to be much simpler, more efficient, and much easier to port to new hardware. However, in the case of large professional applications, this requires a very large investment of time and money, and the functionality usually takes a long time to reach parity with the old versions as well. This is what Apple did with Final Cut Pro X starting in 2009, and it has taken them several years and many versions to get the program back in fighting shape. In contrast, Adobe Premiere, Photoshop, After Effects, etc. all have legacy code bases that are holding them back (sometimes severely), and they are trying to modernize a piece at a time. This strategy will keep the programs operational, but could lead to leaner, more focused competitors pulling the rug out from under them.

 

Real-world tests

 

We've made the case that old code is holding back many professional applications, but how much difference does it actually make to a given workflow? Here is an example comparing Final Cut Pro X, Premiere Pro, and the new version of DaVinci Resolve on both an iMac and a MacBook Pro:

Ignoring the stabilization result in Final Cut (which is very much the curve-breaker at 20x faster than Premiere or Resolve), you can see that Premiere is 2-10x slower across the board on the same hardware. The heavier the workload, the slower Premiere performs. Premiere is not taking advantage of the specialty hardware made for processing images (items 3-6 in the list above) and is trying to process everything using the most generic CPU and GPU functions. In contrast, Final Cut and Resolve take advantage of those special execution units to make sure they are processing as fast as possible. Here's another example that really highlights what's going on:

 

In this video, a challenge was laid down to try to edit 4K video on a laptop. The YouTube channel "Linus Tech Tips" decided to see if a top-end PC ultrabook (a portable laptop, not a bulky desktop-replacement machine) could edit 4K video in Premiere. The answer they came to was yes, but only if the footage was first transcoded to an easier-to-edit codec like CineForm - a process that required a beefy desktop to get done in a reasonable amount of time. Jonathan from "TLD Today" took up the same challenge but used a 12" MacBook with a 1.1 GHz Core M processor (a tablet-sized laptop with no fan and no discrete graphics card) that is multiple times slower than the ultrabook used for the Premiere test. With Final Cut, not only could he edit 4K straight out of the camera, but the render finished in less than half the time and the video was almost twice as long (that's around 4-5x faster, for everybody keeping track). He then issued a counter-challenge, saying he could shoot and edit an entire video on the 12" MacBook in Final Cut before Linus could do the same thing using his 36-core server in Premiere. There were no takers.

This poor showing by Premiere reinforces the argument that it is not taking advantage of all of the specialty processing hardware it could. Depending on your workflow, render times may or may not be a large part of the day's work, but timeline responsiveness is definitely important. Especially on a lower-end system like a laptop, Final Cut and the new version of Resolve will be easily usable where Premiere probably won't. You can definitely build a workstation powerful enough to edit any footage in Premiere, but you shouldn't need to spend thousands of dollars on a 16-core monster machine to get simple work done. The point of all of this is to show that today's hardware is plenty fast for the work we are doing on it; for the most part, professional software hasn't kept up with the speed of the hardware. And to drive the point home, let's take a look at editing 4K on the iPad Pro.

You read that correctly: you can't edit 4K on an ultrabook in Premiere, but you can with an iPad. Of course there are some caveats - the video has to be in H.264 format to play in real time on the iPad (the iPad's chip has a dedicated ASIC for H.264 encoding and decoding) - but the point still stands. With the right software (software that can take advantage of today's efficient hardware features), creative professionals can keep up with today's workloads without buying expensive, bulky, power-hungry systems.
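As a small illustration of what "taking advantage of hardware features" looks like in practice, Apple's VideoToolbox framework lets an app ask whether a dedicated H.264 decoder is present before deciding how to handle playback. This is only a sketch of the decode side of that check, not a claim about how any particular editor is implemented.

```swift
import CoreMedia
import VideoToolbox

// Sketch: ask VideoToolbox whether this machine has a hardware H.264 decoder.
// Software that checks for (and uses) these dedicated blocks can play 4K smoothly
// on low-power hardware; software that ignores them grinds it out on the CPU instead.
if VTIsHardwareDecodeSupported(kCMVideoCodecType_H264) {
    print("Hardware H.264 decode available - real-time playback on battery is realistic.")
} else {
    print("No hardware H.264 decoder - playback will lean on the CPU.")
}
```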

In my next article, I will detail different workflows and show how to minimize performance bottlenecks.

-Mario Colangelo

Blacksmithing in Folsom

Folsom Historical Society:

Down in the old town of Folsom, California, there is a great little museum with an amazing-looking old blacksmith shop. Inside, they invite people from all walks of life to try their hand at shaping metal the old-fashioned way. I was able to partner with them to promote the facility. Enjoy!

In Iceland: Winter is Coming

My wife and I took a little trip to Iceland to see the country and shoot some footage. We drove the whole island, and it's a very wild and beautiful place. The glaciers were my favorite part, and we even got the opportunity to take a tour inside one. I will be posting some video later, but for now here are a few photos we snapped along the way.

Iceland

Octopus by Joy

I love working on a project that is really creative but also has very interesting content. Earlier this year I helped Lumixar with a Kickstarter & Indiegogo video for a new company called Joy. They have a unique product they are bringing to market called the Octopus: an icon-based smartwatch that helps children and special-needs individuals learn to stay on a schedule. The campaign was a big success and raised almost $2,000,000 to get the product off the ground! Enjoy the video!