

Blackmagic Design has released DaVinci Resolve 14 with a slew of new features, notably the addition of Fairlight audio. At the 2017 NAB press conference, CEO Grant Petty explained that “The problem we are trying to solve is audio for the film and television industry.” The company expects that incorporating Fairlight technology into the color grading and video editing system will transform audio-for-video post-production workflows.

New features include up to a 10x performance improvement, a complete new audio post-production suite with Fairlight audio built into DaVinci Resolve, and multi-user collaboration tools that let several users edit, color and mix audio from multiple systems, all in the same project at the same time.

What this means is that DaVinci Resolve 14 is like three high-end applications in one. Customers get professional editing, color correction and the new Fairlight audio tools. All it takes is a single click to switch between the editing, color and audio screens. Then the new multi-user collaboration tools let everyone work on the same project at the same time, so customers no longer have to import, export, translate or conform projects.

DaVinci Resolve 14 promises to change post-production from a linear to a parallel workflow, so everyone can work at the same time, giving editors, colorists and audio engineers more time to be creative.

Under the hood

Blackmagic developers have redesigned the Resolve processing engine to be up to 10x faster than previous versions. In addition to extensive CPU and GPU optimizations, customers also get better threading and GPU pipelining, lower latency, much faster UI refresh rates, support for Apple Metal and much more. This makes DaVinci Resolve 14 faster and more responsive than ever, so customers get fluid performance and more precise editing, even on long timelines with thousands of clips. Scrubbing and playback are instantaneous, and there is powerful new acceleration for processor-intensive formats like H.264, making it possible to edit 4K images on a laptop.


Blackmagic announced the acquisition of Fairlight at IBC 2016. In six months the developers have integrated Fairlight's audio tools into the DaVinci Resolve video application.

Customers get a complete set of professional audio tools for recording, editing and sweetening, professional bussing, mixing and routing, and multi-format mastering to 3D audio formats such as 5.1, 7.1, Dolby and even 22.2. The super-low-latency audio engine is designed to work with 192kHz 96-bit audio and delivers up to 1,000 tracks with real-time EQ, dynamics processing and plug-ins on every track when used with the Fairlight Audio Accelerator (Petty stated it delivers around 60 channels on a regular computer). Plus, the new Fairlight audio can record up to 96 channels while simultaneously playing back up to 150 audio channels, mixing it all in real time. There simply is no other software available with this level of dedicated audio power.

The new Fairlight audio in DaVinci Resolve 14 features a full multitrack timeline for subframe editing of audio, down to the sample level. The mixer is designed to let customers create sophisticated sequences and has several main, sub and aux buses for mastering and delivering to multiple formats at the same time. Every channel on the mixer features real-time 6-band parametric EQ, along with expander/gate, compressor and limiter dynamics. Clip time warping lets customers stretch or compress audio without shifting pitch. In addition, every single parameter can be automated, even VST plug-ins, using a variety of automation modes.

DaVinci Resolve 14 is available now in beta as a free download. The final version sees a price drop from $995 to $299. Couple that with the recently released Micro and Mini grading panels, and Blackmagic is offering a powerful post package at a very reasonable price.

Brainstorm’s InfinitySet 3 technology seamlessly combines 3D virtual graphics with real on-screen talent to take you where no news or sports set has gone before.

At the 2017 NAB Show we’re going to see Brainstorm demonstrating its latest InfinitySet 3 virtual set system, with advanced graphics for real-time augmented reality presentations thanks to Brainstorm’s TrackFree technology and its TeleTransporter feature.

As Ricardo Montesa, CEO and founder of Brainstorm Multimedia, told The Broadcast Bridge in an exclusive interview, InfinitySet 3 not only seamlessly integrates their Aston graphics creation system, “it can now edit, manage and create any kind of 2D/3D motion graphics and CG from scratch.”

This is especially important to the on-air look of today’s graphics because Aston is not just a product, but a whole family of 2D and 3D modules from creation to playout, including Designer, Player, CG and Snap Render.

“In fact, both products are based on our eStudio,” Montesa said, “so they enjoy many common features and that is why it is possible for us to make InfinitySet a complete standalone solution for graphics and virtual set applications.”

But what is undoubtedly going to draw the crowds around the Brainstorm exhibit at the 2017 NAB Show will be their TeleTransporter demonstrations.

Montesa previews it for us as, “In essence, the TeleTransporter feature seamlessly combines 3D virtual sets and live or pre-recorded video feeds with chroma keyed characters, all moving accordingly with precise perspective matching. This allows presenters, or on-screen talent, as well as 3D objects, to be precisely inserted into videos from remote locations and pre-recorded feeds.”

Key to this is Brainstorm’s unique 3D Presenter feature which is much more than a chroma key.

“The 3D Presenter feature is indeed more than a traditional green screen process,” Montesa said. “Traditional green screen results in a video layer where the presenter is a ‘sticker’, a two-dimensional object inside a video composition. When we’re dealing with 3D environments, these video ‘stickers’ can’t behave like the other objects around them, so dropping proper shadows or reflections, or being affected by real 3D lights, is impossible for them.”

For years, technicians tried to solve this by adding furniture to the green screen set and keying in shadows to the real set. But they never interacted properly with a 2D object such as the video layer of the on-screen talent.

“What Brainstorm has developed with the 3D Presenter feature is a technology that allows the real-time extrusion of the video layer of the presenter to create a 3D object with real volume and correct shape in the virtual set,” he detailed for us. “Therefore the chroma keyed character is no longer a video layer but an extruded 3D object, which can interact properly with the virtual elements in the scene by dropping real shadows and intersecting correctly with the augmented reality object, as if it were an object in the scene.”
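The extrusion idea can be illustrated with a toy sketch. This is entirely hypothetical: the binary matte input, the function names and the depth value are assumptions for illustration only, and a real system like 3D Presenter would build a proper mesh on the GPU in real time.

```python
# Hypothetical sketch: "extrude" a keyed video layer by taking the
# silhouette of a binary alpha matte and duplicating each boundary
# point at a front and a back Z plane, giving the flat layer depth.

DEPTH = 0.3  # assumed extrusion depth in scene units

def silhouette(matte):
    """Return (x, y) cells of the matte that touch a transparent neighbour."""
    h, w = len(matte), len(matte[0])
    edge = []
    for y in range(h):
        for x in range(w):
            if not matte[y][x]:
                continue
            neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
            if any(nx < 0 or ny < 0 or nx >= w or ny >= h or not matte[ny][nx]
                   for nx, ny in neighbours):
                edge.append((x, y))
    return edge

def extrude(matte):
    """Front (z=0) and back (z=DEPTH) vertices for every silhouette point."""
    return [(x, y, z) for x, y in silhouette(matte) for z in (0.0, DEPTH)]

# A 3x3 cross shape: the centre cell is fully surrounded, so only the
# four arm cells form the silhouette, giving 8 extruded vertices.
matte = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
print(len(extrude(matte)))  # 8
```

The point of the sketch is only that once the silhouette has volume, the renderer can treat the talent like any other 3D object, casting shadows and intersecting with augmented reality elements.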

In truth, Brainstorm’s VirtualGate feature really has to be seen to be understood. But Montesa was good enough to take a shot at explaining it to us.

“In essence, this is like applying the sequence shot’s concept to a mixture of virtual and real images, with the keyed character being the continuity element. This is something that has been done in advertising and post production, but never in real-time 3D broadcast operation before.”

The upshot is that, thanks to Brainstorm’s TrackFree technology, the on-screen talent is integrated not only into the virtual set, but also inside the additional content. As a result, the presenter in the virtual set can walk into a virtual screen and become part of the video itself with correct spatial reference.

Once you get past the VirtualGate demonstration, you will be treated to Brainstorm’s VideoCAVE, which, by using multiple screens, gives you the essence of virtual reality without the glasses.

“The CAVE concept means ‘Cave Automatic Virtual Environment’ and is related to a multiple virtual window,” Montesa said. “It’s an immersive virtual reality environment where projections are directed to a cube or room space, or a set of screens. These projections are related in perspective to the tracked person in the room, or to the camera which is shooting the scene.”

James Eddershaw has recently been appointed Managing Director of Shotoku UK, replacing Mike Wolfe, a founder of Shotoku UK, who is assuming the position of Chairman. Eddershaw talked about the robotics market, the resurgence of virtual studios, and introduced some new products.

Shotoku started a UK division in 2005 with some former Radamec staff, shortly after the acquisition of Radamec by former competitor Vinten. The parent company, Shotoku Corp., had been the Radamec reseller in Japan, so both parties understood each other. The Japanese and UK divisions are complementary: Japan manufactures cranes and jibs but had no robotics, while the UK arm brought robotic expertise and the manufacture of pedestals to the company.

Shotoku UK is now a global supplier of camera robotics, with Europe their largest market, especially Germany and the UK. The U.S.A. and Canada are also large markets, popular with the major networks, plus the home country, Japan.

Shotoku USA

In April 2017 the company launched Shotoku USA, based in NY State, to further support their growing customer base in North America. The existing base in Atlanta will continue as a satellite office. Eddershaw said: “We have enjoyed long-term success in numerous broadcast, cable and network operations, and government facilities in the US and are now proud to further demonstrate our commitment to this key region by creating Shotoku USA. We are also delighted to announce that the Company is being staffed by two of the most dedicated and talented individuals in broadcast engineering. Matt Servis, held in high regard by clients and colleagues alike, is our service engineer located in the northeast, and Andy Parsons, former CNN engineering guru with over 35 years of broadcast robotics know-how, joined us in 2016 in Atlanta, GA and has been successfully working to expand the Company’s presence across the country.”

Customers for Robotics

Eddershaw continued with the importance of the ROI for camera robotics. “Customers buy robotics to reduce cost, provide reliability and consistency. It’s all about ROI.” In the newsroom one operator can manage five cameras with robotics. Over the ten-year life span of the robotics, that represents a huge saving in operator costs. It’s not just about reducing the number of operators, “any self-respecting cameraman is not going to want to work in a newsroom, just leaning on the pedestal. It is a job better done by robotics.”

The early business for Shotoku was upgrading older robotic systems to give them a new lease of life. The company has built upon that beginning with a number of new products to meet developing requirements. News producers searching for a different look for newscasts have started to use robotic track systems. Shotoku have responded to this market with SmartTrack. This can be configured as floor or ceiling-based, the latter leaves the floor clear for talent and guests to move around safely. However, suspending tracks from the ceiling that can handle a 50kg payload is a major technical challenge.


The SmartPed pedestal uses sensors to track the movement of the pedestal wheels, so it doesn’t require floor markings. The three-wheel smooth-steer pedestal features a new height column with no need for pneumatic balancing, multi-zone collision avoidance and detection systems, and a precision-engineered electro-mechanical steer/drive system for unparalleled levels of performance and reliability. Each camera has a home position where it starts every morning.
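Multi-zone collision avoidance of this kind can be pictured as concentric distance zones around each pedestal. The sketch below is purely illustrative: the zone radii, names and behaviour are assumptions, not Shotoku's actual implementation.

```python
import math

# Hypothetical zone radii in metres (illustrative values, not Shotoku specs)
WARNING_RADIUS = 2.0   # slow down when another pedestal is this close
STOP_RADIUS = 0.8      # halt movement inside this zone

def collision_action(own_pos, other_positions):
    """Return 'stop', 'slow' or 'clear' based on the nearest pedestal."""
    nearest = min(
        (math.dist(own_pos, p) for p in other_positions),
        default=float("inf"),
    )
    if nearest < STOP_RADIUS:
        return "stop"
    if nearest < WARNING_RADIUS:
        return "slow"
    return "clear"

print(collision_action((0.0, 0.0), [(5.0, 0.0)]))  # clear
print(collision_action((0.0, 0.0), [(1.5, 0.0)]))  # slow
print(collision_action((0.0, 0.0), [(0.5, 0.0)]))  # stop
```

A real pedestal would evaluate zones continuously against planned paths rather than instantaneous positions, but the layered slow/stop behaviour is the core idea.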

European studios are typically smaller than those in the U.S., often with just one set plus a weather corner. In the U.S. it is normal to have very large studios with multiple sets; perhaps a morning, lunch, evening and weather set. There can be 20 to 30 m between sets, with robotics moving the cameras, which may have to travel between sets during a commercial break. This mode of operation does not lend itself to using tracks, so robotic pedestals are the primary choice. Eddershaw said that it was SmartPed that has been the driver in the U.S. market.

Outside the broadcast market, Shotoku also serves government markets for cameras in legislative chambers. Although PTZ cameras can be used in such situations, for important chambers broadcast cameras are required, with broadcast lenses to give the expected picture quality. Eddershaw added: “While cost is a big driver, we have seen in the robotics market that over the decades no-one compromises on quality.”

VR or VR?

The virtual and augmented reality (VR/AR) markets are another sector that Shotoku serves. The Shotoku camera supports provide all the tracking information for the VR graphics engines, including position on the floor, height, and pan, tilt and roll values. The company works with key players including Viz, Avid (Orad), Brainstorm, RT Software, and Wasp 3-D.

In broadcasting, VR used to refer to virtual studios, but it has now acquired a different meaning inherited from the games sector. For twenty years or more, VR was taken to mean a green screen wall, possibly with a basic set, with the studio replaced by virtual 3D graphics. Now VR is taken to mean immersive video with 360-degree camera acquisition, viewed on a headset. Augmented reality (AR) falls more into the traditional VR/virtual set world, which lies in the area of Shotoku’s expertise. This can be done using robotics, or with manual operation tracking the physical movements of the camera support.

Another set of products comes from Shotoku in Japan: a full range of pedestals and cranes for manual operation. The products suffixed “VR” all have tracking sensors built in.


New at the 2017 NAB Show is Graphica, which blends the company’s VR technology with the engineering know-how of crane maker CamMate. The result is a range of manual VR/AR tracked camera cranes in a package that is portable, scalable, stable and, most importantly, repeatable. Graphica calculates positional data output from embedded physical rotary encoders designed specifically for VR applications. Free of the jitters, external markers, and area limitations often associated with other positional tracking systems, Shotoku encoders seamlessly process data to provide real-time data output, in the studio or on location. Seven Graphica models of varying lengths are available to create dynamic camerawork in the smallest studios or the largest of outdoor sporting events.

Traditionally, companies have bolted encoders onto an existing crane. However, backlash and slop in the mechanisms leads to positional errors. “We formed a close partnership with CamMate to embed our technology for VR tracking into the CamMate crane. This is going to give a different level of accuracy,” said Eddershaw. “It is a range of high performance cranes at a price which we think will interest new markets.”

Free-d2

This is a product that spun out of BBC Research & Development. It is a tracking system that does not require physical encoder devices attached to the camera support’s moving axes.

Virtual studios came and went in the 1990s, but returned around 2010 with the availability of low-cost, high-performance GPU cards. In the early days, graphics workstations were very expensive, perhaps $500k per camera, which made the technology unaffordable. Tracking systems have now become the main investment, while graphics have become much cheaper: a prime example of Moore’s law in electronics changing the balance in cost between mechanical engineering and electronics.

The quality of the tracking is key to the illusion. It doesn’t matter how good the graphics are, if the camera tracking is inaccurate the illusion is spoilt.

“We can do it with robotics, with tracking encoders, or completely handheld,” said Eddershaw. “We revisited the Free-D project, and have relaunched it as Free-d2 (free-dee-two), with the same algorithms at the core.”

The Free-d2 system, which is ideal for VR/AR news, sports and current affairs live studio productions, uses advanced video processing algorithms and simple ceiling markers to precisely determine the exact position and orientation of the studio camera, thus providing highly accurate and constantly referenced (absolute) position tracking. A small upward-facing tracking camera provides a view of the markers in the lighting grid. Lens data is combined with the video image and presented to the Free-d2 processor, which precisely calculates the camera’s 3D position and provides industry standard, frame-synchronized tracking data for any graphics engine. No concept of a home or reference point exists for Free-d2: wherever the camera is positioned is immediately known. The tracking data never drifts, regardless of the number of moves or hours of operation.
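The principle of recovering an absolute position from ceiling markers can be shown with a deliberately simplified sketch. This is not the Free-d2 algorithm: real systems solve a full six-degree-of-freedom pose with lens calibration, whereas here the tracking camera is assumed level and pointing straight up, with a known height and focal length, so only its floor position is recovered. All names and values are illustrative assumptions.

```python
# Hypothetical sketch: recover a camera's floor position from known
# ceiling-marker positions and their observed image coordinates, using
# a simple pinhole model with no rotation.

F = 800.0   # focal length in pixels (assumed)
H = 4.0     # distance from tracking camera to ceiling grid in metres (assumed)

def project(marker, cam_xy):
    """Pinhole projection of a ceiling marker into the upward-facing camera."""
    mx, my = marker
    cx, cy = cam_xy
    return (F * (mx - cx) / H, F * (my - cy) / H)

def recover_position(markers, observations):
    """Estimate the camera's floor position by inverting the projection
    for each marker and averaging the per-marker estimates."""
    xs = [mx - u * H / F for (mx, _), (u, _) in zip(markers, observations)]
    ys = [my - v * H / F for (_, my), (_, v) in zip(markers, observations)]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

markers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # known grid positions (m)
true_cam = (0.25, 0.5)
obs = [project(m, true_cam) for m in markers]
print(recover_position(markers, obs))  # (0.25, 0.5)
```

Because every frame is solved against fixed, known markers, the estimate is absolute: there is no accumulated error from previous frames, which is why this style of tracking does not drift.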

Rounding Up

Shotoku continues to innovate to meet the demand for camera supports in a market that has grown from newsroom robotics to VR and AR. Whether automated motion or tracked human control, camera support is changing to meet the new demands of program makers.