Digital Domain’s Hanzhi Tang Breaks Down the High-Flying VFX of Captain America: Brave New World

Featured in ProductionHUB

Marvel Studios' Captain America: Brave New World delivers one of its most exhilarating action sequences yet—an intense 8.5-minute, nearly all-CG dogfight brought to life by Digital Domain. Leading the charge in crafting this visually stunning spectacle is Hanzhi Tang, Digital Domain’s VFX Supervisor, whose team handled everything from previs to the final breathtaking shots. Their work included creating hyper-realistic CG digidoubles, developing a state-of-the-art cloud simulation tool for the immersive backdrop, and reintroducing the Celestial Island to the big screen. In this exclusive breakdown, Tang shares insights into the creative and technical innovations behind this ambitious sequence.

PH: The dogfight sequence in Captain America: Brave New World is almost entirely CG. What were the biggest challenges in making it feel realistic and immersive?

Hanzhi Tang: The biggest challenges were two-fold: first, making the action realistic and easy to follow, and second, visually telegraphing the great speed at which it all happens. The previz and anim teams really hammered out the action with production supervisor Alessandro Ongaro and the filmmakers into an edit that everyone was happy with. In lighting and effects, we focused on creating a believable ocean environment and a beautiful 3D cloudscape that we could fly through, not just a high-res backdrop of photographic clouds. Passing clouds, wisps, and sweeping highlights help reinforce the intensity of the action.

PH: Digital Domain has a long-standing relationship with Marvel Studios. How did that prior experience influence your approach to this film’s VFX?

Hanzhi Tang: Our most recent work with Marvel on two films helped prepare us for this sequence specifically. We developed clouds in the skydiving sequence in Black Widow and an ocean environment in Black Panther: Wakanda Forever. This gave us a head start with the lessons learnt from those experiences.

PH: The previs team played a crucial role in shaping the dogfight sequence. What were some of the key ideas that emerged during this phase, and how did they evolve in the final cut?

Hanzhi Tang: Our previs team, led by Cameron Ward, really did the heavy lifting when it came to working with the director on crafting cameras and shots that felt exciting and engaging but still made sense with the frenetic pace of events in the sequence. The whole dogfight sequence went through major revisions to speed up and simplify the action to make the number of missiles and aircraft comprehensible and easy to track from shot to shot.

PH: How did you balance the need for high-speed action with clarity for the audience, ensuring that each beat of the battle remained visually engaging?

Hanzhi Tang: With the shots tending to be quick cuts, we looked at everything from clear posing of the characters to the aircraft, focusing on easily readable silhouettes to sell their travel direction as soon as you cut to the shot. At the same time, we didn’t want the camera to appear too perfect and locked to the action in an artificial way, so we kept a handheld feel with just a hint of camera shake when it was really justified. In some shots, we also played with the motion blur shutter angle to get a sharper read on details, especially in whip pans, sometimes combining sharper renders for a few frames so that the image didn’t turn into a blurry blob.
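The shutter-angle relationship Tang describes is standard camera math: a narrower angle shortens each frame's exposure window, which shortens the motion-blur streak. A rough illustrative sketch (the function names and sample numbers are our own, not Digital Domain's pipeline code):

```python
def shutter_open_time(fps: float, shutter_angle_deg: float) -> float:
    """Exposure duration per frame for a given shutter angle.
    A 180-degree shutter at 24 fps exposes each frame for 1/48 s."""
    return (shutter_angle_deg / 360.0) * (1.0 / fps)

def blur_length(speed_px_per_s: float, fps: float, shutter_angle_deg: float) -> float:
    """Approximate motion-blur streak length in pixels for an object
    moving at a constant screen-space speed during the exposure."""
    return speed_px_per_s * shutter_open_time(fps, shutter_angle_deg)

# During a whip pan at 4800 px/s, halving the shutter angle halves the streak:
full = blur_length(4800, 24, 180)   # ~100 px streak
sharp = blur_length(4800, 24, 90)   # ~50 px streak — a sharper read on detail
```

This is why tightening the shutter angle (or splicing in a few sharper renders, as described above) keeps fast detail legible during whip pans.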

PH: Creating full CG digidoubles for Captain America and Falcon is a major undertaking. Can you share how you ensured they looked as photorealistic as possible?

Hanzhi Tang: Captain America and Falcon both had new revised costumes for this movie, so they were both modeled and textured as new assets from the photoscans that we were provided. The wings were reused with some slight modifications from The Falcon and the Winter Soldier assets, but the look dev was redone for our renderer.

PH: What specific challenges did you face in animating Captain America’s shield interactions and Falcon’s wings during this sequence?

Hanzhi Tang: When dealing with Captain America's shield interactions like mid-air throws or blocking gunfire attacks, we had to ensure the action made sense for the speed he was traveling and that there was enough weight in Sam's actions for it to be believable. When he was taking gunfire, we had to make sure there were subtle reverberations on the shield and in his arms to make it feel as though he was really struggling to block and defend. Some of our more challenging moments involved Joaquin and Sam's wings expanding and collapsing from the backpack. We had to create different rigs that bent in a way that folded the wings in and out properly without penetrating the backpack or the body; however, those same rigs weren't as effective for our basic flying maneuvers and hero poses.

 


 

PH: The clouds played a significant role in shaping the visuals of this sequence. Can you explain the development of your new Cloud Shader and how it enhanced the realism?

Hanzhi Tang: For this movie, we wanted to improve the cloud volume light scatter and went for a fully path-traced internal scatter rather than the illumination cheat we had used previously. This technique made it easier to get the nice edge darkening that helps show small detail against the intense bright core of the cloud. Given the strength of the artistic grade being applied, we had to manually adjust highlight levels in compositing so the values didn’t clamp.

PH: Celestial Island makes a return in this film. How did you approach bringing it back while maintaining continuity with its Eternals appearance?

Hanzhi Tang: We started from the asset that had been made for its previous appearance. The first change we made was to cheat the model scale down to something less continental in size. The top of the Celestial was reduced to about 8500m, comparable to Mt. Everest. We kept a lot of the design elements like surface tattoo markings and displacements. However, the asset was made for wide and medium shots, and we needed to make it hold up in all-new close-up areas that hadn’t been seen before. It really depended on where the previz would take us. Extra texturing and ZBrush work or procedural shading were needed to add super close-up detail, or entirely new patches had to be modeled. For example, when Captain America hangs off one of the fingers, we would have been looking at only a couple of polygons of the original model.

PH: With the speed and scale of the jets, missiles, and environments, what technical breakthroughs or tools did the team develop to handle rendering and animation efficiently?

Hanzhi Tang: When working at such a large scene scale, we had to come up with some extra publish workarounds to prevent problems from running out of numerical accuracy in the software when geometry is so far from the origin. The symptoms of these rounding errors show up as motion blur artifacts and broken animation rigs. To solve these issues, we had to develop some pipeline workarounds to publish things correctly and move them to the right place in world space.
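The precision problem Tang describes is easy to demonstrate: far from the origin, single-precision floats can no longer represent centimeter-scale motion, so sub-frame movement (and with it, motion blur) simply vanishes. A minimal sketch of the failure mode, with numbers chosen for illustration:

```python
import numpy as np

# At 10,000 km from the origin, the spacing between adjacent float32
# values is a full meter, so a 1 cm animation step rounds away to nothing.
origin_offset = np.float32(10_000_000.0)   # position 10,000 km out
small_step = np.float32(0.01)              # 1 cm of per-frame motion

absolute = origin_offset + small_step      # computed in float32
lost = absolute - origin_offset
print(lost)                                # 0.0 — the motion is gone

# The kind of workaround described above: keep animated geometry near a
# local origin and carry the large world offset separately (e.g. as a
# double-precision transform applied late in the pipeline).
local = np.float64(0.0) + 0.01             # the 1 cm step survives here
```

The same rounding that erases this step shows up in production as jittery motion-blur vectors and rigs whose small control offsets collapse.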

PH: The ocean in this sequence needed to work at multiple scales, from extreme close-ups to aerial shots. What techniques did you use to achieve consistency across different perspectives?

Hanzhi Tang: Our ocean was generated with multiple ocean spectra for the waves, with some wind dialed in, but that’s a massive oversimplification. The biggest problem was rendering detail in the ocean all the way to the horizon, which would be very difficult to sample even at the highest settings. We had to blend different spectra at different distances or ramp things with z-depth. Ocean spectra from high altitudes can also look repetitive or turn into a fabric weave pattern quickly. Our standard setup would be run through each camera, and we would see which became problem shots and redial them.
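The distance ramp Tang mentions can be pictured as a simple cross-fade: near the camera the high-frequency wave spectrum carries full detail, and toward the horizon it fades out in favor of broad swell that won't alias. A toy sketch under assumed distances (the `near`/`far` values and function names are illustrative, not the studio's setup):

```python
def blend_weight(distance: float, near: float = 200.0, far: float = 5000.0) -> float:
    """Smoothstep from 0 (camera-near: full fine detail) to 1 (horizon:
    only the broad, low-frequency spectrum remains)."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def blended_height(h_fine: float, h_broad: float, distance: float) -> float:
    """Cross-fade displacement from two wave spectra by distance,
    fading out the fine spectrum that would alias toward the horizon."""
    w = blend_weight(distance)
    return (1.0 - w) * h_fine + w * h_broad

print(blended_height(1.0, 0.2, 100.0))    # close to camera: all fine detail
print(blended_height(1.0, 0.2, 10000.0))  # at the horizon: only broad swell
```

In production the same idea would be driven per-pixel by z-depth rather than a single scalar distance, which is what "ramp things with z-depth" refers to.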

PH: The camera movements in this sequence are dynamic and immersive. What considerations went into designing the virtual cinematography to enhance the storytelling?

Hanzhi Tang: We tried to keep as much realism in the camerawork as possible within the limits of the story that needed to be told in each shot. Using the previz as a framework, we would ask if the lens and the motion were appropriate to how an aerial shot would be set up. Is it super telephoto? Is it from a chase plane or from a helicopter? Is it like a GoPro mount? Keeping an eye on these basics would steer us away from a camera that looks too contrived or adds an extra artificial feeling to an already all-CG sequence.

PH: Were there any specific real-world aviation references or inspirations that influenced the dogfight choreography and jet behavior?

Hanzhi Tang: We referenced a lot of real-world clips of fighter jets from airshows and popular jet fighter training grounds like the Mach Loop in Wales in the UK. For myself, I actually went to the biggest airshow on the calendar, AirVenture in Oshkosh, Wisconsin. I took as much photography as I could since they had an F/A-18 up close and, to a lesser extent, an F-35A. This helped the texture artists with areas of the plane that are hard to see in most photos, like the undersides and small details that are often occluded. Both planes also took part in displays where we could reference wing tip trails and high-G turns. It was breathtaking, and I came back excited to shoehorn any element of that flight behavior into our animation.

PH: Lighting and reflections played a big role in realism, from the pilots' visors to the jet surfaces. What were some of the finer details the compositing team focused on to enhance authenticity?

Hanzhi Tang: While we aimed to keep the camera work realistic in animation, we also tried to add visual details like wing tip vortices and wing vapor in high-G turns. We also got a lot of help in compositing to dial the color and look of the afterburners and glowing engine interiors. Compositing also helped add an extra wrap of reflection from the background, a little trick we had used on the movie Stealth, with the help of some extra renders for reflection occlusion. Tricks like this help when the background environment is too expensive to render with the foreground, in this case clouds and oceans, especially with these partly matte, semi-gloss surfaces.

PH: What was the most complex shot in this sequence, and how did the team overcome its challenges?

Hanzhi Tang: There are a few tough shots in this sequence for us. One of the interesting ones was the ejection sequence, where we recreated the ejection seat rockets firing and ran cloth simulations for the parachute opening. Our CFX lead, Gabriela Mursch, dialed in the inflation behavior and the final expanded shape using a lot of references from ejection seat test footage. Simulating the rocket engines on the seat also proved more difficult than expected because both the camera and plane are traveling at 400mph in world space, and we hit problems with the size of time steps and frame-to-frame distances travelled.
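The time-step difficulty Tang describes comes down to how far everything travels between frames at that speed. A quick back-of-the-envelope check (frame rate and substep count assumed for illustration):

```python
# How far do the camera and plane move per frame at 400 mph?
# Large frame-to-frame distances are what strain simulation time steps.
MPH_TO_MS = 0.44704               # exact miles-per-hour to meters-per-second

speed = 400 * MPH_TO_MS           # ~178.8 m/s
fps = 24                          # assumed frame rate
per_frame = speed / fps           # ~7.45 m traveled every frame
per_substep = per_frame / 10      # still ~0.75 m with 10 sim substeps

print(round(per_frame, 2))        # roughly 7.45 meters per frame
```

At that rate a parachute or rocket-exhaust sim crosses many times its own scale each frame, so the solver needs either many more substeps or a frame of reference that moves with the action.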

PH: Looking back, what aspect of this sequence are you most proud of?

Hanzhi Tang: I really liked how closely our previz and final animation teams worked together on different versions of shots to give the filmmakers more options when it came to trying to get a story point over effectively. The final push months of the show were also a spectacular display of consistent hard work that gave us enough time to go back and address things that needed more polish.

PH: How do you see the advancements from this project influencing future work in large-scale action VFX?

Hanzhi Tang: For Digital Domain, it’s mostly the refinement of our USD and Solaris pipeline. Each show we have done shines a light on which publish pipelines are slow or need better integration. I think there’s more that can be done to bridge FX done in Houdini and the Houdini Solaris lighting side. Although they both live in Houdini, there needs to be more commonality.