3d News World is back


Monday, July 12, 2010

ILM's Elements for The Last Airbender

For The Last Airbender, director M. Night Shyamalan turned to Industrial Light & Magic to create dynamic fighting scenes featuring elemental manipulation, as well as complex creatures and environments for the film. Visual effects supervisor Pablo Helman and associate visual effects supervisor Craig Hammack take us through some of ILM's main technical challenges.


Creating 3D fire and other earthly elements


In the world of Airbender, each nation controls an element - Water, Air, Earth and Fire - under the supervision of the Avatar. When the Fire Nation seeks world domination, young Avatar Aang, who can control all four elements, is charged with restoring the peace. Fire, in particular, was one of the elements ILM tackled early on, drawing on its previous work for Harry Potter and the Half-Blood Prince. "What we found in that movie," noted visual effects supervisor Pablo Helman, "was that the fire looked great, but it was only projected onto 3D cards. For Airbender we needed a real 3D volume fire. The fire events are coming towards camera and the camera was also moving around so much that we would need to be going through the volume itself."

Ultimately, ILM tweaked its pipeline so that fire could be simulated and rendered as a complete 3D volume in hardware rather than software, using NVIDIA's GPU computing language CUDA. "I'm from an FX TD background and I've had to fake fire a few times," said associate visual effects supervisor Craig Hammack. "In the past, we've had to go through a simulation to computationally give you the proper temperature propagation and control. There's always a two-stage process where you're simulating massive amounts of data and you're visualising it in some GL displays of points or vectors. Then there's the tuning of the render. They're coupled so tightly that as an effects artist it's really difficult to keep going through these hoops of simulation-render-refine, and then decide you don't have the right simulation and have to go back."
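As a purely illustrative sketch of the workflow shift Hammack describes - pushing every simulation step straight to a render instead of caching data and tuning the render separately - here is a toy temperature field that is "rendered" inside the same loop. The grid, the diffusion model and both function names are invented for illustration; this is not ILM's Zeno or CUDA code.

```python
def diffuse(temp, rate=0.2):
    """One explicit diffusion step of a 1D temperature field (toy fire proxy)."""
    n = len(temp)
    return [temp[i] + rate * ((temp[i - 1] if i > 0 else 0.0)
                              + (temp[i + 1] if i < n - 1 else 0.0)
                              - 2 * temp[i])
            for i in range(n)]

def render_frame(temp):
    """'Render' the field as brightness characters, coupled to the sim loop."""
    ramp = " .:-=+*#"
    peak = max(temp) or 1.0
    return "".join(ramp[min(int(t / peak * (len(ramp) - 1)), len(ramp) - 1)]
                   for t in temp)

temp = [0.0] * 9
temp[4] = 1.0  # hot spot in the middle of the field
for step in range(4):
    temp = diffuse(temp)
    print(render_frame(temp))  # simulate and render in one pass, no cache
```

The point of the sketch is only the loop shape: each iteration produces a viewable image, so every iteration can be judged as the final product rather than as abstract simulation data.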

Rather than have artists visualise the simulation data, all of the fire sims went directly to a render, which meant that iterations were essentially judged as the final product within ILM's proprietary Zeno package. "That ended up being a huge advantage to us," added Hammack. "The render gets passed down the line basically to a visual effects supervisor - several iterations a day - without having to ask them to extrapolate what they think the final look will be." Some practical fire elements were shot on set and used for the beginning and end of fire 'events', but in the end the fire bending sequences were realised entirely in CG. "We tried doing some of the fire as a special effect," said Helman, "but because of the behaviour of the fire, the camera moves and safety precautions, it couldn't really be done on set. We also played with the frame rates - some of it was shot at 96 frames per second and some at 24 frames per second. Even if the performance was real-time, sometimes we wanted the fire to feel like it was in slow motion."
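The frame rates Helman quotes work out simply: footage overcranked at 96 fps and played back at the standard 24 fps appears four times slower than life. A one-line helper (the function name is ours, not ILM's) makes the arithmetic explicit:

```python
def slow_motion_factor(capture_fps: float, playback_fps: float) -> float:
    """How many times slower the action appears when footage shot at
    capture_fps is played back at the standard playback_fps."""
    return capture_fps / playback_fps

# Footage shot at 96 fps and played back at 24 fps runs 4x slow motion;
# footage shot at 24 fps plays back in real time.
print(slow_motion_factor(96, 24))  # 4.0
print(slow_motion_factor(24, 24))  # 1.0
```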

For Earth effects, ILM relied on technology developed for Indiana Jones and the Kingdom of the Crystal Skull and other shows for shots of the ground shattering and fracturing. There was also development of the tools used for Water and Air effects. "We leveraged the fire pipeline for some of the water and air shots," said Hammack, "and early on we did development to build up our smoke tools because we knew it would take some form of vapour. We built an up-res pipeline to get a tremendously more rich look to our smoke." 
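The "up-res" idea Hammack mentions can be sketched as upsampling a cheap, low-resolution simulation and layering high-frequency detail on top, so the final volume looks far richer than the sim that drove it. The following toy version works on a 1D density field; the noise model and every name here are our illustration, not ILM's pipeline.

```python
import math

def up_res(density, factor=4, detail=0.1):
    """Linearly upsample a 1D density field and add band-limited detail."""
    hi = []
    n = len(density)
    for i in range(n - 1):
        for s in range(factor):
            t = s / factor
            base = density[i] * (1 - t) + density[i + 1] * t  # interpolation
            x = i + t
            noise = detail * math.sin(x * 12.9898) * base     # cheap detail
            hi.append(max(base + noise, 0.0))
    hi.append(density[-1])
    return hi

coarse = [0.0, 0.5, 1.0, 0.5, 0.0]  # low-res smoke density
fine = up_res(coarse)
print(len(coarse), "->", len(fine))  # 5 -> 17 samples
```

Because the added detail is scaled by the interpolated base density, empty regions of the coarse sim stay empty after up-resing.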

One particular sequence featuring Aang battling Prince Zuko of the Fire Nation in a storage room took advantage of the fact that ILM's airbending pipeline shared a core simulation with the fire pipeline. The sequence was filmed using some practical fire at Zuko's feet and interactive lighting on set. ILM blocked in the animation of the fire and air with simple animated tentacles or spheres to judge the timing. "The shots sometimes ramped into slow motion," said Hammack, "which really highlighted the effects, and because the fire and air share the base fluid simulation we were able to really see how they integrate together and see all the nice flowlines as one starts to impact the other."

Artists relied on Nuke to integrate the water, air, earth and fire elements into live action plates. "We go to the trouble of match-animating to the elements quite closely," said Hammack, "so that then we can get decent hold-outs from it. For the water we can project the live action elements to get the refraction and reflection in a true three-dimensional space. We also got a true valid alpha out of the fire, so it's very easy to deal with in compositing. We build the true geometrical depths into the renders and you get a very clean, very valid image to comp with. In the case of the storage room shots, there was quite a large amount of comp work to get across the idea that the fire is illuminating all of the air. The thing with the airbending is that the visual part of it is drawn from the elements that are around Aang at the time. In the dusty storeroom you get a beautiful scattered glow through the air and in slow motion you can watch it all resolve as it floats away."
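Why a "true valid alpha" and real depth make compositing so clean can be shown with a minimal single-channel sketch: with premultiplied colour, a CG element comps over the plate in one "over" operation, and a per-pixel depth acts as a hold-out that decides which image wins. All names below are illustrative, not Nuke's API.

```python
def over(fg_colour, fg_alpha, bg_colour):
    """Premultiplied 'over': fg_colour is already multiplied by fg_alpha."""
    return fg_colour + bg_colour * (1.0 - fg_alpha)

def depth_comp(fg_colour, fg_alpha, fg_depth, bg_colour, bg_depth):
    """Use rendered depth as a hold-out: the nearer element goes on top."""
    if fg_depth <= bg_depth:
        return over(fg_colour, fg_alpha, bg_colour)
    return bg_colour  # the plate occludes the CG element

# CG fire (premultiplied value 0.6, alpha 0.75) sits in front of the plate:
print(depth_comp(0.6, 0.75, fg_depth=2.0, bg_colour=0.3, bg_depth=5.0))
```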



Long shots


ILM's air, water, earth and fire effects also had to stand up to long shots planned by director M. Night Shyamalan. "In the storyboards and previs, I could see that here was a movie where there weren't going to be lots of cuts, even in the action sequences," recalled Helman. "The director would envision one shot in which the camera connects the characters and is revealing things in that one shot. So in the end, an average shot for us ended up being between about 250 and 300 frames, which is really long!"

One particularly long shot - more than 1000 frames or 40 seconds - follows Aang as he fights six soldiers, zooming in and out of the action, and all with CG elements added by ILM. "In the shot," explained Hammack, "Aang is able to use half a dozen techniques to fight the soldiers. In one he does a flip-over and melts the ground underneath the soldiers. In another he pulls a slab of ice out from the ground underneath the soldiers' feet and launches them in the air. And in another he pulls water out of the ground and freezes a soldier against the wall. There are probably five or six beats to that one scene where we follow him. It ramps in and out of slow motion and ends up being a beautiful shot."

DP Andrew Lesnie filmed the scene with two cameras side by side with different lenses, one long 160mm lens and one wide 24mm lens, at 96 frames per second. The nested lenses allowed ILM to later zoom in and out of the action to focus attention on certain areas, then go back to a wide shot of the whole battle. Apart from adding in simulations of the element effects, other challenges of the shot included match-animating and rotoscoping all of the obstacles and interaction from both cameras and dealing with a multi-axis waist rig worn by the actor and stunt performer. Added Helman: "We also changed the background so that it was a market scene as opposed to a regular fight terrain and then we put snow throughout the shot - the whole thing took about 8 months!"
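The nested-lens trick has simple geometry behind it: for cameras sharing the same sensor size, the region of the 24mm frame that matches the 160mm view is just the ratio of the focal lengths, so ILM could "zoom" in post by cropping the wide plate. The sketch below assumes a 36mm-wide sensor purely for illustration; the article does not state the camera format.

```python
import math

def horizontal_fov_deg(focal_mm, sensor_mm=36.0):
    """Horizontal field of view of a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def matching_crop_fraction(wide_focal_mm, long_focal_mm):
    """Fraction of the wide frame's width that matches the long lens's view."""
    return wide_focal_mm / long_focal_mm

print(round(horizontal_fov_deg(24), 1))   # FOV of the wide 24mm lens
print(round(horizontal_fov_deg(160), 1))  # FOV of the long 160mm lens
print(matching_crop_fraction(24, 160))    # 0.15 of the wide frame's width
```

The small crop fraction also shows the cost of the trick: zooming all the way in to match the long lens uses only 15% of the wide plate's width, which is why the production shot the long lens for real rather than relying on the crop alone.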

Fur, water and digital doubles


CG creatures feature heavily in The Last Airbender, including a 16-foot-tall furry Sky Bison named Appa, who has six legs and can also fly. "Design-wise, Appa was a challenge because he's so big and hairy and needs to express in his face," explained Hammack. "Being an actor's director, Night wanted to feel every emotion on screen. In the scene where we find him, he's in this ice crater and the shots are framed so that he's so large that on a wide shot where the actors are quite small, he's still big. Then when you go in tight on an actor, you're seeing a very small part of him. He's basically all hair and it all has to resolve in the wide and the tight shots."

ILM's fur pipeline was re-purposed so that the existing hair interpolation scheme moved to a node graph-based system, allowing artists access to the data at any point in the process of interpolation. "We were able to extract data from that," said Hammack, "and insert data such as particles so that we could bury snow and ice chunks into his fur which had previously been somewhat impossible for our hair pipeline. We found that really helped sell Appa's size on screen."
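The node-graph idea can be illustrated with a toy groom: hair interpolation becomes a chain of stages, and because the data is accessible between stages, an extra node can inject particles - snow and ice chunks - into the hair mid-stream. Every name below is invented; this is a sketch of the concept, not ILM's fur pipeline.

```python
def interpolate_guides(guides, count):
    """Blend neighbouring guide-hair lengths into `count` render hairs."""
    hairs = []
    for i in range(count):
        t = i / max(count - 1, 1) * (len(guides) - 1)
        lo = int(t)
        hi = min(lo + 1, len(guides) - 1)
        hairs.append(guides[lo] * (1 - (t - lo)) + guides[hi] * (t - lo))
    return hairs

def embed_particles(hairs, particles):
    """A mid-graph node: pin a particle (e.g. an ice chunk) to chosen hairs."""
    return [(h, particles.get(i)) for i, h in enumerate(hairs)]

guides = [1.0, 2.0, 3.0]                      # sparse guide hair lengths
hairs = interpolate_guides(guides, 5)         # dense render hairs
groom = embed_particles(hairs, {2: "ice_chunk"})
print(groom)
```

In a monolithic interpolation scheme the `embed_particles` step would have nowhere to plug in, which is the "somewhat impossible" situation Hammack describes the old pipeline being in.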

In one shot, Appa tires of flying and takes to swimming in the ocean with children riding on his back. Plates for the swimming scenes, which also feature large icebergs, were shot in Greenland. "We replaced all the water first of all so that we were driving the fluid sims," said Hammack. "Appa's motion is generating quite a large wake and we were able to derive a water line from that to get a matte and merge between two sims, before we got to the interpolation. We used the water line to drive the tufting that goes on as the hair gets wet and turns into tight tufts. We were also able to accurately drip water off the hair because we had knowledge in the hair graph of where the tips of the hair were. Then you do this kind of flight history to use the water line to fade off after time as the hair comes out of the water. It remains tufted and wet and then fades off after time."
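The "flight history" Hammack describes can be sketched as each hair tip remembering when it was last below the water line, with its wetness - which drives the tufting - fading off over time once it surfaces. The decay model and every name below are our assumptions for illustration:

```python
def wetness(time_now, time_last_wet, half_life=2.0):
    """Wetness in [0, 1], halving every `half_life` seconds out of the water."""
    if time_last_wet is None:
        return 0.0                     # this tip never went under
    dt = max(time_now - time_last_wet, 0.0)
    return 0.5 ** (dt / half_life)

# A tip that surfaced 4 seconds ago has faded to a quarter wetness:
print(wetness(time_now=10.0, time_last_wet=6.0))  # 0.25
```

Keeping the history per tip is what lets hair that just left the water stay tufted while hair that has been dry for a while relaxes back to its dry groom.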

ILM took a new approach to the digital doubles seen in the film, including the children seen on Appa's back and, most frequently, shots of Aang fighting or doing stunt work. "For digital double work it's becoming more and more important to have the actor much bigger on the screen," noted Helman. "Once you do that and you start transitioning between real and digital by hand, you sometimes start making up the shapes and you can actually start to lose the actor from one static pose to another."

To avoid this problem, Helman and animation supervisor Tim Harrington, working with ILM's R&D group, developed a proprietary face system based on motion capture sessions and camera reference of the real actors. "We brought in the actors and had six monitors around them in which we would play a specific shot where we needed to replace the face, say. We had six HD cameras and another little camera in front of the face of the actor. The actor would look at a looped shot and do the performance over and over until we felt we had got it. Then later on, we basically decoded that performance and put it into the shot, as opposed to developing shapes for transitions. That meant we had a digital likeness of the actor right from the beginning as opposed to us trying to find transitions between one expression and another one and then losing the actor in between." 

