Driving ‘Mortal Engines’: Weta Crafts a Dystopian World for Universal’s YA Adaptation

VFX supervisor Ken McGaugh and animation supervisor Dennis Yoo face the challenges of creating a steampunk-inflected dystopian world inhabited by giant, predator cities on wheels for Oscar-winning VFX artist-turned-director Christian Rivers’ feature adaptation.

Directed by Oscar-winning VFX artist Christian Rivers, ‘Mortal Engines’ is based on the YA dystopian adventure novel from British author Philip Reeve. All images © 2018 Universal Pictures.

Released in the run-up to Christmas, Universal Pictures’ Mortal Engines is a wild and crazy ride through a dystopian world where giant, predator cities on wheels roam the landscape preying on smaller cities and devouring everything in their path in search of natural resources.

Directed by Oscar-winning visual effects artist Christian Rivers (King Kong), the decidedly steampunk-inflected film, which is based on the YA dystopian adventure novel from British author Philip Reeve, stars Hera Hilmar as Hester Shaw -- the only person who can stop London in its tracks.

Mortal Engines was mostly shot at Stone Street Studios in Wellington, New Zealand, in early 2017, and visual effects were primarily handled by Weta, known for its work on The Lord of the Rings, The Hobbit, King Kong and the Planet of the Apes films.

Ken McGaugh, who won a 2003 VES Award for his work on The Lord of the Rings: The Two Towers, as well as two Academy Technical Achievement Awards in 2004 and 2010, explains that he came on board as production visual effects supervisor about three years ago, but previs for the film had actually started as early as 2009. He worked closely with Weta VFX supervisors Kevin Smith and Luke Millar to craft the film’s monumental visual effects.

“This type of shoot was quite a bit different to a lot of the other shoots where you’re putting a CG character into a photographic scene,” McGaugh explains. “In this case, it’s kind of the opposite where we’re basically taking the photography and inserting it into a digital world.”

He explained that this meant there was a lot of camera metadata to capture on set. “Largely, we’re there in these types of situations to capture the data and to make sure the data -- the tracking markers, the greenscreens, everything -- is suitable for our purposes in postproduction,” he says. “We’re also there to advise on what is going to go in those large greenscreens and, in particular, when you’re working with partial sets for airships that are docked and things like that, you tend to have CG that’s going to be really close to camera and it’s going to affect the lighting quite a bit, so we get heavily involved with cinematographer Simon Raby in trying to make sure that the lighting is going to work when we actually put all the CG around the characters.”

McGaugh also worked closely with production designer Dan Hennah and the rest of the art department to determine what was going to be created in CG and what was going to be built on set. “At the end of the day, we prefer as much to be built in camera as possible to minimize the amount of visual effects work, and that’s their goal as well,” he explains. “Obviously there are constraints on the size of stages and the number of stages, and we can’t always use the perfect stage for each set because of scheduling. Usually the conversation goes: ‘Well, because we physically don’t have the space or because of health-and-safety reasons, we can’t build past this point, so it has to be CG from there on.’”

McGaugh recalls that while the film gave the visual effects team the opportunity to create a wide variety of unique worlds, “the most daunting challenge in pre-production was just how do we sell these giant moving cities and make it believable? There was such a huge risk that we could fall into cartoony or just straight silliness, and it wasn’t meant to be a cartoon or a silly film.”

He added that initially, their top priority was to try to maintain a consistent sense of scale for the mechanized cities like London. “But as we got into it, it evolved, and we came to the conclusion that it wasn’t so much maintaining a sense of scale; it was maintaining a sense of scale without having the scale distract the audience. It was more important that we maintain the suspension of disbelief. There were times where we could use some tricks that helped really sell how big these cities are and make it feel massive and immersive, but there are other times we do the same tricks and it really pulls you out of the film, because these things are moving at 300 kilometers per hour, which is just ludicrous. You start putting these scale references in and all of a sudden you’re like, ‘whoa, that tree’s enormous looking,’ or ‘why is that tree moving so fast past the wheels?’ and so it became a real balancing act.”

Animation supervisor Dennis Yoo, who has previously won VES Awards for his work on both War for the Planet of the Apes and The Jungle Book, oversaw the mechanized city motion, as well as the airships and the CG character Shrike. He explained that while some of the challenges with scale could be solved in-camera -- for example, having cameras lower to the ground “always gave that presence” -- others were more challenging, requiring shot-by-shot, hands-on grunt work.

“There was definitely an issue with scale,” Yoo acknowledges. “I mean initially, we were getting them to move at 100 kilometers an hour, which is a ridiculous speed if you really think about a city moving that fast. But that ended up looking like two snails racing. It was quite boring to watch. So we actually had to boost up their speed for shots like that, so that we actually see much more movement. We bumped those speeds up to 300 kilometers an hour, only because we needed to sell that dynamic kind of feel you need in an actual film, because that scale just blows everything out, where it just starts feeling dull and slow.”

Yoo reports that one of the biggest challenges was that London had countless moving parts that had to be animated, but the first question was: “how do we make this into a scene file where an animator can start animating it? That was a whole process in itself,” he notes.

“So one of the things we developed is something called a layout puppet,” Yoo elaborates. “It’s literally a moving environment that we can animate. And from there, we started devising ways to configure this layout, so that we could actually get animating.”
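To make the idea concrete, here is a minimal, hypothetical sketch of what a “layout puppet” might look like in principle: a lightweight proxy hierarchy whose few animatable controls (root translation and heading) drive the placement of every stand-in part of the city. The class names, part names and structure are illustrative assumptions, not Weta’s actual tooling.

```python
import math
from dataclasses import dataclass, field


@dataclass
class ProxyPart:
    """A cheap stand-in transform for a group of city geometry."""
    name: str
    offset: tuple = (0.0, 0.0, 0.0)            # local offset from its parent
    children: list = field(default_factory=list)


@dataclass
class LayoutPuppet:
    """Animators key only root translation and heading; every proxy part follows."""
    root_position: tuple = (0.0, 0.0, 0.0)
    heading_deg: float = 0.0
    parts: list = field(default_factory=list)

    def world_positions(self):
        """Resolve every proxy part into world space for the current frame."""
        rad = math.radians(self.heading_deg)
        cos_h, sin_h = math.cos(rad), math.sin(rad)
        rx, ry, rz = self.root_position
        resolved = {}

        def visit(part, parent_offset=(0.0, 0.0, 0.0)):
            ox = parent_offset[0] + part.offset[0]
            oy = parent_offset[1] + part.offset[1]
            oz = parent_offset[2] + part.offset[2]
            # rotate the accumulated offset around the vertical axis by the heading
            resolved[part.name] = (rx + ox * cos_h + oz * sin_h,
                                   ry + oy,
                                   rz - ox * sin_h + oz * cos_h)
            for child in part.children:
                visit(child, (ox, oy, oz))

        for part in self.parts:
            visit(part)
        return resolved


# Keying two controls per frame moves the entire proxy city.
london = LayoutPuppet(parts=[
    ProxyPart("tier_base", (0, 0, 0), [
        ProxyPart("tier_upper", (0, 120, 0), [
            ProxyPart("st_pauls", (0, 80, -30)),
        ]),
    ]),
])
london.root_position = (500.0, 0.0, 2500.0)   # hypothetical keyframe values
london.heading_deg = 15.0
print(london.world_positions())
```

The point of such a proxy is that the animator interacts with a handful of controls rather than the millions of parts in the full-resolution city, which only gets attached downstream.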

Yoo, who said that he appreciates tackling a good challenge, explained that the team at Weta developed procedural animation techniques to make it easier for the animators to focus on parts that really needed attention, letting computers do some of the heavy lifting when it came to animating things like the tracks of the city and making sure that they always touched the ground.
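The procedural helpers themselves aren’t described in detail, so the sketch below is only an assumed illustration of the kind of thing meant: given a terrain height function and the chassis position an animator has keyed, each track segment is snapped to the ground and pitched to the local slope automatically. The terrain function, segment count and spacing are invented for the example.

```python
import math


def terrain_height(x, z):
    """Placeholder terrain: gentle rolling hills."""
    return 2.0 * math.sin(x * 0.01) + 1.5 * math.cos(z * 0.008)


def pin_tracks_to_ground(chassis_pos, heading_deg, num_segments=8, spacing=12.0):
    """Return world positions for track segments under a moving chassis.

    Segments are spaced along the direction of travel, and each one samples
    the terrain so the tracks always touch the ground.
    """
    rad = math.radians(heading_deg)
    dir_x, dir_z = math.sin(rad), math.cos(rad)     # forward direction
    cx, _, cz = chassis_pos
    half = (num_segments - 1) / 2.0
    segments = []
    for i in range(num_segments):
        offset = (i - half) * spacing
        sx = cx + dir_x * offset
        sz = cz + dir_z * offset
        sy = terrain_height(sx, sz)                 # snap to ground
        # pitch each segment to match the local slope (finite difference)
        ahead = terrain_height(sx + dir_x, sz + dir_z)
        pitch_deg = math.degrees(math.atan2(ahead - sy, 1.0))
        segments.append({"pos": (sx, sy, sz), "pitch_deg": pitch_deg})
    return segments


# The animator keys only the chassis; the tracks conform procedurally per frame.
for frame in range(101, 104):
    chassis = (frame * 8.0, 0.0, 2500.0)            # keyed chassis translation
    print(frame, pin_tracks_to_ground(chassis, heading_deg=90.0)[:2])
```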

“After the main bulk of the motion, the effects team would come in and they would put in the dust, the exhaust and all that detail,” he reports. “We also developed this dynamic caching system which would help us with the secondary motion of our buildings. So the base of the city would move -- we would animate all that -- and then the little buildings on top would actually start following our main movement. So, again, we wouldn’t have to animate every little building on that city to a blocking point, and animators could concentrate more on the performance of the city rather than trying to do every little detail.”
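How the dynamic caching system worked internally isn’t described, but the secondary-motion behaviour Yoo mentions -- small buildings trailing the animated base rather than being keyed individually -- can be illustrated with a simple damped-spring follower. The stiffness and damping values here are invented for the sketch.

```python
def damped_spring_follow(base_positions, stiffness=0.15, damping=0.7):
    """Given per-frame x positions of the city base, return per-frame x
    positions of a building that lags behind it and then settles."""
    pos = base_positions[0]   # the building starts at rest on the base
    vel = 0.0
    follower = []
    for target in base_positions:
        # the spring pulls the building toward where the base says it should be
        accel = stiffness * (target - pos)
        vel = (vel + accel) * damping   # damping bleeds off the overshoot
        pos += vel
        follower.append(pos)
    return follower


# The base lurches forward over ten frames; the building eases after it.
base = [0, 0, 0, 5, 10, 15, 20, 20, 20, 20]
print([round(p, 2) for p in damped_spring_follow(base)])
```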

Another key challenge was the menacing cyborg Shrike, played by Stephen Lang. Initially, the character was designed with more of a steampunk robotic face, without eyebrows or much control over its facial expressions, but after they started shooting, they realized that his face would need to emote a lot more to pull off his performance. And so they decided to put a full facial rig onto Shrike after they had actually shot the footage of Lang.

“His motion is definitely based off of Stephen Lang’s performance because the voice is all Stephen,” says Yoo. “[However], Christian really wanted to push the character, and he felt that keyframe would be the answer, and so, we did a lot of tests in the beginning.... We started referencing ourselves and we started performing ourselves as animators, and once we started finding this creature that we liked, we would push that in keyframe and just make him look super heavy and menacing in those type of performance beats.”

He added that while there is no motion capture in the character, they used Lang’s actual performance as reference as much as possible.

“Shrike was an evolving character,” Yoo sums up. “We had to find him in animation and [that was] a bit scary because you don’t actually know what you’re looking at. It wasn’t like a performance that was nailed down [in motion capture]… We had this process of constantly showing the director ideas and ways he might move, the way he might speak.”

When it came to creating London’s Medusa super laser, which has the ability to destroy an entire city, McGaugh says that director Christian Rivers, himself an accomplished visual effects supervisor, was very hands-on, working closely with VFX supervisor Kevin Smith.

“Fortunately, we had a very good post-visualization team at Weta. They were co-located on site in the production offices,” he explains. “And because Christian himself had one of the machines, he did some of it and was there every day directing the team.”

“Because the general effect was designed in postvis, when it came time for us to start refining it, it became very much a layered, iterative approach,” he recalls. “The first thing our effects TDs did was take the postvis animator’s scene, take all the key elements and kind of do them again in video game quality, but using the proper effects tools.”

“When they got all the layers in there, Christian was able to say, ‘well I don’t like that,’ ‘you can remove that,’ ‘you can kind of change it a little bit.’ But it was all done in a way where one very senior effects TD could manage the whole process in blocking out the effect until the blocking was generally bought off on and then all the different layers were split across multiple effects TDs to refine and to do the proper rendered versions.”

McGaugh added that managing the interdependencies between the effects, environment and animated elements so they all converged into the final version at the end was tricky and somewhat laborious. In the end, he says, what really helped sell the effect was the sound, conveying the magnitude of the explosion and the scale of what was being blown up.

For McGaugh, “the great thing about this job, both on set as well as in post, is every job is different and on set every day is different. The requirements for every setup are different so it really keeps you on your toes. There’s no [formula] to fall back on.”

Overall, McGaugh reports that he was “immensely thrilled with the teams’ work. We were under a particularly tight schedule, and I’m so amazed at how good the work looks in general, especially considering the time constraints we had. Morale was actually really high through the whole thing.”

“There are obviously things that I wish I could spend more time on, but that’s always going to be the case,” McGaugh concludes. “You’re so close to it. You really, honestly, think it’s good; you know it looks good, but you just don’t know how people are going to react to it.”

Scott Lehane is a Toronto-based journalist who has covered the film and TV industry for 30 years. He recently launched VRNation.tv -- an online community for VR enthusiasts.