Chad Nelson Talks ‘Critterz’ and the Future Role of AI

The writer and director of the first animated short whose imagery was entirely generated with OpenAI’s DALL-E takes a deep dive into the nuts and bolts of AI production and what this hard-charging, disruptive technology could mean for M&E.

As the mega-disruptor that is AI continues its gradual, but apparently inexorable, march into every corner of our lives, the M&E community, as much as any sector of the economic and cultural world, is grappling with the implications of the looming new paradigm. From the promise of ever-greater efficiency to the conundrums associated with the exploitation and protection of IP, the innumerable uncertainties and conflicting scenarios often get reduced to the more basic question of “WTF?” The current WGA and SAG-AFTRA strikes all too clearly illustrate the seriousness of these and other issues surrounding that inexorable march.

On the animation front, among those both pioneering the use of AI in content creation and spending a lot of time thinking about what it all means is Chad Nelson. Currently Chief Creative Director for Topgolf Media in San Francisco, Nelson is a serial entrepreneur and longtime game developer, whose involvement with online technology goes back to 1997, when he co-founded Eight Cylinders, Inc., a company dedicated to bringing broadcast-quality visuals to web content.

Most recently, with Nik Kleverov, co-founder and Executive Creative Director of creative agency and production company Native Foreign, Nelson produced a first-of-its-kind short, Critterz, whose imagery was entirely generated with OpenAI’s DALL-E, which creates realistic images and art from a description in natural language. By using DALL-E to create all the background settings and characters, Nelson was able to produce hundreds of visuals per day. Once the stills and characters were created, animators and designers were brought in to turn these 2D environments into a 3D world.
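For readers curious about the mechanics, the kind of batch concept-art workflow described here can be approximated with a short script against OpenAI’s image-generation API. The sketch below is illustrative only – the prompts, model choice, and output layout are assumptions for the example, not the actual Critterz setup – but it shows how a single writer-director could queue up character and environment explorations by the dozen.

```python
# Minimal sketch of batch concept generation with OpenAI's image API.
# Prompts, model name, and output paths are illustrative, not the actual
# Critterz prompts or settings.
import base64
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "a small furry forest critter with huge curious eyes, soft cinematic lighting",
    "a mossy woodland clearing at dawn, painterly, wide establishing shot",
]

out_dir = Path("concepts")
out_dir.mkdir(exist_ok=True)

for i, prompt in enumerate(prompts):
    result = client.images.generate(
        model="dall-e-2",            # assumed model; the short predates DALL-E 3
        prompt=prompt,
        n=4,                         # a few variations per prompt
        size="1024x1024",
        response_format="b64_json",  # return images inline rather than as URLs
    )
    for j, image in enumerate(result.data):
        png_bytes = base64.b64decode(image.b64_json)
        (out_dir / f"concept_{i:03d}_{j}.png").write_bytes(png_bytes)
```

Looping a few dozen prompts through a script along these lines is how “hundreds of visuals per day” becomes a plausible number for one person.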

At last month’s Annecy Festival, Nelson screened Critterz and talked about its production as part of a conference dedicated to exploring the ramifications of real-time filmmaking and AI. In a follow-up interview, he expanded on the creation of Critterz and offered further thoughts on the future of animation.

But first, take a few minutes to watch Critterz:

AWN: You have a long creative history in animation, production, design tools, and game development, among other areas. How did you come to make Critterz?

Chad Nelson: Critterz was a journey that unfolded over a few months – the key date being April 6, 2022, when OpenAI unveiled DALL-E to the world. In a way, I think of that as the BC to AD moment in this industry, because before that there were research papers and think-tank experiments, where people were generating art through large language models or generative AI tools. But it was all done in a theoretical, media lab kind of way. I was one of the lucky early users who were granted access.

AWN: Did you have a prior relationship with the company?

CN: I had no relationship prior. The one common thread in my career is I've always taken new technology and figured out how to apply it to entertainment, whether for interactive entertainment, for linear storytelling, for social media, etc. In 1995, I was consulting with the studios about how to build movie pages for AOL.

I contacted OpenAI and gave them a reason to say yes to me. What I was most fascinated with was large IP holders, the Disneys, the Marvels. If I had every Star Wars prop, and every character, and every location and set piece, what could I do with these tools as a storyteller utilizing AI? And that’s what made me different from a lot of other people who had been granted access, who were more like singular artists using it for sculpture or fashion design.

When I got it, I spent probably six to eight hours the first day just generating character after character. And the reason was, in the opening months, OpenAI did not want you to generate realistic humans, like celebrities. They were very concerned about deepfakes and people abusing it, and then all of a sudden that becoming a runaway freight train in the media. But I was very concerned with, could AI express emotion? My first test was a red, furry monster looking in wonder at a burning candle. The character itself was very primitive by today's standards, but I was immediately impressed with how this AI captured what could be described as wonder.

When I taught animation way back when, if you asked someone to do an assignment, you would have good, serious students who showed promise, along with many hobbyists who thought it was going to be fun and interesting, and then they eventually burnt out or fizzled out. I was amazed that, out of the gate, Gen-1 DALL-E was producing work that showed promise. And I thought, wow, if this is the ground floor of where this industry is going, we're starting at a pretty fascinating place. And once the technology caught up a little further, that's when I pitched to OpenAI, let's make a movie, let's tell a story. Let's do something that would show not only a real case study of what this AI is capable of, but also hopefully serve as a point of inspiration for creators. It knocked down all the barriers of time or money or resources – you’d been given a major turbo boost in your ability to tell stories.

I mean, AI isn’t going to generate a Pixar film yet, but what it can do, even in its current form, is give you a visual foundation from which you can build a story.

AWN: Let’s talk about how you actually made this. You had your script. Then you created your environments, your backgrounds, your characters, all in DALL-E. And then you brought it into…

CN: After Effects, for the background animations. And Unreal Engine for the facial performance capture – that, ultimately, is what drove it. My son wrote that system for me. I mean, it was based on a lot of shareware that you can get, but he put it together in a way that made it really production-ready. And then we edited in Premiere. So in a way, it was very traditional production.

In this first production, every stage was led by traditional techniques. And then we used the AI specifically for the conceptuals, and specifically for the design of the characters. What I think is fascinating is that I'm already looking at ways that I can integrate the AI to increase the production value. Because in a world where time is finite and budgets are finite, ultimately, it's like, how much can you get on screen? If AI can help me, then I'm exploring it.
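Nelson doesn’t detail his son’s capture system, but the general pattern behind this kind of pipeline – exporting per-frame facial blendshape weights from a capture app and remapping them onto a character rig before they drive the performance in Unreal Engine – can be sketched. The file name, blendshape names, and remap values below are hypothetical stand-ins, not details from the Critterz production.

```python
# Illustrative only: remap per-frame facial blendshape weights (e.g. an
# ARKit-style CSV export from a face-capture app) onto a character's morph
# targets. File name, column names, and remap table are hypothetical.
import csv

# capture-side blendshape -> (character morph target, max weight).
# Clamping and scaling is a common way to keep a stylized character on-model.
REMAP = {
    "jawOpen":       ("mouth_open", 0.8),
    "eyeBlinkLeft":  ("blink_L",    1.0),
    "eyeBlinkRight": ("blink_R",    1.0),
    "browInnerUp":   ("brow_raise", 0.6),
}

def remap_frame(row):
    """Convert one captured frame into character morph-target weights."""
    out = {}
    for src, (dst, max_w) in REMAP.items():
        raw = float(row.get(src, 0.0))            # capture values arrive as 0..1
        out[dst] = max(0.0, min(raw, 1.0)) * max_w
    return out

with open("face_capture.csv", newline="") as f:   # hypothetical export
    frames = [remap_frame(row) for row in csv.DictReader(f)]

print(f"remapped {len(frames)} frames")
```

In a real setup, the remapped weights would then be streamed or keyframed onto the character inside the engine, while the AI-generated stills remain the visual source for the characters and sets themselves.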

AWN: All told, how long did the production take?

CN: I would say it was about four weeks’ worth of work, in terms of my time specifically. And obviously there was my son's time to program the production path. He did that for about two weeks. The two animators we brought on to help with all the After Effects work were on it for about a week and a half. Voiceover was all done in a day. I think the first animatic, which I did all the voices for, was done in roughly three days, with stills. No animation.

AWN: So now OpenAI is looking to you to help them better understand where their technologies could really be used in the media and entertainment world. The last thing the big studios would want is for any of their IP to be part of a public data set in a library that any type of AI tool could use. The flip side is they want their folks to have access to an internal IP base. How do you see that happening, with what you know of the tools as they stand now?

CN: I think inasmuch as we're in the Wild West phase of this development, there's some very sloppy land grabbing that's been happening. Some of the tools scraped the entire web – Getty Images, ArtStation. That definitely raises some ethics questions. And then you have Adobe saying, "No, no, we're only using licensed images that we own. There's no data from the public web." And so in a way, it's the most ethical set to date. They also want to create a compensation model for those who choose to allow their artwork to be used in training.

So I think what we're going to find is that there's going to be two different versions. There'll be the public generic version that anyone can use, based on mostly public domain imagery. So if you wanted to type Mickey Mouse, it might not result in anything, or it will result in a pretty generic-looking mouse. I think that will be, say, 90% of the use for the public.

And then there are the corporations that will want closed sets, and they're going to want to train these systems on only their IP. So if I was the Marvel Universe, or I was Toyota Motor Corporation, I could take my entire inventory of photos and videos and drawings and sketches and diagrams and incorporate it into a database that only my employees could access. But it would be a very powerful archive, like a Marvel Universe Wikipedia. But it could also create. And I think that's where it's going to be fascinating.

What I also think is interesting is the breadth of what you can now achieve with IP beyond just the film – interactive toys, video games, and other experiences. Even being able to give users or audiences the tools to make content themselves with your IP. It's almost like the sandbox continues into the land of the audience.

AWN: So what's next for you?

CN: Specifically, with Critterz, we've made the investment in a production line that now is turnkey. I can write more scripts. I could have social content, I could have a whole Instagram channel. The only thing that's preventing me from doing so right now is the voice talent and figuring out the rights usage. Once I have that, I have eight episodes ready to go, and then a whole host of social content that I'd like to showcase and release.

As I mentioned before, part of my goal is to inspire and educate with regard to what these tools can do. So I really want to meet individuals, companies, storytellers, and show them what I've learned, hear what their concerns are, and then bring that back to companies like OpenAI and say, "Hey, this is what they're concerned about, and here's what they would love to do with the tech. And if you modified it, we could probably deliver on a whole host of features we haven't even thought of yet." So, for me, it’s continuing to advance the technology to benefit the creatives out there, to allow them to do their jobs more efficiently.

AWN: And of course there are a lot of ethical and practical issues surrounding AI, which is a whole other can of worms.

CN: It's interesting, because we've been dealing with this issue for years now in terms of, for example, conceptual art. If you hire an artist to design a new spaceship or a new prop or a new character, the inspiration came from somewhere. And so there's always that risk of, "Wait, did you go online and did you find something?" And are we going to get sued three years from now, because someone can pull up an image that looks similar? Now, what’s striking is that AI tools are just doing it at a pace that's so much faster. Hopefully, the new paradigms will help to resolve some of these gray-area issues. Time will tell.

Jon Hofferman is a freelance writer and editor based in Los Angeles. He is also the creator of the Classical Composers Poster, an educational and decorative music timeline chart that makes a wonderful gift.