Nicolas Neubert did not set out to make international news, but that’s exactly what happened after he sat down at his home desktop computer in Cologne, Germany, near the end of June 2023 and began playing around with Gen2, a new generative AI video creation tool from well-funded New York City startup RunwayML.
The 29-year-old senior product designer at Elli, a subsidiary of automaker giant Volkswagen Group focused on electrification and charging experiences, had already been using his free time to generate sci-fi-inspired images with Midjourney, a separate, popular text-to-image AI tool. When Neubert caught wind of Runway’s Gen2, which lets users upload a limited number of images and automatically converts them into short, four-second animations with realistic depth and movement, he decided to turn some of his Midjourney imagery into a concept film trailer.
He posted the result, “Genesis,” on his account on the social network X (formerly Twitter). The thrilling, cinematic, 45-second video sketches out a variation on the age-old sci-fi theme of man vs. machine, this time with humanoid robots that have taken over the world and a human rebellion fighting back against them, reminiscent of the Terminator franchise or the upcoming major motion picture The Creator. Neubert didn’t expect much in the way of a response, maybe some attention from the highly active AI art community on the platform. Instead, the trailer quickly went viral, racking up 1.5 million views as of this article’s publication just a week and a half later, and earning him coverage on CNN and in Forbes.
Neubert recently joined VentureBeat for an interview about his process for creating the trailer, his inspirations, his thoughts on the current debate in Hollywood and the arts over the use of AI, and what he has planned next. The following transcript of our question-and-answer (Q&A) session has been edited for length and clarity.
Neubert: I think the feedback has been overwhelmingly positive. It was definitely not meant to blow up like this.
I’m a generally curious person who likes to try out tools. When Runway announced they had an image-to-video tool, Gen2, of course I thought, ‘let’s try it out.’
I had these pictures lying around from previous Midjourney explorations, which gave me a good base to get started. I told myself: ‘Why not? Let’s try to do a 60-second movie trailer and share it. What’s the worst that can happen?’
I guess people liked that a lot. I think it’s a great tech demo to see where we’re heading. And I think it definitely opened some discussions as to where AI can already be utilized, to some extent, from a professional standpoint.
VentureBeat: Let me back up a little bit and ask you about your job. You’re at a Volkswagen subsidiary, is that right?
Neubert: Exactly. I’ve always had a full-time job, but I’ve enjoyed side ventures as well. Prior to this year, I always freelanced on the side, working with startups and helping them scale. Then at the beginning of this year, I kind of replaced that side hustle with getting invested in AI. Product design is my main job — I’ve been doing it for eight years — and the artistic, creative part has always been a hobby.
Since I was a child, I always liked sketching, art, music, all of it. So when Midjourney came out in public beta [July 2022], it was kind of like a dream come true, right? You could suddenly visualize your thoughts and your creativity like never before. That’s when I built my Twitter [X] platform around it, and I started growing that and kind of always looked at how to combine different tools.
VentureBeat: In your role as a product designer over the past year at Volkswagen, and even prior to that, what tools were you using?
Neubert: I explore every tool on the market, but I think you can really boil the toolset of a product designer down. I would say 95% of all creation comes from Figma. We spend our days creating screens, creating prototypes, designing pretty user interfaces and all of that. Of course, if you’re working with advanced animations, or you need certain graphics, you might go out into a different tool. But 95% also means most of the job currently doesn’t involve a lot of AI. I would say that Midjourney is entering the ring as an increasingly attractive option for brainstorming, ideation, or illustration, but I would still label that as playing around.
VentureBeat: What was the time frame and process for making the Genesis trailer? Did you make all the images beforehand, not knowing about Gen2, or did you make some specifically for the trailer?
Neubert: The week prior to having the idea for the trailer, I posted three photo series on Twitter [X]. Those photo series were, so to speak, already in that world. I already had those themes of robots versus humans in a dystopian world, and I already had a prompt that went very much in that direction. So when I decided to do the trailer, I realized I already had prompts and a great foundation, which I then quickly tweaked. Sitting down at my computer, it took seven hours from beginning to end.
VentureBeat: All in one time frame? Or did you have to take a break for your day job and go back to it? What was the kind of burst of work that you were able to do?
Neubert: I’m a night owl, so I did the first five hours at night. At some point the responsibility factor kicked in and I had to cut it off for the day job. But I would say I finished everything at night except for the last edits; it was just one or two scenes that were missing, everything else was finalized. Then the next day, after work, I quickly made those scenes, polished it all up and posted it. So I would say it was like a five-hour and a two-hour session.
VentureBeat: And you primarily used Midjourney to create still images and then animated them in Runway? Or did you use any other tools, such as CapCut, or something else for the music?
Neubert: To go back a step, one of the goals of not only this trailer, but of what I do with Midjourney, is to show the accessibility of it — of all the tools I use. AI is a fascinating technology. For people who are not that confident in their creativity, it’s finely tuned to help them actually get to a result. Maybe they can’t draw, but they can visualize something, and then they can take their ideas further with these tools. This is a very important point for me personally.
So with this trailer, I wanted to demonstrate making the entry barrier as low as possible. I wanted to show people they only need a couple of tools, and beyond that, all you need is your imagination. So we have Midjourney and Runway, those are the two paid applications. And then, to keep everything else low barrier, for the music I went to Pixabay and took something out of their pool of royalty-free soundtracks.
For the editing, I used CapCut because it’s free, and I did not have Adobe Premiere installed on the machine I was working on. It was surprisingly good, and I was impressed by how much you can do in its graphics editor. It all just kind of came together perfectly.
VentureBeat: How long do you think it would have taken you if you had not had artificial intelligence? Would it have even been possible for you to create the Genesis trailer if you had to edit and animate it manually?
Neubert: Without AI, would I have had the skills to do it today? No. Is it possible for someone else? Yes, of course. But it would take much more effort, right? You would probably approach it differently, because right now with AI, we work with a couple of restrictions.
We’re working with images and animating those images. If I were to approach this from a non-AI standpoint, I would certainly consider using game engines to get the 3D stuff, or tools like Blender and Cinema 4D, and building it completely differently from the ground up.
That method results in higher quality and gives you more control, but it also takes considerably longer. And if I may add, a lot of those tools can also get very expensive with their licensing.
So, I think this is a perfect example of opening up this field of creating original videos with a very low entry barrier.