Pressman Film CEO Sam Pressman is diving into artificial intelligence (AI) to explore the technology’s use in storytelling.
Pressman’s (Daliland) short film In Search Of Time, co-created by Pierre Zandrowicz and Mathew Tierney, premiered at the Tribeca Festival on June 8 and is the first AI-generated film to play at a major film festival.
It combines imagery from an iPhone with open source AI platform Stable Diffusion to create a meditation on memory and loss in honour of Pressman’s father Ed Pressman (American Psycho, The Crow, Wall Street), the pioneering independent producer who died in January aged 79.
Pressman and Tierney are also behind immersive experience Human After All: The Surreal Matrix Of AI, Art, And The Motion Picture, a symposium about the intersection of AI, art and cinema which can be seen at The Canvas 3.0 at Oculus NYC through June 12. It presents conversations, events and installations including a roundtable discussion on the potential of AI in cinema involving Pressman, academics and other filmmakers, and a talk about AI, law and intellectual property.
Pressman Film also produced the feature thriller Catching Dust, starring Erin Moriarty and Jai Courtney, which premieres at Tribeca Festival on June 11. The company is in post-production on the reimagining of The Crow directed by Rupert Sanders and starring Bill Skarsgard and FKA Twigs.
What’s been the appeal of AI to you?
Sam Pressman: More than anything it’s wanting to understand. To me, it would be a great loss to reject this emerging field. As an artist, it has really inspired me. The more we talked about AI and its actual technical capabilities, the more I was impressed by the way it revolutionized the single creator, and the more we wanted to experiment. Matt and Pierre created this film, which is the first AI-based film to be accepted into a major film festival. The “why” [of exploring AI] was to see what positive art could be made, and to embrace the fact that it’s still an artist working with the machine. This is terrifying. But the only way to really understand what it is is to play with it.
How did In Search Of Time come about?
Tierney: Sam introduced me to Pierre [Zandrowicz], who was a founder at Atlas V, the French VR, XR and AR company. Atlas V has two projects at Tribeca Immersive, and Pierre has been working in the immersive field for a very long time. Sam and I had many long conversations about the possibilities of AI and cinema over the past year and a quarter. We wanted to avoid the sci-fi tropes people have used with AI. We wanted to tell the most human story we could, so we decided upon memory, and then we made it a story about childhood memory and how we have to grapple with ageing and time disappearing from our lives.
Pressman: My father was in the hospital at the time we started working on it. It was originally cut from a child’s perspective, and we decided to keep it that way. It’s six minutes long, but it’s like a poem you can just melt into. In many ways, In Search Of Time opens another possibility: there’s great potential to do a series of projects, because shared memories are universal, especially when they are captured through great cinema. This technology is a utility that, in the hands of someone’s imagination, can unlock a much more democratic, open production.
Tell us about the imagery.
Pressman:
We use the metaphor at the beginning of the film of a tree stump and how the intricate layers of our memory are like the rings of a tree. We used iPhone footage of a child playing on a stump, and Stable Diffusion with text prompting. The tree stump is transformed into a painted forest. The hole in the stump becomes a beautiful waterfall. Characters start to populate it. You see squirrels running about. As soon as you realize that this painting is unfolding, it’s gone. It’s the little ideas that trigger memories. I don’t think that would have been possible six months ago.
What camera did you use?
Tierney:
We used an iPhone. We wanted to show that a kid with a cameraphone and a computer in Oklahoma could do something similar. That was our goal: to prove that you can make cinema if you have an idea and a few tools.
Did you shoot original footage?
Pressman: We didn’t shoot [content] for the film; it was digital memories. We all have a canvas that has receded into the netherworld of clouds. So we wanted to see if we could reanimate it and make it art, and give it this spirit. A computer and a photograph can bring back the memories of the time.
How does the AI diffusion model work?
Tierney: The tool we used most is called Stable Diffusion, and the beauty is that it’s open source. It has created a community of people who share everything they know and everything they are building. You can use different models that other users and creators built. You can go to the community and say you need this or that tool.
Pressman: It takes each frame and processes it however you want to augment the image. This can often result in a very disjointed frame-to-frame effect, which is surreal by nature but does not feel very coherent. Frames A, B and C are radically different because light fell on the subject in a totally different way than in the frame before, and the machine-learned model doesn’t know how to make them consistent, so you reprocess it and, as with a sculpture, you continue to refine it. And the fascinating thing is how far that’s come in six months, because the first version we made felt beautiful and surreal, although it felt like it had the hallucinations of the machine.
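The per-frame loop Pressman describes can be sketched in a few lines of Python. The `stylize_frame` function below is only a stand-in for a real Stable Diffusion img2img call (for example through the open source `diffusers` library); it hashes its inputs so the sketch runs anywhere without a GPU, and all names and parameters here are illustrative assumptions, not the production pipeline the film actually used.

```python
import hashlib

def stylize_frame(frame: bytes, prompt: str, seed: int) -> bytes:
    # Placeholder for a Stable Diffusion img2img call; hashing the inputs
    # keeps the sketch runnable and deterministic without a model download.
    return hashlib.sha256(frame + prompt.encode() + str(seed).encode()).digest()

def stylize_clip(frames, prompt, seed=42):
    # Re-using the same prompt and seed for every frame is one simple way to
    # reduce the disjointed frame-to-frame effect Pressman mentions: each
    # frame is still sampled independently, so the model has no built-in
    # notion of temporal consistency, and a second refining pass is needed.
    return [stylize_frame(frame, prompt, seed) for frame in frames]

frames = [b"frame-a", b"frame-b", b"frame-c"]
styled = stylize_clip(frames, "a painted forest, oil on canvas")
print(len(styled))  # one stylized image per input frame
```

The point is the structure, not the placeholder: one independent diffusion pass per frame, then iterative reprocessing of the output, which matches the “refine it like a sculpture” workflow described above.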
What was the idea behind Human After All: The Surreal Matrix Of AI, Art, And The Motion Picture at Oculus NYC?
Tierney: It was based on all the conversations Sam and I were having. People from these different fields sometimes talk on Twitter and their conversations overlap, but they are rarely in the same room. Sam and I agreed that the best thing we’ve ever done together was to go out and talk to people from different fields and learn. We thought we’d invite people we knew and see who would come, and put creators and technologists in the same room. I think we’re all just going to learn something from it, and a lot of things will get built on these connections that exist in person.
What are your next plans with AI?
Pressman: There are a couple of filmmakers we’re developing projects with who have used
DALL-E and Midjourney to create storyboards and pre-visualisations. I think it’s a useful tool that is already showing its value. People who work in VFX or post-production are already reliant on a lot of AI that is built into editing and other post-production applications, such as motion tracking.
There’s a lot at stake when it comes to how AI will impact Hollywood and creators, and it’s a key part of the Hollywood guild contract negotiations. What do you see as the uses of AI?
Pressman: Where things are undefined is the question of how actors will be appropriated. It’s a dangerous space, and SAG-AFTRA is concerned about it because the likeness of an actor can be reproduced so easily. This raises questions about ownership of one’s own person. So the question is how filmmakers use these various tools [AI programmes and deep learning models] while respecting both the artists and the subjects in the films as humans.
Tierney: Our main intention was to say, instead of relying on these tools to save us some time in the writing or to make things for us, let’s do the opposite: write everything ourselves, direct everything ourselves, do the sound design, do the score, and then just take one of these tools.
Let’s just take the base thing that everyone has living on their phone, and use the tool to anonymise it and make it a universal story. Sam had an idea about Coachella. People have millions and millions of memories on their phones, but they live on drives and are lost. This is an example of how you could take footage from Instagram and use Stable Diffusion to create beautiful new memories. You could also make art with all the stuff that ends up on the cutting-room floor. We all capture memories throughout our lives, and now we have the chance to explore and expand on them.