- Production team Shy Kids sheds light on the realities of working with OpenAI’s Sora video generation tool.
- Precise control over elements remains elusive, requiring workarounds.
- Copyright protection mechanisms prevent mimicking popular styles.
While OpenAI’s video generation tool Sora captivated the tech world with its fluid, lifelike footage, the company’s carefully curated debut left many questions unanswered.
Now, Toronto-based digital production team Shy Kids, handpicked by OpenAI to create a promotional short using Sora, has pulled back the curtain on the tool’s capabilities and limitations.
A delicate dance of artistic vision
According to Shy Kids’ post-production artist Patrick Cederberg, precise control over elements like wardrobe, props, and character movements remains elusive with Sora.
Elaborate prompting and post-production work were necessary to maintain consistency and remove unwanted elements generated by the AI model.
Cederberg revealed that translating traditional filmmaking techniques, from timing a gesture to executing a camera move, into Sora’s prompts often yielded unexpected results. The team resorted to workarounds, such as rendering portrait shots for later cropping, to achieve their desired compositions.
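That cropping workaround is itself routine post-production work. As a rough illustration only (not Shy Kids’ actual pipeline), a portrait-oriented Sora render could be center-cropped to a 16:9 window and rescaled with ffmpeg; the filenames and dimensions below are hypothetical placeholders.

```python
import subprocess

# Hypothetical sketch: crop a 1080x1920 portrait render to a centered
# 16:9 window (1080x608) and scale it to 1920x1080 for the edit timeline.
# Filenames and exact dimensions are placeholders, not Shy Kids' pipeline.
subprocess.run([
    "ffmpeg",
    "-i", "sora_portrait_render.mp4",              # input clip (placeholder name)
    "-vf", "crop=1080:608:0:656,scale=1920:1080",  # crop=w:h:x:y, then upscale
    "-an",                                         # Sora clips are silent; skip audio
    "widescreen_reframe.mp4",
], check=True)
```

Rendering taller than needed and reframing in post gives back some of the compositional control the prompt alone could not guarantee.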
https://www.youtube.com/watch?v=9oryIMNVtto
Hundreds of 10- to 20-second generations were required to produce just a handful of usable clips for the final cut.
Copyright conundrums in the AI realm
Interestingly, Sora refused to generate content resembling popular franchises or filmmaking styles, suggesting a built-in mechanism to recognize and prevent potential copyright infringement.
This revelation raises questions about the training data used by OpenAI, a closely guarded secret the company is unlikely to divulge.