Adobe’s AI video model is here, and it’s already inside Premiere Pro

Adobe is taking the plunge into generative AI video. The company's Firefly video model, teased since earlier this year, launches today across a handful of new tools, including some built directly into Premiere Pro, that allow creators to extend footage and generate video from still images and text prompts.

The first tool – Generative Extend – is launching in beta for Premiere Pro. It can be used to extend the end or beginning of footage that's slightly too short, or to make adjustments mid-shot, such as correcting shifting eye lines or unexpected movement.

Clips can only be extended by two seconds, so Generative Extend is really only suitable for small tweaks, but that could replace the need to retake footage to fix minor problems. Extended clips can be generated at either 720p or 1080p at 24 fps. The tool can also be used on audio to help smooth out edits, albeit with limitations. It will extend sound effects and ambient “room tone” by up to ten seconds, for example, but not spoken dialogue or music.

The new Generative Extend tool in Premiere Pro can fill gaps in footage that would otherwise require a full reshoot, such as adding a few extra steps to this person walking next to a car.
Image: Adobe

Two more video generation tools are launching on the web. Adobe's Text-to-Video and Image-to-Video tools, first announced in September, are now rolling out as a limited public beta in the Firefly web app.

Text-to-Video works similarly to other video generators like Runway and OpenAI's Sora – users just need to enter a text description of what they want to generate. It can emulate a variety of styles, such as regular “real” film, 3D animation, and stop motion, and the generated clips can be further refined using a selection of “camera controls” that simulate things like camera angles, motion, and shot distance.

Here's a look at some of the camera control options for adjusting the generated output.
Image: Adobe

Image-to-Video goes a step further, allowing users to add a reference image alongside a text prompt for greater control over the results. Adobe suggests using it to make b-roll from images and photographs, or to help visualize reshoots by uploading a still from an existing video. The before-and-after example below shows it isn't really capable of replacing reshoots directly, however, as several errors, like wobbling cables and shifting backgrounds, are visible in the results.

Here is the original clip…
Video: Adobe

…and this is what happens when Image-to-Video “remakes” the footage. Notice the yellow cable wobbling for no reason?
Video: Adobe

You won't be shooting entire films with this technology any time soon, either. The maximum length of Text-to-Video and Image-to-Video clips is currently five seconds, and the quality tops out at 720p and 24 frames per second. By comparison, OpenAI says Sora can generate videos up to a minute long “while maintaining visual quality and adherence to the user's prompt” – but that isn't available to the public yet, despite being announced months before Adobe's tools.

The model is limited to producing clips that are around four seconds long, like this example of an AI-generated baby dragon crawling around in magma.
Video: Adobe

Text-to-Video, Image-to-Video, and Generative Extend each take about 90 seconds to generate, but Adobe says it's working on a “turbo mode” to cut that down. And as limited as it may be, Adobe says the tools powered by its AI video model are “commercially safe” because the model is trained on content the creative software giant had permission to use. Given that models from other providers like Runway are under scrutiny for allegedly being trained on thousands of scraped YouTube videos – or, in Meta's case, possibly even your personal videos – commercial viability could be a deciding factor for some users.

Another benefit is that videos created or edited using Adobe's Firefly video model can be embedded with Content Credentials to disclose AI usage and ownership rights when they're published online. It's not clear when these tools will be out of beta, but at least they're publicly available – more than we can say for OpenAI's Sora, Meta's Movie Gen, and Google's Veo.

The AI video launch was announced today at Adobe's MAX conference, where the company is also unveiling a number of other AI-powered features across its creative apps.