We've made significant performance improvements across the platform to deliver a faster and smoother editing experience:
The mobile preview now loads faster and runs more smoothly across all devices:
We've enhanced the export pipeline to make video exports more reliable and consistent. Exports now complete more successfully, with better error handling and recovery for edge cases.
We've fixed a bug in our AI avatar generation that was blocking some users from seeing their avatar in the editor. We also made it easier to decide whether an avatar should be generated for a video by adding a "Narration mode" option on the "Overview" page.
VideoGen 3.0 transforms our platform into a full-featured video editor powered by AI. This release introduces a redesigned three-stage creation flow (Overview, Outline, Editor), a brand-new interactive canvas, and an enhanced timeline editor. We've rebuilt our rendering pipeline for perfect preview-to-export accuracy, added a background task queue for reliable long-running operations, and expanded our stock library with over 12 million new assets. Together, these updates create a more visual, intuitive, and powerful editing experience.
We introduced a redesigned video creation flow built around three stages — Overview, Outline, and Editor — to make project setup and AI collaboration more structured and predictable.
In the Overview page, you can upload images, videos, and audio files that you want the AI agent to use when generating your video. These assets serve as context for the AI — they can appear directly as visuals, help guide topic understanding, or be referenced when building the script and outline.
You can also specify your media sources, such as Free Stock, Wikimedia, iStock, AI Images, or Music. The AI agent will draw from these sources during generation, combining your uploaded assets with external visuals and audio to produce the most relevant media for each scene.
You also have more fine-grained controls, allowing you to define your aspect ratio, duration range, and language.
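To make the brief concrete, here is a rough sketch of the settings it bundles together. Every type and field name below is illustrative, not VideoGen's actual API:

```ts
// Hypothetical shape of a generation brief; names are illustrative only.
type MediaSource = "free-stock" | "wikimedia" | "istock" | "ai-images" | "music";

interface GenerationBrief {
  uploadedAssets: string[];     // your own images, videos, and audio files
  mediaSources: MediaSource[];  // libraries the AI agent may draw from
  aspectRatio: "16:9" | "9:16" | "1:1";
  durationRange: { minSeconds: number; maxSeconds: number };
  language: string;             // e.g. "en"
}

const brief: GenerationBrief = {
  uploadedAssets: ["product-demo.mp4", "logo.png"],
  mediaSources: ["free-stock", "wikimedia", "music"],
  aspectRatio: "16:9",
  durationRange: { minSeconds: 60, maxSeconds: 120 },
  language: "en",
};
```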
After submitting your brief, the AI agent creates a structured outline that breaks your video into sections.
Each section is assigned a section type based on its audio handling:
You can review and edit these sections before moving into the editor.
Within each section, you can set featured media, which takes priority over AI-selected b-roll. Featured media ensures specific visuals (like brand clips, demo videos, or uploaded footage) always appear in the final render for that section.
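As a rough model of that priority rule (the names here are hypothetical, not our actual code), featured media fills a section's visual slots first and AI-selected b-roll only covers what's left:

```ts
// Hypothetical sketch: featured media takes priority over AI b-roll.
interface Section {
  featuredMedia: string[]; // visuals pinned by the user
  aiBroll: string[];       // candidates picked by the AI agent
}

// Featured media fills the section's visual slots first; AI-selected
// b-roll only covers whatever slots remain.
function selectVisuals(section: Section, slots: number): string[] {
  const pinned = section.featuredMedia.slice(0, slots);
  return pinned.concat(section.aiBroll.slice(0, slots - pinned.length));
}

// Two pinned clips and three slots: both pinned clips always render,
// and one AI pick fills the remainder.
console.log(selectVisuals(
  { featuredMedia: ["demo.mp4", "brand.mp4"], aiBroll: ["stock1.mp4", "stock2.mp4"] },
  3,
)); // ["demo.mp4", "brand.mp4", "stock1.mp4"]
```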
This new three-stage workflow creates a clearer separation between planning, structure, and editing — while giving the AI a stronger context for generating accurate visuals and narration.
We introduced a new layout system that gives users more control over how text and visuals are arranged within each section.
Layouts determine the visual structure of a scene — how the title, subtitle, and media appear on screen — making it easier to match the presentation style to the content type.
The following layouts are now available in the editor:
We've added a brand-new interactive canvas that allows direct manipulation of elements in your video:
These controls are powered by our unified rendering engine, meaning you can see exact, real-time changes to your final composition as you work.
This brings a much more visual and intuitive editing experience — you can now fine-tune positioning, scaling, and animations directly on the canvas without manually entering numbers.
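Conceptually, direct manipulation boils down to drag events writing straight into the composition state that the renderer reads. A minimal sketch, with illustrative names only:

```ts
// Illustrative sketch of direct manipulation: a drag handler writes the
// new position into the composition state, and the renderer redraws
// from that state on the next frame.
interface Transform { x: number; y: number; scale: number }

const transforms = new Map<string, Transform>();
transforms.set("title-1", { x: 100, y: 50, scale: 1 });

function onDrag(elementId: string, dx: number, dy: number): void {
  const t = transforms.get(elementId);
  if (!t) return;
  t.x += dx; // the renderer reads these values directly,
  t.y += dy; // so the preview reflects the drag in real time
}

onDrag("title-1", 24, -8); // the element lands exactly where you drop it
```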
We've redesigned the timeline editor to give you more precise control over your video's timing and structure:
The timeline syncs in real-time with the canvas preview, so every change you make is immediately reflected in your composition. You can scrub through the timeline to preview specific moments, making it easy to fine-tune transitions and timing across your entire video.
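One way to picture the sync is as a publish/subscribe loop: the timeline publishes each edit, and the preview re-renders the affected range. This sketch is a conceptual model, not our implementation:

```ts
// Conceptual model of the timeline-to-preview sync.
interface TimelineEdit { clipId: string; startMs: number; endMs: number }

const subscribers: Array<(edit: TimelineEdit) => void> = [];

function subscribe(fn: (edit: TimelineEdit) => void): void {
  subscribers.push(fn);
}

function applyEdit(edit: TimelineEdit): void {
  // ...update the timeline model, then notify every subscriber
  subscribers.forEach((notify) => notify(edit));
}

// The canvas preview subscribes once and refreshes on every edit.
subscribe((edit) =>
  console.log(`refresh preview: ${edit.clipId} now ${edit.startMs}-${edit.endMs}ms`));
applyEdit({ clipId: "broll-3", startMs: 12000, endMs: 15500 });
```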
We've overhauled our video rendering pipeline so that both preview and final export now run through the same underlying renderer. Previously, previews and exports used slightly different rendering code paths, which could occasionally lead to inconsistencies between what you saw while editing and the final output.
By consolidating them into a unified pipeline:
- What you see is what you get: the export will now perfectly match your preview.
- Rendering bugs are easier to track and fix, because there's only one rendering path to maintain.
- We can introduce advanced editing features more quickly, since any improvement to the renderer applies to both preview and export automatically.
This foundation makes video editing more reliable today and faster to evolve in the future.
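The core idea is a single render function with two consumers. A simplified sketch (the types and names are placeholders, not the engine's real API):

```ts
// Placeholder types; the real engine's API is not shown here.
type Composition = unknown;
interface Frame { index: number; pixels: Uint8Array }

// The single source of truth: both preview and export call this.
function renderFrame(composition: Composition, index: number): Frame {
  // ...rasterize every layer of the composition at frame `index`
  return { index, pixels: new Uint8Array(0) };
}

// Preview shows one frame on the canvas...
function previewFrame(composition: Composition, index: number): Frame {
  return renderFrame(composition, index);
}

// ...while export walks every frame through the exact same function,
// so the output pixels cannot diverge from what you previewed.
function exportVideo(composition: Composition, frameCount: number): Frame[] {
  return Array.from({ length: frameCount }, (_, i) => renderFrame(composition, i));
}
```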
We implemented a new background task queue so that long-running tasks complete reliably, even if you close the tab before they finish. The following actions will always be executed as background tasks:
With minimal latency, automatic retries, and multiple fallbacks, this new system was built from the ground up to make the video generation experience as seamless as possible for our users.
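As a rough illustration of the retry behavior (the policy and names below are assumptions, not our production code), each task retries with exponential backoff on the server, so a closed tab never interrupts it:

```ts
// Assumed retry policy for illustration; not our production code.
interface Task { id: string; run: () => Promise<void> }

async function runWithRetries(task: Task, maxAttempts = 3): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await task.run();
      return; // success: the task finished on the server, tab or no tab
    } catch (err) {
      if (attempt === maxAttempts) throw err; // fallbacks exhausted
      // exponential backoff: wait 2s, 4s, ... before retrying
      await new Promise<void>((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
    }
  }
}
```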
We’ve expanded the built-in stock media library with over 12 million new assets, including integrations with Pexels Images and Wikimedia Commons. This update provides broader visual coverage across topics, giving the AI agent access to both high-quality stock footage and educational media such as diagrams and public figures.
We overhauled our UX for dealing with failed subscription payments across the entire app. Now, when you attempt to use any paid feature while your subscription is inactive, a modal appears with clear instructions on how to reactivate your subscription. From here, you can view the incomplete invoice, manage your subscription, or contact our customer support team (with the relevant details of your account automatically included in the conversation). There is also a clear warning that your subscription is inactive on the main dashboard with a button to open this modal.
You can now share a copy of your project with your teammates. Click "Share" in the top-right corner of the project editor, click "Share a copy", and then enter a comma-separated list of emails you'd like to share the project with. Each recipient will receive a full copy of your project in their inbox, allowing them to edit, generate, and export the video from their own account. Recipients who are not already part of your team will be added to your team upon acceptance of the invitation.
We introduced a new "Generate video clip" tool that fully synthesizes an 8-second video based on a prompt, powered by Google's state-of-the-art Veo 3 model. It may take a few minutes to generate, and results are best for well-structured prompts with specific subjects, actions, and settings. We are currently offering this tool exclusively to Business subscribers.
We converted all personal workspaces to single-member teams, making it easier than ever to create videos alongside your teammates. To invite your teammates, simply click "Invite teammates" in the top-right corner of the dashboard and enter their emails. To see a list of all of your team members and modify their permissions, visit the Teams page.
Media tools are a set of flows for creating and generating assets in the project editor. You can access these tools in the right side panel by clicking on an asset in the timeline. For a blank asset, the list of available tools appears directly in the side panel. For a populated non-transcript asset, click "Replace" to replace the asset with the output of a media tool.
The following tools are currently available:
Many more generative AI tools are coming soon!
All videos are now generated with a background music track to complement the content of your video. To power this system, we built an AI music agent that intelligently analyzes your video outline and automatically selects the perfect track from our music library. We also enhanced our music library with many more tracks to cover a wide range of different genres, moods, and tempos.
We reimplemented our timeline and preview to load only what's necessary for the visible portion of your video, allowing for optimized playback of long videos in the project editor. Previously, videos over 10 minutes long could lag in the editor.
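The underlying idea is windowed loading: only clips that overlap the visible range get materialized. A minimal sketch, with illustrative names:

```ts
// Illustrative sketch of windowed loading: only clips overlapping the
// visible range of the timeline are materialized.
interface Clip { id: string; startMs: number; endMs: number }

function visibleClips(clips: Clip[], viewStartMs: number, viewEndMs: number): Clip[] {
  return clips.filter((c) => c.endMs > viewStartMs && c.startMs < viewEndMs);
}

const clips: Clip[] = [
  { id: "intro", startMs: 0, endMs: 8000 },
  { id: "broll-1", startMs: 8000, endMs: 20000 },
  { id: "outro", startMs: 600000, endMs: 612000 }, // past minute 10: untouched
];

// Only the clips in view are loaded; the outro stays cold until scrolled to.
console.log(visibleClips(clips, 0, 15000).map((c) => c.id)); // ["intro", "broll-1"]
```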
When you include your own media assets in the video generation form, VideoGen places each asset where it is most relevant to the voice-over script. We overhauled this system with a new AI agent that understands the content of each asset and intelligently edits together the entire b-roll track. The agent will also choose different animation styles depending on its categorization of the asset (e.g., screenshot, icon, infographic).
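As a simplified picture of that last step (the categories and style names here are examples, not our actual taxonomy), the agent's categorization maps to an animation style roughly like this:

```ts
// Example mapping only; the real taxonomy and style names may differ.
type AssetCategory = "screenshot" | "icon" | "infographic" | "photo";

const animationByCategory: Record<AssetCategory, string> = {
  screenshot: "slow-pan",      // keep UI text readable while it moves
  icon: "pop-in",              // small asset, punchy entrance
  infographic: "gradual-zoom", // give viewers time to absorb the data
  photo: "ken-burns",          // classic documentary-style drift
};

console.log(animationByCategory["screenshot"]); // "slow-pan"
```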
You can now generate an AI avatar on top of your video to present your voice-over script with matching lip movements. Choose from our library of over 100 lifelike presenters to make your videos more engaging and personal. Avatars are currently only available to Business and Enterprise subscribers.
To add an AI avatar to an existing AI voice section, click on the speaker name, click the avatar button at the top of the popover, select your favorite avatar presenter, and then click "Generate". Your avatar will be ready to preview and export within a few minutes!
We extended the timeline to multiple layers, allowing for more flexibility and customization in your videos. The bottom layer shows the background assets, which you can trim, split, replace, and rearrange. The middle layer shows the script asset, which corresponds to your AI voice and/or avatar. Finally, the top layer shows your title screen overlay, which you can customize in the "Theme" tab on the left side panel. In the timeline, you can also click on an asset to select it and view more advanced editing capabilities in the right side panel.
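For reference, the three layers map naturally onto a simple data structure. This sketch is illustrative only, not our internal model:

```ts
// Illustrative data structure for the three timeline layers.
interface TimelineAsset { id: string; startMs: number; endMs: number }

interface Timeline {
  background: TimelineAsset[];   // bottom: trim, split, replace, rearrange
  script: TimelineAsset[];       // middle: AI voice and/or avatar
  titleOverlay: TimelineAsset[]; // top: customized via the "Theme" tab
}
```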