Google DeepMind, the tech giant’s AI research lab, recently unveiled V2A, a new model that can generate audio from videos.
In a blog post, the lab described V2A (video-to-audio) as a work-in-progress AI model that “combines video pixels with natural language text prompts to generate rich soundscapes for the on-screen action.”
Compatible with Veo, the text-to-video model the company introduced at the recently concluded Google I/O 2024, V2A can be used to add dramatic music, realistic sound effects and dialogue that matches the tone of a video. Google says the new model also works with “traditional footage” such as silent films and archival material.
The new V2A model can generate an “unlimited number of soundtracks” for any video and supports an optional ‘positive prompt’ and ‘negative prompt’, which can be used to steer the output toward sounds you want and away from sounds you don’t (a common pattern in diffusion models, sketched below). It also watermarks the generated audio with Google’s SynthID technology.
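DeepMind hasn’t said how the prompts are wired into the model, but positive/negative prompt pairs in diffusion systems are commonly implemented with classifier-free guidance. Here is a minimal sketch of that general pattern; every function, name and value below is a hypothetical stand-in, not V2A’s actual API:

```python
import numpy as np

def embed(prompt: str) -> np.ndarray:
    """Stand-in text encoder: deterministically maps a prompt to a vector."""
    seed = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(seed).standard_normal(16)

def predict_noise(latent: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """Stand-in denoiser; in a real system this would be a neural network."""
    return 0.1 * latent + 0.01 * text_emb.mean()

def guided_step(latent, positive_emb, negative_emb, guidance_scale=7.5):
    # Classifier-free-guidance-style update: push the sample toward the
    # positive prompt's prediction and away from the negative prompt's.
    eps_pos = predict_noise(latent, positive_emb)
    eps_neg = predict_noise(latent, negative_emb)
    eps = eps_neg + guidance_scale * (eps_pos - eps_neg)
    return latent - 0.1 * eps

rng = np.random.default_rng(0)
latent = rng.standard_normal(16)  # noisy audio latent
pos = embed("ominous cinematic score, distant thunder")
neg = embed("speech, crowd noise")
for _ in range(50):  # iterative denoising loop
    latent = guided_step(latent, pos, neg)
```

The guidance scale controls how strongly the output is pulled toward the positive prompt; larger values follow the prompt more literally at the cost of variety.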
DeepMind’s V2A technology takes the description of a sound as input and uses a diffusion model trained on a combination of sounds, dialogue transcripts and videos. Because the model wasn’t trained on a large number of videos, its output can sound distorted at times, and Google says it won’t release V2A to the public anytime soon in order to prevent misuse.
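Google hasn’t published V2A’s architecture in detail, but the diffusion approach it describes starts from noise and iteratively refines an audio representation while conditioning on the video. A toy sketch of what such a loop could look like, with every function below a hypothetical stand-in rather than DeepMind’s code:

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_video(frames: np.ndarray) -> np.ndarray:
    """Stand-in visual encoder: pools the frames into one conditioning vector."""
    return frames.mean(axis=(0, 1))

def predict_noise(latent, video_emb, text_emb):
    """Stand-in noise predictor conditioned on both video and text."""
    return 0.1 * latent + 0.01 * (video_emb.mean() + text_emb.mean())

frames = rng.standard_normal((24, 64, 8))  # 24 frames of fake visual features
video_emb = encode_video(frames)           # one conditioning vector per clip
text_emb = rng.standard_normal(8)          # fake sound-description embedding

latent = rng.standard_normal(8)            # start from pure noise
for _ in range(50):                        # iteratively refine the audio latent
    latent = latent - predict_noise(latent, video_emb, text_emb)
# A real system would decode the final latent to a waveform with a separate
# audio decoder.
```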