Thursday, December 26, 2024

NVIDIA unveils new AI model for generating audio



NVIDIA has announced that its researchers have developed a new generative AI model capable of creating audio from text or audio prompts.

Fugatto, which is short for Foundational Generative Audio Transformer Opus 1, can create music from text prompts, add or remove instruments in existing audio, and even change the accent or emotion of a voice.

For instance, a promo video by NVIDIA shows a user prompting Fugatto to create “Deep, rumbling bass pulses paired with intermittent, high-pitched digital chirps, like the sound of a massive, sentient machine waking up.” In another example, a user provides an audio clip of a person saying a short sentence and asks the model to change the speaker’s tone from calm to angry.

According to NVIDIA, Fugatto builds on the research team’s previous work in areas like speech modeling, audio vocoding, and audio understanding.

Fugatto was developed by a diverse group of researchers from around the world, including India, Brazil, China, Jordan, and South Korea, which NVIDIA says strengthens the model’s multi-accent and multilingual capabilities. According to the team, one of the hardest challenges in building Fugatto was “generating a blended dataset that contains millions of audio samples used for training.” To address this, the team generated new data and instructions that expanded the range of tasks the model could perform, which improved performance and also allowed the model to take on new tasks without needing additional data.
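NVIDIA has not published the details of this data-generation pipeline, but the general idea can be sketched: apply a known transformation to existing audio and pair it with a templated text instruction, producing (instruction, input, target) triples for free. The minimal Python sketch below is purely illustrative; the function names, the gain-change task, and the templates are assumptions, not NVIDIA's actual method.

```python
# Illustrative sketch only: Fugatto's real data-generation pipeline is not public.
# It shows the general idea of synthesizing (instruction, input, target) triples
# by applying a known transformation to existing audio and describing it in text.
import numpy as np

def change_gain(audio: np.ndarray, db: float) -> np.ndarray:
    """Scale a waveform by a gain expressed in decibels."""
    return audio * (10.0 ** (db / 20.0))

def make_training_triple(audio: np.ndarray, rng: np.random.Generator):
    """Create one synthetic (instruction, input_audio, target_audio) example."""
    db = float(rng.uniform(-12.0, 12.0))
    direction = "louder" if db > 0 else "quieter"
    instruction = f"Make this recording {abs(db):.0f} dB {direction}."  # templated text prompt
    return instruction, audio, change_gain(audio, db)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = rng.standard_normal(16_000)  # stand-in for a 1-second clip at 16 kHz
    instruction, src, tgt = make_training_triple(clip, rng)
    print(instruction, src.shape, tgt.shape)
```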

The team also meticulously studied existing datasets to uncover potential new relationships within the data.

According to NVIDIA, during inference the model uses a technique called ComposableART, which lets it combine instructions that were only seen separately during training. For instance, a prompt could ask for an audio snippet spoken in a sad tone with a French accent.

“I wanted to let users combine attributes in a subjective or artistic way, selecting how much emphasis they put on each one,” said Rohan Badlani, one of the AI researchers who built Fugatto.
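The mechanics of ComposableART have not been described in detail, but the weighted blending Badlani describes resembles how conditional generators are often composed at inference time, in the spirit of classifier-free guidance: start from an unconditional prediction and add each attribute's contribution scaled by a user-chosen weight. The sketch below is only an illustration of that general pattern; the stand-in model and function names are hypothetical, not Fugatto's API.

```python
# Illustrative sketch only: not ComposableART itself, just one common way to blend
# separately learned conditions at inference time with per-attribute emphasis weights.
import numpy as np

def dummy_model(x, cond):
    """Stand-in for one conditional prediction step of an audio generator."""
    return 0.9 * x + cond

def composed_prediction(x, conditions, weights, null_cond, model=dummy_model):
    """Blend several attributes (e.g. 'sad tone', 'French accent'),
    each scaled by a user-chosen emphasis weight."""
    base = model(x, null_cond)                   # unconditional prediction
    out = base.copy()
    for cond, w in zip(conditions, weights):
        out = out + w * (model(x, cond) - base)  # add each attribute's weighted push
    return out

if __name__ == "__main__":
    x = np.zeros(4)
    sad = np.array([1.0, 0.0, 0.0, 0.0])
    french = np.array([0.0, 1.0, 0.0, 0.0])
    print(composed_prediction(x, [sad, french], weights=[0.7, 0.3], null_cond=np.zeros(4)))
```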

The model can also generate sounds that change over time, such as a thunderstorm moving through an area, and it can produce soundscapes made up of sounds it hasn’t heard together during training, like a thunderstorm transitioning into birds singing in the morning.
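NVIDIA hasn't said how these time-varying outputs are conditioned, but one simple way to picture a sound evolving over time is to interpolate between two conditioning vectors across the frames of the generated audio. The sketch below illustrates only that scheduling idea; the embeddings and function name are hypothetical stand-ins.

```python
# Illustrative sketch only: a linear schedule that blends from one soundscape
# condition (e.g. "thunderstorm") to another (e.g. "birdsong at dawn") over time.
import numpy as np

def time_varying_condition(cond_start: np.ndarray, cond_end: np.ndarray, n_frames: int) -> np.ndarray:
    """Return an (n_frames, dim) array of per-frame conditioning vectors."""
    alphas = np.linspace(0.0, 1.0, n_frames)[:, None]
    return (1.0 - alphas) * cond_start + alphas * cond_end

if __name__ == "__main__":
    storm = np.array([1.0, 0.0])  # toy 2-D embeddings standing in for real conditions
    birds = np.array([0.0, 1.0])
    print(time_varying_condition(storm, birds, n_frames=5))
```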

“Fugatto is our first step toward a future where unsupervised multitask learning in audio synthesis and transformation emerges from data and model scale,” said Rafael Valle, manager of applied audio research at NVIDIA and another member of the research team that developed the model. 
