Meta unveils five AI models for multi-modal processing, music generation, and more

Meta has unveiled five major new AI models and accompanying research, including multi-modal systems that can process both text and images, next-gen language models, music generation, AI speech detection, and efforts to improve diversity in AI systems.

The releases come from Meta's Fundamental AI Research (FAIR) team, which has focused on advancing AI through open research and collaboration for over a decade. As AI innovation accelerates, Meta believes that working with the global community is crucial.

"By publicly sharing this research, we hope to inspire iterations and ultimately help advance AI in a responsible way," said Meta.

Chameleon: Multi-modal text and image processing

Among the releases are key components of Meta's 'Chameleon' models, made available under a research license. Chameleon is a family of multi-modal models that can understand and generate both text and images simultaneously, unlike most large language models, which are typically unimodal.

"Just as humans can process words and images simultaneously, Chameleon can process and deliver both image and text at the same time," explained Meta. "Chameleon can take any combination of text and images as input and also output any combination of text and images."

Potential use cases are virtually limitless, from generating creative captions to prompting new scenes with a mix of text and images.
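
The key idea behind this capability is early fusion: images are quantised into discrete tokens that share a single vocabulary with text, so one autoregressive transformer can model interleaved sequences of both. The toy sketch below illustrates that token interleaving only; the vocabulary sizes, token ids, and sentinel tokens are invented for illustration and are not Chameleon's actual configuration.

```python
import torch

# Illustrative constants; the real Chameleon vocabulary sizes differ.
TEXT_VOCAB = 32_000        # text tokens occupy ids [0, 32000)
IMAGE_VOCAB = 8_192        # discrete image codes occupy ids [32000, 40192)
BOI, EOI = 40_192, 40_193  # hypothetical begin/end-of-image sentinels

def interleave(text_ids, image_codes):
    """Build one flat token sequence that a single autoregressive
    transformer can model: text ... <BOI> image codes <EOI>."""
    image_ids = [TEXT_VOCAB + c for c in image_codes]
    return torch.tensor(text_ids + [BOI] + image_ids + [EOI])

# Toy example: a short caption followed by a tiny 4-code "image".
sequence = interleave(text_ids=[17, 942, 7, 3051], image_codes=[5, 812, 44, 9])
print(sequence)  # two modalities, one vocabulary, one sequence
```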

Multi-token prediction for faster language model training

Meta has also released pretrained models for code completion that use 'multi-token prediction', under a non-commercial research license. Traditional language model training is inefficient because it predicts only the next word; multi-token models predict multiple future words simultaneously, allowing them to train faster.

"While [the one-word] approach is simple and scalable, it's also inefficient. It requires several orders of magnitude more text than what children need to learn the same degree of language fluency," said Meta.
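
The idea is easy to see in code: keep one shared transformer trunk and attach several small output heads, each trained to predict a token further ahead. The PyTorch sketch below is a minimal illustration of that training objective, not Meta's released architecture; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class MultiTokenHeads(nn.Module):
    """One shared trunk output feeds n small heads; head k is trained
    to predict the token k positions ahead, not just the next one."""
    def __init__(self, d_model, vocab_size, n_heads=4):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(n_heads)
        )

    def forward(self, hidden, targets):
        # hidden: (batch, seq, d_model) from the trunk; targets: (batch, seq)
        loss = 0.0
        for k, head in enumerate(self.heads, start=1):
            logits = head(hidden[:, :-k, :])  # predictions for position t + k
            labels = targets[:, k:]           # ground truth k steps ahead
            loss = loss + nn.functional.cross_entropy(
                logits.reshape(-1, logits.size(-1)), labels.reshape(-1)
            )
        return loss / len(self.heads)

# Toy usage with random activations and labels.
h = torch.randn(2, 16, 64)            # (batch, seq, d_model)
y = torch.randint(0, 1_000, (2, 16))  # (batch, seq)
print(MultiTokenHeads(d_model=64, vocab_size=1_000)(h, y))
```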

JASCO: Enhanced text-to-music model

On the creative side, Meta's JASCO generates music clips from text while affording more control by accepting additional inputs, such as chords and beats.

"While existing text-to-music models like MusicGen rely mainly on text inputs for music generation, our new model, JASCO, is capable of accepting various inputs, such as chords or beat, to improve control over generated music outputs," explained Meta.
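
Meta distributes its generative audio research through the AudioCraft library, so a JASCO workflow will presumably resemble MusicGen's interface. The sketch below is only a guess at what that could look like: the import path, checkpoint id, and the `chords` argument and its (symbol, start-time) format are all assumptions, not a confirmed API.

```python
# Hypothetical sketch of a JASCO call, modelled on AudioCraft's
# MusicGen-style interface; names and arguments are assumptions.
from audiocraft.models import JASCO  # assumed to ship with AudioCraft

model = JASCO.get_pretrained('facebook/jasco-chords-drums-400M')  # hypothetical id
model.set_generation_params(duration=10)  # generate a 10-second clip

# Text prompt plus symbolic chord conditioning for finer control;
# each tuple is (chord_symbol, start_time_in_seconds) — format assumed.
chords = [('C', 0.0), ('F', 2.5), ('G', 5.0), ('C', 7.5)]
wav = model.generate(descriptions=['upbeat acoustic folk'], chords=chords)
```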

AudioSeal: Detecting AI-generated speech

Meta claims AudioSeal is the first audio watermarking system designed to detect AI-generated speech. It can pinpoint the specific segments generated by AI within larger audio clips up to 485x faster than previous methods.

"AudioSeal is being released under a commercial license. It's just one of several lines of responsible research we have shared to help prevent the misuse of generative AI tools," said Meta.
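
For a sense of the workflow, here is a minimal watermark-then-detect loop using the audioseal Python package. The loader and method names follow the project's published examples, but treat the exact API as an assumption rather than a guaranteed interface.

```python
import torch
from audioseal import AudioSeal  # pip install audioseal

# One second of mono audio at 16 kHz, shaped (batch, channels, samples).
audio, sr = torch.randn(1, 1, 16_000), 16_000

generator = AudioSeal.load_generator("audioseal_wm_16bits")
watermark = generator.get_watermark(audio, sr)  # imperceptible additive signal
watermarked = audio + watermark

detector = AudioSeal.load_detector("audioseal_detector_16bits")
result, message = detector.detect_watermark(watermarked, sr)
print(result)  # confidence that the clip carries the watermark
```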

Improving text-to-image diversity

Another important release aims to improve the diversity of text-to-image models, which can often exhibit geographical and cultural biases.

Meta developed automatic indicators to evaluate potential geographical disparities and conducted a large-scale study with more than 65,000 annotations to understand how people around the world perceive geographic representation.

"This enables more diversity and better representation in AI-generated images," said Meta. The relevant code and annotations have been released to help improve diversity across generative models.

By publicly sharing these groundbreaking models, Meta says it hopes to foster collaboration and drive innovation within the AI community.
