Adobe’s GenAI Turns Text Prompts Into Music: Here’s a First Peek

Amid the AI music buzz, Adobe is developing its own AI music generator, Project Music GenAI Control (GenAI for short). This tool will allow users to create music from a text prompt and fine-tune the results. The burning question is: where is GenAI sourcing its audio data?

Adobe Unveils Project Music GenAI Control

Adobe announced an AI powerhouse, Project Music GenAI Control, that promises a more efficient and fun way to incorporate music into content creation. GenAI will allow users to generate music with a simple text prompt or reference melody, and then further enhance and fine-tune their creations using a variety of editing features.

The Project Music GenAI Control prototype is under development in collaboration with researchers at the University of California, San Diego, and Carnegie Mellon University. We're still in the dark about the official launch and whether it will be its own app or integrated into Adobe software such as Premiere Pro.

As with Adobe’s generative AI model Firefly, you write a text prompt, such as “smooth jazz”, or upload a reference melody, and wait for the AI to produce your audio. Following the AI audio generation, you can use parameters to modify the tempo, structure, and intensity, and also lengthen clips and create seamless loops.

As Nicholas Bryan, Senior Research Scientist at Adobe Research, says in the press release:

One of the exciting things about these new tools is that they aren’t just about generating audio—they’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music.

Not only do these integrated controls outshine other AI music tools such as Google's MusicLM, but they also remove the need for external audio editing software. GenAI could be a game-changer for YouTubers, podcasters, and other content creators looking for simple loops and basic audio adjustments.

Where Does GenAI Get Its Source Material From?

In the blog post, Adobe mentions its commitment to sticking to its AI ethics principles, ensuring that the AI technology is developed responsibly. We can assume this to mean that GenAI is being trained on data that is fair game, such as in the public domain or properly licensed work. Adobe is also reportedly developing watermarking technology so we can identify whether audio is generated through GenAI.

But the reference melody feature is being left out of the conversation, so it’s unclear which songs users can upload to the tool. Adobe would need to tread carefully because throwing in copyright-protected music could land them in hot water with music labels and artists.

Whether AI-generated music violates the intellectual property of artists is still up in the air. However, platforms like TikTok and Spotify have already removed some deepfake tracks that use vocal samples from artists without their consent. A prime example is the AI song Heart on My Sleeve featuring fake vocals by The Weeknd and Drake, as reported by The Guardian.

Not every artist is fretting about the problems that AI music poses. Grimes, for instance, embraces the concept on the condition that she gets her fair share of the royalties for any successful AI-generated song with her voice.

In the ever-growing world of generative AI tools, GenAI could become the go-to for creators looking for an easy way to make music—while also posing a headache for musicians if it ends up snagging their copyrighted work. If Adobe delivers on its ethics commitments and its promised editing controls, GenAI may well prove a genuinely useful tool for many creators.