Gen AI Music Hits 20% Success Rate in Commercial Briefs

Music created by generative AI (gen AI) currently hits the mark on a specific commercial brief only about 20% of the time.

That is the finding of a study by sound-testing company SoundOut and sound branding agency Stephen Arnold Music (SAM). According to the study, humans can still write music that is more appealing and emotionally true than AI can. Nonetheless, AI in its current state can assist quite well at the ideation stage.

The researchers gave Stable Audio’s gen-AI platform four briefs: to create music that was bold and daring, inspirational, funny and quirky, or sentimental and compassionate. The platform then produced five compositions for each brief.
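
To make that setup concrete, here is a minimal sketch of how a similar experiment could be run with the open-weights stable-audio-open-1.0 model through Hugging Face’s diffusers library. The study itself used Stable Audio’s hosted platform, so the prompt wordings, clip duration, and sampler settings below are illustrative assumptions rather than the researchers’ configuration.

```python
# Illustrative sketch only: generates five takes for each of the study's
# four emotional briefs with the open stable-audio-open-1.0 model.
import torch
import soundfile as sf
from diffusers import StableAudioPipeline

pipe = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16
).to("cuda")

# The four briefs described in the study; exact prompt text is assumed.
briefs = {
    "bold_daring": "Bold, daring instrumental brand music",
    "inspirational": "Uplifting, inspirational instrumental brand music",
    "funny_quirky": "Funny, quirky instrumental brand music",
    "sentimental": "Sentimental, compassionate instrumental brand music",
}

for name, prompt in briefs.items():
    for take in range(5):  # five compositions per brief, as in the study
        generator = torch.Generator("cuda").manual_seed(take)
        audio = pipe(
            prompt,
            negative_prompt="low quality",
            num_inference_steps=200,
            audio_end_in_s=30.0,  # assumed clip length
            generator=generator,
        ).audios[0]
        # diffusers returns (channels, samples); soundfile expects the transpose
        sf.write(
            f"{name}_take{take}.wav",
            audio.T.float().cpu().numpy(),
            pipe.vae.sampling_rate,
        )
```

Each run yields 20 clips (four briefs, five takes apiece), mirroring the sample the researchers then evaluated against each brief’s intended emotion.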

On short briefs

Overall, the AI performed “reasonably well,” according to the researchers, and it was “largely successful for most compositions” in terms of direction.

The AI succeeded mainly on short, “consistent,” and “well-aligned” briefs, and on requests for music meant to evoke correlated emotional attributes. It was more likely to fail when given more intricate or nuanced briefs.

SoundOut CEO David Courtier-Dutton said the study revealed that, although humans still outperform AI on the emotional front, AI “composing by numbers” is already not far behind. The AI’s work was not poor, he added, simply not as good, and he anticipated that with a little emotional fine-tuning AI would eventually match most human composers.

Courtier-Dutton went on to say that the AI does not need to comprehend emotions itself; it only needs to know how to elicit them in people.

He stated,

“AI can compose music to move us emotionally. It can now do this precisely enough for commercial use; it just needs a little more technical empathy.”

Using AI in audio

Separately, researchers have created a new deep-learning model that promises to significantly improve audio quality in real-world scenarios by drawing on a previously underutilized tool: human perception.

The researchers discovered that they could improve speech quality, as measured by objective metrics, by combining a speech enhancement model with people’s subjective assessments of sound quality.

The new model outperformed conventional methods at suppressing noise, the unwanted sounds that can interfere with what a listener is trying to hear. Most notably, the quality scores the model predicted correlated strongly with the judgments people would make.

The study focused on enhancing monaural speech, that is, speech captured through a single audio channel, such as one microphone.
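
The article does not detail the model, but the core idea, folding a learned predictor of human quality ratings into the training signal of a single-channel enhancement network, can be sketched as follows. This is a hedged toy example in PyTorch: the Enhancer and QualityPredictor architectures, the loss weight lam, and the dummy spectrogram data are all assumptions for illustration, not the researchers’ design.

```python
# Toy sketch: train a monaural speech enhancer with a perceptual term
# supplied by a quality predictor that stands in for human ratings.
import torch
import torch.nn as nn

class Enhancer(nn.Module):
    """Mask-based enhancer over magnitude spectrogram frames."""
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, n_freq), nn.Sigmoid(),  # per-bin mask in [0, 1]
        )

    def forward(self, noisy_mag):                 # (batch, frames, n_freq)
        return noisy_mag * self.net(noisy_mag)

class QualityPredictor(nn.Module):
    """Stand-in for a model pretrained on human quality ratings (MOS-like)."""
    def __init__(self, n_freq=257, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # predicted quality in [0, 1]
        )

    def forward(self, mag):
        return self.net(mag.mean(dim=1)).squeeze(-1)  # pool over frames

enhancer, mos_net = Enhancer(), QualityPredictor()
for p in mos_net.parameters():       # pretend it is pretrained and frozen;
    p.requires_grad_(False)          # gradients still flow to the enhancer
opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
lam = 0.1                            # weight on the perceptual term (assumed)

# One illustrative training step on dummy spectrograms.
noisy = torch.rand(8, 100, 257)
clean = torch.rand(8, 100, 257)

enhanced = enhancer(noisy)
recon = nn.functional.mse_loss(enhanced, clean)   # signal-level loss
perceptual = (1.0 - mos_net(enhanced)).mean()     # push predicted quality up
loss = recon + lam * perceptual
opt.zero_grad()
loss.backward()
opt.step()
```

Freezing the quality predictor and back-propagating through it lets human preference shape the enhancer without collecting fresh ratings for every training example, which is the appeal of the approach the article describes.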

AI has already found other applications in the audio industry. Spotify, for instance, recently introduced its “AI DJ.” AI is also helpful in the development of synthetic voices, hyper-personalized creativity, sophisticated targeting, and many other areas.

Colortext founder Jason Brownlee told The Media Leader last week that if clever audio creatives can bottle their knowledge with AI and scale it into a super-efficient, self-learning, and self-serve ad production platform, the sky is the limit.

For now, SAM and SoundOut advise employing AI during the ideation stage of developing sound branding.

“When developing commercial-ready music for leading brands, there are additional considerations for evoking the proper emotion at the right time,” stated Chad Cook, president of creative and marketing at SAM. 

The human touch has a noticeable influence on performance, emotional timing, production quality, mixing, and mastering, among other aspects.
