Google Announces MusicLM, an AI Model to Generate Music from Text

【Google Announces MusicLM, a Text-to-Music Generator, but Will Not Release It】
https://www.itmedia.co.jp/news/articles/2301/28/news056.html

 

・Google announces MusicLM, an AI model that “casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, generating music at 24 kHz that remains consistent over several minutes.”

・Trained on 280,000 hours of music data

・MusicLM generates music from sentences and words

・Google cites as risks of this model the potential for cultural bias, since it reflects biases in the training data, and copyright infringement of the original songs. It found that about 1% of the generated examples were exact memorizations of songs in the training data. It “strongly emphasizes the need for further work to address these risks” and has no plans to release the model at this time.

 

 

The above are quotes from the article.
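As a side note, the “hierarchical” modeling the article mentions can be pictured roughly like this: the text prompt is first mapped to a short sequence of coarse semantic tokens describing the music, which is then expanded into a much longer sequence of fine-grained acoustic tokens that a codec decodes into 24 kHz audio. The toy code below only mimics that token-expansion shape; the function names and numbers are illustrative stand-ins, not Google’s actual implementation.

```python
# Toy sketch of hierarchical text-to-music token generation.
# (Illustrative only -- not Google's MusicLM code.)

def text_to_semantic_tokens(prompt: str) -> list[int]:
    """Stand-in for the first stage: one coarse token per word."""
    return [hash(word) % 100 for word in prompt.split()]

def semantic_to_acoustic_tokens(semantic: list[int], tokens_per_step: int = 4) -> list[int]:
    """Stand-in for the second stage: each coarse token is refined
    into several fine-grained acoustic tokens."""
    return [s * tokens_per_step + i for s in semantic for i in range(tokens_per_step)]

prompt = "slow tempo relaxing jazz"
semantic = text_to_semantic_tokens(prompt)
acoustic = semantic_to_acoustic_tokens(semantic)
print(len(semantic), len(acoustic))  # 4 16 -- acoustic sequence is 4x longer
```

The point of the hierarchy is that the model only has to stay coherent over a short coarse sequence, while the fine detail is filled in locally, which is how a few minutes of consistent audio becomes tractable.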
After the transformation of listeners, is the transformation of creators next?

 

In my last blog post, I talked about an AI that generates voices from 3-second voice samples; this time, the topic is an AI that generates music from sentences and words.

 

I like and am interested in stories like these, so the same kind of topic keeps coming up. ^^;

 

↓You can actually listen to music generated by MusicLM here.

 

【MusicLM: Generating Music From Text】
https://google-research.github.io/seanet/musiclm/examples/

 

The quality is quite something when you actually listen, isn’t it?

 

・A sentence like “slow-tempo, ****-like music”

・A sentence (caption) describing the flow of the music, such as “The first half is ****, then it moves to △△ in the middle, and finally it becomes like □□□.”

・Reviews of paintings (text)

 

MusicLM generates music from texts and elements like these.

 

Music generated from reviews of paintings: what an interesting idea.

 

I personally thought the music for Munch’s The Scream was very Aphex Twin-like.

 

By the way,

I also make quite a bit of commercial music,

and during meetings I often receive abstract requests (sentences) from the director, such as “this kind of music,” “soft,” “stylish,” or “uplifting.”

 

I wondered whether there is really much difference between what MusicLM does and what I do, since both of us turn text into music.

 

Due to bias in the training data and copyright issues, the MusicLM tool has not been released to the public, but if those problems are resolved, it may one day be made publicly available.

 

There are copyright issues and so on for now, but as MusicLM continues to evolve, it may become easier and easier for the AI to generate music that clears copyright concerns, as long as a condition such as “must not infringe copyright” is set in the text.

 

Over the past 10 or 20 years, the way music is listened to, and the listeners themselves, have changed completely. Now the way music is made, and the way musicians and creators work, may be about to change as well.

 

Imagining all kinds of possibilities filled me with trepidation, but it also inspired me to do my best with the work right in front of me.

 

See you then,

 

I think music and artists that attract fans will always be valuable, so perhaps what matters is that people want to listen to music because Makoto Ogata created it. After all, “I’m a fan of the music MusicLM makes!” is not likely to happen.

 

 

 
