CMU School of Drama


Thursday, August 22, 2024

How to Tell If That Song Was Made With AI

Lifehacker: Of all the AI-generated content out there, AI music might be the weirdest. It doesn't feel like it should be possible to ask a computer to produce a full song from nothing, the same way you ask ChatGPT to write you an essay, but it is: Apps like Suno can generate a song for you from a simple prompt, complete with vocals, instrumentals, melodies, and rhythm, some of which are way too convincing. The better this technology gets, the harder it's going to be to spot AI music when you stumble across it.

5 comments:

Marion Mongello said...

This article was incredibly interesting. I even went ahead and listened to some of the sampled tracks that the article refers to, and it is truly mind-boggling how realistic many of the songs sound. The way that a computer is able to nearly perfectly replicate the patterns of a human for singing, instrumentation, etc. is wild, and it continues to beg the question of what the future holds and how this industry will continue to change because of AI. Luckily, because we want to value artists' work, there are still some traits of a human-made product that cannot be replicated by AI. Something this article makes me think of is the new TikTok feature that allows an AI voice to sound exactly like a human, and this is free to any TikTok user. There are definitely times when I'm fooled by these fabricated voices.

Rachel L said...

What fascinated me the most about this article was actually brought up in one of the comments (on the article site). This commenter mentioned that they use AI to help them sing songs that they write. AI doesn’t help them write the songs, but it does generate the sound of the vocals, instrumentals, etc., so that the song makes it off the page and turns into actual sound waves. This raises a lot of questions for me about the grey areas of AI music. Completely AI-generated music, as the article describes, does not tell a human story – which I find to be one of the most beautiful parts of music – and is pretty obvious about it. However, how do we categorize and view music that was written by a person, therefore telling a human story, and performed/recorded using AI? Would such music still be considered AI music, or can it be categorized under the standard music categories? My gut reaction is a combination of the two (e.g. “Rock Song” by NAME with audio from AI), but I am still pondering the potential grey area here.

Sharon Alcorn said...

The first line stuck with me as I read through the article because it really summarized my feelings about AI-generated music. It is a strange and disturbing concept; however, there do seem to be pros and cons.

I feel that one of the potential pros of AI-generated covers is hearing artists ‘cover’ songs and genres they don’t usually work with. However, I've listened to these AI covers before, and just as the author stated, there is a lack of inflection and humanity in the AI voice. It is very robotic and always reminds me of how voice recordings are utilized by scammers. There is also the concern that it will diminish the impact if the actual artist later chooses to record a cover of the song.

I have intentionally not had much experience with AI-generated music meant to resemble original tracks, because it makes me uncomfortable. For example, I remember coming across an AI-generated Taylor Swift song on YouTube made to look like it was from her most recent album, before the album was released. It felt wrong for someone to trick listeners into thinking it was actually a song released by Swift.

I have not read or heard many discussions about AI generated music, but I am curious about whether my feelings are shared by others.

Sophia Rowles said...

I think this is an extremely important skill for people to start learning, given how much AI content is currently running rampant and unrestricted. I find it unfortunate that these sorts of skills are necessary in our modern society; however, I worry about how many gullible or naive people would easily be convinced by AI-generated voices. Right now we’re hearing joke songs in the voices of celebrities, political figures, and even TV show characters, but how long until it's not just singing and jokes? Given the lack of media literacy among people, especially the older generations, I am rather concerned about the idea of social media posts using an AI-generated voice of a politician to spread misinformation. I hope in the future more media literacy skills start being taught in public schools, with lessons about identifying AI in text, image, and audio form. Long term, I think government legislation needs to be passed restricting AI content; I just hope it gets done before something tragic occurs due to AI.

Josh Hillers said...

What I find most useful about this article is that while it is prescriptive in what it discusses, telling the audience to pay attention to audio quality, the passion of the singer, and the overall lyrical and melodic sense of a potentially AI-made song, it is ultimately more about teaching the audience media literacy. It highlights a broader need for us as an audience to be critical, and even slightly skeptical, when interacting with media nowadays, more so than ever with the innovation in AI-generated content. Throughout grade school we’ve been taught to read and analyze text, but increasingly we are left to our own methods of consuming and analyzing media as we now constantly interact with videos, images, and songs. This article demonstrates a broader need for more education in media literacy as a whole, which includes the fantastic lessons in this article that inform the audience on what to look for when listening to new music.