CMU School of Drama


Friday, April 26, 2024

AI meets opera: A new blended class at CMU yields insights on music and flow

Pittsburgh Post-Gazette: Inside a rehearsal room overlooking the verdant green of Schenley Park, countertenor Ricky Owens burst into song. His dazzling Russian opera was full of power and emotion. It also contained crucial data for a team of engineering students studying focus, distraction and flow.

5 comments:

Abby Brunner said...

It’s very cool how people keep finding new ways to use AI to improve how we live and how we create art as artists. However, it still calls into question the originality of the material AI helps create and how influential AI is to industry standards. The part of this article that talks about using AI to identify when an artist might not be fully paying attention is scary to think about. Sometimes actions like turning a page, or paying attention in class or during a rehearsal, should be left to the human being to do and experience. I will say that it’s important to study these things, like how someone responds to a room’s environment, but that response doesn’t need to come from an AI device. Although this class sounds cool and informative, it makes me scared for what future art industries could look like, especially if we start adding cameras on stage that determine the right time to turn a page for a piano player.

Claire M. said...

This is interesting because AI isn't often thought of in these nuanced, individualized areas. We quickly discount the applications of personalized AI in favor of models trained on huge swaths of data. I think one of the keys for AI to progress is figuring out how to get really good results from small models, maybe even ones that can be described with human-readable weights. The potential for smaller AI is incredibly valuable, both in terms of environmental impact and impact on training. If we can get good results with less stolen work, it could lead down some really interesting roads in discovering how humans can better synthesize information themselves. Learning about AI can teach us a lot about how humans learn and, in the case of this article, a lot about how humans create music and what tools can be useful to them in their performances.

Ellie Yonchak said...

This was an interesting premise for an experiment, but I disagree with some of their variable definitions. I don’t think the moments of faltering or tension in the back observed by the professor would necessarily be a sign of inattention. Sometimes mistakes are made, or chords are harder to spontaneously harmonize to (which would be especially true in a medium like opera, at least for me, since so much happens between sight reading and the final product). Assuming that this hesitation comes from boredom rather than confusion or some small amount of processing seems to be a mistake on their end. That’s often the problem with projects like this one: it’s hard to describe, for the purpose of quantification, what focus actually looks like on a psychological level. I would love to see their reasoning for why they decided to quantify this as inattentiveness.

John E said...

This was a very interesting article to read, since both of these topics are very interesting to me. I am not as familiar with opera, but I still find it quite interesting. The singer in the picture at the top of the article was in the spring opera in the Chosky, and he was really good, so I’m sure the performance he is giving in that photo is just as excellent. I thought the integration of AI and opera described in this article was both strange and interesting. I’m not sure why someone would need to know whether their performers were “dialed in,” but I guess someone did. I would also be terrified as a performer: I would constantly be thinking about how the AI thought my performance was going and whether I was “dialed in,” and then I would not be dialed in, because I would be thinking about the AI instead of my performance. I don’t feel like that would help anyone.

Abigail Lytar said...

This was a fascinating article; prior to reading it, I had no idea this was happening at CMU. I am very interested to see where it goes, because if we study this flow enough, we may be able to figure out how to trigger it, and if we can trigger it, we will all be more efficient. I know that when I was playing my instruments and got into a flow, hours would pass before I realized how much time had gone by. With pretty much anything, once I get into the flow and have that pure concentration, I can do things so much faster and everything just kind of falls into place. But accessing that flow is difficult. If we can figure out how to trigger it, I would be a much better designer, writer, and student, because I would become more time-efficient. I am very interested to see what comes from the rest of this study.