CMU School of Drama
Wednesday, April 24, 2024
Microsoft AI creates scary real talkie videos from a single photo
newatlas.com/technology: Microsoft Research Asia has revealed an AI model that can generate frighteningly realistic deepfake videos from a single still image and an audio track. How will we be able to trust what we see and hear online from here on in?
3 comments:
Wow, I really hate this. I don't like that a deepfake video can be made so realistically from a single still image. When this technology is leaked or copied for public use, I think it will have terrible ramifications for political leaders, government officials, celebrities, and for criminal activity. I am really not excited to live in the future, because AI feels like the new mobile device: people thought it was cool and new, but it has since evolved to become half of our identity and way of living in Western society. AI is scary now, but soon enough it will take over half of the world's way of operating. I hope measures are taken so that this level of technology is not freely available to the public with no checks on videos uploaded to the internet. Of course, I suspect there won't be, so when AI takes over what we perceive as real and fake, I'm going to go live far away in the mountains.
This is actually really terrifying technology. My very first thought when I read the headline was about the number of bad ways this could be used. Being able to generate a video of someone talking from nothing more than a single picture and an audio clip seems far too easy and has the potential to do real harm. I appreciate that the researchers are being cautious, acknowledging that the technology could be dangerous and saying they do not plan to release it to the public "until we are certain that the technology will be used responsibly and in accordance with proper regulations," but I just don't see any way of guaranteeing that. The technology carries inherent danger no matter what, and its few benefits simply do not outweigh the issues it could pose in the future. They talk about human-AI interactions, and I can see the benefit of some of these, but overall it does not feel worth it, and even the interactions they describe don't seem fully positive.
Well… this is quite unnerving. I remember learning about deepfakes many years ago and finding them terrifying, but I comforted myself with the fact that they were pretty hard to make. Now, with this, it is apparently as simple as having one static photo. I have personally never been a huge fan of AI, because I don't feel we need it and its dangers are real. I appreciate the programming brilliance it has taken to make this a reality, but I am not sure whether, in the end, it will do more good than harm. Sure, it makes life a little easier not having to research everything and just being able to ask AI questions, but now that we have pushed the innovation further and further, I am getting more and more concerned about its ramifications. We need to tread very carefully with AI, because the more reliant we become on it, the more danger we are in. We are perfectly capable of running things without it and have been for thousands of years. Why do we need it now? Just let it be.