CMU School of Drama


Thursday, September 13, 2018

Beyond Deep Fakes

Carnegie Mellon School of Computer Science: Researchers at Carnegie Mellon University have devised a way to automatically transform the content of one video into the style of another, making it possible to transfer the facial expressions of comedian John Oliver to those of a cartoon character, or to make a daffodil bloom in much the same way a hibiscus would.

4 comments:

Julian G said...

I wonder if people’s reaction to this is similar to people’s initial reaction to photo manipulation. We are now used to the idea that images can be photoshopped and that just because there is a picture of something doesn’t mean it is real. At this point we are desensitized to photoshopped images and understand that pictures in magazines are often fabrications. However, there was a time when photographs only represented snapshots of reality, and people were likely somewhat shocked when technology began to hit the point where photos could no longer be trusted. I think eventually we will hit the point where we perceive videos the same way we perceive photos: they will sometimes be accurate representations of reality, and they will sometimes be manipulated images and simply fabrications. Just as there are many conversations about the ethics of photo manipulation today, particularly when it comes to its effect on body image, I suspect we will see similar conversations in the future regarding video manipulation.

Yma Hernandez-Theisen said...

In “Beyond Deep Fakes,” Byron Spice reports that Carnegie Mellon researchers have found a way to automatically transform the content of one video into the style of another video. He gives the example that with this method it is “possible to transfer the facial expressions of comedian John Oliver to those of a cartoon character”. He mentions good uses for this technology, such as ways to “convert black-and-white films to color” and to help make new content; as CMU student Aayush Bansal says, “there are a lot of stories to be told” and “it’s a tool for the artist that gives them an initial model that they can then improve.” This technology could also be used in the pursuit of content for virtual reality. As the article brings up, and what I first thought of when reading it, this technology could be used in many ways to create “deep fakes”. It could be used in video to make a person, without their permission, seem as though they did something they have not done. This reminded me of another recent technology that has the potential to aid deep fakes, not on the visual end but on the auditory end: Lyrebird, which makes it easier to make it sound like a person said something they didn’t. I agree that this is an amazing technology that can be used in so many ways, but I wish the article talked more about deep fakes, like how researchers can try to counteract their use in negative ways. In another article I found some (still very little) information on scientists trying to figure out ways of detecting deep fakes. With every advance it is wise to understand the consequences.

Unknown said...

I find it a fascinating fact about humans that we have the ability to create amazing technology like this. And yet, we know that one of the first things people will want to use it for is to lie and manipulate other people. I can't stop being equally in love with and disgusted by the human race. I think Julian's comment relating this to the emergence of photoshopped images is interesting but also a little unfair, mostly because a photoshopped image of a person does not hold the same power to manipulate, even if you believe it to be real, as a video of them saying something. The other thing I found extremely interesting about the process of creating this technology is that in the article Aayush Bansal, one of the people working on the project, says that film/entertainment was his primary motivation in helping create it. And yet, we know that it will be able to be used by things such as self-driving cars that need to navigate at night or in bad weather. Usually, the process runs the opposite way, where a technology is created and then the entertainment industry co-opts it for our own needs. But here is something that was created with us in mind that will be useful to fields across the board.

Kaylie C. said...

As listed in the article, there are some great uses for this technology, but I believe that this is dangerous enough that it should not have been released to the public so quickly. Not only has this software been used to create realistic explicit content by copying and pasting another person's face without their permission, but it can be and has been used to alter footage from the news. This kind of alteration makes fact checking impossible and could not have shown up in a worse political climate. The government is simply incapable of staying ahead of these things. Not long after 3D printers were available to the public, detailed blueprints for 3D printed lethal weapons appeared online. Regulation is so important, and I wish these companies would think about that before letting everyone know what they are doing and releasing it for our use. Considering this content comes from CMU researchers, I would have thought they would realize the impact this can have on our already turbulent news programs. I believe that CMU should be more proactive about teaching their students the importance of regulating the technology they create when the government can't do it for them.