CMU School of Drama


Friday, August 22, 2025

Scientists hid secret codes in light to combat video fakes

Ars Technica: It's easier than ever to manipulate video footage to deceive the viewer, and increasingly difficult for fact checkers to detect such manipulations. Cornell University scientists have developed a new weapon in this ongoing arms race: software that encodes a "watermark" into light fluctuations, which in turn can reveal when the footage has been tampered with.
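The article doesn't spell out Cornell's actual method, but the general idea of a watermark hidden in light fluctuations can be sketched with a toy model: a secret seed drives a tiny pseudorandom brightness wobble, and a verifier later correlates the observed fluctuations against the expected code. All function names and parameters below are hypothetical illustrations, not the researchers' implementation.

```python
import random

def embed_code(frames, seed, amplitude=2.0):
    # Toy watermark: add a small seeded pseudorandom brightness offset
    # to each frame. `frames` is a list of per-frame mean-brightness
    # values, standing in for real video frames in this sketch.
    rng = random.Random(seed)
    return [f + amplitude * (rng.random() * 2 - 1) for f in frames]

def correlation(frames, seed, amplitude=2.0):
    # Regenerate the expected code from the seed and compute the
    # Pearson correlation with the observed brightness values.
    # Near 1.0 means the watermark is intact; tampered or regenerated
    # footage loses the correlation.
    rng = random.Random(seed)
    code = [amplitude * (rng.random() * 2 - 1) for _ in frames]
    mf = sum(frames) / len(frames)
    mc = sum(code) / len(code)
    num = sum((f - mf) * (c - mc) for f, c in zip(frames, code))
    den = (sum((f - mf) ** 2 for f in frames)
           * sum((c - mc) ** 2 for c in code)) ** 0.5
    return num / den if den else 0.0

# Demo: a flat "video", watermarked, then its second half replaced.
original = [128.0] * 200
marked = embed_code(original, seed=42)
tampered = marked[:100] + [128.0] * 100  # second half swapped out
print(correlation(marked, 42))    # close to 1.0
print(correlation(tampered, 42))  # substantially lower
```

Because only the holder of the seed can regenerate the code, a fact checker with the key can test footage for tampering even when the edit is visually seamless.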

1 comment:

DogBlog said...

I think it is really interesting how light fluctuations and noise can be used to encode a signal that distinguishes artificially generated video from actual video. Something I have noticed online is that it is getting harder and harder for me to tell the difference between the two, which is really concerning. When I worked as a tech support person at my local library helping senior citizens, I constantly saw artificially generated video on their Facebook feeds. Early on, I honestly thought it was a bit crazy that they could not tell the difference; however, the technology is getting so good that even some of the most technically savvy people I know are struggling to differentiate. This also makes me concerned about the possibilities for blackmail, especially the use of deepfakes to create revenge porn. It was actually a problem at my high school, where a few boys took a photo of a girl's face and used AI to make deepfakes of her performing lewd acts. I hope this technology can help mitigate the dangers of this kind of content.