Professor honored for research detecting 'deepfake' videos
Professor Yu Chen is exploring a unique angle to figure out altered footage
False information on the internet is nothing new, but advances in digital technology are making it increasingly difficult to spot what's fake and what's real.
One researcher who is exploring a unique angle for figuring out "deepfakes" (manipulated videos of people saying things they did not say) is Binghamton University Professor Yu Chen.
A faculty member in the Thomas J. Watson College of Engineering and Applied Science's Department of Electrical and Computer Engineering, Chen was recently honored for his contributions to the security, privacy and authentication of optical imagery by SPIE, the international professional society for optical engineering. He and 46 others were elected as 2024 fellows of the organization, which represents 258,000 people from 184 countries.
Chen started as an SPIE student member 20 years ago, while studying for his PhD at the University of Southern California. Since then, his research has been funded by the National Science Foundation, the U.S. Department of Defense, the Air Force Office of Scientific Research (AFOSR), the Air Force Research Lab (AFRL), New York state and various industrial partners. He also has authored or co-authored more than 200 scientific papers.
For his latest deepfake research, Chen drilled down into video files to find "fingerprints" such as background noise and electrical frequency that can't be changed without destroying the file itself.
"We are living in a world where more fake things are mingled with real things," he said. "It's raised the bar for each of us to make sense of it all and make decisions about which one you want to believe. Our research is about finding anchor points so that we can have a better sense that something is suspicious."
Chen believes his research bypasses the need to develop better AIs to fight "bad" AIs, which he sees as an "endless arms race."
"People look back two to three years ago, when deepfakes started, and they can easily tell it's fake because someone's eyes are not symmetric, or they're smiling in a way that's not natural," he said. "The next generation of deepfake tools is really good, and you can't tell that it's a fake."
The problems will only multiply as we move into a "metaverse" of augmented reality using products like Google Glass or Apple Vision Pro. What happens when we can't trust our own eyes?
"We will start to have the physical world, the real world, closely interwoven with a cyber world," Chen said. "Look at the new Apple goggles that will enable people to leverage cyberspace in their daily lives. Deepfakes will be a huge issue: how can you tell something is real or something is faked?"
One aspect of deepfakes and their spread may be the most difficult to control, and that's the human factor.
"Social media makes the situation even worse because it's an echo chamber," Chen said. "People believe what they want to believe, so they see something they like and they say, 'Oh, I know that's true.' Some influencers try to harvest that for their own purposes."
Chen will receive his honor in April at the SPIE Defense + Commercial Sensing (DCS) Conference in Washington, D.C., and he's been invited to speak about his deepfake research at an SPIE conference later this year in Portugal.
"I envision our research paving the way for lives in the future that intricately blend the realms of reality and virtuality," he said. "I also hope I can help to promote the visibility of Binghamton University in the SPIE community."