Produced by the Office of Marketing and Communications

System Fights Fakes—Deep or Shallow

Researcher’s Tool Can Verify Authenticity of Audiovisual Recordings for Politicians, Celebrities and the Rest of Us

By Georgia Jiang

Photo illustration of a woman's face overlaid by computer imagery that evokes AI processing

A UMD researcher is developing a cryptographic QR code-based system that can verify whether content has been edited from its original form, to prevent audio or video fakes from being used for nefarious purposes.

Photo illustration by Unsplash

In the early days of Russia’s 2022 full-scale invasion of Ukraine, Ukrainian President Volodymyr Zelenskyy appeared in a video telling his soldiers to lay down their arms and surrender. At least, that’s what it looked like.

The clip was a high-tech fake, illustrating the danger a University of Maryland researcher is fighting with a new system designed to ferret out audio and video that have been altered from their original form.

“The clip was debunked, but there was already an impact on morale, on democracy, on people,” Assistant Professor of Computer Science Nirupam Roy said. “You can imagine the consequences if it had stayed up for longer, or if viewers couldn’t verify its authenticity.”

Roy is developing TalkLock, a cryptographic QR code-based system that can verify whether content has been edited from its original form, whether it's an amusing take on a politician singing a popular song, an attempt to sway the public through misinformation or the malicious spreading of fake, sexually explicit images of Taylor Swift on X and other social media platforms. It is designed to work on deepfakes (fake but convincing content generated by artificial intelligence) and it catches shallowfakes as well, in which skillful conventional editing makes a speaker appear to say something they never intended.

Commentators have warned for years about the potential of deepfakery to sway elections, but the Zelenskyy video was the first high-profile attempt to use it to win a war. After observing the fallout from the viral video and others like it, Roy realized that fighting deepfakes and shallowfakes was essential to preventing the rapid spread of dangerous disinformation.

TalkLock generates a QR code capable of protecting the authenticity of speeches and likenesses. The system runs on a device like a smartphone or tablet, continuously generating cryptographic sequences from the live speech and forming a dynamic QR code displayed on the device. Users can verify the authenticity of recordings via a server accessible through a website or downloadable apps and web plugins.
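In broad strokes, the rolling-code idea described above might look like the sketch below. This is an illustrative assumption, not TalkLock's published design: the chaining scheme, the session key, and the chunk boundaries are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical sketch (not the actual TalkLock implementation): every few
# seconds, hash the latest chunk of captured speech and chain it to the
# previous code with an HMAC, producing a rolling sequence that the on-screen
# dynamic QR code could display.

SECRET_KEY = b"speaker-session-key"  # assumed per-session key

def next_code(prev_code: bytes, audio_chunk: bytes) -> bytes:
    """Chain the previous code with a digest of the live audio chunk."""
    chunk_digest = hashlib.sha256(audio_chunk).digest()
    return hmac.new(SECRET_KEY, prev_code + chunk_digest, hashlib.sha256).digest()

# Simulate three consecutive chunks of captured speech.
code = b"\x00" * 32  # initial value for the chain
for chunk in [b"chunk-1-audio", b"chunk-2-audio", b"chunk-3-audio"]:
    code = next_code(code, chunk)
    # In a real system this value would be rendered as the next QR frame.
    print(code.hex()[:16])
```

Because each code depends on every previous chunk, splicing or reordering the underlying audio would break the chain from that point onward.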

“Because the QR code will be displayed on the device’s screen with the speaker, any authentic recordings of the speaker will also contain the QR code,” he said. “The presence of the QR code marks the verifiability of the live recording, even if it’s posted in different formats, uploaded on different social media platforms or shown on TV.”

In addition to its ability to place a unique marker on a video or audio clip, TalkLock can also systematically analyze features from a recording and check them against the code sequence generated from the original live version. Any discrepancies found by TalkLock would indicate that the content was altered.
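The verification step might be sketched as a comparison between the code sequence recovered from a recording's QR frames and the sequence logged during the live event. The function and data below are hypothetical illustrations, not TalkLock's actual interface.

```python
# Hypothetical verification sketch (not TalkLock's actual code): compare the
# codes read from a recording's QR frames against the codes the server logged
# live. Mismatched or missing segments flag possible edits.

def find_altered_segments(recorded_codes: list, server_codes: list) -> list:
    """Return indices of segments whose codes disagree with the server's log."""
    altered = []
    for i, expected in enumerate(server_codes):
        observed = recorded_codes[i] if i < len(recorded_codes) else None
        if observed != expected:
            altered.append(i)
    return altered

server_log = ["a1f3", "b2e4", "c3d5"]
clean_clip = ["a1f3", "b2e4", "c3d5"]
edited_clip = ["a1f3", "ffff", "c3d5"]  # middle segment spliced in

print(find_altered_segments(clean_clip, server_log))   # []
print(find_altered_segments(edited_clip, server_log))  # [1]
```

An empty result means every segment matched the live log; any indices returned point at the portions of the clip that no longer agree with the original.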

“As long as the generated QR code is recorded along with the speaker, political leaders, public figures and celebrities would be able to protect their likenesses from being exploited,” Roy said. “It’s the first step to preserving the integrity of our information—protecting people from crimes like targeted defamation.”

To address the need for protection at the individual level and to ensure publicly posted photos and videos on social media aren’t exploited by deep- or shallowfakers, his team is developing a mobile app version of TalkLock, which will be more tailored to the average person’s needs and can be used by anyone who owns a smartphone. He expects the app to be completed this summer.

“People can just hold their phone nearby with the app on as they speak, and just doing that will create a layer of protection from malicious editing,” he explained. “Users will be able to control their own audio-video footprint online with just their phones.”

Roy hopes that similar protections will be available to the public as default settings on all mobile devices soon. Computer science Ph.D. students Irtaza Shahid and Nakul Garg and undergraduates Robert Estan and Aditya Chattopadhyay are working with Roy to develop an open-source implementation of the TalkLock software stack and the mobile app. The team recently published a paper explaining the key concept of the project in the proceedings of MobiSys '23, the International Conference on Mobile Systems, Applications and Services.

“Our ultimate goal is to make sure that everyone can have equal access to real, genuine information,” Roy said. “Only then can we take a step closer to a truly equitable and democratic society.”



Maryland Today is produced by the Office of Marketing and Communications for the University of Maryland community on weekdays during the academic year, except for university holidays.