With the 2024 U.S. presidential election looming, a new report from Kapwing examines deepfakes of public figures. Deepfakes are videos or audio created or manipulated with artificial intelligence (AI) and machine learning to make it look or sound like someone said or did something they never actually did. What the researchers found is alarming for democracy: Donald Trump and his fellow traveler Elon Musk are the most frequently deepfaked politicians. The Kapwing study tracked deepfake video requests made with text-to-video AI tools and found that 64% of the deepfaked videos were of politicians and business leaders.
The most deepfakes
Kapwing, a video content platform, ranked the most frequently deepfaked politicians. Donald Trump topped the list: the Republican candidate was the subject of 12,384 deepfake videos. Trump was followed closely by Elon Musk, the CEO of Tesla and owner of X (formerly Twitter), with over 9,500 deepfakes. Current U.S. President Joe Biden ranked third with 7,596 deepfakes.
The prominence of Trump as a deepfake target underscores the growing risk this technology poses to democracy. Attackers can weaponize deepfaked politicians to spread misinformation and to influence or deceive voters. Eric Lu, the co-founder of Kapwing, says weaponization is already occurring: “The findings of our study clearly show that video deepfakes have already gone mainstream…”
Social media’s role
Social media platforms are often the primary channels for deepfakes, boosting their reach. Kapwing’s study urges platforms to take responsibility for the deepfaked media they disseminate. Lu, who conducted the study, puts the onus on the social media companies, saying, “Social media platforms like YouTube, Instagram, Facebook, and X have an important responsibility to prevent fake news or financial scams early on before the posts go viral.”
When deepfakes attack
Deepfake attacks have already occurred; these are some prominent examples. First, in September 2024, Senator Ben Cardin, chair of the U.S. Senate Foreign Relations Committee, was the victim of a sophisticated deepfake impersonation during a Zoom call. The impersonator posed as Dmytro Kuleba, Ukraine’s former Foreign Affairs Minister, and attempted to elicit politically charged responses about the upcoming U.S. presidential election.
Then, in January 2024, voters in New Hampshire received a deepfake robocall purporting to be from President Joe Biden. The New Hampshire attorney general’s office released a statement debunking the hoax. The Feds later traced the calls to a political consultant.
Another incident took place in November 2023, when leaked deepfake audio appeared to capture London Mayor Sadiq Khan making remarks critical of Armistice Day, which marks the end of World War One. Finally, in April 2018, a video emerged of former U.S. President Barack Obama in which the fake “Obama” utters uncharacteristic profanities.
The deepfakes regulatory challenge
Efforts to regulate deepfakes face hurdles. For instance, in October 2024, a federal judge blocked AB 2839, a California law allowing individuals to sue over election-related deepfakes on the grounds of First Amendment concerns.
This legal challenge highlights the difficulty of crafting effective regulations that address the threats posed by deepfake technology without infringing on free speech.
Another attempt at regulating deepfakes fared better. In February 2024, the Federal Communications Commission outlawed robocalls that contain voices generated by artificial intelligence. The ruling signals that exploiting the technology to scam people and mislead voters will not be tolerated.
As generative AI grows ever more sophisticated, tech platforms and regulators must continue to balance innovation and security.
How to stop deepfakes
Lu proposes several steps to combat deepfakes. First, he calls for watermarking AI-generated content. This would involve building encrypted timestamps into all recording devices so that a watermark is created at the moment of capture. The encrypted watermarks could be based on Public Key Infrastructure (PKI) to distinguish authentic content from deepfakes. Next, the co-founder suggests that social media platforms add clear labels to deepfake videos. He also laments that a comprehensive solution remains elusive.
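To make the capture-time watermarking idea concrete, here is a minimal Python sketch. It is illustrative only: a real PKI scheme of the kind Lu describes would sign with a private key baked into the device and verify against its public certificate, while this sketch stands in an HMAC with a hypothetical per-device secret (`device_key`) for the asymmetric signature.

```python
import hashlib
import hmac
import json
import time


def make_watermark(content: bytes, device_key: bytes) -> dict:
    """Create a signed watermark at the moment of capture (simplified sketch).

    Binds a hash of the captured content to a timestamp, then signs both.
    A real implementation would use an asymmetric (PKI) signature instead
    of this HMAC stand-in."""
    payload = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": int(time.time()),
    }
    # Canonical serialization so signer and verifier hash identical bytes.
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(device_key, msg, hashlib.sha256).hexdigest()
    return payload


def verify_watermark(content: bytes, watermark: dict, device_key: bytes) -> bool:
    """Check that content matches its watermark and the signature is valid."""
    claimed = {k: v for k, v in watermark.items() if k != "signature"}
    if claimed.get("sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was altered after capture
    msg = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(device_key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, watermark["signature"])
```

Any edit to the content or to the watermark’s timestamp breaks verification, which is the property that lets platforms separate camera-original footage from synthetic or doctored media.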
To spot deepfakes, Lu says: “My top three tips are looking for a blurry mouth area or inconsistent movement of the teeth, watching out for unnatural blinking or lack of blinking, and listening for monotone voices and unnatural breathing patterns.”
rb-
The biggest problem with deepfakes may not be the software itself. The perverse thing is that candidates can now deceive voters by claiming that actual events are AI-manufactured deepfakes, discrediting facts along the way.
The Kapwing report paints a concerning picture of deepfakes targeting politicians, particularly Donald Trump. These manipulated videos and audio pose a significant risk to democracy by spreading misinformation and swaying voters. While legal efforts to curb deepfakes face free speech challenges, there’s still hope.
The fight against deepfakes requires a multi-pronged approach. It’s a race against continuously evolving AI, but by combining technological solutions, responsible social media practices, and public awareness, we can safeguard democracy from the manipulative power of deepfakes. After all, a well-informed public is the first line of defense against misinformation.
Ralph Bach has been in IT for a while and has blogged from the Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on Facebook or Mastodon. Email the Bach Seat here.