Deepfakes pose significant risks globally, particularly in cases of nonconsensual pornography. One driver of the problem is the accessibility of AI tools such as DALL-E and Stable Diffusion, which let people with little technical knowledge create deepfakes.
Deepfakes are images, videos, or audio recordings that have been manipulated by an AI algorithm to swap in a real person's likeness, often that of a well-known figure, so convincingly that the result appears genuine.
A 2019 study by Deeptrace titled The State of Deepfakes found that 96% of deepfake videos were pornographic, affecting millions of people globally, according to a Business Insider report. The most recent controversy was the circulation of deepfake porn images of Taylor Swift identified on a celebrity porn website in January. The incident drew condemnation from the pop superstar's fans and from tech leaders, and it prompted government action to address rising concerns over AI risks.
Moreover, deepfakes can be used to falsely incriminate people, sway public opinion, or blackmail victims. As AI technology advances and becomes more accessible, deepfakes are growing both more dangerous and harder to identify.
To address this concern, the Biden administration has endorsed digital watermarking, while Google and Meta are implementing digital credentials that label AI-generated content, raising public awareness and making removal easier.
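To make the idea concrete, here is a toy Python sketch of the simplest form of invisible watermarking, least-significant-bit (LSB) embedding. This is purely illustrative: the schemes these companies and the administration back, such as C2PA credentials or Google DeepMind's SynthID, are designed to survive cropping, compression, and tampering, which a bare LSB mark does not.

```python
import numpy as np

def embed_bits(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of each pixel."""
    marked = pixels.flatten()  # flatten() returns a copy
    if bits.size > marked.size:
        raise ValueError("watermark longer than image capacity")
    # Clear each target pixel's LSB, then OR in the watermark bit.
    marked[: bits.size] = (marked[: bits.size] & 0xFE) | bits
    return marked.reshape(pixels.shape)

def extract_bits(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the first n_bits watermark bits back out of the LSBs."""
    return pixels.flatten()[:n_bits] & 1

# Embed the tag "AI" into a random grayscale image and read it back.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
tag_bits = np.unpackbits(np.frombuffer(b"AI", dtype=np.uint8))
marked = embed_bits(image, tag_bits)
assert np.array_equal(extract_bits(marked, tag_bits.size), tag_bits)
# No pixel changed by more than 1, so the mark is invisible to viewers.
assert np.max(np.abs(marked.astype(int) - image.astype(int))) <= 1
```

The point the sketch captures is the trade-off at the heart of all watermarking: the signal must be strong enough for software to recover, yet weak enough that human viewers cannot see it.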
OpenAI is developing detection tools, including visual watermarks and hidden metadata, that align with Coalition for Content Provenance and Authenticity (C2PA) standards. Dedicated platforms like Sensity, designed to verify content origins, add further layers of protection.
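As a rough illustration of the metadata side: in JPEG files, C2PA provenance data travels in JUMBF boxes carried by APP11 marker segments, so a first-pass check can simply scan the file's headers for such a segment. The sketch below only detects that a manifest appears to be present; genuine verification means parsing the manifest and validating its cryptographic signatures, for instance with the open-source c2pa SDK.

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically check a JPEG for C2PA provenance metadata.

    C2PA manifests in JPEG are stored as JUMBF boxes inside APP11
    (0xFFEB) marker segments. This detects the segment only; it does
    NOT verify the manifest's signatures.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker == 0xDA:                 # start of scan: headers are over
            break
        length = struct.unpack(">H", data[i + 2 : i + 4])[0]
        segment = data[i + 4 : i + 2 + length]
        # APP11 segment whose payload contains a JUMBF superbox
        if marker == 0xEB and b"jumb" in segment:
            return True
        i += 2 + length
    return False
```

A positive result here means only "provenance metadata is attached"; because metadata can be stripped or forged, tools that rely on C2PA must always check the signature chain before trusting it.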
Emerging defensive tools like Nightshade aim to safeguard images against AI manipulation. They add imperceptible perturbations that disrupt AI processing while preserving the image's intended appearance for human viewers.
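Nightshade's actual method optimizes its perturbations to mislead model training, but the core idea, a change bounded so tightly that humans cannot see it, can be sketched in a few lines. In this illustrative stand-in, a random perturbation replaces Nightshade's optimization step, and the epsilon budget is arbitrary.

```python
import numpy as np

def perturb(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a bounded, imperceptible perturbation to an 8-bit image.

    Real tools like Nightshade optimize the perturbation to disrupt
    AI training; a random direction stands in for that step here.
    """
    rng = np.random.default_rng(seed)
    # Random perturbation, bounded so no pixel moves more than epsilon.
    delta = rng.uniform(-epsilon, epsilon, size=image.shape)
    perturbed = image.astype(np.float64) + delta
    return np.clip(np.rint(perturbed), 0, 255).astype(np.uint8)

image = np.random.default_rng(1).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
shaded = perturb(image)
# Every pixel stays within the visibility budget of ~2 intensity levels.
assert np.max(np.abs(shaded.astype(int) - image.astype(int))) <= 2
```

The design insight these tools exploit is the gap between human and machine perception: a shift of a couple of intensity levels per pixel is invisible to people yet can meaningfully alter what a model extracts from the image.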
Legislative efforts are underway, with at least 10 states having enacted legal protections against deepfakes. The Federal Communications Commission has banned AI-generated voices in robocalls, and a bipartisan federal bill, the DEFIANCE Act, seeks civil remedies for victims of sexual deepfakes.
TechTimes previously reported that a proposed expansion of the Federal Trade Commission's (FTC) deepfake ban is reportedly imminent, aiming to extend coverage to all consumers against AI-generated impersonation. The existing ban primarily covers impersonation of businesses and government agencies. The FTC acknowledges the need for broader protection, citing a growing number of complaints about impersonation fraud.
Alongside this proposed extension, the FTC is considering making it unlawful for an AI platform to provide text, video, or image generation products or services that it knows, or has reason to suspect, are being used to deceive consumers through impersonation.
Critical updates to the current government and business impersonation rule also give the agency more robust tools against con artists: the rule lets the FTC file federal court cases directly against scammers to reclaim earnings from their government or business impersonation schemes.
Despite these legislative efforts, free-speech debates complicate the issue, particularly in the UK, where the Online Safety Act criminalizes distributing deepfake porn but not creating it. Criminalizing creation as well would serve as a deterrent, underscoring the importance of legal obstacles that reduce the intentional creation and distribution of deepfakes.
Currently, victims of deepfake porn have the option to have offensive AI-generated content taken down. To report deepfake porn, victims or their representatives can use Google's removal request page, the sole platform currently offering this service.
While generative AI has made content creation easy, with both positive and negative consequences, affected individuals can seek removal of such material from search engines. Google has committed to implementing additional safeguards to address the issue on its platform.