Google Introduces Clear Disclosure Requirement for AI-Generated Political Ads
Google has announced a new rule requiring political advertisements that use artificial intelligence (AI) to include a prominent disclaimer when imagery or audio has been synthetically altered. The update to Google’s political content policy takes effect in mid-November, almost a year ahead of the U.S. presidential election. The rule will also apply to campaign ads ahead of next year’s elections in India, South Africa, the European Union, and other regions where Google already verifies election advertisers.
The use of generative AI tools in political advertising is not new, but the technology is becoming easier to use and more realistic. Some presidential campaigns in the 2024 race, including that of Florida GOP Governor Ron DeSantis, have already used it. The Republican National Committee, for instance, released an entirely AI-generated ad depicting a dystopian future if President Joe Biden were reelected. The ad featured convincing fake images of boarded-up storefronts, military patrols, and waves of panicked immigrants.
In response to the rise of AI-generated political ads, Google’s new policy aims to ensure transparency and accountability. Political advertisements on Google-owned platforms, particularly YouTube, will now be required to carry a clear disclosure, placed where users are likely to notice it, informing viewers that the content has been altered using AI.
While Google’s announcement is seen as a positive step, there are those who believe voluntary commitments alone are insufficient. Democratic U.S. Senator Amy Klobuchar, who co-sponsors legislation mandating disclaimers on deceptive AI-generated political ads, expressed cautious approval of Google’s move. She emphasized the need for stronger regulations and called for additional measures to protect against misinformation.
It is worth noting that Google’s new policy does not ban the use of AI in political advertising entirely. The disclosure requirement exempts AI-altered or AI-generated content that is inconsequential to the claims made in the ad, as well as routine editing techniques such as image resizing, cropping, color correction, defect correction, and background edits.
This move by Google comes at a time when the Federal Election Commission is also considering regulations on AI-generated deepfakes in political ads ahead of the 2024 election. Deepfakes can involve synthetic voices of political figures saying things they never actually said. Several states have also discussed or passed legislation related to deepfake technology, highlighting the growing concerns surrounding the potential misuse of AI in political campaigns.
As the world grapples with the emergence of AI-generated political content, the measures taken by tech giants like Google will play a crucial role in ensuring transparency and safeguarding the integrity of democratic processes. By requiring clear disclosure for AI-altered political ads, Google aims to empower users to make informed decisions and protect against the spread of misinformation. However, the ongoing debate around regulation and accountability in the AI-driven political landscape is far from over.