Google Announces New Rules on AI-Generated Content of Deceased Minors on YouTube
In a recent policy update, Google has revealed stricter regulations on AI-generated content featuring deceased minors on YouTube. The move comes as content creators exploit artificial intelligence to recreate victims of violent crimes, causing distress among the victims’ families and sparking widespread criticism.
Effective January 16, the new rules aim to prevent content creators from using virtual recreations of minors who have lost their lives in violent crimes. Google’s YouTube policy update states that content which “realistically simulates deceased minors or victims of deadly or well-documented major violent events describing their death or violence experienced” will be removed.
Failure to adhere to the anti-harassment rules currently in place results in a warning and the removal of the offending content. Violators can also complete policy training, which allows the warning to expire after 90 days. However, a second violation within those 90 days earns the user a strike, and if three strikes are accumulated within the same period, the creator’s channel will be terminated.
While these consequences already exist, Google maintains that extreme policy abuses may warrant even harsher measures. “We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service,” states Google’s harassment policy. “We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation.”
This policy update is a response to a concerning trend observed among true crime content creators on various social media platforms. In some cases, these creators have begun utilizing AI-generated images and voices to mimic victims of violent crimes in their content. The Washington Post highlighted a specific instance in August 2023, noting the circulation of a video on TikTok that depicted James Bulger, a young boy who was kidnapped and murdered in the UK in 1993.
Although TikTok has expressed its disapproval and stated it was removing such videos upon discovery, many still remain accessible on YouTube. Denise Fergus, James Bulger’s mother, expressed her disgust in an interview with the Daily Mirror, stating, “It is one thing to tell the story, I have not got a problem with that. Everyone knows the story of James anyway. But to actually put a dead child’s face, speaking about what happened to him, is absolutely disgusting. It is bringing a dead child back to life. It is wrong.”
With these new rules in effect, Google aims to curb the inappropriate use of AI-generated content involving deceased minors. By taking a firmer stance, the company seeks to safeguard the well-being of victims’ families and prevent the exploitation of tragic events for entertainment purposes.
As concerns grow regarding the potential risks and ethical implications surrounding AI-generated content, it remains crucial for platforms like YouTube to mitigate the spread of harmful or insensitive material. The regulation of such content will continue to be a delicate balancing act, as platforms strive to protect both freedom of expression and the rights of those affected by violent crimes.
As we move forward, it is essential for content creators, platform administrators, and users alike to engage in responsible and considerate practices, ensuring that AI technologies are utilized responsibly and with empathy for the victims and their families.