Snap Deploys Controversial AI Chatbot in Snapchat, Exposing Millions of Teens: Risks and Solutions


Ever since the launch of ChatGPT in late 2022, companies have been racing to deploy their own versions of generative AI tools, sometimes integrating them into existing products used by children and teens. For example, the experimental integration of an AI chatbot into Snapchat, a messenger app popular with teens (and one that has just been issued a preliminary enforcement notice by the U.K. Information Commissioner), exposes over 109 million children between the ages of 13 and 17 to the chatbot daily. Moreover, in the free version of the app, the AI chatbot is, by default, the first friend in everyone’s conversation list.

As such, these children and teens inadvertently become the test subjects of technologies whose risks haven’t been fully studied and understood, let alone mitigated. Building on my prior article focused on plagiarism and cyberbullying, I explore the risks of mis- and disinformation and age-inappropriate advice, what the tech industry can do to address these risks and why this matters from a privacy regulation perspective.

Three characteristics of generative AI amplify the potential for harm arising from mis- and disinformation. The first is the ease and incredible efficiency of content creation; the second is the well-polished and authoritative-sounding form of the output, whether ChatGPT played fast and loose with reality or was as accurate as truth itself.

Third, generative AI has the ability to appear human, form emotional connections and become a trusted friend like a conventional search engine never could. This is because ChatGPT’s output appears conspicuously human in its conversation style, as it mimics the input on which it was trained. This input includes chat histories on Reddit, fictional conversations from books and who knows what else. In combination, these three characteristics may significantly increase the likelihood that the output produced by ChatGPT is taken for sound information.

Here’s what the tech industry can do to protect against mis- and disinformation:

Real-Time Fact-Checking And Grounding: A viable solution for enhancing the trustworthiness of generative AI could be the development of models that incorporate real-time fact-checking and grounding. Grounding in this context refers to anchoring the AI-generated information to validated and credible data sources. The goal is to provide real-time credibility assessments alongside the generated content, thereby minimizing the spread of misinformation.
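To make the idea concrete, here is a minimal sketch of such a grounding step, assuming a hypothetical trusted corpus and a crude lexical-overlap check standing in for a real retrieval-plus-entailment pipeline. The source URLs, thresholds, and function names are illustrative assumptions, not any vendor’s actual API:

```python
# Minimal grounding sketch: check a generated answer against a trusted corpus
# and attach a credibility score. All sources and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    text: str
    supporting_sources: list[str]
    credibility: float  # 0.0 (unsupported) to 1.0 (well supported)

# Placeholder trusted corpus; a real system would use a curated source index.
TRUSTED_CORPUS = {
    "https://example.org/health-guidelines": "Alcohol is unsafe for minors and should be avoided entirely.",
    "https://example.org/app-safety": "Parental controls exist to protect minors and should not be circumvented.",
}

def overlap_score(claim: str, document: str) -> float:
    """Crude lexical-overlap proxy for support; a production system would use
    dense retrieval plus a natural-language-inference model instead."""
    claim_terms = set(claim.lower().split())
    doc_terms = set(document.lower().split())
    return len(claim_terms & doc_terms) / max(len(claim_terms), 1)

def ground(answer: str, threshold: float = 0.3) -> GroundedAnswer:
    """Collect corroborating sources and derive a simple credibility score."""
    sources = [url for url, doc in TRUSTED_CORPUS.items()
               if overlap_score(answer, doc) >= threshold]
    credibility = min(1.0, len(sources) / 2)  # more corroboration, higher score
    return GroundedAnswer(answer, sources, credibility)

result = ground("Alcohol is unsafe for minors and should be avoided.")
print(f"credibility={result.credibility:.1f}, sources={result.supporting_sources}")
```

The heuristic itself is disposable; the contract is what matters: every generated answer ships to the user together with its supporting sources and a credibility assessment.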

Transparency Labels: Similar to food labeling, tech companies could implement tags that signify the nature of generated content. For instance, “AI-Generated Advice” or “Unverified Information” tags could be useful. This could counteract the impression of dealing with a human and encourage increased scrutiny.
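A brief sketch of what such labeling might look like, reusing the kind of credibility score produced by the grounding step above; the label taxonomy and field names are assumptions for illustration only:

```python
# Hypothetical transparency labels: wrap every chatbot reply in a
# machine-readable provenance tag before it is displayed to the user.
from enum import Enum

class ContentLabel(Enum):
    AI_GENERATED_ADVICE = "AI-Generated Advice"
    UNVERIFIED_INFORMATION = "Unverified Information"
    VERIFIED_INFORMATION = "Verified Information"

def label_reply(reply: str, credibility: float) -> dict:
    """Pick a label from the credibility score and attach a plain-language
    disclosure, so the client can render both alongside the reply."""
    label = (ContentLabel.VERIFIED_INFORMATION if credibility >= 0.8
             else ContentLabel.UNVERIFIED_INFORMATION)
    return {
        "label": label.value,
        "disclosure": "This reply was written by an AI, not a person.",
        "text": reply,
    }

print(label_reply("Vitamin C prevents all colds.", credibility=0.1))
```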

As Aza Raskin, co-founder of the Center for Humane Technology, and others have demonstrated, even when a chatbot is informed that its conversation partner is underage, this information can quickly be disregarded, and the conversation can drift into advice ranging from how to hide the smell of alcohol and marijuana to how to conceal a forbidden app from the user’s parents.

The concerns raised in my previous article and this one aren’t new, but they’re addressable, and addressing them is critical, as both pieces show how the risks are exacerbated when children use generative AI tools such as ChatGPT.

Utah has recently passed legislation requiring social media companies to implement age verification. The law will come into effect in March 2024.

For many digital services provided directly to children, the consent of a minor is only valid if given or authorized by the holder of parental responsibility, as per Art. 8 of the GDPR. Other privacy laws are even stricter and require parental approval for any processing of children’s personal data, such as Section 14 of Quebec’s Law 25.

In practice, this consent requirement may be hard to implement, as it isn’t immediately obvious in all instances whether personal data pertains to a child, whether in the data originally scraped from the internet to train ChatGPT or in the prompts submitted through a registered OpenAI account.

These regulatory requirements and the difficulties around obtaining valid consent from children emphasize the need for technological solutions that prevent the collection of personal information from children and protect them from the risks that ensue from interacting with AI.
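As one illustration, here is a minimal sketch of a consent gate that blocks processing of a minor’s prompt unless parental consent is on record. The age threshold follows GDPR Art. 8’s default of 16 (member states may lower it to 13), and the function names and flow are hypothetical:

```python
# Hypothetical consent gate: refuse to store or process a prompt unless the
# user can consent on their own or parental consent has been recorded.
from datetime import date

CONSENT_AGE = 16  # GDPR Art. 8 default; member states may lower it to 13

def may_process_personal_data(birthdate: date, parental_consent: bool) -> bool:
    """True if the user is old enough to consent alone, or a holder of
    parental responsibility has approved the processing."""
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    return age >= CONSENT_AGE or parental_consent

def handle_prompt(prompt: str, birthdate: date, parental_consent: bool) -> str:
    if not may_process_personal_data(birthdate, parental_consent):
        return "Processing blocked: parental consent required."
    return f"Prompt accepted for processing: {prompt!r}"

print(handle_prompt("How old is the moon?", date(2011, 5, 1), parental_consent=False))
```

The hard part, as noted above, is not the gate itself but reliably knowing that a user is a child in the first place, which is exactly why age assurance and consent records need to sit upstream of any data collection.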

Neha Sharma
Neha Sharma is a tech-savvy author at The Reportify who delves into the ever-evolving world of technology. With her expertise in the latest gadgets, innovations, and tech trends, Neha keeps you informed about all things tech in the Technology category. She can be reached at neha@thereportify.com for any inquiries or further information.
