The Misinformation Combat Alliance (MCA) and Meta on Monday announced a collaboration to launch a fact-checking helpline on WhatsApp to tackle Artificial Intelligence (AI)-generated deepfakes.
The helpline, which is expected to be available for public use in March this year, will allow users to flag deepfakes by sending them to a dedicated WhatsApp chatbot.
The chatbot will offer multilingual support in English and three regional languages: Hindi, Tamil, and Telugu, Meta said.
"We recognise the concerns around AI-generated misinformation and believe combatting this requires concrete and cooperative measures across the industry," said Shivnath Thukral, Director, Public Policy India, Meta.
"Our collaboration with MCA to launch a WhatsApp helpline dedicated to debunking deepfakes that can materially deceive people is consistent with our pledge under the Tech Accord to Combat Deceptive Use of AI in 2024 Elections," Thukral added.
Last week, a group of 20 leading tech companies across the globe, including Microsoft, Meta, Google, Amazon, and IBM, signed an accord to combat AI-generated misinformation ahead of the 2024 elections.
In November last year, the Union Minister for Electronics and Information Technology (MeitY), Ashwini Vaishnaw, had stressed watermarking and labelling of content as an approach to tackling deepfakes, after holding two rounds of discussions with intermediaries on the issue of deepfakes and misinformation.
The Minister had said that though watermarking and labelling were basic requirements, many miscreants had found ways to get around them.
The Indian government has also said that it will bring stringent provisions to deal with deepfakes under the IT Rules, 2021, through a fresh amendment.
As part of the collaboration with Meta in India, MCA will set up a central 'Deepfakes Analysis Unit' to manage all inbound messages received on the WhatsApp helpline.
Further, the unit will work closely with other member fact-checking organisations, as well as industry partners and digital labs, to assess and verify the content and respond to the messages accordingly, quashing false claims and misinformation, according to a press release.
"The Deepfakes Analysis Unit (DAU) will serve as a critical and timely intervention to arrest the spread of AI-enabled disinformation among social media and internet users in India," said Bharat Gupta, President, Misinformation Combat Alliance.
"The initiative will see International Fact-Checking Network (IFCN) signatory fact-checkers, journalists, civic tech professionals, research labs and forensic experts come together, with Meta's support," he added.
The programme will follow a four-pillar approach of detection, prevention, reporting, and driving awareness in order to stop the spread of deepfakes, the press release said.
Earlier this month, the social media giant also announced an 'AI labelling policy', under which it plans to collaborate with other industry partners to develop 'common technical standards' that will help in tagging AI-generated content.
"Since AI-generated content appears across the internet, we've been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI)," the company said in a blog post.
Meta, which owns large social media platforms including WhatsApp, Facebook, and Instagram, had also introduced a fact-checking programme in India in partnership with 11 independent partners in 2022.