New regulation, penalties for both creator and platform in offing to deal with deepfakes
New Delhi, Nov 23 – The Indian government is set to introduce new regulations aimed at curbing the spread of deepfake technology, with penalties proposed for both the creators and the platforms hosting such content. IT and Telecom Minister Ashwini Vaishnaw emphasized the threat deepfakes pose to democracy and the need for action during a meeting with stakeholders from social media platforms, Nasscom, and AI experts.
"Deepfakes have emerged as a new threat to democracy. These (can) weaken trust in society and its institutions," Vaishnaw said. In response to this concern, the government is considering a range of protective measures. These include watermarking of AI-generated content, deepfake detection mechanisms, rules to address data bias, privacy regulations, and safeguards against monopolistic practices.
The rise of deepfakes has raised serious concerns in recent years, particularly with regard to how manipulated videos can deceive viewers and sway public opinion. By introducing penalties for both the creators and the platforms hosting this content, the government aims to discourage the creation and dissemination of deepfakes, protect the sanctity of democratic processes and institutions, and thereby preserve public trust.
The regulations being considered are expected to tackle various aspects of deepfakes, including the creation, distribution, and storage of such content. Watermarking, which embeds unique markers into AI-generated content, would make such material easier to identify and trace, while deepfake detection mechanisms would play a crucial role in flagging manipulated videos.
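To illustrate the watermarking idea in the simplest possible terms, the sketch below tags a generated image with a provenance marker stored in its metadata and reads it back. This is only an illustrative assumption, not the scheme the government has proposed: the key name, values, and file paths are hypothetical, and metadata tags of this kind are easily stripped, whereas the techniques under discussion would rely on more robust, tamper-resistant marks.

# Minimal illustrative sketch: tag an AI-generated PNG with a provenance
# marker via Pillow metadata, then read the marker back. Hypothetical
# key/value and file names; real watermarking is far more robust than this.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

MARKER_KEY = "ai-generated"                       # hypothetical metadata key
MARKER_VALUE = "true;generator=example-model"     # hypothetical marker payload

def embed_marker(src_path: str, dst_path: str) -> None:
    """Copy an image, attaching a provenance marker as a PNG text chunk."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text(MARKER_KEY, MARKER_VALUE)
    img.save(dst_path, pnginfo=meta)

def read_marker(path: str) -> str | None:
    """Return the provenance marker if present, else None."""
    img = Image.open(path)
    return img.text.get(MARKER_KEY) if hasattr(img, "text") else None

if __name__ == "__main__":
    embed_marker("generated.png", "generated_marked.png")
    print(read_marker("generated_marked.png"))

In practice, detection and traceability proposals tend to favour marks embedded in the content itself or cryptographically signed provenance records, precisely because metadata like the above can be removed without altering the video or image.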
Professor Navin Gulia, an AI expert who attended the stakeholder meeting, emphasized the need for stringent regulations. "To effectively combat the growing threat of deepfakes, we need a comprehensive framework that addresses not only the creation and distribution but also the ethical implications of such technology," Gulia said.
Furthermore, the government intends to address concerns surrounding data bias in AI algorithms utilized for creating deepfakes. These regulations will ensure that the technology is not exploited to manipulate public opinion or promote false narratives. Additionally, privacy regulations will be implemented to safeguard individuals from being targeted or defamed through the use of manipulated videos.
The government’s approach reflects a commitment to maintaining a fair and transparent digital landscape. By introducing penalties for both creators and platforms, it aims to create a deterrent that will discourage the creation and dissemination of deceptive deepfake content. This multifaceted approach, with regulations covering watermarking, detection, data bias, privacy, and anti-concentration measures, will help combat the negative impact of deepfakes on society and democracy.
While combating deepfakes poses numerous challenges, the Indian government is taking a proactive stance to address this issue. By collaborating with social media platforms, AI experts, and industry bodies like Nasscom, it aims to strike the right balance between technological innovation and ethical responsibility. These proposed regulations affirm the government’s commitment to preserving the integrity of democratic processes and protecting society from the threats posed by deepfake technology.
In conclusion, as deepfake technology becomes increasingly advanced and accessible, governments worldwide are recognizing the need for robust regulations. The Indian government's plan to introduce new rules, backed by penalties for both creators and platforms, marks a significant step towards combating the rising threat of deepfakes. Through a comprehensive framework covering watermarking, detection, data bias, privacy, and anti-concentration measures, the government aims to safeguard democracy, rebuild trust in institutions, and mitigate the damage caused by manipulated videos.