Increase in Deepfake Blackmail and Morphed Image Threats Targeting Regular Citizens
Privacy experts have raised concerns about the rise in deepfake blackmail and morphed image threats targeting regular citizens. What was once primarily limited to celebrities and public figures is now increasingly being used to exploit and manipulate everyday individuals. According to cyber experts, there has been a significant increase in complaints related to morphed images or deep nudes created using advanced tools.
Many influencers and public personalities have reported being blackmailed with threats of releasing their deepfake images unless they pay a ransom. This alarming trend has prompted a closer look at the accessibility and scale of these technologies. Websites and apps like Undress AI, Magic Eraser AI, and Soulgen AI have seen a surge in searches, indicating growing interest in AI-powered image apps that generate synthetic pictures on demand.
Some of these apps allow users to upload a photo and receive a fabricated nude version of the person pictured, either for free or at a cost. Certain websites go further, letting users input specifications such as height, skin tone, and body type to create a custom deepfake image. This widespread availability and ease of access are causing concern among privacy experts.
The blurred line between public and private life further exacerbates the issue. Public policy expert Kanishk Gaur warned against the dangers of sharenting, where parents share sensitive content about their children online. Children between the ages of 11 and 16 are particularly vulnerable to manipulation through advanced tools that can morph or generate deepfakes with their images.
These technologies have also been exploited by fraudsters, who morph nude images from photos obtained from a victim's gallery and use them to extort money. Experts emphasize the need for a two-pronged solution involving technology and regulation. They propose the development of classifiers that can distinguish between genuine and fake content. Additionally, they recommend that AI-generated content be clearly labeled as such, a precaution that could be incorporated into legislation like the Digital Personal Data Protection Bill.
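To make the labeling recommendation concrete, here is a minimal sketch of how a platform might attach a verifiable "AI-generated" tag to a piece of content. The label schema, key handling, and function names below are illustrative assumptions, not part of any standard or bill mentioned above; real provenance systems (such as content-credential schemes) are considerably more involved.

```python
import hmac
import hashlib
import json

# Placeholder signing key for illustration only; a real system would use
# properly managed keys, not a hard-coded secret.
SECRET_KEY = b"platform-signing-key"

def label_content(image_bytes: bytes, generator: str) -> dict:
    """Attach a signed 'AI-generated' provenance label to content bytes."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    label = {"ai_generated": True, "generator": generator, "sha256": digest}
    payload = json.dumps(label, sort_keys=True).encode()
    # HMAC ties the label to the signing key, so it cannot be forged
    # or silently altered by a third party.
    label["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return label

def verify_label(image_bytes: bytes, label: dict) -> bool:
    """Check that the label matches the content and its signature is intact."""
    claimed = dict(label)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed.get("sha256") == hashlib.sha256(image_bytes).hexdigest()
    )

fake_image = b"\x89PNG...synthetic pixels"
tag = label_content(fake_image, generator="example-model")
print(verify_label(fake_image, tag))          # True: label intact
print(verify_label(b"tampered bytes", tag))   # False: content changed
```

The design point is that a plain-text "AI-generated" caption can simply be deleted, whereas a cryptographically bound label lets downstream platforms detect both stripped tags and altered content.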
While some chatbots caution against misuse and unauthorized explicit content creation, there is currently no standardized verification process or checks in place to protect individuals’ privacy or ensure consent. Politicians, in particular, are vulnerable targets as troll armies actively seek to tarnish reputations and exploit vulnerabilities.
The mass availability of deepfake technologies requires proactive measures to prevent their misuse. Governments and technology companies must work together to address these challenges, striking a balance between the creative applications of AI and protecting individuals from harm. Users should exercise caution when sharing personal information, especially photos of children, in an increasingly interconnected and privacy-sensitive world.
In essence, the rise of deepfake blackmail and morphed image threats targeting regular citizens poses significant risks, requiring greater awareness, responsible usage, and comprehensive protective measures from both individuals and authorities alike.