Government Warns of AI Threats: Terrorist Advancements, Voice Cloning, and Toxic Medicine
Artificial Intelligence (AI) could pose serious threats within the next 18 months, according to newly released government papers. The documents warn that terrorists could exploit AI to help develop chemical, biological, and radiological weapons by 2025.
Ministers are already preparing for a possible unemployment crisis caused by widespread AI-driven job losses by the end of the decade. In a recent speech, British Prime Minister Rishi Sunak stressed the urgent need for action to ensure citizens’ safety and peace of mind.
The UK government has published papers outlining the risks and opportunities associated with AI. The reports identify cyber-attacks, fraud, and the production of child sexual abuse images as among the biggest risks over the next two years.
One of the major concerns raised in the documents is the rise of voice cloning. Criminals are predicted to be early adopters of the technology, exploiting it for sophisticated scams, fraud, impersonation, ransomware attacks, currency theft, data harvesting, and child sexual abuse exploitation. By using AI to clone someone’s voice, scammers can place calls impersonating that person, contacting their relatives to extort money or acquire sensitive information.
Furthermore, the government papers highlight the potential for AI to enhance terrorist capabilities, including propaganda dissemination, recruitment facilitation, attack planning, and the development of chemical, biological, and radiological weapons. While existing barriers restrict terrorists from acquiring the necessary components and equipment for such weapons, the government acknowledged that generative AI could accelerate the dismantling of these barriers.
The erosion of trust in online information due to the prevalence of deepfakes is another significant concern. The dissemination of fake news, personalized disinformation, the manipulation of financial markets, and the undermining of the criminal justice system are all risks associated with the pollution of information caused by deepfakes. The report predicts that synthetic media may constitute a substantial portion of online content by 2026, potentially eroding public trust in the government, exacerbating polarization, and fueling extremism. Misinformation spread by AI could also lead individuals to make dangerous decisions, such as taking toxic substances as medicine.
In terms of economic consequences, the government warned that misuse of AI could cause societal unrest and economic damage. Large-scale theft of trade secrets and organized crime targeting members of the public could compound an unemployment crisis in which job losses outweigh any jobs AI creates.
The government papers also discuss the escalating harm caused by cyber-attacks, including financial, mental, and emotional damage. As AI advances further, these attacks could inflict novel harms, such as the emotional distress caused by fake kidnapping or sextortion scams. The use of AI to deceive victims, as demonstrated by a recent case in Arizona in which a woman received a call using a cloned voice of her daughter to stage a fake kidnapping, reinforces the need for proactive measures against emerging threats.
In light of these risks, the UK government is taking proactive steps to address AI safety. The Prime Minister will host an international summit on AI safety at Bletchley Park, aiming to inform discussions and promote collaboration in tackling the challenges posed by AI.
It is crucial for governments, organizations, and technology developers to be aware of the potential dangers and take immediate action to mitigate the risks associated with AI. By doing so, we can strive to harness the benefits of AI while safeguarding the well-being and security of individuals and societies.