Federal agency bans AI-voice robocalls
NEW YORK — The Federal Communications Commission (FCC) has unanimously passed a ruling outlawing robocalls that use artificial intelligence-generated voices. The decision underscores the agency's determination to crack down on the use of this technology for scams and voter deception.
The new regulation, issued under the Telephone Consumer Protection Act, targets robocalls that employ AI voice-cloning tools. These tools have been misused in recent instances, including AI-generated robocalls during New Hampshire’s primary election that imitated President Joe Biden’s voice to discourage people from voting.
Effective immediately, the FCC has the authority to fine companies that use AI voices in their calls and to block the service providers that carry them. The ruling also enables call recipients to file lawsuits and gives state attorneys general additional mechanisms to pursue violators.
FCC Chairperson Jessica Rosenworcel emphasized that bad actors have been leveraging AI-generated voices in robocalls to misinform voters, impersonate celebrities, and extort family members. The regulation aims to address these issues promptly.
The ruling classifies AI-generated voices in robocalls as artificial, subjecting them to the same standards outlined in the consumer protection law. Violators may face substantial fines, with a maximum penalty exceeding $23,000 per call. The FCC has previously used the law to combat robocallers interfering in elections, imposing a $5 million fine on two conservative hoaxers who falsely warned people in predominantly Black areas about purported risks of voting by mail.
Call recipients also have the right to take legal action and can recover up to $1,500 in damages for each unwanted call.
However, experts have cautioned that despite the FCC ruling, personalized spam via phone calls, text messages, and social media targeting voters remains a potential concern. Josh Lawson, Director of AI and Democracy at the Aspen Institute, notes that bad actors will continue to test the limits and rattle cages.
Technology to detect AI-generated voices exists today, but detection may grow more difficult as the tools improve. Kathleen Carley, a Carnegie Mellon professor specializing in computational disinformation, notes that AI tools such as voice-cloning software and image generators have already been used in political campaigns worldwide.
Efforts to regulate AI in political campaigns have drawn bipartisan interest in Congress, but no federal legislation has passed. Rep. Yvette Clarke, who introduced a bill to regulate AI in politics, commended the FCC’s ruling and urged Congress to take further action.
As the FCC cracks down on AI-generated robocalls, supporters hope the measures will deter scammers and protect individuals from misleading and fraudulent calls. The ruling also sets an important precedent in the fight against the malicious exploitation of AI.