People Struggle to Detect Deepfake Speech, Posing Serious Crime Threat

People struggle to detect deepfake speech, making the technology a potentially serious tool for crime, according to a new study. The research, conducted in the UK, is the first to examine the human ability to identify artificially generated speech in a language other than English. Criminal gangs have already used deepfakes, synthetic media designed to mimic the voice or appearance of a real person, to deceive individuals and extract large sums of money from them.

Deepfakes fall under the umbrella of generative artificial intelligence (AI), a form of machine learning in which algorithms learn the patterns and characteristics of a dataset, such as audio or video recordings of real individuals. Earlier deepfake speech algorithms required thousands of voice samples to produce convincing audio, but the latest pre-trained algorithms can reproduce a person’s voice from just a three-second clip of their speech. Tech giant Apple recently unveiled software that allows users to create a replica of their voice using only 15 minutes of recordings.

To assess how well humans detect deepfake speech, researchers at University College London (UCL) used a text-to-speech (TTS) algorithm trained on two publicly available datasets, one in English and one in Mandarin. They then generated 50 deepfake speech samples in each language, all distinct from the sentences used to train the algorithm so that the model was not simply replicating its original input. A total of 529 participants listened to a mix of the artificially generated and genuine samples and were asked to distinguish authentic speech from deepfake speech.
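For illustration only, the sketch below shows how responses from such a listening test might be scored. The Response structure, the fake_detection_rate function, and the sample data are hypothetical and are not drawn from the UCL study.

```python
# Illustrative sketch: scoring a deepfake listening test.
# Hypothetical data; not taken from the UCL study.
from dataclasses import dataclass

@dataclass
class Response:
    is_fake: bool       # ground truth: was the clip synthetic?
    judged_fake: bool   # did the listener label it as synthetic?

def fake_detection_rate(responses: list[Response]) -> float:
    """Fraction of genuinely fake clips that listeners flagged as fake."""
    fakes = [r for r in responses if r.is_fake]
    if not fakes:
        return 0.0
    return sum(r.judged_fake for r in fakes) / len(fakes)

# Hypothetical example: 3 of 4 fake clips correctly flagged (75 percent),
# close to the roughly 73 percent reported in the study.
sample = [
    Response(is_fake=True, judged_fake=True),
    Response(is_fake=True, judged_fake=True),
    Response(is_fake=True, judged_fake=False),
    Response(is_fake=True, judged_fake=True),
    Response(is_fake=False, judged_fake=False),
]
print(f"Fake clips detected: {fake_detection_rate(sample):.0%}")
```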

The findings, published in the journal PLOS ONE, revealed that participants correctly identified the fake speech samples only 73 percent of the time, and training to recognize deepfake speech improved detection rates only marginally. English and Mandarin speakers had similar detection rates, although they relied on different speech features to distinguish authentic from artificial speech: English speakers often referenced breathing, while Mandarin speakers more frequently mentioned cadence, pacing between words, and fluency.

The study’s lead author, Kimberly Mai, emphasized that the results confirm the inability of humans to reliably detect deepfake speech, even with training. Mai, a machine learning PhD student at UCL, also highlighted the possibility that humans may be even less capable of spotting deepfake speech created using newer, more advanced technology. The team behind the research plans to develop improved automated speech detectors to combat this growing threat.

While generative AI audio technology offers benefits such as increased accessibility for individuals with limited speech capabilities or those who have lost their voice due to illness, there are concerns about its potential misuse by criminals and nation-states to cause significant harm to individuals and societies. Recorded incidents of criminals employing deepfake speech include a case in 2019 where the CEO of a British energy company was tricked into transferring hundreds of thousands of pounds to a fraudulent supplier by a deepfake recording of his boss’s voice.

Professor Lewis Griffin, the senior author of the study from UCL, pointed out that as generative AI technology becomes more sophisticated and widely available, it is crucial for governments and organizations to develop strategies to prevent abuse. However, he also stressed the importance of recognizing the positive possibilities that these tools present. While caution is necessary, the potential benefits should not be overlooked.

In conclusion, the study’s findings underscore the struggles faced by individuals in detecting deepfake speech, highlighting a potential crime threat. The research team’s focus now turns to developing more effective automated speech detectors. As generative AI technology continues to advance, measures must be taken to address the risks associated with its misuse, while also acknowledging the positive advancements in the field.

Sophia Anderson
Sophia Anderson is an accomplished crime reporter at The Reportify, specializing in investigative journalism and criminal justice. With an unwavering commitment to uncovering the truth, Sophia fearlessly delves into the depths of criminal cases to shed light on the darkest corners of society. Her keen analytical skills and attention to detail enable her to piece together complex narratives and provide comprehensive coverage of high-profile trials, crime scenes, and law enforcement developments. Sophia's dedication to justice and her ability to present facts with clarity and sensitivity make her articles an essential resource for readers seeking an in-depth understanding of the criminal landscape. She can be reached at sophia@thereportify.com for any inquiries or further information.
