Researchers Discover Algorithms Struggle to Detect Anti-Asian Violence-Provoking Speech

A research group is calling on internet and social media moderators to strengthen their detection and intervention protocols for violent speech. Their study of language-detection software found that algorithms struggle to differentiate anti-Asian violence-provoking speech from general hate speech. Left undetected, threats of violence online can escalate into real-world attacks. Researchers from Georgia Tech and the Anti-Defamation League (ADL) teamed up for the study. They made their discovery while testing natural language processing (NLP) models trained on data crowdsourced from Asian communities.
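The distinction at issue, violence-provoking speech as a narrower category than general hate speech, can be illustrated with a toy text classifier. The sketch below is a minimal, hypothetical baseline (TF-IDF features with logistic regression and made-up placeholder examples), not the study's actual models or its crowdsourced dataset; it only shows the shape of the fine-grained classification task the NLP models were tested on.

```python
# Hypothetical sketch of the kind of fine-grained evaluation described above:
# separating "violence-provoking" text from "general hate" and "neutral" text.
# All example phrases and labels are sanitized placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "they should all be forced out of this country",    # general hate
    "go back to where you came from",                    # general hate
    "someone needs to hurt them before they hurt us",    # violence-provoking
    "it is time to take matters into our own hands",     # violence-provoking
    "the new restaurant downtown has great noodles",     # neutral
    "community center hosts lunar new year festival",    # neutral
]
labels = [
    "general_hate", "general_hate",
    "violence_provoking", "violence_provoking",
    "neutral", "neutral",
]

# Simple baseline: TF-IDF features feeding a multinomial logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Probe sentences: the boundary between the two hostile classes is weak with
# so little signal, the same kind of confusion the study reports at scale.
probes = [
    "we need to strike back against them now",
    "they do not belong in our neighborhood",
]
for text, pred in zip(probes, model.predict(probes)):
    print(f"{pred:>20}  predicted for  {text!r}")
```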