Researchers at Virginia Tech have identified potential geographic biases in how ChatGPT, the popular generative AI model developed by OpenAI, delivers location-specific information about environmental justice issues. The study, published in the journal Telematics and Informatics, raises concerns about the ability of AI to provide contextually grounded knowledge.

The research team, led by Assistant Professor Junghwan Kim, evaluated ChatGPT's performance by prompting it for information on environmental justice issues in each of 3,108 counties in the US. The results showed that while the model could identify challenges in high-density areas, it struggled to offer localized information for many other counties.

These findings highlight the need to address biases in AI models and to refine their capabilities for more accurate and inclusive information provision. Assistant Professor Ismini Lourentzou emphasized the importance of ongoing research to improve large language models like ChatGPT and promote reliable, unbiased AI applications.
Researchers Discover Geographic Biases in AI Models, Raising Questions About Effectiveness