OpenAI has unveiled a new tool for detecting AI-authored text, aimed at helping to distinguish content written by artificial intelligence from content written by humans. The tool, known as a classifier, is designed to flag AI-generated text, including output from OpenAI’s own products as well as from other AI writing software. However, OpenAI has acknowledged that the classifier has significant limitations and should be used alongside other methods when determining a text’s source.
In OpenAI’s own evaluations, the classifier correctly identified only 26% of AI-written text, while incorrectly flagging 9% of human-written text as AI-generated. Despite these results, the company believes that reliable classifiers can help counter false claims that AI-generated text was written by a human, such as in misinformation campaigns or cases of academic dishonesty.
Teachers, students, and workers have increasingly turned to OpenAI’s ChatGPT chatbot to produce reports and other content, raising concerns about authorship and the potential spread of auto-generated misinformation. ChatGPT’s popularity has prompted the development of detection tools such as GPTZero, created by a Princeton University student to identify AI writing.
Some educational institutions and conferences have implemented restrictions on the use of ChatGPT. New York City’s public schools and the International Conference on Machine Learning have banned its use, except in specific cases.
OpenAI’s release of the classifier comes amid broader efforts to address the challenges posed by AI-generated text and to promote transparency and accountability around its use.