Harvard Law School Professor Finds ChatGPT Invents Fake Law Less Than The Supreme Court
ChatGPT, a consumer-facing generative artificial intelligence tool, has been found to invent fake law in its attempts to fulfill requests. In a recent experiment, Harvard Law School professor Lawrence Lessig put ChatGPT to work on campaign finance law. While the tool accurately reflected existing caselaw, its initial response did not align with the constitutional history of the United States prior to 2010.
Lessig’s inquiry focused on the contributions aspect of campaign finance law, specifically the importance of restrictions on unchecked money spent on campaign ads within 30 days of an election. This provision, akin to limits on placing signs near polling places, was aimed at curbing corruption and preserving the integrity of elections. ChatGPT, however, failed to grasp this crucial element, prompting Lessig to keep probing until the tool reached the limits of its understanding.
According to Lessig, the experiment exposed ChatGPT’s limitations, particularly in its underlying knowledge base. That suggests room for improvement through tailored legal products that enhance the AI’s reasoning. But there is also a risk: a tool that leans too heavily on existing caselaw may end up replicating nonsensical conclusions handed down by the Supreme Court.
The experiment serves as a reminder that the quality of output from AI tools like ChatGPT relies heavily on the quality of input. Garbage in, garbage out. In an era where misinformation and questionable content abound, it is crucial to ensure that AI tools are provided with accurate and reliable information to avoid perpetuating falsehoods.
As the field of legal AI continues to evolve, researchers and developers must strike a balance between delivering confident responses and avoiding the parroting of flawed or nonsensical legal arguments. Getting that balance right leaves room for the kind of deeper probing and exploration that experts like Lessig demonstrated here.
In conclusion, while ChatGPT shows that new input can improve its reasoning, its reliance on existing caselaw poses real challenges. With so much misguided content in circulation, AI tools must be guided by accurate, well-grounded legal principles. As the legal AI landscape matures, careful attention will be needed to ensure these tools provide reliable information that serves the pursuit of justice and the advancement of legal knowledge.