ChatGPT developer OpenAI has responded to The New York Times' lawsuit, calling it meritless and accusing the paper of not telling the whole story. The legal action, filed by The New York Times in December, accuses OpenAI and Microsoft of copyright infringement. The lawsuit claims that millions of articles published by the newspaper were used to train automated chatbots that now compete with The New York Times as a reliable source of information. OpenAI has not yet formally responded to the lawsuit in court, but it has published a blog post disputing the claims made against it.
OpenAI argues that The New York Times is not providing the full context of the negotiations between the two parties, which continued until mid-December last year. According to OpenAI, the company had explained to The New York Times that the paper's content did not contribute significantly to the training of its existing models and would not have a substantial impact on future training either.
OpenAI claims that it was surprised and disappointed by The New York Times' decision to file a lawsuit on December 27, which the company says it learned about by reading the very newspaper that was taking legal action against it. OpenAI states that throughout the negotiations, The New York Times had mentioned seeing some instances of its content being replicated but declined to share any specific examples, despite OpenAI's commitment to investigate and fix any problems.
OpenAI emphasizes that it takes these matters seriously and points to its quick action in July, when it immediately took down a ChatGPT feature after discovering it could inadvertently reproduce real-time content. OpenAI also argues that the regurgitation of The New York Times' content cited in the lawsuit appears to come from years-old articles that have been widely copied across multiple third-party websites. OpenAI contends that The New York Times intentionally manipulated prompts, often including lengthy excerpts of the articles in question, to elicit regurgitation from its models.
OpenAI maintains that training AI models on copyrighted content is permitted under US law, a position it says is supported by long-standing precedent and endorsed by a range of stakeholders, including academics, library associations, civil society groups, leading companies, creators, and others who recently submitted comments to the US Copyright Office. OpenAI further notes that other jurisdictions, such as the European Union, Japan, Singapore, and Israel, also allow training models on copyrighted material, which it views as advantageous for AI innovation and investment.
That said, OpenAI states that its legal rights matter less to the company than being a good citizen. OpenAI says it has led the AI industry in offering a simple opt-out process that lets publishers block its tools from accessing their websites, a process The New York Times adopted in August 2023.
In short, OpenAI categorically denies the claims made by The New York Times in its lawsuit, asserting that the company operates within the law and is committed to acting responsibly and addressing concerns raised by content creators. The battle between OpenAI and The New York Times is likely to continue, shedding light on the complex legal and ethical issues surrounding the use of copyrighted material in training AI models.