OpenAI Silently Modifies Policy on Military Use of Technology
OpenAI, the prominent artificial intelligence research organization, has made a subtle shift in its stance on military use of its technology. The company has quietly removed explicit language barring military applications from its usage policy, making the revision without any public announcement.
The Verge reported that the removal of terms such as "weapons development" and "military and warfare" from OpenAI's usage policies is part of a broader rewrite intended to make the document clearer and more readable. The updated policy still prohibits using the service to cause harm to oneself or others, but it does not explicitly state whether "harm" covers military use. It does, however, continue to disallow any use of OpenAI technology, including by militaries, to develop or use weapons, injure others, or engage in unauthorized activities that violate the security of any service or system.
OpenAI spokesperson Niko Felix explained that the goal is to establish universal principles that can be applied easily across contexts, striking a balance between flexibility and compliance with the law.
Heidy Khlaaf, an engineering director at the cybersecurity firm Trail of Bits and an expert in machine learning and the safety of autonomous systems, raised concerns about the risks of using OpenAI's technology in military applications. She emphasized the well-documented problems of bias and inaccuracy in large language models (LLMs), cautioning that deploying them in warfare could lead to imprecise and biased operations, potentially exacerbating harm and civilian casualties.
The practical implications of the revised policy remain uncertain. According to a report by The Intercept, OpenAI had previously been evasive about whether it enforced its explicit "military and warfare" ban, particularly as the Pentagon and the U.S. intelligence community showed heightened interest in its technology.
Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, drew attention to the timing of OpenAI's decision to remove "military" and "warfare" from its permissible-use policy. West pointed to the use of AI systems in the targeting of civilians in Gaza and argued that the revised policy's vague language raises questions about how OpenAI intends to enforce it.
While OpenAI's current offerings cannot directly cause harm in military operations or other contexts, military work inherently involves activities connected to the potential use of force. As TechCrunch reported, language models like ChatGPT may not engage in direct combat, but they could support numerous non-combat tasks on the periphery of lethal action, such as writing code or processing procurement orders.
OpenAI's decision to quietly modify its policy on military use of its technology raises questions about the impact of its powerful AI models in sensitive domains such as warfare. The risks of bias, inaccuracy, and unintended consequences underscore the need for clear guidelines and responsible deployment to ensure the ethical use of AI.