ChatGPT Maker OpenAI Removes ‘Military and Warfare’ Ban from Permissible Use Policy
OpenAI, the creator of ChatGPT, has quietly revised its usage policy, eliminating the language that prohibited military use of its technology. The move has significant implications, particularly in light of the growing use of artificial intelligence (AI) in warfare, such as in the recent conflict in Gaza.
ChatGPT is a popular AI-powered tool that generates text or images in response to user prompts. Prior to this update, OpenAI’s permissible-uses policy explicitly prohibited “weapons development” and “military and warfare” activities. The revised policy drops the blanket ban on military use, raising concerns about potential partnerships and contracts with military entities.
Sarah Myers West, managing director of the AI Now Institute, expressed concern about the timing of the policy change, given AI’s role in targeting civilians in conflict zones such as Gaza. She also noted that the revised language is vague, leaving room for ambiguity in both interpretation and enforcement.
When contacted by Common Dreams, an OpenAI spokesperson explained that while the company’s tools should not be used to harm individuals, develop weapons, or conduct communications surveillance, there are national security use cases that align with its mission. The spokesperson cited ongoing collaboration with the Defense Advanced Research Projects Agency (DARPA) to develop cybersecurity tools for critical infrastructure and industry.
The weaponization of AI is a growing concern within the international community. Lethal autonomous weapons systems, often referred to as killer robots, pose a potential existential threat, underscoring the need for arms control measures. The bipartisan Block Nuclear Launch by Autonomous Artificial Intelligence Act, introduced in the U.S. Congress, emphasizes the importance of human decision-making in matters of nuclear warfare.
As AI technology continues to advance, it becomes crucial to address its ethical implications and establish guidelines to prevent its misuse. OpenAI’s decision to remove the ban on military use raises questions about its commitment to responsible AI development and the potential consequences of AI deployment in warfare.
Stakeholders and experts are calling for robust regulations and preventative measures to ensure the ethical use of AI technology. The weaponization of AI poses serious risks, and as the field progresses it becomes imperative to prioritize human rights and international law when considering the deployment of AI in military contexts.