OpenAI, the company behind ChatGPT, has quietly updated its terms and conditions, softening its ban on using its artificial intelligence (AI) technology for military purposes. The previous usage policy prohibited any activity carrying a high risk of physical harm, explicitly naming "weapons development" and "military and warfare." The latest update removes that blanket ban on military use while still prohibiting the use of AI to harm oneself or others.

OpenAI said the changes were made to improve readability and to provide service-specific guidance. The implications of the alteration remain uncertain, and the company has yet to respond to requests for comment.

Experts have previously warned about the risks of large language models like ChatGPT being used in warfare, including their potential role in cyber attacks. Despite acknowledging these risks, OpenAI removed the words "military" and "warfare" from its permissible-use policy. Critics argue the decision is significant given the recent use of AI systems to target civilians during conflicts. The exact impact of the updated policy on military applications of OpenAI's technology remains to be seen.
OpenAI Softens Ban on AI Military Use, Raises Concerns