OpenAI removes military and warfare prohibitions from its policies


OpenAI may be paving the way toward exploring its AI’s military potential.

As first reported by the Intercept on Jan. 12, a new company policy change has completely removed previous language banning “activity that has high risk of physical harm,” including the specific examples of “weapons development” and “military and warfare.”

As of Jan. 10, OpenAI’s usage guidelines no longer include a prohibition on “military and warfare” uses in the existing language that obligates users to prevent harm. The policy now bans only the use of OpenAI technology, such as its large language models (LLMs), to “develop or use weapons.”

Subsequent reporting on the policy edit pointed to the immediate possibility of lucrative partnerships between OpenAI and defense departments seeking to utilize generative AI in administrative or intelligence operations.

In Nov. 2023, the U.S. Department of Defense issued a statement on its mission to promote “the responsible military use of artificial intelligence and autonomous systems,” citing the country’s endorsement of the international Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, an American-led set of “best practices” announced in Feb. 2023 that was developed to monitor and guide the development of AI military capabilities.

“Military AI capabilities includes not only weapons but also decision support systems that help defense leaders at all levels make better and more timely decisions, from the battlefield to the boardroom, and systems relating to everything from finance, payroll, and accounting, to the recruiting, retention, and promotion of personnel, to collection and fusion of intelligence, surveillance, and reconnaissance data,” the statement explains.

AI has already been utilized by the American military in the Russia-Ukraine war and in the development of AI-powered autonomous military vehicles. Elsewhere, AI has been incorporated into military intelligence and targeting systems, including “The Gospel,” an AI system used by Israeli forces to pinpoint targets and reportedly “reduce human casualties” in its attacks on Gaza.

AI watchdogs and activists have consistently expressed concern over the increasing incorporation of AI technologies in both cyber conflict and combat, fearing an escalation of armed conflict in addition to long-noted biases in AI systems.

In a statement to the Intercept, OpenAI spokesperson Niko Felix explained the change was intended to streamline the company’s guidelines: “We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs. A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”

OpenAI introduces its usage policies in a similarly simplistic refrain: “We aim for our tools to be used safely and responsibly, while maximizing your control over how you use them.”
