AI Ethics Under Fire: OpenAI's Deal with the US Military Sparks Outrage and Reflection
The world of artificial intelligence just got a lot more controversial. OpenAI, a leading AI research organization, has found itself in hot water after announcing changes to its agreement with the US military, sparking a fierce backlash from users.
Technology reporters Chris Vallance and Laura Cress broke the news: OpenAI is amending its deal with the US government, with the company describing its initial arrangement as 'opportunistic and sloppy'. The agreement concerns the use of OpenAI's cutting-edge technology in classified military operations, a topic that has ignited intense debate.
But here's where it gets controversial: OpenAI's statement on Saturday claimed that their new agreement with the Pentagon has more safeguards than any previous deal for classified AI deployments, even surpassing Anthropic's. This bold assertion has raised eyebrows and sparked discussions about the ethical use of AI in warfare and the balance of power between governments and private companies.
On Monday, OpenAI's chief executive, Sam Altman, took to X to announce further changes. These include ensuring the company's AI systems won't be used for domestic surveillance of US citizens, and requiring intelligence agencies such as the NSA to modify their contracts before accessing OpenAI's technology. Altman admitted that the company rushed the initial announcement, acknowledging the complexity of the issues at hand.
The backlash against OpenAI intensified after users learned of its partnership with the Pentagon. Data from Sensor Tower shows a dramatic surge in ChatGPT uninstalls following Friday's announcement, with the daily average uninstall rate jumping by 200%.
Meanwhile, Anthropic's AI model, Claude, gained popularity and topped Apple's App Store rankings. This is despite Claude being blacklisted by the Trump administration over Anthropic's refusal to compromise on its principle of not creating fully autonomous weapons. Interestingly, Claude has been used in the US and Israel's war with Iran, according to CBS News, raising further ethical questions.
The Pentagon remains tight-lipped about its dealings with Anthropic, adding another layer of intrigue. This situation highlights the complex relationship between AI developers, governments, and the public, especially when it comes to military applications.
AI's role in the military is multifaceted. It is used to streamline logistics, process vast amounts of data, and even inform strategic decisions. For instance, the US, Ukraine, and NATO use Palantir's technology for intelligence gathering, surveillance, counterterrorism, and military operations. The UK Ministry of Defence recently signed a substantial contract with Palantir, integrating its AI-powered platform Maven into NATO's operations.
However, AI isn't infallible. Large language models can make mistakes or even fabricate information, a phenomenon known as 'hallucinating'. Lieutenant Colonel Amanda Gustave, NATO's Task Force Maven chief data officer, emphasized the importance of human oversight, stating that AI would never make decisions without human input.
Palantir's stance on autonomous weapons differs from Anthropic's: it advocates a 'human in the loop' approach rather than a complete ban. But with Anthropic now excluded from Pentagon work, Oxford University's Professor Mariarosaria Taddeo warns that the most safety-conscious actor is no longer at the table, raising concerns about the future of AI ethics in military contexts.
This story is part of the BBC's AI Unpacked week, where we delve into the fascinating and complex world of artificial intelligence. To explore more about AI and its impact, visit AI Unpacked.