A hot potato: The proliferation of AI brings plenty of justifiable concerns, especially as the technology increasingly makes its way into the military. In what sounds worryingly like a cyberpunk dystopia, ChatGPT maker OpenAI has just partnered with a major defense contractor, a deal that could lead to air defense systems that use ChatGPT-like AI models to help decide whether an enemy should be killed.

On Wednesday, Anduril Industries, the defense technology company founded by Oculus creator Palmer Luckey, announced a "strategic partnership" with OpenAI to "develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions."

The companies will initially focus on anti-drone technologies, defenses aimed primarily at unmanned drones and other aerial threats. The partnership will work to improve the United States' counter-unmanned aircraft systems (CUAS) and their ability to detect, assess, and respond to potentially lethal aerial threats in real time.

Using AI models to identify and destroy unmanned drones might not sound like a bad thing, but the statement also mentions the threats from legacy manned platforms, i.e., aircraft with human crews.

The AI models will rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness, according to the companies.

In January, OpenAI revealed it was collaborating with the United States Defense Department on cybersecurity projects, having modified its policies to allow certain military applications of its technology. Sam Altman's firm continued to prohibit its technology from being used to develop weapons, but it appears that its strict stance against what is essentially ChatGPT-powered weaponry is wavering.

Other AI companies are rushing into lucrative defense sector partnerships, including Anthropic, which has partnered with Palantir. Google DeepMind also has contracts with military customers, something that 200 of its employees strongly opposed in a letter sent to Google execs earlier this year.

There have been calls to ban autonomous AI weapons for years now. In 2015, Elon Musk, Stephen Hawking, and Steve Wozniak were just three of the 1,000 high-profile names who signed an open letter calling for a ban on "offensive autonomous weapons."

AI has made huge advancements since then, appearing in more weapons and military vehicles, including AI-piloted fighter jets. The technology still makes mistakes, of course, which is a concern when it is controlling lethal weapons.

The biggest fear has long been that AI could be used in nuclear missile systems. In May, the US said this would never happen and called on Russia and China to make the same pledge. But the Pentagon said last month that it wants AI to enhance nuclear command-and-control decision-making capabilities.