The United States military is not the unrivaled force it once was, but Alexandr Wang, CEO of startup Scale AI, told a congressional committee last week that it could establish a new advantage by harnessing artificial intelligence.
“We have the largest fleet of military hardware in the world,” Wang told the House Armed Services Subcommittee on Cyber, Information Technology and Innovation. “If we can properly set up and instrument this data that’s being generated … then we can create a pretty insurmountable data advantage when it comes to military use of artificial intelligence.”
Wang’s company has a vested interest in that vision, since it regularly works with the Pentagon processing large quantities of training data for AI projects. But there is a conviction within US military circles that increased use of AI and machine learning is virtually inevitable—and essential. I recently wrote about that growing movement and how one Pentagon unit is using off-the-shelf robotics and AI software to more efficiently surveil large swaths of the ocean in the Middle East.
Besides the country’s unparalleled military data, Wang told the congressional hearing that the US has the advantage of being home to the world’s most advanced AI chipmakers, like Nvidia, and the world’s best AI expertise. “America is the place of choice for the world’s most talented AI scientists,” he said.
Wang’s interest in military AI is also worth paying attention to because Scale AI is at the forefront of another AI revolution: the development of powerful large language models and advanced chatbots like ChatGPT.
No one is thinking of conscripting ChatGPT into military service just yet, although there have been a few experiments involving use of large language models in military war games. But observers see US companies’ recent leaps in AI performance as another key advantage that the Pentagon might exploit. Given how quickly the technology is developing—and how problematic it still is—this raises new questions about what safeguards might be needed around military AI.
This jump in AI capabilities comes as some people’s attitudes toward the military use of AI are changing. In 2017, Google faced a backlash for helping the US Air Force use AI to interpret aerial imagery through the Pentagon’s Project Maven. But Russia’s invasion of Ukraine has softened public and political attitudes toward military collaboration with tech companies and demonstrated the potential of cheap autonomous drones and of commercial AI for data analysis. Ukrainian forces are using deep learning algorithms to analyze aerial imagery and footage. The US company Palantir has said that it is providing targeting software to Ukraine. And Russia is increasingly focusing on AI for autonomous systems.
Despite widespread fears about “killer robots,” the technology is not yet reliable enough to be used in this way. And while reporting on the Pentagon’s AI ambitions, I did not come across anyone within the Department of Defense, US forces, or AI-focused startups eager to unleash fully autonomous weapons.
But greater use of AI will create a growing number of military encounters in which humans are removed or abstracted from the equation. And while some people have compared AI to nuclear weapons, the more immediate risk is less the destructive power of military AI systems than their potential to deepen the fog of war and make human errors more likely.
When I spoke to John Richardson, a retired four-star admiral who served as the US Navy’s chief of naval operations between 2015 and 2018, he was convinced that AI will have an effect on military power similar to the industrial revolution and the atomic age. And he pointed out that the side that harnessed those previous revolutions best won the past two world wars.
But Richardson also talked about the role of human connections in managing military interactions driven by powerful technology. While serving as Navy chief he went out of his way to get to know his counterparts in the fleets of other nations. “Every time we met or talked, we got a better sense of one another,” he says. “What I really wanted to do was make sure that should something happen—some kind of miscalculation or something—I could call them up on relatively short notice. You just don’t want that to be your first call.”
Now would be a good time for the world’s military leaders to start talking to each other about the risks and limitations of AI, too.