Survival of the quickest: Military leaders aim to unleash, control AI

PARIS — Artificial intelligence is massively accelerating military decision making, and armed forces that don’t keep up risk being outmatched, the NATO commander in charge of strategic transformation at the alliance said at the AI Action Summit in Paris this week.

Alliance members are now using AI in the decision-making loop of observe, orient, decide and act, NATO Supreme Allied Commander Transformation Adm. Pierre Vandier said at a conference focused on military AI. Analysis that previously took hours or days, such as processing large amounts of sensor data, can now be done in a matter of seconds, he said.

“The speed of operations will dramatically change,” Vandier said at a press briefing on Monday. “You see that in Ukraine. If you do not adapt at speed and at scale, you die.”

The major powers have identified AI as a key enabler for future warfare, with the U.S. spending billions on AI for defense, while trying to limit China’s access to enablers such as hardware from Nvidia. Meanwhile, summit host France says it plans to become the leader in military AI in Europe.

AI brings “a huge acceleration of the speed of decision,” Vandier said. “A huge acceleration that overtakes a lot of things in our system, and the system of the enemy we intend to outpace.”

Vandier made a comparison to the movie The Matrix, in which the main character, Neo, dodges bullets by learning to move faster than his opponents' projectiles. “The question for us is, are we already dead? So it’s a question of speed of change.”

The speediness of AI raises questions about whether having a human in the control loop improves the quality of decision making, said Jeroen van der Vlugt, chief information officer at the Netherlands Ministry of Defence. He said AI can make decisions based on amounts of data that would be impossible for humans to manage, with analysis brought down to milliseconds.

A group of 25 countries at the Paris summit signed a declaration on AI-enabled weapon systems, pledging they won’t authorize life-and-death decisions by an autonomous weapon system operating completely outside human control. Summit co-chair India didn’t sign the declaration, nor did the U.K. or the United States.

“We already have militaries full of intelligent, autonomous agents – we call them soldiers or airmen or Marines,” said Gregory Allen, the director of the Wadhwani AI Center at the Center for Strategic and International Studies, a Washington-based think tank. “Just as military commanders are accountable, states are also responsible for the actions of their military forces, and nothing about the changing landscape of artificial intelligence is going to ever change those two facts.”

Germany’s Helsing and France’s Mistral AI on Monday announced an agreement to jointly develop AI systems for defense. Google owner Alphabet last week dropped a promise not to use AI for purposes such as developing weapons, while rival OpenAI in December announced a partnership with military technology company Anduril.

Frontier AI models will be useful in summarizing large intelligence reports and for war gaming and “red teaming,” said Ben Fawcett, product lead at Advai, which tests AI systems for vulnerabilities. “These kind of models will have a real utility in order to test commanders on how their plan will survive contact, especially if they’re able to update that based on what is the latest situation.”

The first AI-based simulation tools are arriving that allow commanders to test and refine plans before putting them into action, according to Vandier. He said AI doesn’t mean fewer human decisions but faster and better ones, at least in theory.

“AI is not a magic bullet,” Vandier said. “It gives solutions to go faster, better, more accurate, more lethal, but it won’t solve the war itself, because it’s a race between us and our competitors.”

Vandier and Van der Vlugt mentioned the importance of AI for autonomy and robotics, particularly swarming technology, which relies on AI to work. “The scalability and autonomy part of it is really changing our landscape at this moment,” Van der Vlugt said.

The success of AI depends on adoption, and Vandier has introduced a monthly learning package with required reading for officers at Allied Command Transformation in Norfolk, Virginia, after finding out his top brass didn’t know “that much” about AI.

“The technology goes so fast that ultimately, we realize that managers are not necessarily up to speed,” Vandier said. “So there is really a training challenge. If you want a head of capacity development, someone who defines the capacities of tomorrow, to be good, they need to have understood what is at stake with these technologies.”

Large language models over the past decade have been getting roughly 13 times better every year, and that trend is not expected to stop, meaning models might be more than 1,000 times better in three years and more than 1 million times better in 10 years, according to Allen at CSIS.

“What we aren’t seeing right now is large language models generating unique insights that would be relevant to say, planning a campaign of war, fighting operations,” Allen said. “Just because they are very far away from that level of performance today doesn’t mean that they are very far away in terms of time, because performance is improving so rapidly.”

Large language models will be transformative for national security capabilities, which helps explain why the U.S. stopped selling AI chips to China, according to Allen. “We see in the not too distant future, genuinely transformative AI capabilities, and it’s important that that is a party that China is not invited to.”

When asked at the press briefing whether machines will take control, Vandier said he didn’t know. He mentioned the 1983 movie WarGames, in which a computer decides to trigger nuclear war, and the Terminator series of movies, whose premise includes an AI launching a nuclear attack against humanity, saying “it could happen.”

The NATO commander said that while fears around AI are understandable, citizens already carry the technology in their pockets in the form of smartphones. He said new technology is not inherently good or bad; what matters is the use case.

“What people want when they fight is not to be all destroyed, they want to win,” Vandier said. “As it has been for nuclear arms, one day we will have to find ways to control the AI, or we will lose control of everything.”

Rudy Ruitenberg is a Europe correspondent for Defense News. He started his career at Bloomberg News and has experience reporting on technology, commodity markets and politics.