New AI-powered strike drone shows how quickly battlefield autonomy is evolving

Small drones have been changing modern warfare at least since 2015, when Russia and Ukraine began to use them to great effect for rapid targeting. The latest addition is a strike-and-intelligence quadcopter that its builder hopes will do more with far less operator attention.

The point of the Bolt-M, revealed by Anduril today, is to make fewer demands on the operator and offer more information than the easy-to-produce first-person-view strike drones that Ukraine is producing by the hundreds of thousands. The U.S. Army, too, is looking at FPV drones for infantry platoons, but they require special training to use and come with a lot of operational limits. The Bolt-M, according to an Anduril statement, works “without requiring specialized operators.” The company has a contract from the U.S. Marine Corps’ Organic Precision Fires – Light, or OPF-L, program to develop a strike variant.

Bolt-M’s key selling feature is its autonomy-and-AI software powered by Anduril’s Lattice platform. The operator can draw a bounding box on a battlefield display, set a few specifications, and send the drone on its way.
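To make that workflow concrete, here is a minimal, hypothetical sketch in Python of what a “draw a box, set a few parameters, launch” tasking request could look like. Every name in it (SearchTask, GeoPoint, dispatch) and every field is an assumption made for illustration; none of it reflects Anduril’s actual Lattice interface.

```python
# Purely illustrative sketch -- not Anduril's actual Lattice API.
# All names (SearchTask, GeoPoint, dispatch) and fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class GeoPoint:
    lat: float   # degrees
    lon: float   # degrees

@dataclass
class SearchTask:
    """A 'draw a box, set a few parameters, launch' style tasking request."""
    bounding_box: tuple[GeoPoint, GeoPoint]   # southwest and northeast corners
    altitude_m: float = 120.0                 # search altitude
    loiter_minutes: int = 20                  # how long to search before returning
    targets_of_interest: list[str] = field(default_factory=list)  # e.g. ["armor"]

def dispatch(task: SearchTask) -> None:
    # In a real system this would hand the task to the autonomy stack;
    # here it simply prints the request for illustration.
    sw, ne = task.bounding_box
    print(f"Searching box ({sw.lat},{sw.lon}) to ({ne.lat},{ne.lon}) "
          f"at {task.altitude_m} m for {task.loiter_minutes} min")

dispatch(SearchTask(
    bounding_box=(GeoPoint(48.50, 37.90), GeoPoint(48.55, 37.98)),
    targets_of_interest=["armor"],
))
```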

“Once a target is identified in Lattice, an operator can specify a standoff position for Bolt-M to maintain, tasking the system to stalk the target from beyond visual or acoustic detection range even as the target moves and is occluded,” the company statement said. “When it’s time to strike, an operator can define the engagement angle to ensure the most effective strike, while onboard vision and guidance algorithms maintain terminal guidance even if connectivity is lost with the operator.”

But the system is also intended to handle some reconnaissance tasks that humans do but other small, cheap strike drones don’t. In conversation with reporters on Wednesday, Anduril chief strategy officer Chris Brose said that the Bolt-M is intended to help its operator “to understand what’s happening on the battlefield, whether it’s kind of known targets or targets that are recognizable to the systems on board, or whether it’s unknown things that that operator can then select through its interaction with the autonomous system, tell it to track, tell it to follow, eventually, if so desired, based on the human saying ‘Go,’ to actually fly in and engage that target.”

Brose said the Bolt-M might even be able to spot new variants of older weapons.

“If the Russians in this instance start modifying them and building these kinds of turtle tanks, maybe the computer vision hasn’t seen that already…it can still surface that insight back to an operator.” 

Over the next six months, the Marine Corps will put the Bolt-M’s munition variant through “a pretty rigorous test and evaluation campaign,” he said. 

The Bolt-M pushes right up to the limits of the Pentagon principle that robotic weapons should always have a person involved in lethal decisions.

Brose said the company’s efforts are guided by its experience in Ukraine, particularly feedback from Ukrainians who are face-to-face with Russia’s electronic warfare. The drone can fly to GPS waypoints, but in places where GPS is under attack, operators can fly it manually, and it can maintain custody of a target and carry out previously issued operator commands even when links are broken.
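That lost-link behavior, keep stalking the target and keep executing what the operator already commanded, can be sketched as a simple mode-selection rule. The sketch below is a hypothetical illustration only; the mode names and logic are assumptions, not Anduril’s onboard software.

```python
# Hypothetical lost-link fallback logic, for illustration only.
# Enum values and function names are assumptions, not Anduril's software.
from enum import Enum, auto

class LinkState(Enum):
    CONNECTED = auto()
    DEGRADED = auto()
    LOST = auto()

class Mode(Enum):
    MANUAL = auto()        # operator flying directly (e.g., GPS-denied areas)
    WAYPOINT = auto()      # flying pre-set GPS waypoints
    TRACK = auto()         # maintaining custody of a designated target
    ENGAGE = auto()        # terminal guidance after the operator's "go"

def select_mode(link: LinkState, last_commanded: Mode) -> Mode:
    """When the operator link drops, keep executing the last commanded task
    rather than aborting -- the behavior described for Bolt-M."""
    if link is LinkState.CONNECTED:
        return last_commanded
    # Link degraded or lost: fall back to onboard autonomy for the
    # previously delivered command; never escalate beyond what the
    # operator already authorized.
    if last_commanded in (Mode.TRACK, Mode.ENGAGE, Mode.WAYPOINT):
        return last_commanded
    return Mode.WAYPOINT  # default: continue on the planned route
```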

In many ways, the Bolt-M derives its real value, and its intelligence, from the Lattice platform, which can integrate data from various sensors and sources. Anduril is working to make sure that Lattice works with a variety of drones, even from other manufacturers, said Brose. That could give the company a key, central role as different forces buy different drones or make their own in the field.
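The idea of one software layer sitting above many different airframes can be illustrated with a small, vendor-agnostic adapter sketch. The Drone protocol and CommonPicture class below are names invented for illustration; they are not Lattice’s actual interfaces.

```python
# Illustrative sketch of a vendor-agnostic integration layer -- the kind of
# role the article describes for Lattice. The Protocol and class names are
# assumptions for illustration, not Anduril's implementation.
from typing import Protocol

class Drone(Protocol):
    def send_waypoint(self, lat: float, lon: float, alt_m: float) -> None: ...
    def detections(self) -> list[dict]: ...   # sensor reports from this airframe

class CommonPicture:
    """Fuses reports from heterogeneous drones into one operator view."""
    def __init__(self, drones: list[Drone]):
        self.drones = drones

    def all_detections(self) -> list[dict]:
        merged: list[dict] = []
        for d in self.drones:
            merged.extend(d.detections())   # merge every airframe's reports
        return merged
```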

“What we are doing with Lattice is to deliver as much autonomy across that entire kill chain to put that human being on-the-loop so that they can make better decisions faster. They can make more decisions. They can take more actions because they have an intelligent system that’s incorporating…sensor data, platforms, vehicles,” he said. 

But what decisions? The Defense Department maintains a list of AI ethical principles that say human beings must be able to “exercise appropriate levels of judgment and remain responsible for the development, deployment, use” of AI-enabled weapons.

Last year, the Pentagon sought to clarify what is and is not allowable—while leaving room to adjust the rules if things change.

One of the key lessons from Ukraine is that battlefield conditions can change very rapidly. Different nations, allied and adversary, will have different policies around lethal autonomy. Those policies will change rapidly, too, depending on what’s happening on the front lines. As attacks against the connections binding humans to drones become more effective, the need for more capable autonomy will increase.

Brose said Anduril anticipates that U.S. policy will change, and the company wants to be ready to serve the Pentagon’s new needs when it does.

“We’re not going to go out and solve for every sort of hypothetical edge case,” he said. “Our focus is on making the system as capable as possible based on how we believe users want to and need to use it now and in the near future. Then, to the extent that that gets limited or governed or restrained by policy or rules of engagement, that is entirely the decision of the government. We want them to have that choice, rather than realize that they would like to have a more capable system, but, but, you know, we’re not capable of providing it to them.”
