There’s ‘still more work to be done’ for DOD on AI

Using AI to sift recruits’ medical records shows how the Pentagon is putting the technology to use—but there’s room to improve, Gen. C.Q. Brown, chairman of the Joint Chiefs of Staff, said Tuesday. 

“I think we’re better off than we were last year. I think last year it felt like we put AI on PowerPoint slides as if it was going to solve our problems. I felt the same way probably about 15 years ago with cyber. And now that we have a better understanding, I do see some use,” Brown said during a keynote at an AI and national security conference hosted by the Special Competitive Studies Project in Washington, D.C.

Take, for example, Military Entrance Processing Command. 

“Our MHS Genesis [system], which is our digital medical records, is using large language models to sort through the records that identify things that then you take a look at as you’re trying to bring in a new recruit,” Brown said. “We are making progress, but again, still more work to be done.”

Gathering intelligence is another use for AI, said Michael Collins, the acting chair of the National Intelligence Council. 

“I think there’s tremendous opportunity for what AI can do to ensure we are researching and understanding scientifically the factors that are driving the world in a certain way, what affects the disposition of a human being to align with something rather than something else,” Collins said during a panel discussion at the Ash Carter Exchange and AI Expo. “It won’t take away, of course, the role of the analyst in ensuring that we’re providing the best objective insight possible to the policymaker. Because we have to—at its core—understand empirically the basis for that algorithm, and how it’s built.”

Collins said the intelligence community depends on an algorithm’s “empirical objectivity” and its inner workings to support policy recommendations. 

“We especially depend on the empirical objectivity and knowing what the algorithm is based on when we make judgments of purpose for our policymaking. And frankly, I think that’s a role. And we’re trying to drive that,” he said.

For example, as part of an ongoing transparency initiative, the director of national intelligence released a report in April analyzing risks to global health security in the next decade.

“We’re trying to more openly share insights,” Collins said. “We need others to challenge us, we don’t want groupthink. We need insight and support and expertise from the community. But we take seriously the role we do in modeling objective, critical thinking, removed from politics, removed from partisanship, removed from bias. And I think that’s a critical role.” 

But there could come a time when intelligence analysts will have to challenge the AI tools they use.

“When the tool itself starts to predict and derive pattern without us understanding the basis for that, that is going to be a challenge,” Collins said. “And to whoever generated the algorithm, if you’re at the point where the AI is generating the algorithm without the input of the human, the testing and the validity of that become all the more critical. It’s a powerful challenge for sure.”
