New Pentagon science-and-innovation board arrives as administration cuts research funding

The Pentagon’s new Science, Technology, and Innovation Board—a merger of the decade-old Defense Innovation Board and the 70-year-old Defense Science Board—is meant to “streamline” the department’s approach to the hardest technological and scientific national-security challenges. But it comes on the heels of Trump-administration cuts that could hinder those efforts.

Streamlining is a persistent goal for the Pentagon, but one it has struggled to achieve in previous years, according to GAO reports, lawmakers, and military leaders across administrations. That is one reason the so-called “valley of death,” the chasm between a cutting-edge research program and a weapon actually reaching soldiers’ hands, remains a common complaint, and one of the key reasons the Defense Innovation Board was created in the first place.

The DIB, established in 2016, was a civilian body meant to bring outside thought leadership into the Pentagon; its members included tech and finance leaders such as Eric Schmidt, Michael Bloomberg, and Neil deGrasse Tyson. The board produced a wide variety of key recommendations that the Pentagon later adopted, such as moving to large-scale enterprise cloud computing and embracing a long list of ethics principles for the development, testing, deployment, and operation of artificial intelligence across the military.

The Defense Science Board, meanwhile, largely produced reports for Congress and military leadership on specific Defense Department issues, such as how to reform testing and evaluation and how to bring more digital engineering into the department.

The new Science, Technology, and Innovation Board, or STIB, includes defense science experts in areas such as next-generation autonomy, testing, advanced hypersonics, and acquisition. It also includes private-sector experts in fields such as advanced neural networks.

The new board emerges at a time when the military is keen to integrate artificial intelligence into more of what it does, reach new research breakthroughs more rapidly, and quickly produce large numbers of cheap, highly autonomous drones.

One former senior defense official said the board should look at “all aspects of AI, from energy requirements, ethics, and direction of research,” to “how to accelerate the fusing of the massive amounts of all-domain sensory data the department has to train multimodal AI for offensive and defensive operations.”

However, they pointed out that the board is uniformly white and largely male. “It misses the mark as far as representation goes, thereby handicapping its credibility with the American public,” they said.

Another question is whether the new board will conduct public meetings, as the Defense Innovation Board did, or meet in private, like the Defense Science Board. The STIB announcement makes no mention either way, and a Pentagon spokesperson did not respond to a request for comment Monday.

Shrinking the defense science, research, and oversight footprint

The new board is the latest in a series of Pentagon moves to merge offices or activities launched over the last decade and accelerate the adoption of AI, particularly dual-use AI from companies that also sell to the public.

On January 12, the Pentagon announced an effort to accelerate the use of large foundation models such as Google’s Gemini. It simultaneously announced an “overhaul” to more closely align offices like the Chief Digital and Artificial Intelligence Office, the Strategic Capabilities Office, and the Defense Innovation Unit under the under secretary of defense for research and engineering.

These moves align with what current and former military officials, government watchdogs, and lawmakers have long been urging Pentagon leaders to do: reform the way the department buys technology to be more like DIU. The reorganization of these offices and activities was taken by many observers across the political spectrum as a sign that Pentagon leadership had finally begun to do just that, and was on its way to busting down bureaucratic obstacles to buying ready-to-use commercial technology, especially software.

But the DSB-DIB merger comes as the Trump administration reduces funding for basic sciences. According to the most recent version of the 2026 National Defense Authorization Act, the department would cut support for basic research at U.S. universities by nearly 5%.

At the same time, the Pentagon is de-emphasizing, if not abandoning, the ethical use of AI. The AI-acceleration strategy released in January does not even mention the AI ethics principles the Defense Innovation Board proposed, and the Pentagon adopted, back in 2020. Instead, it advises the Defense Department to “incorporate standard ‘any lawful use’ language into any [DOD] contract through which AI services are procured within 180 days.”

The department’s ability to determine lawful use is also shrinking, at least at the highest levels, following the replacement of several top JAGs last February and the sidelining of other JAG officers when considering controversial moves, such as firing at unarmed boats or deploying the National Guard for immigration enforcement. The department has also significantly reduced its Office of Inspector General, which provides key oversight on issues like safety and policy effectiveness.

All of that is causing confusion and disagreement with some of the very AI companies the Pentagon is courting. Researchers at Anthropic, for instance, are concerned the Defense Department might ask them to modify AI tools against company guidelines in order to fit whatever definition of “lawful use” the Pentagon is working with at the moment, according to reporting from Reuters.

One current official said the public reporting on the disagreement between the department and Anthropic overstated the issue somewhat, and that the new policy simply says the Pentagon “should be the entity dictating lawful use and safeguards and not having companies specifying how products will be used within cases of lawful use. The conversation of who will write those safeguards is separate.”

A former senior defense official who worked on areas like deployed artificial intelligence agreed that some of the concerns about the Pentagon’s new approach to AI safety were a tad panicky.

However, they said, a larger issue is the testing and evaluation standards the Pentagon and the services use to field AI software. “As long as those standards remain high, the policy stuff surrounding them is window dressing… because terrible, unvetted software won’t get scaled and fielded.”

The Pentagon has also been reducing the money and staff dedicated to keeping testing and evaluation rigorous, essentially shrinking the office that oversees service testing by half last May. But the former official said that because the services themselves do the initial testing, and are far more committed to fielding safe AI that won’t harm service members than a Pentagon office would be, that reduction by itself doesn’t present a problem.

Still, while any particular merger or reduction may not be a cause for alarm by itself, the totality of the shrinking could be. Might these cuts and mergers affect core areas the Pentagon depends on to set standards for things like deploying AI? That’s worth keeping track of, said the second former official.

“If those standards decline, you have problems. The number of people you have doing AI safety, in particular, is potentially related to that, but not necessarily,” they said. The Pentagon leadership was right to cut the “waste and duplication in the [testing and evaluation] process,” they said. “And I also think they are probably implementing it in the dumbest way possible.”


