US must develop measures to counter Chinese artificial intelligence

US must develop measures to counter Chinese artificial intelligence

The rise of artificial intelligence in all things military — ranging from intelligence gathering and command and control to autonomous air combat maneuvering and advanced loitering munitions — has yielded a problem for the United States: While it is crucial to stay ahead of China in technological advancement and the fielding of improved weapons systems, it also is crucial to create a doctrine of AI counter measures, or AICM, to blunt AI systems out of Beijing.

Such a doctrine should take shape along four approaches: Polluting large language models for negative effects; using Conway’s Law for guidance to exploitable flaws; exploiting bias of adversary leadership to degrade AI systems; and using RF Weapons to cascade AI supporting computer hardware.

These systems might seem on their face to be insurmountable. Well, maybe not. As Mark Twain said, “History doesn’t repeat itself, but it does rhyme.”

Thus, perhaps a look into the past will help envision the future.

Polluting large language models to create negative effects

Generative AI can be expressed as the extraction of statistical patterns from an extremely large data set. It is important to understand that a large language model, or LLM, developed from such a data set using “Transformer” technology allows a user to access it via a “prompt” — a natural language text that describes the function which the AI must perform. The end result is a generative pre-trained, or GPT, large language model.

Thus, there are at least two approaches to degrade such an AI system: Pollute data or attack “Prompt Engineering” — a term of art within the AI community describing the process of structuring instructions that can be understood by the generative AI system. A programming error, as noted below, will cause the AI LLM system, in another AI term of art, to “Hallucinate.”

A historical analogy from World War II validates the crucial importance of countermeasures when an enemy has unilateral access to information about the battlespace.

The development of RADAR — Radio Azimuth Detecting And Ranging — was, in itself, a method of extracting patterns from an extremely large database. In the vastness of the sky, an echo from a radio pulse gave an accurate range and bearing of unseen aircraft.

To defeat it, as described by R.V. Jones in Most Secret War, it was necessary to put information into the German radar system, thus causing gross ambiguity. Jones turned to a physicist at the Technical Research Establishment, Joan Curran, who developed the optimum size and shape of aluminum foil strips — called “Window” by the Brits and “Chaff” by the Americans — used to create thousands of reflections which overloaded and blinded German radars.

In much the same way, the U.S. military and intelligence communities can create ambiguities and obscurations within generative AI systems, especially when trying to deny access to information about weapons and tactics.

This can be done by assigning names to said weapons and tactics, designed to be both ambiguous and non sequitur. For example, such “naturally occurring” search ambiguities include the following:

  • A search for “Flying Prostitute” reveals data about the B-26 Marauder medium bomber of World War II.
  • “Gilda” and “Atoll” retrieves a photo of the Mark III nuclear bomb that was dropped on Bikini Atoll in 1946, upon which was pasted a photo of Rita Hayworth.
  • “Tonopah” and “Goatsucker” retrieves the F-117 stealth fighter.

Since a contemporary computer search is easily fooled, it would be possible to grossly skew results of an LLM function by deliberately using nomenclature which occurs in very large iterations and is extremely ambiguous.

Perhaps the Next Generation Air Dominance (NGAD) fighter could, in such an attempt, be renamed something like “Stormy Daniels.” One can imagine the consternation Chinese officers and NCOs would experience when their young soldiers expend valuable time meticulously examining images that have no relation to the desired search.

Concept art from Boeing shows one concept for the Air Force’s Next Generation Air Dominance fighter. (Boeing)

Even “Air Gapped” systems like those being used by U.S. intelligence agencies can be affected when systems update information from online sources.

Such an effort must actively and continuously pollute data sets, much like chaff confusing a RADAR system, by generating content that would populate the model and force the adversary to consume it.

A more sophisticated approach would use key words like “eBay” or “Amazon” as a predicate, and then common words like “Tire” or “Bicycle” or “Shoe.” Contracting with a commercial media agency to promote the “items” across traditional and social media would tend to clog a Large Language Model.

Using Conway’s Law for guidance to exploitable flaws

Melvin Conway is an American computer scientist, who, in the 1960s, conceived the eponymous rule stating: “Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”

In response, de Caro’s Corollary states: “The more dogmatic the design team, the greater the opportunity to sabotage the whole design.”

Consider Google Gemini. The February 2024 launch of Google’s would-be answer to ChatGPT was an unmitigated disaster that dumped Google’s share price and left the company a laughingstock. As the Gemini launch went forward, its image generator “Hallucinated” — and created images of Black Nazi soldiers and female Asian Popes.

In retrospect, the event was the most egregious example of what happens when Conway’s Law collides with organizational dogma. Historically ignorant programmers myopically led their company into a debacle.

But, for those interested in confounding China’s AI systems, the Gemini disaster is an epiphany!

If the programmers at the “Googleplex” campus in Mountain View, California, can screw up so immensely, what kind of swirling vortex of programming snafu is being created by the indoctrinated young members of the People’s Liberation Army who work on AI?

(Photo Illustration by Omar Marques/SOPA Images/LightRocket via Getty Images)

A solution to beating China’s AI systems may be an epistemologist who specializes in the cultural communication of the PLA. By using de Caro’s Corollary, such an expert could lead a team of computer scientists to replicate the Chinese communication norms and find the weaknesses in their system — leaving it open to spoofing or outright collapse.

It also should be noted that when a technology creates an existential threat, the individual developers of that technology become strategic targets. For example, in 1943, Operation Hydra utilized the entirety of the RAF British Bomber Command of 596 bombers, with the stated mission of killing German rocket scientists at Peenemunde. The RAF had marginal success and was followed by three U.S. 8th Air Force raids in July and August of 1944.

In 1944, the Office of Strategic Services (OSS) dispatched multi-lingual agent and polymath Mo Berg to assassinate German scientist Werner Heisenberg, if Heisenberg seemed to be on the right path to build an atomic bomb. Berg decided, correctly, that the German was off track, and letting him live would keep the Nazis from any success.

Exploiting bias of adversary leadership to degrade AI systems

Often, entities funding research and development skew results because of bias. For example, aforementioned German scientist Werner Heisenberg was limited in the paths he might follow toward a Nazi A-Bomb because of Hitler’s perverse hatred of “Jewish Physics.”

This attitude was aided and abetted by two prominent and antisemitic German scientists, Phillip Lenard and Johannes Stark, both Nobel Prize winners who reinforced the myth of “Aryan Science.” The end result effectively prevented a successful German nuclear program.

Again, there is epiphany here: Bias from the top affects outcomes.

As Xi Jinping continues his move toward authoritarian rule under himself, he brings his biases with him. This eventually will affect, or infect, Chinese military power.

In 2023, Xi detailed the need for China to meet world class military standards by 2027, the 100th anniversary of the People’s Liberation Army. Xi also spoke of “informatization” (read AI) to accelerate building “a strong system of strong strategic forces, raise the presence of combat forces in new domains and of new qualities and promote combat oriented military training.”

It seems that Xi’s need for speed, especially in “informatization,” might be the bias that indicates a weakness that can be exploited.

Using gyrotrons to cascade chips in computers supporting AI

Artificial Intelligence is dependent on extremely fast computer chips whose capacities are approaching their physical limits. They are more vulnerable to lack of cooling and electromagnetic pulse.

In the case of large, Cloud-based data centers, cooling is an absolute necessity. Water cooling is the most economical and therefore the most prevalent; but pumps, backup pumps and inlet valves usually are not hardened, and thus are extremely vulnerable. No pumps, no water. No water, no cooling. No cooling, no Cloud.

The same for primary and secondary electrical power. No power, no Cloud. No generators, no Cloud. No fuel, no Cloud.

Autonomous airborne drones or ground mobile vehicles are moving targets — small and hard to hit. However, their chips are vulnerable to electromagnetic pulse. We now know that a lightning bolt with gigawatts of power isn’t the only way to knock out an AI robot. High-Power Microwave Systems such as Epirus, Leonidas and Thor can burn out AI systems at a range of about three miles.

An interesting technology not yet fielded is the gyrotron. It is a Cold War era Soviet-developed, high-power microwave source, halfway between a klystron and a free electron laser. It creates a cyclotron resonance in a strong magnetic field that can produce a customized energy bolt with a specific pulse width and amplitude. Theoretically, it could reach out and disable a specific chip, at greater ranges than a “You fly ‘em, We fry ‘em” high power microwave weapon now in early test stage.

Obviously, without functioning chips, AI doesn’t work.

The headlong Chinese AI development initiative could provide the PLA with an extraordinary military advantage in terms of the speed and sophistication of a future attack upon the homeland of the United States.

Thus, the need to develop AI counter measures — now — is paramount.

In World War I, the great Italian progenitor of airpower, General Giulio Douhet, very wisely stated, “Victory smiles upon those who anticipate the changes in the character of war, not upon those who wait to adapt themselves after the changes occur.”

In terms of the threat posed by Artificial Intelligence as it applies to warfare, Douhet’s words could not be truer today.

Read the full article here