Deep in the Pentagon, a secret AI programme to find hidden nuclear missiles


This project involves military and private researchers in the Washington, D.C. area. It is pivoting off technological advances developed by commercial firms financed by In-Q-Tel, the intelligence community’s venture capital fund, officials said.

To carry out the research, the project is tapping into the intelligence community’s commercial cloud service, searching for patterns and anomalies in data, including data from sophisticated radar that can see through storms and penetrate foliage.

 

The Pentagon in Washington, U.S., is seen from aboard Air Force One, March 29, 2018. REUTERS/Yuri Gripas

 

Budget documents reviewed by Reuters noted plans to expand the focus of the mobile missile launcher programme to “the remainder of the (Pentagon) 4+1 problem sets.” The Pentagon typically uses the 4+1 terminology to refer to China, Russia, Iran, North Korea and terrorist groups.

 

TURNING TURTLES INTO RIFLES

Both supporters and critics of using AI to hunt missiles agree that it carries major risks. It could accelerate decision-making in a nuclear crisis. It could increase the chances of computer-generated errors. It might also provoke an AI arms race with Russia and China that could upset the global nuclear balance.

U.S. Air Force General John Hyten, the top commander of U.S. nuclear forces, said once AI-driven systems become fully operational, the Pentagon will need to think about creating safeguards to ensure humans – not machines – control the pace of nuclear decision-making, the “escalation ladder” in Pentagon speak.

“(Artificial intelligence) could force you onto that ladder if you don’t put the safeguards in,” Hyten, head of the U.S. Strategic Command, said in an interview. “Once you’re on it, then everything starts moving.”

 

U.S. Defense Secretary Jim Mattis (R) walks with Director of the National Geospatial-Intelligence Agency Robert Cardillo (L) and Deputy Director Susan Gordon during a visit for a town hall in Springfield, Virginia, U.S., August 2, 2017. Picture taken August 2, 2017. U.S. Army Sgt. Amber Smith/Department of Defense via REUTERS

 

Experts at the RAND Corporation, a public policy research body, and elsewhere say there is a high probability that countries like China and Russia could try to trick an AI missile-hunting system, learning to hide their missiles from detection.

There is some evidence to suggest they could be successful.

An experiment by M.I.T. students showed how easy it was to dupe an advanced Google image classifier, software that identifies the objects in a picture. In that case, the students fooled the system into concluding that a plastic turtle was a rifle.
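The M.I.T. attack relied on carefully crafted "adversarial" changes to the object's appearance. As a rough illustration only, and not the students' actual method, the sketch below uses the well-known fast gradient sign method (FGSM) against a stock PyTorch image classifier; the choice of model and the stand-in input tensor are assumptions made purely for demonstration.

```python
# Minimal sketch of an adversarial perturbation (FGSM), for illustration only.
# This is not the M.I.T. students' technique; it simply shows how a tiny,
# gradient-guided change to an image can flip a classifier's prediction.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()

# Stand-in input: a 224x224 RGB tensor. A real attack would start from an
# actual preprocessed photo, e.g. of a plastic turtle.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# The model's original prediction for this input.
logits = model(image)
original_class = logits.argmax(dim=1)

# Gradient of the loss with respect to the *input pixels*, not the weights.
loss = F.cross_entropy(logits, original_class)
loss.backward()

# FGSM step: nudge every pixel slightly in the direction that raises the loss.
epsilon = 0.03  # perturbation budget, small enough to be hard to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# The perturbed image often receives a different label despite looking unchanged.
new_class = model(adversarial).argmax(dim=1)
print(f"before: class {original_class.item()}, after: class {new_class.item()}")
```

The point mirrored in the article is that the change is computed from the model's own gradients, so a tampered object can look entirely ordinary to a human observer while still fooling the system.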

Dr. Steven Walker, director of the Defense Advanced Research Projects Agency (DARPA), a pioneer in AI that initially funded what became the Internet, said the Pentagon still needs humans to review AI systems’ conclusions.

“Because these systems can be fooled,” Walker said in an interview.

DARPA is working on a project to make AI-driven systems capable of better explaining themselves to human analysts, something the agency believes will be critical for high-stakes national security programs.

 

‘WE CAN’T BE WRONG’

Among those working to improve the effectiveness of AI is William “Buzz” Roberts, director for automation, AI and augmentation at the National Geospatial-Intelligence Agency (NGA). Roberts works on the front lines of the U.S. government’s efforts to develop AI to help analyse satellite imagery, a crucial source of data for missile hunters.

Last year, NGA said it used AI to scan and analyse 12 million images. So far, Roberts said, NGA researchers have made progress in getting AI to help identify the presence or absence of a target of interest, although he declined to discuss individual programs.


In trying to assess potential national security threats, the NGA researchers work under a different kind of pressure from their counterparts in the private sector.

“We can’t be wrong … A lot of the commercial advancements in AI, machine learning, computer vision – if they’re half right, they’re good,” said Roberts.

Although some officials believe elements of the AI missile programme could become viable in the early 2020s, others in the U.S. government and the U.S. Congress fear research efforts are too limited.

“The Russians and the Chinese are definitely pursuing these sorts of things,” Representative Mac Thornberry, the House Armed Services Committee’s chairman, told Reuters. “Probably with greater effort in some ways than we have.”

 

(Reporting by Phil Stewart; Editing by Ross Colvin)
