The Pentagon is seeking to develop AI-powered cyber tools that can identify infrastructure targets in China, as part of an effort to improve US capabilities in any future military conflict with Beijing.
The department was in talks with leading AI companies about partnerships to conduct automated reconnaissance of China’s power grids, utilities and sensitive networks as well as those of other adversaries, said several people with knowledge of the plans.
The US has already created powerful cyber-espionage weapons but is seeking to deploy new AI-powered tools to identify software flaws in opponents’ systems that could then be exploited to enhance infiltration and degrade those systems in any conflict.
The proposed system would use AI to penetrate computer networks, map vulnerabilities and integrate potential targets into US war planning, the people added. The Pentagon declined to comment.
OpenAI, Anthropic, Google and Elon Musk’s xAI have been awarded contracts worth about $200mn to partner with the US government for military, cyber and security applications. Which companies will be involved in the new cyber initiative is yet to be determined.
A senior US official on Thursday warned that the administration would rip up all its existing agreements with Anthropic if the company failed to reach a deal with the Pentagon, after chief executive Dario Amodei said he would reject a “final offer” on the terms under which it would work with the military.
The Pentagon’s effort reflects a recognition in Washington of the increasing importance of cyber operations in any war with China, and the view that AI could help tilt the balance in a conflict. But the move also comes at a time of heightened tensions with some of the country’s most advanced AI companies over how far their technology should be used in military operations.
Dennis Wilder, former head of China analysis at the CIA, said AI cyber tools would help solve the problem posed by the huge amount of manpower needed to scan and identify vulnerable infrastructure.
“It’s equivalent to the thief in the night who tries the front door to homes until they find one that has been left unlocked,” said Wilder, now at Georgetown University. “AI-assisted cyber hacking can exponentially increase the number of doors tested and thus allow for much more efficient and accurate mapping of targets for selection.”
Military cyber experts already work on identifying vulnerable targets, but the envisioned AI tool would perform these tasks faster and at higher volume with far less human involvement.
“They have been building cyber offence strategies for infiltrating power grids. You have to build both the offence and the defence, you can’t have one without the other,” said one of the people familiar with the plans.
Another person said power plants near data centres could be targeted to disrupt adversaries’ AI capabilities.
Defence secretary Pete Hegseth has sought access to powerful generative AI technology for what he describes as “lawful use”. But AI labs have hesitated to give the Pentagon open-ended control over the technology.
Anthropic has sought to block the use of its model Claude in lethal autonomous weapons and wants restrictions on AI use for mass domestic surveillance, people close to the company said.
Amodei was told by Hegseth on Tuesday that his company could be branded a supply chain risk or have its technology co-opted by the government if it failed to agree to his terms by Friday.
Claude is currently the only AI model used in classified operations, but staff at other AI labs, including OpenAI and Google, have raised similar concerns, according to multiple people with knowledge of the matter.
“This is a response to what’s coming out of China and the lack of guardrails there,” said one person familiar with the Pentagon’s stance. “It’s open-ended use, we can’t have shackles on us when it all kicks off.”