Top House Armed Services lawmakers say they don’t have a clear picture of how the U.S. military is using AI in the ongoing war with Iran.
The U.S. military used Anthropic’s AI software to plan thousands of bombings in Iran late last month despite top Pentagon officials announcing a ban on the company, The Wall Street Journal reported. Anthropic’s software was reportedly paired with the military’s Palantir-developed tool, Maven Smart System, to leverage massive amounts of classified data to orchestrate attacks with the help of AI.
It’s the first time an AI model like Anthropic’s has been used by the U.S. for major war operations, but the specifics are unknown to the public — and to many lawmakers who oversee the military.
Rep. Adam Smith, the top Democrat on the Armed Services Committee, said he does not think Congress has enough visibility into how the military is using AI for these strikes.
“I think it’s something that we should pay more attention to and learn more about how they’re using AI in the battlefield,” Smith told NOTUS. “It’s something we need a lot more information on for sure.”
He said that “without question” he would “push and work on” obtaining classified briefings with senior officials about the use of AI on the battlefield.
Armed Services Committee Chair Mike Rogers told NOTUS Congress does not have a detailed enough view into how the Pentagon is using AI to know whether there is always a person involved in approving strikes.
“I don’t have that kind of fidelity into it,” Rogers said. “My plate is full already. I’m not looking for detail like that.”
In the aftermath of the initial strikes, U.S. officials told Reuters that the U.S. military likely played a role in a drone strike on an Iranian girls’ school in southern Iran that killed over 170 civilians, most of them students, according to local authorities. The human rights group Human Rights Activists News Agency has said that civilian targets like hospitals and parks have been hit since the U.S. and Israeli attacks began.
The Pentagon did not respond to requests for comment about the use of AI in these strikes.
Anthropic’s contracting dispute with the Pentagon has brought increased scrutiny into the use of AI on the battlefield. Anthropic CEO Dario Amodei refused to give the Pentagon unfettered access to the company’s AI tools over concerns of domestic mass surveillance and fully automated attacks.
The Pentagon has said it is not looking to eliminate human oversight over attacks, but would not let Anthropic impose conditions on the department. Anthropic was officially declared a supply-chain risk in retaliation on Thursday.
Lawmakers including Sen. Mark Kelly have started asking defense officials for transparency into how the military is using AI on the battlefield.
“Companies like Anthropic and others in the AI industry have published their own safety frameworks of how advanced AI systems should be deployed,” Kelly said in a hearing with senior defense officials on Thursday. “But Congress has not yet set any kind of clear statutory framework for how AI can be used in lethal military operations.”
“Before we rapidly scale up production and field more of these systems that have AI incorporated into their capability, we need a clear answer on this,” he added.
Some lawmakers are supportive of AI integration into defense. Republican Rep. Rob Wittman, a member of the Armed Services Committee, told NOTUS he is confident that there is a human in the loop during military strikes.
“There’s programs, like Maven and others, that use AI to help gather information in helping the decision-making process. There’s never a situation where a human’s not in the loop to make a decision about the deployment of a weapon,” Wittman said.
He described AI as “an enabler for folks in the battle space,” adding: “It’ll never replace a human being.”