The Food and Drug Administration has been ground zero for health secretary Robert F. Kennedy Jr.’s push to integrate artificial intelligence tools into the federal public health infrastructure.
On paper, it makes sense: The FDA is notoriously slow, built upon grinding research and product reviews that can drag on for years under strictly human control. Tech leaders have heralded a coming AI “strike team” for the FDA, with promises that industry partnerships will remake the lethargic agency.
But the AI revolution isn’t off to a great start.
Elsa, the FDA’s proprietary chatbot, has been plagued with issues, its capabilities limited by the fact that it isn’t connected to the internet due to security risks. One FDA reviewer told NOTUS that the tool is also not integrated across different FDA centers and doesn’t have access to agency files because of patient privacy concerns.
“FDA does not want to let the Anthropic engineers see our files,” the reviewer said, referring to the AI company whose large language model, Claude, powers Elsa. “This does limit the usefulness of an AI right now — they have to figure out if they can trust the software with so much access.”
AI’s muddled start within the FDA exemplifies the tug-of-war taking place between the DOGE-era streamline-first, worry-later mindset and Kennedy’s desire to create an agency that weighs the MAHA movement’s values when reviewing drugs and medical devices.
Former FDA commissioner Robert Califf warned that HHS-tech collaborations run the risk of conflicts of interest — a risk that would be counterproductive to the MAHA movement’s professed goal of ridding the federal public health system of industry influence. (Kennedy has called for an end to direct-to-consumer pharmaceutical advertising and reform to the FDA user-fee structure, in which the FDA is partially funded by companies whose products are reviewed by the agency.)
“There has to be a pretty careful balancing act because you run the substantial liability of not using the best technology or the best methods because you’re not interacting with the outside world in an effective way,” said Califf, who served as commissioner from 2015 to 2017 and again from 2022 to 2025. “But on the other hand, if that’s not managed well, you can end up with some very adverse conflict of interest issues that can be detrimental.”
FDA commissioner Marty Makary has given conflicting statements about the role of AI at his agency. He has said in the past that Elsa will revolutionize how the U.S. approves drugs and medical devices. But Makary told CNN last week that most agency staff are currently using Elsa for its “organization abilities,” including finding studies and summarizing meetings.
One FDA employee called Makary’s communications about the agency’s goals “so disingenuous.”
“I love and loved my job, we did good work with some of the smartest people, and they’re being routinely disrespected and lied to,” they told NOTUS.
The employee said that their experience using Elsa was limited, but when they attempted to use the chatbot to find a link to a reference in statute that they knew existed, Elsa returned a broken link.
An HHS spokesperson said in an email to NOTUS that the FDA’s previous attempts to use in-house AI models were a “failure,” and that Elsa “demonstrated in its pilot and additional rollouts that it has markedly improved efficiencies across multiple job functions.”
“Anyone versed in large language models knows that they inherently hallucinate,” the HHS spokesperson wrote. “Therefore, FDA has put in guardrails, disclaimers, and trainings to help staff better use Elsa to its full potential.”
But further development of the chatbot may be limited because Anthropic has not been able to embed its employees with the FDA — they don’t have the required security clearances, the FDA reviewer said.
The reviewer added that the federal employees who would be working “hand-in-hand” with the AI company are “just overwhelmed with work and honestly rather pissed at software engineers, due to our DOGE experience, and really not that welcoming.”
The result is an AI tool that has limited uses and a tendency to reference scientific studies that don’t actually exist, CNN reported last week.
Elsa may be hallucinating more than just scientific studies. The FDA reviewer who spoke to NOTUS said they no longer use Elsa to draft communications to drug companies in which they cite FDA industry guidance because of the tool’s propensity for creating guidances out of thin air.
“It seemed to quote the guidance, and it was believable enough that my colleagues went to look for it on the FDA network,” the reviewer said, recounting conversations with those colleagues.
When the FDA announced Elsa last month, the agency hailed it as an “innovative tool” that would help staff across the FDA.
“The agency is already using Elsa to accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets,” the FDA said at the time.
Andrew Powaleny, a spokesperson for the Pharmaceutical Research and Manufacturers of America, told NOTUS that PhRMA is excited by the prospect of AI at the FDA — but urged caution and “human oversight.”
“Ensuring medicines can be reviewed for safety and effectiveness in a timely manner to address patient needs is critical,” Powaleny said in an emailed statement.
Elsa’s development fits right into the high-tech, high-efficiency vision of the federal government pushed by Elon Musk and adopted by Kennedy, who pledged at committee hearings on the HHS budget to “do more with less.”
But Kennedy’s relationship with AI has not been going smoothly. The MAHA Commission Report, released in May, included references to nonexistent studies that may have been the fault of an AI tool.
Nevertheless, the White House released an “AI Action Plan” this month that calls for the FDA to help establish “regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools while committing to open sharing of data and results.”
Califf told NOTUS he felt like the FDA under his leadership hadn’t taken full advantage of AI because of conflict of interest concerns. Now, Califf said, he thinks “the elements stressed in the new regime are the right ones.”
Streamlining the agency’s internal operations, freeing reviewers from tedious tasks so they can spend more time working directly with companies, and analyzing company-submitted data to identify potential fabrication? All good uses for AI, Califf said.
But he added that it takes a light touch since outside partnerships come with “kind of obvious” risks.
“I’m very enthusiastic about it going forward, but it’s got to be governed with the public health interest in mind, and it’s got to be evaluated and adjudicated all along the way in a way which is not corrupt or playing favorites with people to give money to friends or political favorites,” Califf said.
Several tech companies have offered to help the FDA get in on the AI boom, and one red-tape-slashing nonprofit is apparently raising money to launch a pilot program at the FDA.
Palantir co-founder Joe Lonsdale said in a Substack post this month that he was supporting the Abundance Institute, a Koch-linked nonprofit that’s raising money to drop an AI “strike team” into the FDA.
The Abundance Institute is trying to raise $4 million “to place a strike team of 15-20 AI-native software engineers, data scientists, and product leaders inside the FDA to accelerate the FDA’s latest AI initiatives and bring outsider perspectives on new areas to iterate on fast,” Lonsdale wrote.
“These leaders will remain Abundance employees, but under the Intergovernmental Personnel Act they can sit desk-to-desk with reviewers, wire modern data pipes into legacy silos, and automate the mind-numbing paperwork that turns months into years.”
But an FDA spokesperson told NOTUS that the agency has “no existing engagements with Abundance Institute.” The spokesperson added that “the FDA is continuing to explore opportunities to bring great talent to transform the agency.”
The Abundance Institute would be a different sort of partner for the FDA. The conservative nonprofit describes its approach as “[f]ocusing on the societal and policy barriers that emerging technologies face.”
“This has become central to our work at the Abundance Institute: without institutional reform, there’s no capacity to scale. You can’t achieve digital abundance if environmental reviews block the data center. You can’t build energy abundance when transmission lines take a decade to permit,” Abundance Institute CEO Chris Koopman wrote last month in The American Conservative.
Califf, who worked at Google’s parent company Alphabet between stints at the FDA, expressed unease when NOTUS read excerpts of Lonsdale’s post describing the pilot program.
“I believe in civil service as a great thing for the American people. And what really worries me about what you described is that they’ll be like a Trojan horse to come in and destroy the function of the civil service like DOGE tried to do,” Califf said.
Neither Lonsdale nor Koopman responded to requests for comment.
The FDA reviewer told NOTUS that they felt the Palantir co-founder’s claims were just the “usual hyperbole of Silicon Valley.”
“That’s for the investors, not for us,” said the reviewer.
Still, they said they are cautiously optimistic about partnerships between AI companies and the agency.
“I am sure in the long range we will have productive collaboration,” they said. But they said it will not be because of tech entrepreneurs boasting about their “elite engineers” on social media.
“It will be because we quietly work together to solve specific problems, and the AI will just be another tool in our toolbox,” said the reviewer.