The Skeptical Enthusiast’s Guide to “AI at FDA”


I welcome Commissioner Makary’s embrace of AI[1] and applaud yesterday’s announcement of the next step in agency adoption of AI technology.[2]

I wrote earlier that “AI can help FDA make better regulatory decisions based on more refined and accurate information… [although] there are legitimate concerns about relying on AI too quickly and without system refinement and validation.”

As a result, I characterize myself as a “skeptical enthusiast” on the topic of AI at FDA. Here is why: 

An historical perspective: When the Clinton administration downsized the federal government in the 1990s,[3] some of the success of that effort was attributable to embracing new technology. The Internet[4] was gaining traction, and there was an opportunity to create efficiencies in government around that then-new technology.[5]

A similar opportunity exists today for AI to play a role in making the US government more efficient. Then, as now, we need to appreciate that we are working with a technology whose capabilities, strengths and weaknesses are formative, not set. 

Fast forward to Current Headlines. The HHS Secretary’s “Make America Healthy Again” (MAHA) report was released on May 22, 2025. It contained a number of citations to studies that were never undertaken or never published.

Researchers face frontier justice[6] if they make things up. There likewise need to be consequences for relying on AI output that has not been evaluated for possible computational errors, biased sampling, and invented data, studies, and publications.

AI is a wondrous tool now and promises to be even better in the future. However, we cannot simply dismiss its problems because AI is the wave of the future.

AI 101: Artificial intelligence is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, and decisionmaking.[7] Sometimes AI is faster or has greater bandwidth or possesses more patience than humans; increasingly AI is capable of producing timely results that involve more inputs and more computations than any human mind could ever handle.

There are three features that are leading us into broader and more sophisticated uses of AI. These are: 1/ high-speed processing that enables AI to take on a broader range of more complex tasks; 2/ the expansion of AI models’ training material to include nearly everything ever written (the foundation of LLMs, or Large Language Models); and 3/ the transition from predictive AI to generative AI. Predictive AI uses data to predict likely outcomes. Generative AI learns patterns from large datasets and uses that knowledge to generate original, unique outputs.[8]
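To make the distinction concrete, here is a toy sketch of my own (illustrative only, and deliberately simplistic). The first half is predictive: it estimates the probability of an outcome from past observations. The second half is generative in miniature: it learns word-to-word patterns from a tiny text and produces a new sentence of its own.

    import random
    from collections import defaultdict

    # Predictive AI (toy): estimate the probability of an outcome
    # from historical observations, here a simple frequency estimate.
    past_outcomes = [1, 0, 1, 1, 0, 1, 1, 1]  # 1 = a prior trial met its endpoint
    p_success = sum(past_outcomes) / len(past_outcomes)
    print(f"Predicted probability of success: {p_success:.2f}")

    # Generative AI (toy): learn word-to-word transitions from a tiny
    # corpus, then generate new text that follows those patterns.
    corpus = "the agency reviews the data and the agency reports the findings".split()
    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    word = "the"
    generated = [word]
    for _ in range(6):
        word = random.choice(transitions.get(word, corpus))  # fall back if no successor
        generated.append(word)
    print("Generated:", " ".join(generated))

Real generative systems learn from billions of documents rather than one sentence, but the principle is the same: patterns in, novel output out.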

The Opportunity and Challenge for AI at FDA. Like most transformative new technologies, the promise of AI is running far ahead of its proven capabilities.

Nonetheless, FDA has no choice but to move forward. 

It already receives applications for review that incorporate artificial intelligence, whether in patient recruitment, trial design, evaluation of safety and efficacy, or generation of useful post-marketing information.[9] As a result, the agency must have staff that are fully capable of understanding and evaluating what is being proposed.

Equally interesting is the potential for FDA to incorporate AI into its own processes. FDA already invests in regulatory science,[10] seeing great potential for more efficient, as well as better-informed, decisionmaking. AI will accomplish both: the more efficient decisions will be very visible, while the better-informed decisionmaking will be less visible but profound in its impact.

FDA’s two greatest strengths are its expertise and the public trust in its decisionmaking. Those are particularly important given the complexity of regulating medical products and the inevitable inexactitude of scientific proof in biomedicine and medtech. 

Before AI can help FDA make better regulatory decisions more rapidly, the agency and its staff need to feel confident that they can trust the output from AI and that there are controls in place to limit misadventure. 

The MAHA report is a reminder that AI is not yet a trustworthy tool for many of its proposed uses, even seemingly simple ones. An error in a policy position paper is a small concern compared to an error in a complicated and important regulatory decision!

How can FDA use AI effectively while respecting its known limitations, including citations to literature that does not exist?

I asked for some ideas from Shannon Lantzy (shannon@shannonlantzy.com), a regulatory innovation strategist who has led major efforts to assess and improve FDA’s regulatory review performance. She replied: 

Trust in automation is not black and white; it can grow slowly with quality and accuracy checks along the way. Trust in AI needs to grow like trust in a new hire who needs to be trained, monitored, and coached with adequate feedback. 

Yes, GPT hallucinates. Reviewers can check the suggestions of AI review support, just like branch chiefs thoroughly review their brand-new hires’ work. LLMs can be prompted and tailored to reduce hallucination. Use cases can start with humans very much in the loop. It is relatively simple to build an LLM to check another LLM’s work, evaluating the validity of every citation. 

Critics of AI for regulatory work imagine the worst, but they don’t imagine what we already tolerate: drudgery work for experienced and highly skilled reviewers and little scale in review tools despite significant increases in reviewer workloads and medical product complexity. 
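Her point about checking citations can be made concrete. Judging whether a real paper actually supports a claim still takes an expert (or a carefully supervised LLM), but the narrower question of whether a cited paper exists at all can be tested mechanically. Here is a minimal sketch of my own, illustrative only and not an FDA tool, which uses Crossref’s public lookup service to ask whether a DOI is actually registered:

    import urllib.request
    import urllib.error

    def doi_exists(doi: str) -> bool:
        """Return True if the DOI is registered with Crossref.

        A fabricated ("hallucinated") DOI is not registered, so the
        lookup returns HTTP 404 and we report the citation as missing.
        """
        # Note: real DOIs may need URL-escaping; kept simple for illustration.
        url = f"https://api.crossref.org/works/{doi}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.URLError:
            return False

    # One real DOI (a published NEJM trial) and one invented for illustration.
    for doi in ["10.1056/NEJMoa2034577", "10.9999/fake.citation.2025"]:
        print(doi, "->", "found" if doi_exists(doi) else "NOT FOUND")

A check like this catches only invented identifiers; verifying that an existing citation says what a submission claims is the harder task that still requires a human, or a second LLM with a human, in the loop.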

Conclusion: Understandably, most of the controversy about AI is focused on the quality of its outputs (useful vs. unvalidated vs. hallucinated). We also need to move forward on the process itself: the difficulty of incorporating fundamental and sophisticated changes into an already complex regulatory decisionmaking system.

Four things seem most important to me:  

  • Acceptance: All parties need to accept that broad application of AI at FDA is appropriate and inevitable. They need to work together to maximize its potential and minimize input and output errors. 

  • Transparency: FDA needs to articulate how it uses AI in its work now, what uses are formative, and what uses are aspirational. 

  • Validation: If trust in FDA and its decisions is to be maintained, then FDA needs to show how the tools it relies upon, such as AI, have been validated. 

  • Control: There need to be ongoing means by which AI applications are assessed for bias and error.

AI changes everything, but not instantly and not without humans assessing the inputs and outputs. 



  1. https://www.fdamatters.com/fdamatters/finally-some-encouraging-news-from-fda 

  2. https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people?utm_medium=email&utm_source=govdelivery 

  3. https://www.fdamatters.com/fdamatters/what-is-happening-to-federal-workers-at-fda 

  4. https://apnews.com/article/trump-musk-doge-clinton-reinventing-government-gore-a95795eb75cacc03734ef0065c1b0a6d

  5. Public access to the Internet and the launch of the World Wide Web occurred in the early 1990s. https://en.wikipedia.org/wiki/Internet

  6. Were a researcher to make up a citation or article, colleagues would likely ostracize them; all of their prior research would be scrutinized; their grants might be cancelled; and their academic affiliations might be suspended.

  7. Once incorporated into common experience, AI is often not perceived as AI per se. AI is already in widespread use in Google Search, Siri, Yelp recommendations, and autonomous vehicles. https://en.wikipedia.org/wiki/Artificial_intelligence

  8. That is, a lot of past AI work could, in theory, have been done by a sufficiently large team with a sufficiently large amount of time and infinite patience. For example: how many times does Shakespeare use “thou” in his plays? Generative AI can create outputs that no number of people or amount of time could accomplish. For example: build an algorithm and demonstrate how a radiologist, in real time, can compare one X-ray with 100,000 prior ones that are documented in the literature.

  9. AI will also play a significant role in food safety regulation, but I have not yet explored the topic.

  10. Regulatory Science is the science of developing new tools, standards, and approaches to assess the safety, efficacy, quality, and performance of some FDA-regulated products. https://www.fda.gov/science-research/focus-areas-regulatory-science-report/focus-areas-regulatory-science-introduction 
