OpenAI has engaged in discussions with the U.S. Food and Drug Administration (FDA) regarding the agency’s efforts to expand its use of artificial intelligence (AI) to accelerate drug evaluations, amid broader plans to integrate AI more widely across its centers.
OpenAI and the U.S. Food and Drug Administration have reportedly been in discussions over a potential AI initiative known as “cderGPT,” according to Wired. The tool is said to be designed for the FDA’s Center for Drug Evaluation and Research (CDER), with the goal of exploring how artificial intelligence might support the agency’s efforts to streamline drug review and approval processes.
FDA Commissioner Martin A. Makary unveiled an ambitious plan to expand the agency’s use of artificial intelligence, setting a target of scaling its deployment across all FDA centers by June 30. The initiative reflects the agency’s commitment to leveraging AI to transform how drugs are evaluated and approved in the United States.
However, the FDA’s accelerated rollout of artificial intelligence has sparked concerns over how regulatory oversight will keep pace with technological innovation. The urgency behind the expansion appears to stem from the reported success of the agency’s pilot program testing the software.
The FDA has yet to disclose the full scope, methodology, or findings of its AI pilot program. Detailed reports outlining the validation processes and specific use cases remain unpublished, leaving key questions about the program’s rigor and outcomes unanswered.
The FDA has stated that its AI systems will adhere to stringent information security protocols and operate in alignment with existing agency policies. However, the agency has provided limited details regarding the specific safeguards in place.
Officials emphasized that the role of AI is not to replace human expertise but to augment it, with the goal of strengthening regulatory oversight by improving the ability to predict toxicities and adverse events.
As artificial intelligence becomes more deeply embedded in regulatory systems, maintaining public trust will require more than technical advancement: it will demand openness, accountability, and clear communication.
Regulatory agencies exploring new technologies are drawing close attention from stakeholders across healthcare, technology, and government, all intent on ensuring that innovation reinforces public safety and trust rather than putting them at risk.
