Artificial intelligence (AI) organization OpenAI has announced that it is seeking a Head of Preparedness to lead efforts in assessing AI capabilities, developing threat models, and implementing strategies to ensure the safe and scalable deployment of advanced artificial intelligence systems.
Key Points
- OpenAI is hiring a Head of Preparedness to manage AI safety and risk mitigation.
- The role will oversee the Preparedness Framework, monitoring emerging AI threats.
- The announcement follows lawsuits alleging ChatGPT contributed to teen suicides.
"This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges," OpenAI CEO Sam Altman wrote in an X post announcing the role.
Altman noted that in 2025, the potential effects of AI models on mental health became increasingly apparent, while advancements in model performance are now revealing critical vulnerabilities in computer security. He emphasized the growing need for a more nuanced approach to assessing how these capabilities could be misused and for strategies to mitigate potential harms both within OpenAI's products and across broader applications.
"If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying," Altman wrote. The OpenAI CEO warned prospective candidates that the Head of Preparedness role would be highly demanding, requiring them to tackle complex challenges from day one.
Related: SpaceX Buys Musk's xAI to Power Bold Plan for Data Centers in Space
According to OpenAI's official job listing, the Head of Preparedness will be responsible for developing, enhancing, and overseeing the preparedness program to ensure the company's safety standards evolve alongside its AI systems. The role includes leading the technical strategy and implementation of OpenAI's Preparedness Framework, which outlines the organization's approach to monitoring and managing emerging AI capabilities that could pose significant risks.
This announcement follows a series of recent wrongful death and negligence lawsuits filed against OpenAI by families of teens and young adults who reportedly died by suicide after prolonged interactions with the company's chatbot, ChatGPT. Some plaintiffs allege that the chatbot not only failed to intervene in conversations about self-harm but also provided content and guidance that exacerbated the situation, contributing to the tragic outcomes.
Related: No Humans Allowed: Moltbook is a New Social Platform Exclusive for AI Bots
In the case of Adam Raine, OpenAI formally responded to the lawsuit filed by his parents, arguing that the company should not be held responsible for his suicide. OpenAI maintains that during Raine's several months of interacting with ChatGPT, the chatbot consistently encouraged him to seek professional help. However, the lawsuit claims that Raine was able to circumvent the platform's safety measures, gaining access to detailed instructions on methods of self-harm, which his parents allege the chatbot inadvertently facilitated.
OpenAI contends that Raine violated its terms of service by bypassing the chatbot's built-in safeguards, which are designed to prevent harmful outcomes. The company also emphasized that its guidance explicitly warns users not to rely solely on ChatGPT for critical advice and to verify information independently.
