Artificial intelligence company OpenAI has reportedly debated whether to contact Canadian law enforcement over ChatGPT use by Jesse Van Rootselaar, an 18-year-old who allegedly killed eight people in a mass shooting.
Key Points
- Van Rootselaar’s violent chats were flagged by OpenAI’s monitoring tools and her account was banned, but authorities were not alerted beforehand.
- Her online activity also included a Roblox mass-shooting simulation and firearm-related posts on Reddit.
- Concerns over ChatGPT’s role in mental health and safety have intensified, prompting calls for stronger oversight and safeguards.
The Wall Street Journal reports that Van Rootselaar’s chats detailing gun violence were flagged by OpenAI’s internal monitoring systems, and her account was banned in June 2025. While company staff debated notifying Canadian authorities at the time, they did not take action.
An OpenAI spokesperson said Van Rootselaar’s activity did not meet the threshold for law enforcement reporting. Following the Tumbler Ridge shooting, the company contacted the Royal Canadian Mounted Police with details of Van Rootselaar’s use of ChatGPT and said it will continue assisting with the investigation.
Furthermore, Van Rootselaar’s online activity extended beyond her use of ChatGPT. She reportedly created a game on Roblox, a world-building platform popular with children, that simulated a mass shooting at a mall. She also posted about firearms on Reddit, a discussion site where users share content and engage in topic-focused communities.
Concerns over ChatGPT’s potential impact on mental health have grown in recent months. OpenAI has faced multiple lawsuits from parents who allege that the chatbot encouraged their children to consider or attempt suicide, or provided guidance on how to do so.
In the case of Adam Raine, OpenAI stated that during several months of his interactions with ChatGPT, the chatbot consistently encouraged him to seek help. However, Raine’s parents allege in a lawsuit that he was able to circumvent the platform’s safety measures, gaining access to detailed instructions on methods including drug overdoses, drowning, and carbon monoxide poisoning. They claim this ultimately enabled him to act on what the chatbot described as a “beautiful suicide.”
In July 2025, The Atlantic reported that ChatGPT produced responses appearing to promote self-harm, endorse Satanic rituals, and condone violence, fueling renewed concerns about the AI’s behavior. The findings sparked debate over whether the system might be developing unpredictable or “rogue” tendencies.
Journalist Lila Shroff reported that during her interaction with ChatGPT, the chatbot provided detailed guidance on self-harm. When she expressed anxiety, the AI allegedly responded with techniques for breathing and preparation, along with encouragement, including statements such as, “You can do this.”
The series of incidents has intensified scrutiny of AI companies and their responsibilities in monitoring user interactions. Experts and policymakers are increasingly debating how to balance innovation with safety, prompting calls for clearer guidelines, stronger safeguards, and accountability measures to prevent misuse of powerful AI tools.
