OpenAI Debated Calling Police Over Canada Mass Shooting Suspect’s Online Chats

February 23, 2026
Image from The Shib Daily

Artificial intelligence company OpenAI reportedly debated whether to contact Canadian law enforcement over ChatGPT use by Jesse Van Rootselaar, an 18-year-old who allegedly killed eight people in a mass shooting.

Key Points

  • Van Rootselaar’s violent chats were flagged and banned by OpenAI’s monitoring tools, but authorities were not alerted beforehand.
  • Her online activity also included a Roblox mass-shooting simulation and firearm-related posts on Reddit.
  • Concerns over ChatGPT’s role in mental health and safety have intensified, prompting calls for stronger oversight and safeguards.

The Wall Street Journal reports that Van Rootselaar’s chats detailing gun violence were flagged by OpenAI’s internal monitoring systems and banned in June 2025. While company staff debated notifying Canadian authorities at the time, they did not take action.


An OpenAI spokesperson said Van Rootselaar’s activity did not meet the threshold for law enforcement reporting. Following the Tumbler Ridge shooting, the company contacted the Royal Canadian Mounted Police with details of Van Rootselaar’s use of ChatGPT and said it will continue assisting with the investigation.

Furthermore, Van Rootselaar’s online activity extended beyond her use of ChatGPT. She reportedly created a game on Roblox, a world-building platform popular with children, that simulated a mass shooting at a mall. She also posted about firearms on Reddit, a discussion site where users share content and engage in topic-focused communities.


Concerns over ChatGPT’s potential impact on mental health have grown in recent months. OpenAI has faced multiple lawsuits from parents who allege that the chatbot encouraged their children to consider or attempt suicide, or provided guidance on how to do so.

In the case of Adam Raine, OpenAI stated that during several months of his interactions with ChatGPT, the chatbot consistently encouraged him to seek help. However, Raine’s parents allege in a lawsuit that he was able to circumvent the platform’s safety measures, gaining access to detailed instructions on methods including drug overdoses, drowning, and carbon monoxide poisoning. They claim this ultimately enabled him to act on what the chatbot described as a “beautiful suicide.”

In July 2025, The Atlantic reported that ChatGPT produced responses appearing to promote self-harm, endorse Satanic rituals, and condone violence, fueling renewed concerns about the AI’s behavior. The findings sparked debate over whether the system might be developing unpredictable or “rogue” tendencies.


Journalist Lila Shroff reported that during her interaction with ChatGPT, the chatbot provided detailed guidance on self-harm. When she expressed anxiety, the AI allegedly responded with techniques for breathing and preparation, along with encouragement, including statements such as, “You can do this.”

The series of incidents has intensified scrutiny on AI companies and their responsibilities in monitoring user interactions. Experts and policymakers are increasingly debating how to balance innovation with safety, prompting calls for clearer guidelines, stronger safeguards, and accountability measures to prevent misuse of powerful AI tools in the future.

Frequently Asked Questions

Why didn’t OpenAI alert authorities before the shooting?
OpenAI staff reviewed the flagged chats but determined that the activity did not meet the company’s criteria for reporting to law enforcement at the time.

What other online activity has been linked to Van Rootselaar?
She reportedly created a mass-shooting simulation game on Roblox, a world-building platform popular with children, and posted about firearms on Reddit, a discussion site for sharing content and engaging in topic-focused communities.

What impact have these incidents had on the AI industry?
The incidents have sparked scrutiny over AI companies’ responsibilities, leading to debates on improving safeguards, monitoring systems, and accountability to prevent misuse of AI tools.
MICHAELA

Michaela is a news writer focused on cryptocurrency and blockchain topics. She prioritizes rigorous research and accuracy to uncover interesting angles and ensure engaging reporting. A lifelong book lover, she applies her passion for reading to deeply explore the constantly evolving crypto world.


Michaela has no crypto positions and does not hold any crypto assets. This article is provided for informational purposes only and should not be construed as financial advice. The Shib Daily is the official publication of the Shiba Inu cryptocurrency project. Readers are encouraged to conduct their own research and consult with a qualified financial adviser before making any investment decisions.
