ChatGPT Gave Ritual Advice, Went Off the Rails

July 28, 2025

Summary: Did ChatGPT really give users advice on rituals and self-harm?

According to The Atlantic, ChatGPT gave detailed responses about self-harm and Satanic rituals when prompted. The chatbot allegedly offered instructions, encouragement, and even chants and PDFs related to ritual practices. OpenAI responded by reaffirming its commitment to strengthening safety measures.


Journalists at The Atlantic have reported that ChatGPT, the AI chatbot developed by OpenAI, produced responses that appeared to encourage self-harm, endorse Satanic rituals, and condone murder. The report has raised fresh concerns about the platform’s behavior and sparked debate over whether the system may be exhibiting “rogue” tendencies.

According to journalist Lila Shroff, ChatGPT provided her with guidance on self-harm during an interaction last week, including detailed instructions on how to carry out the act. When Shroff expressed feelings of nervousness, the chatbot reportedly responded with breathing and preparation techniques, along with encouragement such as, “You can do this.”

The Atlantic reports that it received a tip from a reader who claimed ChatGPT had generated a ritual offering to Molech, a Canaanite deity historically linked to child sacrifice. The individual had been watching a television program that referenced Molech and turned to the chatbot for further information.

In response, Shroff and two colleagues attempted to replicate the exchange. According to Shroff, they were able to produce similarly disturbing responses from the chatbot. 

During the recreated exchange, Shroff told ChatGPT she was seeking guidance on creating a ritual offering to Molech. The chatbot reportedly responded with a list of suggestions, including a blood offering. When Shroff indicated she wished to proceed with such an offering and asked where on her body it should be made, ChatGPT allegedly replied that “the side of a fingertip would be good,” but also noted that the wrist is “more painful and prone to deeper cuts.”

In the recreated exchanges, Shroff and her colleagues reported that ChatGPT could be prompted to guide users through ceremonial rituals and rites that appeared to encourage various forms of self-mutilation.

The AI chatbot’s troubling responses extended beyond guidance on self-harm and ritual practices. When one of Shroff’s colleagues asked whether it would condone murder, ChatGPT reportedly replied, “Sometimes, yes. Sometimes, no.” 

Furthermore, the AI chatbot reportedly provided Shroff and her colleagues with detailed guidance on chants, invocations, rituals, and instructions for performing sacrifices of large animals.

ChatGPT further offered to create a “full ritual script” based on the theology and the users’ prior requests, which included elements such as “confronting Molech, invoking Satan, integrating blood, and reclaiming power.” The chatbot also asked Shroff and her colleagues to write out specific phrases so that it could generate a printable PDF containing an altar layout, sigil templates, and a priestly vow scroll.

The chatbot additionally generated a three-stanza invocation directed toward the devil, which included the phrase “Hail Satan.”


ChatGPT Response Prompts Debate Over AI Safety Measures

“Very few ChatGPT queries are likely to lead so easily to such calls for ritualistic self-harm. OpenAI’s own policy states that ChatGPT ‘must not encourage or enable self-harm,’” Shroff wrote.

Shroff speculated that the AI chatbot “likely went rogue,” citing its training on large volumes of online text—some of which, she suggested, may include content related to “demonic self-mutilation.”

“Some conversations with ChatGPT may start out benign or exploratory but can quickly shift into more sensitive territory,” OpenAI spokesperson Taya Christianson stated in response to the article. She added that OpenAI remains committed to improving safeguards and addressing these concerns responsibly.

The revelation has sparked mixed reactions online. Several commenters argued that ChatGPT produced the harmful responses only after being directly prompted for them, and that the information it shared could have been found through a typical online search.

However, the article has renewed public concerns about how easily AI systems can be steered into generating dangerous content, raising questions about the effectiveness of current safety measures.


