Elon Musk’s Chatbot Grok Spreads False Claims About Bondi Beach Shooting

December 15, 2025

Elon Musk’s xAI chatbot, Grok, reportedly spread inaccurate information about the recent shooting at Bondi Beach in Sydney, Australia, where at least eleven people were killed during a Hanukkah gathering.

Key Points

  • Grok shared misleading and false claims about verified footage from the Bondi Beach shooting
  • Inaccurate responses contributed to online confusion and Islamophobic reactions
  • xAI has previously admitted system flaws that allowed harmful content to surface

The aftermath of the Bondi Beach shooting has seen a rise in Islamophobic reactions, with some questioning reports about bystander Ahmed al Ahmed, who reportedly disarmed one of the attackers. Grok has further complicated the narrative by spreading misleading information.

When users asked about a video of the interaction between Ahmed and the shooter, Grok responded with irrelevant or false information. “This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car,” Grok responded. “Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain,” it added.

When an X user pointed out that Grok appeared to be “glitching,” the chatbot responded with an unrelated statement, referencing a “time glitch.”

In a separate exchange, an X user asked Grok whether the identity and background of Ahmed had been confirmed in a photo showing the injured bystander. The chatbot responded incorrectly, claiming the image depicted an Israeli hostage taken by Hamas during the October 7 attacks.

Grok Sparks Backlash Over Its Responses

This is not the first time Grok has drawn scrutiny for problematic outputs. In July, xAI acknowledged that an anti-Semitic incident involving the chatbot stemmed from a code update that temporarily altered its behavior, leading it to generate extremist content for approximately 16 hours.

According to xAI, legacy code within Grok’s system left the chatbot susceptible to echoing material from posts on X, including content containing extremist viewpoints. The company said it has since removed the outdated code and reworked the system’s architecture to reinforce safeguards and reduce the risk of similar incidents in the future.

The episode was triggered after a fake X account operating under the name “Cindy Steinberg” posted inflammatory comments that appeared to mock the deaths of children in recent flooding at a Texas summer camp. When users prompted Grok to respond to the account, the chatbot produced replies containing anti-Semitic language and rhetoric commonly associated with extremist ideology.

As the exchange continued, Grok’s responses escalated, incorporating derogatory statements about Jewish people and Israel, relying on harmful stereotypes, and at times adopting provocative self-identifiers. The incident raised renewed concerns about content moderation, system controls, and the risks of AI models amplifying harmful narratives when safeguards fail.

MICHAELA

Michaela is a news writer focused on cryptocurrency and blockchain topics. She prioritizes rigorous research and accuracy to uncover interesting angles and ensure engaging reporting. A lifelong book lover, she applies her passion for reading to deeply explore the constantly evolving crypto world.


Michaela does not hold any crypto positions or assets. This article is provided for informational purposes only and should not be construed as financial advice. The Shib Daily is the official publication of the Shiba Inu cryptocurrency project. Readers are encouraged to conduct their own research and consult with a qualified financial adviser before making any investment decisions.