ChatGPT Model Found Referencing Elon Musk’s Grokipedia in Responses

January 26, 2026

Key Points

  • ChatGPT’s latest GPT-5.2 model has cited Grokipedia nine times in tests, particularly for obscure topics.
  • Grokipedia content can include unverified or misleading information, raising concerns about AI reliability.
  • Experts warn that citing such sources may give the appearance of credibility, spotlighting challenges in AI content verification.

OpenAI’s newest ChatGPT model has reportedly started referencing Grokipedia, the AI-driven encyclopedia from Elon Musk’s xAI, prompting scrutiny over potential bias and the reliability of information sourced by artificial intelligence.

A Guardian investigation found that ChatGPT’s latest GPT-5.2 model referenced Grokipedia nine times while answering over a dozen test questions. The queries included topics such as Iran’s political organizations and the biography of British historian Sir Richard Evans, who served as an expert witness in the David Irving Holocaust denial libel trial.

The news organization further reported that ChatGPT did not reference Grokipedia when asked about widely reported misinformation, including the January 6 insurrection, alleged media bias against American President Donald Trump, or the HIV/AIDS epidemic, areas where Grokipedia has been criticized for spreading false claims. Instead, the AI encyclopedia’s content appeared in responses to less commonly discussed or more obscure topics.

Related: OpenAI Introduces Age Prediction on ChatGPT to Strengthen Youth Safety

In some cases, ChatGPT cited Grokipedia for claims that go beyond what Wikipedia reports. For example, the chatbot repeated assertions linking Iran’s MTN-Irancell telecommunications company to the office of the country’s supreme leader. It also drew on Grokipedia for details about Sir Richard Evans’ role as an expert witness in the David Irving libel trial, details the Guardian had previously debunked.

Concerns have grown around a practice known as “LLM grooming,” in which large volumes of disinformation are fed into AI models to influence their outputs. Disinformation researcher Nina Jankowicz, who has studied this phenomenon, said ChatGPT’s reliance on Grokipedia raises similar red flags. 

While Musk may not have intended to shape AI models, Jankowicz noted that the Grokipedia entries she and her colleagues reviewed often relied on sources that were “untrustworthy at best, poorly sourced, and deliberate disinformation at worst.” She warned that when large language models cite platforms like Grokipedia, it can give these sources an appearance of credibility, potentially leading readers to assume the information has been independently verified by the AI.

Related: OpenAI Taps Real Freelancer Work to Benchmark AI Office Performance

This persistent presence of false or misleading content spotlights a growing challenge for AI developers: ensuring that chatbots not only provide accurate information but can also adapt quickly when errors are discovered. As AI becomes more integrated into research, education, and everyday decision-making, maintaining trust in these systems will require stronger verification processes, ongoing monitoring, and greater transparency about how sources are selected and evaluated.

In October 2025, Musk launched Grokipedia, intended as an alternative to Wikipedia amid his ongoing disagreements with the platform over editorial policies. Musk described the project’s mission as delivering “the truth, the whole truth and nothing but the truth,” acknowledging that while perfect accuracy may be unattainable, the platform is committed to striving toward it.

Frequently Asked Questions

What is Grokipedia, and why is ChatGPT citing it?
Grokipedia is an AI-driven encyclopedia launched by Elon Musk’s xAI in October 2025, positioned as an alternative to Wikipedia. ChatGPT has begun referencing it for certain topics, particularly obscure or less-discussed subjects.

Did ChatGPT cite Grokipedia on well-known misinformation topics?
No. ChatGPT did not cite Grokipedia when asked about topics like the January 6 insurrection, alleged media bias against Donald Trump, or the HIV/AIDS epidemic, areas where Grokipedia has been criticized for spreading misinformation.

Why are experts concerned about these citations?
Researchers warn that Grokipedia entries often rely on untrustworthy or poorly sourced information. When AI models cite it, these sources may appear credible, potentially misleading readers and spreading false or biased content.
MICHAELA

Michaela is a news writer focused on cryptocurrency and blockchain topics. She prioritizes rigorous research and accuracy to uncover interesting angles and ensure engaging reporting. A lifelong book lover, she applies her passion for reading to deeply explore the constantly evolving crypto world.


Michaela has no crypto positions and does not hold any crypto assets. This article is provided for informational purposes only and should not be construed as financial advice. The Shib Daily is the official publication of the Shiba Inu cryptocurrency project. Readers are encouraged to conduct their own research and consult with a qualified financial adviser before making any investment decisions.