Key Points
- ChatGPT’s latest GPT-5.2 model cited Grokipedia nine times in Guardian tests, particularly on obscure topics.
- Grokipedia content can include unverified or misleading information, raising concerns about AI reliability.
- Experts warn that citing such sources may give the appearance of credibility, spotlighting challenges in AI content verification.
OpenAI’s newest ChatGPT model has reportedly started referencing Grokipedia, the AI-driven encyclopedia from Elon Musk’s xAI, prompting scrutiny over potential bias and the reliability of information sourced by artificial intelligence.
A Guardian investigation found that ChatGPT’s latest GPT-5.2 model referenced Grokipedia nine times while answering over a dozen test questions. The queries included topics such as Iran’s political organizations and the biography of British historian Sir Richard Evans, who served as an expert witness in the David Irving Holocaust denial libel trial.
The news organization further reported that ChatGPT did not reference Grokipedia when asked about widely reported misinformation, including the January 6 insurrection, alleged media bias against US President Donald Trump, or the HIV/AIDS epidemic, areas where Grokipedia has been criticized for spreading false claims. Instead, the AI encyclopedia’s content appeared in responses to less commonly discussed or more obscure topics.
In some cases, ChatGPT cited Grokipedia to present claims that go beyond what is reported on Wikipedia. For example, the chatbot repeated assertions linking Iran’s MTN-Irancell telecommunications company to the office of the country’s supreme leader. The AI also drew on Grokipedia for details about Sir Richard Evans’ role as an expert witness in the David Irving libel trial, repeating claims the Guardian had previously debunked.
Concerns have grown around a practice known as “LLM grooming,” in which large volumes of disinformation are fed into AI models to influence their outputs. Disinformation researcher Nina Jankowicz, who has studied this phenomenon, said ChatGPT’s reliance on Grokipedia raises similar red flags.
While Musk may not have intended to shape AI models, Jankowicz noted that the Grokipedia entries she and her colleagues reviewed often relied on sources that were “untrustworthy at best, poorly sourced, and deliberate disinformation at worst.” She warned that when large language models cite platforms like Grokipedia, it can give these sources an appearance of credibility, potentially leading readers to assume the information has been independently verified by the AI.
This persistent presence of false or misleading content spotlights a growing challenge for AI developers: ensuring that chatbots not only provide accurate information but can also adapt quickly when errors are discovered. As AI becomes more integrated into research, education, and everyday decision-making, maintaining trust in these systems will require stronger verification processes, ongoing monitoring, and greater transparency about how sources are selected and evaluated.
In October 2025, Musk launched Grokipedia, intended as an alternative to Wikipedia amid his ongoing disagreements with the platform over editorial policies. Musk described the project’s mission as delivering “the truth, the whole truth and nothing but the truth,” acknowledging that while perfect accuracy may be unattainable, the platform is committed to striving toward it.
