Elon Musk’s new project, Grokipedia, went live on 27 October 2025—promising “the truth, the whole truth and nothing but the truth.”
But within days, historians and AI scholars began sounding alarms about rampant inaccuracies and ideological slants that threaten the very idea of a neutral knowledge base.
Key Takeaways
- Grokipedia launches with ~885 000 articles, far fewer than Wikipedia’s ~7 million.
- A study finds Grokipedia mirrors Wikipedia in style but has fewer citations and lower lexical diversity.
- Academic users have found significant factual errors in major entries, including those about respected historians.
- Critics warn that the platform’s AI-driven model lacks transparency, giving chat-room input equal weight to archival research.
- Grokipedia’s editorial process and potential ideological alignment raise concerns about who controls “truth” online.
Grokipedia is an AI-generated online encyclopaedia developed by Elon Musk’s company xAI. Launched 27 October 2025 with roughly 885 000 entries, it is powered by the Grok large-language model rather than volunteer editors. Academics warn it contains factual errors and ideological bias, highlighting new risks in AI-curated knowledge.
What is Grokipedia?
On 27 October 2025, Elon Musk and his AI company xAI launched Grokipedia version 0.1—an online encyclopedia powered by the Grok large-language model, designed as a rival to the Wikimedia Foundation’s Wikipedia.
Musk described it as “the truth, the whole truth and nothing but the truth.”
Compared with Wikipedia’s millions of entries, Grokipedia launched with around 885 000 articles.
Unlike Wikipedia’s volunteer-editor model, Grokipedia uses AI generation, supplemented by user “flagging” rather than open editing.
Why Musk says it was needed
Musk has long criticised Wikipedia, labelling it “Wokipedia” and claiming it carries a left-wing bias.
Grokipedia is positioned as a corrective: fewer human editors, more AI, less perceived activism, more “truth”.
He claims that knowledge must keep up with AI and the pace of change—and that traditional encyclopaedias were lagging.
Early critiques from scholars
Factual flaws in major entries
When British historian Sir Richard Evans checked his own entry, he found that “all these facts were false.”
For example:
- Grokipedia’s entry on Eric Hobsbawm wrongly claimed he experienced German hyperinflation and served as an officer in the Royal Corps of Signals, and it omitted his two marriages.
- Its article on Albert Speer repeated long-discredited distortions spread by Speer himself.
Editorial & epistemic concerns
Scholars say Grokipedia’s model conflates chat-room content with archival research—giving both the same status. “AI just hoovers up everything,” Evans told The Guardian.
David Larsson Heidenblad of the Lund Centre said: “There is a clash of knowledge cultures … the Silicon Valley mindset is iterative; the academic world builds trust over time.”
Bias and ideological leanings
Analysis shows many Grokipedia entries mirror right-leaning talking points. For example, the entry on the 2021 United States Capitol attack links to conspiracy-style claims and minimises Trump’s role.
Observers say that Grokipedia often excludes controversies or frames figures in a more favourable light than Wikipedia.
Transparency and governance worries
Andrew Dudfield of the fact-checking organisation Full Fact warned: “It doesn’t display the same transparency but it is asking for the same trust.”
Key questions remain unanswered: how much human oversight is there? What training data were used? How are sources chosen?
Comparison with Wikipedia – what the numbers say
A computational study of 382 matched article pairs between Grokipedia and Wikipedia found:
- Grokipedia articles tend to be longer but use less lexical diversity.
- They have fewer references per word, suggesting weaker sourcing.
- In structural depth (sections, sub-sections) they diverge from Wikipedia norms—less consistent editorial structuring.
- Many entries appear almost identical to Wikipedia’s.
In short: Grokipedia mirrors Wikipedia in shape, but lacks the depth, sourcing rigour and editorial transparency Wikipedia is known for.
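The study’s exact methodology is not spelled out here, but two of the metrics it reports—lexical diversity and references per word—can be illustrated with a minimal sketch. The function names and the use of a simple type-token ratio as the diversity proxy are assumptions of this sketch, not details from the study:

```python
import re

def lexical_diversity(text: str) -> float:
    """Type-token ratio: unique words / total words (a crude diversity proxy)."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def refs_per_word(text: str, n_refs: int) -> float:
    """References divided by word count, a rough sourcing-density measure."""
    words = re.findall(r"[a-z']+", text.lower())
    return n_refs / len(words) if words else 0.0

sample = "The cat sat on the mat. The cat slept."
print(round(lexical_diversity(sample), 2))   # 6 unique words / 9 total ≈ 0.67
print(refs_per_word(sample, 3))              # 3 references / 9 words ≈ 0.33
```

On metrics like these, a longer article with heavy repetition scores lower on diversity, and an article with few footnotes relative to its length scores lower on sourcing density—the pattern the study attributes to Grokipedia.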
Global implications & what this means for you
Influence on AI-trained knowledge systems
Wikipedia has long been a backbone of many AI systems’ training data. Grokipedia entering the mix means future AI systems may draw on content with less vetting and more bias.
Trust and misinformation risks
If one platform offering “truth” lacks transparent governance, readers will struggle to evaluate reliability. Trusted encyclopaedias matter not just for casual browsing, but for academic research, journalism and AI pipelines.
Knowledge commons vs proprietary project
Wikipedia runs under a non-profit, volunteer-driven model. Grokipedia is driven by a billionaire’s company. That raises fundamental questions about power, control and agendas in the age of AI-curated knowledge.
Impact on everyday readers
When you look up facts online:
- Beware: Grokipedia is not yet a human-peer-reviewed resource.
- Cross-check high-stakes information (health, history, politics) using multiple reputable sources.
- Recognise that even “AI-fact-checked” content can reflect ideological frames or selective omissions.
Future outlook & What happens next
- Grokipedia is labelled version 0.1—Musk has promised version 1.0 will be “10× better”.
- Academic scrutiny and automated studies will ramp up. The Yasseri et al. study is early but suggests persistent structural issues.
- The Wikimedia Foundation is monitoring developments. It has said Wikipedia’s strength remains its human-created knowledge.
- For readers and AI systems alike, governance questions will grow: Who writes the encyclopedia? Who edits it? Who controls the narrative?
Conclusion
Grokipedia is more than just a new website—it’s a flashpoint in the battle for how we define truth in the digital age. Musk’s claim to build a “truth”-driven encyclopedia is bold—but for now the academic verdict is harsh: high in ambition, low in reliability. As AI increasingly powers our knowledge tools, we must ask: who holds the pen?