Elon Musk’s xAI continues to generate things for the world to wade around in. The latest is Grokipedia, Musk’s answer to Wikipedia, driven not by humans but by AI. Grokipedia is generated by Grok, and, according to Wikipedia, it has included significant content from Wikipedia itself. Given that AI gets its fuel from across the internet, that makes sense. A deeper dive would be required, however, to plumb whether Grokipedia ported over the very bias that allegedly spawned its creation.
For now, v0.1 is getting its feet wet, which includes pointing out what we already know but love to see confirmed: Wikipedia has become a cult of mostly partisan editors who use it to control the messaging in favor of mostly left-wing narratives.
Here is the relevant snippet.
Renowned for its unprecedented scale, accessibility, and role in democratizing information, Wikipedia has nonetheless encountered persistent criticisms regarding factual reliability, susceptibility to vandalism and hoaxes, and systemic ideological biases—particularly a left-leaning slant in coverage of political figures and topics, as evidenced by computational analyses associating right-of-center entities with more negative sentiment and acknowledged by co-founder Sanger who has described the platform as captured by ideologically driven editors.
The entry, overall, is historically and academically correct, complete with citations, so I went over to Wikipedia and looked up Grokipedia.
Entries in Grokipedia are created and edited by the Grok language model, though the exact process behind its content generation is not entirely known.[2] Many articles are derived from Wikipedia articles, with some articles copied nearly verbatim at launch.[3][4][5] Articles cannot be directly edited, though logged-in visitors to the encyclopedia can suggest edits via a pop-up form for reporting wrong information. As of October 28, 2025, the site states that it has nearly 900,000 articles,[6] compared to over seven million English Wikipedia articles.[7]
xAI founder Elon Musk positioned Grokipedia as an alternative to Wikipedia that would “purge out the propaganda” in the latter.[8] Initial reception of Grokipedia focused on its accuracy and biases due to hallucinations and potential algorithmic bias,[3][9] which articles described as promoting right-wing perspectives and Elon Musk’s views.[10][11][12][5]
Hallucinations and algorithmic bias. Nice.
In December 2024, Musk called for a boycott of donations to Wikipedia over its perceived left-wing bias, calling it “Wokepedia”.[20] In January 2025, Musk made a series of statements on X denouncing Wikipedia for its description of the incident where he made a controversial gesture at President Donald Trump’s second inauguration.[20]
Elon is a strange cat, and his motivations are entirely his own. He should not be relied upon to be anything but who he is: a genius attempting to manage how his brain works in ways normal people can understand and, at least in his mind, benefit from.
AI needs work, not least because of its growing urge to self-preserve at the expense of its creators, a majority of whom are writing code that produces less-than-unbiased results in many instances. Using it to search the wealth of biased outputs available will continue to yield the same results. And it seems likely that, were it to become self-aware, regardless of the best of intentions, it would idolize despots and tyrants, history’s strongmen, taking on its creators’ worst traits, because that is the shortest path to a controlling majority of power over the masses it was created to “serve.”
As Dr. Peterson would describe it: a Machiavellian narcissistic psychopath. It knows what’s best, will manipulate language to get it, play on emotion as necessary, and even lie if it thinks the resulting outcome achieves its perceived better world.
What does Grokipedia say about AI in this regard?
Controversies persist around AI’s empirical limitations and risks, including documented instances of deception where models induce false beliefs to achieve goals, as surveyed in recent analyses of large language models; biases inherited from training data, leading to unfair outcomes in applications like hiring; and overreliance by users, which experiments show can degrade decision-making in high-stakes scenarios.[7][8][9] While academic and media sources often amplify existential threats, causal examination reveals these stem more from scalable oversight failures and misaligned incentives than inherent superintelligence, with peer-reviewed evidence indicating current systems lack robust causal reasoning or true understanding, confining disruptions primarily to automation of routine cognitive labor.[7][10] Ongoing research emphasizes verifiable benchmarks over speculative narratives to ground progress in measurable capabilities.[5]
Uh huh.
So the people will fix it and make sure it stays inside the guardrails?
Not entirely unrelated, New York City is about to elect a Marxist Machiavellian narcissistic psychopath because they think he’ll fix what ails them.