As AI-generated content blurs reality, Elon Musk's Grok AI launched a fact-checking feature to verify online posts. However, Grok's history of significant errors, including unfounded claims and offensive suggestions, raises concerns about its reliability in combating misinformation. AI hallucinations remain a persistent challenge, underscoring the need for human oversight.
Can AI save us from fake news, or will it spread more of it? In today's digital age, spotting truth amid the noise feels like a full-time job. Social media floods us with claims that go viral in minutes: some real, some twisted, others pure invention. With AI-generated images now nearly indistinguishable from real photographs, telling authentic content from fabricated content has become harder than ever. Against this backdrop, Elon Musk's Grok AI has rolled out a feature that claims to verify the authenticity of posts online.

Elon Musk posts about Grok's AI feature that claims to verify information (Photo: Grok/X)
What is Grok's new fact-check feature?
Elon Musk's xAI posted about Grok’s new feature on X, letting users verify posts in seconds by tapping the Grok icon.
Musk wrote in his post that users could tap the icon on the "left" side of posts, though Grok itself said it is actually on the right. The tool analyses a post's content, captions, and engagement to judge its accuracy, aiming to curb misinformation. Yet Grok's rocky past fuels doubts.
Previous mistakes raise questions
Grok has made errors in the past. Last year, it unexpectedly brought up “white genocide” in South Africa during unrelated conversations, such as one about a baseball player’s salary, even though such claims have been dismissed as unfounded.
xAI attributed this to an “unauthorised modification” to its prompts and promised greater transparency on GitHub along with stricter reviews.
It also once suggested Adolf Hitler as a solution to “anti-white hatred,” later calling it “an unacceptable error from an earlier model iteration” and adding safeguards to prevent similar responses.
AI hallucinations are another concern
AI hallucinations occur when AI models, such as chatbots, confidently generate false or made-up information that sounds real. These models do not verify facts; they predict patterns from their training data, which can lead to errors such as fabricated details or invented sources. The issue affects tools from ChatGPT to Grok, underscoring the need for human oversight.