Last Updated: January 28, 2026, 21:30 IST
It is a sad precedent of our times that whenever a new technology comes into vogue, the lower castes are cast, for no reason at all, as its passive sufferers

What is most disturbing is the implication that ChatGPT is somehow designed to oppress or exploit lower-caste users. (Representational image: AI-generated)
One worrying tendency has become common in Bharatiya discourse: interpreting every new international development through the prism of caste. Caste, undoubtedly, represents one of the most long-standing and violent forms of inequality in Bharat, but not every subject benefits from this interpretation; forcing it onto every new instance can distort a valid critique into something troubling. The notion that ChatGPT, an AI tool developed in accordance with international standards of technological development, represents domination by upper castes over lower castes is the most extreme form of this distortion.

Against such weak argumentation, this piece offers a rigorous and unconstrained critique of the narrative. It points out the flaw in correlating worldwide technological capabilities with caste identity, stresses the need for evidence in claims of deliberate oppression, and exposes the politically convenient habit of defining lower-caste communities as permanent victims of technology rather than accepting them as active users and developers of it. Such argumentation does not defend corporate interests; it defends intellectual integrity. Where technology criticism abandons intellectual integrity in the name of social justice, reductionism and ideology take its place. The claim that ChatGPT is nothing but an upper-caste technology designed to downgrade Bharat's lower castes is intellectually flawed and politically pathetic, and it reveals far more about the person making it than about the technology itself.
Fundamentally, ChatGPT is not a Bharatiya social practice or cultural artefact, and it is not connected in any meaningful way to the Bharatiya caste system. It is a worldwide technological system, developed and deployed by people across the globe through international research cooperation, with no form of connection to the caste system. Labelling such a worldwide system an upper-caste product makes no sense at all.
The argument also relies on a dangerously reductionist view of how power works in technology. Power in AI is a function of capital, state control, corporate ownership, data privilege, and digital literacy, not caste identity. These are real, measurable, and amenable to critique. Substituting them with metaphors of caste may generate rhetorical heat, but it sheds no analytical light. If exclusion exists, and it does, it runs along the lines of language access, internet availability, educational quality, and economic inequality. These are structural problems that affect millions, including lower castes, but they are not evidence of caste intentionality in the technology itself.

What is most disturbing is the implication that ChatGPT is somehow designed to oppress or exploit lower-caste users. This is a claim of intentionality, design bias, and ideological motivation. Where is the evidence? What design specification, training goal, or deployment strategy supports this contention? Instead, suspicion is transmuted into a conspiracy theory against the society's socio-cultural fabric. Critique without evidence is not resistance; it is speculation masquerading as scholarship. A professor making this claim, however ideologically motivated, has a duty to distinguish between structural inequality and intentional oppression. To fail to do so is an abdication of academic duty.

The irony is that the argument self-destructs in its own contradictions. ChatGPT, like most digital technologies, is often most revolutionary for those who have historically been excluded from elite circles: first-generation students, students from rural areas, non-English-speaking users, and economically marginalised communities. It gives them access to information, writing assistance, and conceptual explanations on a scale never before possible.
To claim that such a technology is primarily designed to maintain caste domination is to wilfully ignore how it is actually used in the world, especially in developing countries, where marginalised students have consistently reported digital technologies as equalisers rather than oppressors.
It is a sad precedent of our times that whenever a new technology comes into vogue, the lower castes are cast, for no reason at all, as its passive sufferers. This reduces them to mere recipients of wisdom and goodness, bereft of their own agency, imagination, and intelligence. That is not liberation politics; it is paternalism in its most monumental form. Stands of this kind reveal a larger flaw in academia: the inability to question technology rigorously, and a slide into cultural essentialism. Of course, caste is real, and caste-based discrimination is real. But this is no way to tackle it; the exercise is futile and unnecessary. This sort of activism undermines the struggle to end caste injustice because it confines that struggle to areas where it is irrelevant or inapplicable.

If there is to be serious discussion about ChatGPT in India, the actual ground to be covered lies elsewhere: digital divides, English dominance, educational inequities, algorithmic black-boxing, and the corporatisation of knowledge infrastructure. All of these require serious and urgent focus, not idle fantasies. Focusing solely on upper-caste dominance without exploring how power actually operates may be easy, but it is not an academic conversation. When academics speak at the rawest flashpoint rather than with nuance, does critique inside the academy remain credible and viable? Students today can expect little more than bumper-sticker arguments from academics and academic publishing. Society deserves more than fear-mongering, from academics or anyone else. Caste justice itself demands arguments that are viable and can withstand scrutiny. Scrapping AI tools for being "upper-caste" is not logic but a perverted mindset. Such a critique of technological advancement, blended with false narratives and tricks to gain fame, does nothing for social justice.
Dr Barthwal teaches Political Science at Sri Aurobindo College, University of Delhi. He posts on X @prashbarthwal. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect News18's views.
First Published: January 28, 2026, 21:30 IST
Opinion | Is Your AI Casteist? Demeaning The Technological Advancement With Caste Prism