India is rapidly digitising its healthcare system, and artificial intelligence (AI) offers a huge opportunity to supercharge this transformation. From streamlining hospital diagnostics to powering telemedicine platforms, AI is helping expand the reach of healthcare like never before.
The government’s eSanjeevani platform has clocked over 196 million consultations and 12 million AI-assisted diagnoses since its integration with the AI-powered Clinical Decision Support System, the Union Health and Family Welfare Ministry has said. Initiatives such as the National Health Stack, National Tele Mental Health Programme, and AI-enabled TB diagnostics showcase how public health systems are increasingly embedding AI into service delivery.
These developments signal more than modernisation or technology adoption. They represent emerging public health infrastructure built on speed, accessibility, and data-driven precision. Start-ups and public-private partnerships alike are exploring AI-based screening, risk prediction for high-burden diseases, and even early cancer detection, especially in underserved geographies. However, as AI increasingly becomes the foundation for new-age solutions, it is important to ensure that any inherent representation gaps in the underlying AI models are addressed adequately and early.
Real-world challenges
Despite the promise of inclusivity, real-world implementation of AI tools may struggle to accommodate India’s vast linguistic, cultural, and digital diversity. Take symptom checkers, for instance. While these tools are theoretically designed to democratise access, in practice they require a baseline of digital literacy, health awareness, and trust in technology. Individuals in rural or low-income urban areas may find it difficult to interpret AI-generated guidance without adequate awareness and support. Without the ability to contextualise probabilistic advice or navigate risk hierarchies, users may misread suggestions, delay care, or, worse, take incorrect actions.
Women face a disproportionate burden. Structural gender gaps in data collection and medical practice get compounded in AI systems trained largely on male-centric or urban-centric datasets. A 2024 McKinsey study from the U.S. estimates that women are up to seven times more likely than men to be misdiagnosed for certain heart conditions. In India, too, such outcomes are a real risk. If AI systems fail to adjust for the cultural norms or social dynamics that shape how women report pain, describe symptoms, or access care, they risk replicating existing disparities rather than correcting them.
The problem is that these AI tools lack proactively designed communication pathways that acknowledge and compensate for deep-seated societal and systemic gaps. An AI tool might generate medically sound advice, but if it does not account for how that advice will be received, interpreted, and acted upon by a woman in a context where her health concerns are often de-prioritised, it will likely fail in its objective.
Despite the sophistication of AI systems, they remain vulnerable to errors, misinformation, and data limitations. This presents an enormous challenge: how do we ensure the information conveyed by these tools is not only accurate but also trustworthy, actionable, and culturally appropriate across a vast and diverse population, especially when the tools themselves could be prone to error?
The way forward

In AI’s vast potential lies the opportunity to enable equity and access. Stakeholders can and must reorient AI development around the needs of those currently left behind. First, every health-oriented AI system should be evaluated not only for overall accuracy but also for performance across demographic groups: women versus men, urban versus rural, high versus low income, high versus low literacy. Particular attention should go to how well health information is communicated to each community. Any significant underperformance must trigger retraining on more diverse datasets or algorithmic recalibration before deployment, with a focus on improving the clarity, cultural relevance, and accessibility of the information provided.
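To make this concrete, here is a minimal sketch of what such a demographic performance gate might look like before deployment. The group labels, the accuracy metric, and the five-percentage-point threshold are all illustrative assumptions, not a prescribed standard or any agency’s actual methodology.

```python
# Minimal sketch of a pre-deployment subgroup evaluation gate, assuming a
# labelled evaluation set where each record carries a demographic group tag.
# All names and the 5-point threshold are illustrative assumptions.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (group, prediction, label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, prediction, label in records:
        totals[group] += 1
        hits[group] += int(prediction == label)
    return {g: hits[g] / totals[g] for g in totals}

def deployment_gate(records, max_gap=0.05):
    """Flag groups whose accuracy trails the overall rate by more than max_gap."""
    scores = subgroup_accuracy(records)
    overall = sum(p == l for _, p, l in records) / len(records)
    laggards = {g: s for g, s in scores.items() if overall - s > max_gap}
    return len(laggards) == 0, laggards

if __name__ == "__main__":
    # Toy evaluation set: (group, model prediction, ground-truth label)
    eval_set = [
        ("urban_women", 1, 1), ("urban_women", 0, 0),
        ("rural_women", 0, 1), ("rural_women", 0, 1),
        ("urban_men", 1, 1), ("rural_men", 1, 1),
    ]
    ok, laggards = deployment_gate(eval_set)
    print("deploy" if ok else f"retrain; lagging groups: {laggards}")
```

In this toy run, the model scores well overall yet fails entirely for rural women, which is exactly the kind of gap an aggregate accuracy figure would hide.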
Second, partnership with the very people who will use these tools must guide every stage of development. Structured community advisory panels, with representatives from rural areas, low-income neighbourhoods, women’s health organisations, and language groups, should collaborate on dataset selection, conversational design, and testing protocols. These panels should review draft prompts for cultural sensitivity, validate that recommendations reflect local realities, and help define clear escalation pathways for queries that might be out of scope. Embedding this feedback loop into the development life cycle ensures that AI systems communicate in ways that are both relevant and trustworthy for those they aim to serve.
Third, transparency and interpretability are non-negotiable. Providers and patients alike deserve clear explanations of how recommendations are generated, akin to a clinician walking through their reasoning. Open reporting on dataset composition, bias testing results, and known limitations will build trust and enable external review. This means ensuring the AI’s “reasoning” is understandable and its limitations are explicitly communicated, empowering users to make informed decisions rather than blindly accepting advice.
Post-deployment monitoring
Finally, rigorous post-deployment monitoring must track real-world outcomes: are AI-augmented interactions reducing missed diagnoses? Improving preventive care uptake? Narrowing gender and geographic disparities in health behaviours? Specifically, are these tools genuinely improving health literacy and enabling better health decisions across all population segments? Independent audits by relevant agencies or civil society partners can ensure that any emerging inequities in information access and understanding are caught early and corrected swiftly.
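As one illustration of what such monitoring could compute, the sketch below compares follow-up rates after AI-generated advice between two groups and flags statistically significant gaps for audit. The figures, the group framing, and the use of a two-proportion z-test are assumptions for illustration only, not a mandated audit methodology.

```python
# Illustrative post-deployment disparity check: comparing follow-up
# (care-seeking) rates after AI advice across two population groups.
# The numbers and the z-test choice are assumptions for illustration.
import math

def follow_up_gap(followed_a, total_a, followed_b, total_b):
    """Return the rate gap between groups A and B and a two-proportion z-statistic."""
    rate_a, rate_b = followed_a / total_a, followed_b / total_b
    pooled = (followed_a + followed_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return rate_a - rate_b, (rate_a - rate_b) / se

if __name__ == "__main__":
    # Hypothetical district data: women vs. men acting on AI-generated referrals
    gap, z = follow_up_gap(312, 980, 455, 1010)
    flag = abs(z) > 1.96  # roughly 95% confidence that the gap is not noise
    print(f"gap={gap:.3f}, z={z:.2f}, audit flag={flag}")
```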
Insisting on comprehensive representativeness, genuine community participation, full transparency, and ongoing oversight can help harness AI as a catalyst for more inclusive, effective health communication and delivery across India and beyond.
The opportunity is real. So is the risk. The time to act is now, so that AI solutions are enabled to deliver safe, affordable, and accessible healthcare for all.
Pooja Sehgal is country lead for Health and Nutrition Communications at Gates Foundation India Office; and Shirshendu Mukherjee is managing director at Wadhwani Innovation Network; views are personal