India Shows How AI Can Work At Scale, But Inclusion Remains The Challenge: World Bank | Exclusive
Last Updated: February 22, 2026, 20:51 IST
The World Bank said it is prioritising what it calls “small AI” solutions that are affordable, practical, and effective even where connectivity and infrastructure are limited.

Paul Procee (L), acting country director for India at the World Bank, and Mahesh Uttamchandani (R), regional practice director for digital and AI across East Asia and the Pacific and South Asia, World Bank. (Image: World Bank Group)
The ‘India AI Impact Summit 2026’ brought together policymakers, industry leaders, multilateral institutions, and technologists at an unprecedented scale, signalling a shift in the global conversation around artificial intelligence.
Moving beyond hype and model size, the summit focused squarely on how artificial intelligence (AI) can deliver real-world development outcomes, from jobs and productivity to public service delivery, while confronting risks around inequality, exclusion, and trust.
For the World Bank Group, the summit was a key moment to advance its vision of AI as a tool for inclusive growth. As governments across the Global South race to embed AI into welfare systems, education, healthcare, and governance, the World Bank has positioned itself at the centre of debates on responsible adoption, digital public infrastructure, cybersecurity, and global safeguards. Its emphasis on “small AI”, meaning practical, affordable systems that work in low-resource settings, reflects a broader push to ensure AI narrows, rather than widens, development gaps.
CNN-News18 spoke with Paul Procee, acting country director for India at the World Bank, and Mahesh Uttamchandani, regional practice director for digital and AI across East Asia and the Pacific and South Asia. The conversations ranged from the risks of AI-led exclusion and algorithmic bias to India’s role in shaping global AI norms, the governance challenges of deploying AI at the state level, and the uncomfortable truths policymakers still avoid when it comes to AI and inequality.
Excerpts from the interview:
The World Bank increasingly frames AI as a development tool, but many argue it risks widening inequality in low-capacity states. How do you ensure AI projects backed by the World Bank do not end up benefiting governments and vendors more than vulnerable populations?
Mahesh Uttamchandani: At the World Bank Group, our focus is clear – AI must drive inclusion, not deepen divides. That means designing AI that works for people at the margins, not just for governments or tech vendors. We are prioritising what we call “small AI” solutions that are affordable, practical, and effective even where connectivity and infrastructure are limited.
In Andhra Pradesh and Telangana, we are working with governments and partners to assess AI-powered learning tools that help students build job-ready skills. In Uttar Pradesh, AI tools are helping farmers reach wider markets, raise incomes, and create new employment opportunities. These initiatives show that when AI is grounded in local realities, it can deliver immediate gains in health, education, and agriculture, and directly strengthen communities rather than bypass them.
India’s digital public infrastructure is often held up as a global model. As AI gets embedded into welfare delivery, health, and education systems, what specific risks of exclusion or error worry the World Bank most in the Indian context?
Paul Procee: India has emerged as a global benchmark for digital public infrastructure. Platforms like Aadhaar and the Unified Payments Interface (UPI) show how technology can deliver services at scale with speed and transparency. But as AI is embedded into welfare delivery, health, and education, new risks come into focus.
The biggest concern is exclusion by design. Algorithmic bias, weak local-language data, or systems trained on non-representative datasets can unintentionally lock out certain communities. There are also serious cybersecurity risks. Attacks on AI-enabled systems could disrupt essential services or expose sensitive personal data, undermining public trust.
For the World Bank Group, the priority is to put responsible AI governance and cybersecurity at the core, not as an afterthought. That means strong data governance, transparency around how algorithms are deployed, effective grievance redress mechanisms, and clear lines of accountability.
India has already taken important steps in this direction. The Digital Personal Data Protection Act establishes clear rules on consent, data handling responsibilities, and cross-border data sharing. Building on this emphasis on trust, Prime Minister Narendra Modi, at the AI Impact Summit, called for a “glass box” approach to AI. The idea is simple but powerful – AI systems should be open, explainable, and governed by visible and verifiable safety rules, not hidden behind opaque black boxes.
Several Indian states are now experimenting with AI in policing, education, and social services. Is the World Bank engaging directly with state governments on AI deployment and, if so, how does it ensure consistency with national and global safeguards?
Paul Procee: AI governance cannot stop at state or national borders. Data flows freely across jurisdictions, and risks such as cyber threats or misinformation do not respect boundaries. That is why regulation must be rooted in local realities, but anchored in shared global principles.
The World Bank Group follows a layered approach. At the state level, AI deployment must comply with national laws. At the same time, it should reflect global best practices on fairness, transparency, accountability, and data privacy.
States need room to tailor AI tools to local needs, but within a common safeguards framework. The approach we advocate is risk-based, principles-driven, and aligned with each country’s institutional capacity and level of digital maturity.
This philosophy extends globally. For instance, we have supported the development of the African Union AI Continental Strategy, which strikes a balance between regional coordination and national flexibility.
AI may be borderless, but governance cannot be one-dimensional. It has to operate simultaneously at the local, national, and global levels to ensure innovation moves forward safely, inclusively, and with shared standards of trust.
India increasingly positions itself as a voice for the Global South on technology governance. Does the World Bank see India as a co-architect of global AI norms, or mainly as a test case whose lessons are later exported elsewhere?
Paul Procee: India is both a co-architect of global AI norms and a proving ground for inclusive AI at scale. Its strength lies in its ability to pilot innovation through regulatory sandboxes and targeted programmes, and then rapidly scale what works. That transition from proof of concept to nationwide impact offers powerful lessons for other developing economies.
More importantly, India is helping reframe the global AI conversation. Instead of focusing only on ever larger models or greater computing power, it is pushing the debate toward development outcomes such as jobs created, productivity gains, and better public service delivery. The World Bank Group is partnering with India in this shift by supporting “small AI”: task-specific, multilingual systems that function on low bandwidth and basic smartphones.
India’s leadership also matters at the regional and Global South level. Not every country can build large-scale computing infrastructure on its own, but shared facilities, common standards, and open-source partnerships can expand collective capacity. With its scale, technical talent, and policy ambition, India is shaping how AI governance and digital development evolve across the Global South.
Finally, after listening to leaders and industry voices at the AI Impact Summit, what is the most promising signal you’ve seen for AI-led development – and what is the most uncomfortable truth about AI and inequality that policymakers still prefer to avoid?
Mahesh Uttamchandani: The most promising signal is the growing recognition that AI can create jobs and expand opportunity when it is designed for inclusion. “Small AI”, meaning practical and affordable tools, is already showing results. We see students receiving personalised learning support, farmers accessing better advisory services, small entrepreneurs building digital credit histories, and clinics extending care to underserved communities. These applications boost productivity and open new pathways to employment for people who are often left behind.
To turn this potential into real jobs and opportunity, countries need to learn from one another. That is why the World Bank Group, together with six other multilateral development banks, has launched the AI Repository. It brings together real-world AI applications in development, allowing governments to adapt, replicate, and scale what is proven to work.
The uncomfortable truth is that inclusion also increases exposure. As the poorest and most vulnerable are brought into digital systems, risks are unavoidable, from fraud and misinformation to algorithmic bias. Policymakers still tend to treat safeguards as secondary. That is a mistake. Responsible regulation, strong consumer protection, transparency, accountability, and human oversight must be built in from the start. AI can be a powerful force for development, but managing its risks is not optional. It is a shared responsibility.