AI systems will change from being “copilots” that require human steering to being reliable executors of complex tasks. Right now, AI is impressive but (outside of coding) brittle.
It demos well and answers questions. But in serious workflows, reliability, correctness, and integration matter much more. I expect AI models to address this and become better at general computer use, problem-solving in novel and unstructured domains, and learning on the job.

In my work, I expect AI to:
- Become a dependable system that can execute multi-step workflows autonomously
- Operate directly on structured business data
- Actually do work and deliver outcomes rather than just generating suggestions
I would like AI to handle three things:
- Context stitching: Knowledge is fragmented across multiple domains, internal tools, and websites. AI should synthesise across all of it and surface coherent insights. For example, AI should be able to help a district magistrate in rural India determine the root cause of a malaria outbreak by combining reports from clinics, satellite imagery, and sensor data.
- Repetitive cognitive work: Data cleaning, exploratory analysis, internal reporting, updates, documentation, compliance checks. This is all work that currently requires repetitive cognition from humans, but not creativity.
- Operational decision loops: Instead of just answering "what happened?", AI should proactively determine why it happened, what we should test next, and what would likely happen if we do X. For example, instead of just answering questions about trends in a diabetic patient's glucose levels, it should proactively look at all the potential reasons for those trends (drawing on fitness trackers and food logs), simulate the effects of potential interventions based on the user's past data, and finally give the user a few recommendations ranked by their potential impact. A rough sketch of such a loop follows this list.
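To make the decision-loop idea concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the data fields, the glucose threshold, and the per-unit effect sizes are invented placeholders, not a real product or a validated clinical model. The point is the shape of the loop: detect, explain, simulate, recommend.

```python
# Hypothetical "operational decision loop" over a diabetic patient's data.
# Data sources, thresholds, and effect estimates are invented placeholders.
from dataclasses import dataclass
from statistics import mean


@dataclass
class DayRecord:
    date: str
    avg_glucose_mg_dl: float   # assumed CGM export
    carbs_g: float             # assumed food log
    active_minutes: int        # assumed fitness tracker


def find_root_causes(days: list[DayRecord], glucose_threshold: float = 150.0) -> list[str]:
    """Detect high-glucose days and propose likely explanations."""
    high_days = [d for d in days if d.avg_glucose_mg_dl > glucose_threshold]
    causes = []
    if high_days:
        if mean(d.carbs_g for d in high_days) > mean(d.carbs_g for d in days):
            causes.append("carbohydrate intake was above the patient's baseline")
        if mean(d.active_minutes for d in high_days) < mean(d.active_minutes for d in days):
            causes.append("activity was below the patient's baseline")
    return causes


def simulate_interventions(days: list[DayRecord]) -> dict[str, float]:
    """Crude what-if simulation using made-up per-unit effects."""
    baseline = mean(d.avg_glucose_mg_dl for d in days)
    return {
        "reduce carbs by 30 g/day": baseline - 30 * 0.4,   # 0.4 mg/dL per gram is a placeholder
        "add 20 min activity/day": baseline - 20 * 0.5,    # 0.5 mg/dL per minute is a placeholder
    }


def recommend(days: list[DayRecord]) -> None:
    """Surface likely causes and rank interventions by projected impact."""
    print("Likely causes:", find_root_causes(days) or ["none identified"])
    for action, projected in sorted(simulate_interventions(days).items(), key=lambda kv: kv[1]):
        print(f"  {action}: projected average glucose ~{projected:.0f} mg/dL")


if __name__ == "__main__":
    week = [
        DayRecord("Mon", 142, 180, 40),
        DayRecord("Tue", 168, 240, 15),
        DayRecord("Wed", 155, 220, 20),
        DayRecord("Thu", 139, 170, 45),
        DayRecord("Fri", 171, 260, 10),
    ]
    recommend(week)
```

A real system would replace each stage with device APIs, proper statistics, and clinically validated models, but the control flow (detect, explain, simulate, recommend) is the part I want AI to own end to end.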
Less creative writer, more verifiable truth engine

I would like AI systems to have proven reliability on structured data.
Hallucination is intolerable for many real-world problems, especially in public policy, health, and finance. AI systems that can prove correctness, cite data lineage, and reason with constraints will go a long way towards improving human systems that have ossified for decades.

Additionally, I would like better and more reliable long-horizon autonomy. I would like AI to go beyond answering prompts: it should proactively set goals, break them into tasks, perform those tasks, and track progress over weeks.

I want an AI system that acts less like a creative writer and more like a verifiable truth engine that acts on the insights it uncovers.
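As a rough illustration of what "cite data lineage and reason with constraints" could mean in practice, here is a minimal Python sketch. The class names, constraint rules, and sample figures are all assumptions made up for this example, not any real system's API; the idea is simply that a claim is surfaced only if it carries its sources and passes explicit checks.

```python
# Sketch of a "verifiable truth engine" answer object: every claim carries
# its data lineage and is rejected unless it passes explicit constraint checks.
# All names, rules, and figures below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Claim:
    statement: str
    value: float
    sources: list[str] = field(default_factory=list)   # data lineage, e.g. file or table IDs


def constraint_checks(claim: Claim) -> list[str]:
    """Return the list of violated constraints; an empty list means the claim is admissible."""
    violations = []
    if not claim.sources:
        violations.append("no data lineage: claim cites no underlying records")
    if not (0.0 <= claim.value <= 100.0):
        violations.append("value outside the valid range for a percentage")
    return violations


def answer(claims: list[Claim]) -> list[Claim]:
    """Only surface claims that pass every check; never emit unverifiable output."""
    admissible = []
    for claim in claims:
        problems = constraint_checks(claim)
        if problems:
            print(f"REJECTED: {claim.statement!r} -> {problems}")
        else:
            admissible.append(claim)
    return admissible


if __name__ == "__main__":
    drafts = [
        Claim("Clinic X reported a 12% rise in fever cases", 12.0,
              sources=["clinic_x/weekly_report_w31.csv"]),
        Claim("District-wide cases rose 340%", 340.0, sources=[]),  # hallucination-style claim
    ]
    for verified in answer(drafts):
        print("VERIFIED:", verified.statement, "| sources:", verified.sources)
```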