It is a bit of a dichotomy, if you look closely. The world is yet to fully decode artificial intelligence (AI) as it develops (this, incidentally, on the day OpenAI released GPT-5, which it calls “PhD-level smart”) and grapples with long-standing problems such as hallucination and context, and yet agentic AI is being deployed to replace humans in many enterprise workflows worldwide. “A lot of the conversation about artificial intelligence (AI) is driven by sensationalism, that is either fear-mongering or a head-in-the-sand sort of response,” says Nitin Seth, as he sits down for a conversation with HT. Seth is better placed than many commentators to understand the trends and trajectory of algorithms and machines: he is Co-founder and CEO of Incedo Inc., has previously worked at McKinsey, Fidelity International and Flipkart, and is an author who has just published his third book, Human Edge in the AI Age.

Seth calls it the completion of a logical trifecta, having previously written the enterprise- and business-focused titles Winning in the Digital Age and Mastering the Data Paradox. He tells us that the forward movement of AI is inevitable and, for now, unstoppable; the key is for humans to find a way to reinvent themselves. In the book, he talks about eight pillars that he believes will help, and among them, spirituality stands out. Seth tells me he has often been asked why spirituality matters in a world increasingly driven by machines. “There’s noise all around us. At that time, looking inwards at oneself is very important, and that’s how I define spirituality,” he says. “It is that process which helps you find clarity, calmness, and that is eventually the source of creativity.” He spoke with HT about where we are with AI and where these developments are headed, the need for humans to reinvent themselves as workflows change rapidly, the problem businesses have with balancing productivity, and his outlook on human oversight. Edited excerpts.
Q. Please tell us a bit about the book, what it’s all about and the thought behind it. Isn’t this your first consumer-focused writing?
Nitin Seth: This is for the consumer, after my first two books that were more about business and technology. My first book was on the digital age and the second was on data, so it was natural that I now complete the trifecta of digital, data and AI. When I began to reflect on AI, I realised the question was much bigger, a human one. AI has become really powerful in the last 12 months, and I’ve seen it in our client work too. It has become clear that this isn’t like any other technology shift and will have a dramatic impact on human beings. I started to see research around job losses suggesting that in the next 15 years, around 50% of jobs worldwide are going to go. That’s a remarkable number, since we are talking about a couple of billion people.
The implications are huge, but it’s not all doom and gloom. We have to recognise that it’s a very, very big shift. Mankind has seen such shifts before, and we have evolved. It is quite dramatic: either it becomes a trigger for the next stage of evolution for mankind, or it triggers our downfall. So that’s the question I have tried to answer: what does it mean for us as humans? I have come up with a possible framework of problem solving, openness to change, spirituality, sports, impact, balance, leadership and entrepreneurship. The answers are not going to come from outside; they’re going to come from within. As humans we have some intrinsic, innate capabilities, and that’s really my main thesis.
Q. What are the key things you’ll tell someone whose job may get impacted? What do humans need to do to be ready?
NS: AI is going to disrupt every industry, so that will create hundreds of thousands of problems to be solved. That’s a very fundamental shift, one from being a job seeker to being a problem solver, or a problem seeker. We’ve seen it in the industry with the TCS announcement, and I’m seeing it in my client work too. The way to see this is that there’s a short-to-medium term and then a medium-to-long term. In the former, domain knowledge is probably the biggest asset. Technical skills are also becoming less important. Even until very recently, I was hiring for .NET or Java skills, but they have simply become less important in the space of a few months.
Domain knowledge, in areas such as management, banking and life sciences, will be very important in the next couple of years. I would encourage people to focus a lot more on domain knowledge than on technical knowledge alone. But over a slightly longer term, I think it is fundamentally about problem-solving and entrepreneurial skills, which will also take time to develop. Most impacted will be customer service, operations, mid-office, back-office and software development, which together make up the bulk of the services industry.
Q. Would you say AI companies as well as enterprises that deploy AI need to be more responsible and sensitive to the transition to AI agents replacing humans in the workplace?
NS: The change is too significant for companies to be able to do much about it. The productivity impact isn’t 25%; it is 40%-60%. See it as a food chain. Take the illustration of a company in the US, be it a telecom firm or a bank: its board members are very aware that AI should be driving a certain amount of productivity improvement. As that happens, it will have a downward impact on enterprises in India. When the pressure comes, what are they going to do? If my client is reducing my revenue by 30%, it’ll eventually impact 100% of the company.
At the same time, I think there is an opportunity, and more effort needs to go in this direction, because that’s how this can be reversed. That is, how can we create new opportunities? Right now the AI focus is on efficiency alone, which it is in 90% of cases; that is only one part of the equation of AI use-cases. The focus on growth-oriented use-cases so far has been limited, and that is where more effort needs to go. Human growth will happen if there is business growth. Right now it is a lot more about automation and agentic AI, and we are first going to see the negative impact.
Q. AI companies claim every new model is getting closer to the uniquely human qualities needed for complex problem solving and for navigating uncertain scenarios with reasoning. At what point would AI hit a virtual ceiling in those aspects?
NS: The human mind is an incredible asset, but we don’t fully know how to use it. Our cognitive abilities are incredible, but we’re only using a fraction of them. If humans were working at 100% of that capacity, it would be a very different ballgame. To your question, the answer is not binary. Whether it is problem solving, empathy, conversational storytelling or judgement, it is not that machines are at zero. They are getting better at every step. First, they have a learning loop that is quite efficient. Secondly, we’re bringing in more and more data all the time, all the data of the world, which is what LLMs as foundational models rely on. That is then contextualised with enterprise data. So even if the foundational model is at 40%-60%, as you keep layering it with enterprise data, you are able to get to 80%-90% capability.
It is not possible to deal in absolutes. The ability of AI to do well in complex situations where very contextual judgement is required is still very poor. If I’m honest, the majority of jobs today are somewhat mechanical. The innate human capabilities around complex judgement and creativity, we absolutely need to double down on those. But today, how many jobs require creativity? Over the next couple of years, I think a lot of workflows are going to get redefined, and redefined in a way where domain knowledge is very important. That’s where the human who has domain knowledge will be very valuable, because the sheer processing aspect of AI will be less relevant.
Q. At what point do we possibly lose human oversight over AI?
NS: It’s a very difficult question, and I think it’s more of a philosophical one. I see practical problems in front of me, and AI is inevitable. Let’s focus on that. In terms of human oversight, I don’t think it’s a zero-or-one situation, since there are supervised models as well as unsupervised models. I think human supervision is very important, since the traffic rules have to be set very clearly. We’re still some way away from AGI. We may start to see some capabilities which mimic that sort of capability and may feel like it, but I don’t think it’s truly there. Whether we are 20 years or 50 years away is difficult to say. But what we are already seeing is enough of a call to action. It means significant steps in terms of policy, education, infrastructure and retooling, and for enterprises to understand what they need to do differently.