Artificial intelligence (AI) in the workplace is becoming increasingly common. Its relevance may still vary, but the conversation has two distinct sides: one holds that AI will take over most, if not all, human jobs, leaving people jobless (and possibly hopeless), while the other talks about the need for a great realignment.

Soumendra Mohanty, chief strategy officer at data science company Tredence, is better placed than most to decode the changes AI is bringing to the workplace, not least because Tredence designs agentic AI workflows for enterprises across multiple verticals, including healthcare, banking and telecom. “We’re building agentic workflows, as well as agents and models in that area, a constellation of all of these coming into an agentic workflow where there is a human in the loop,” says Mohanty, giving us a glimpse of a world where more than one “copilot” will work in sync with a human in the loop.
“And there is this feedback loop that keeps it learning, which is what we are calling the Milky Way,” he adds. To add context to the metaphor, the “Milky Way” refers to building towards a constellation of agent-based systems, the continuous learning process, the human in the loop and an interconnected network, all of which he explains in this conversation with HT. Mohanty tells us he is also working on a new concept: AI, in its various avatars, meeting the individual’s ambidexterity. Edited excerpts.
Q. AI agents and Agentic AI frameworks are evolving rapidly. How do you see this transformation panning out, in terms of that transition from research labs to real world?
Soumendra Mohanty: This question requires some elaborate context. The pace of technology and innovation over the last four or five years, compared to the past few decades, has been phenomenal. Compared to previous eras, innovation has moved from very deterministic applications and systems, where a user gives inputs, clicks on this or that and it does something, to something more conversational and interaction oriented. That is the way humans work and interact, and that is why it is very interesting and also disruptive.
In typical enterprise settings, and also in our personal lives, you’ll notice a human orientation and a tech orientation. The language of human orientation (which I can extrapolate, or extend, to a business orientation) and the language of tech orientation are two different things. There has always been a translation in between, with all the complexity that involves. But with generative artificial intelligence and conversational interfaces, that boundary has become pretty much non-existent. Of course, under the hood you will still need a lot of integration, a lot of technology and heavy lifting. But on the surface, where the needs of humans and enterprises meet technology, it is now very seamless. It is voice enabled, gesture enabled and natural language enabled. It abstracts away the entire complexity of things.
The other aspect, important to understand, would be that humans have cognitive limitations. We can only process a certain amount of data and information to make decisions or make choices.
Hence we leave behind many things which may also be critical in nature. There’s a book by Daniel Kahneman titled Thinking, Fast and Slow, which is all about survival, natural instincts and how quickly a human can make decisions.
There is another interesting concept that has emerged over time, which we call ‘satisficing’: for many decisions, we cannot process everything at the same time or look at every possible scenario, so we settle for an option that is good enough.
Even though businesses have become complex, that good-enough basis is sufficient to make those decisions and go ahead. Of course, there is a risk associated, so we do a little risk profiling and management. It is this combination of fast thinking, which is spur of the moment, and slow thinking, which takes longer, applied to ‘satisficing’, that is allowing the current generation of tech and applications to go deeper into areas of reasoning. This is the broad direction.
Technology is moving, and a lot of innovation and research is happening across sectors and industries, with solving problems as the primary purpose. One result could be bigger, more powerful networks with more parameters. Another is solutions that go deep and narrow to solve a very specific problem with precision. That is the point at which you move from large language models to small language models.
Q. What’s at the core of Tredence’s approach to building workplace solutions that meld humans and agentic AI tools, especially as AI becomes more autonomous in decision-making and task execution?
SM: I’ll take you back a little in the journey, with something I have always quoted in my interactions. When we were growing up, the skill of stenography was critical. The typewriter came, and that skill evolved. Then document apps came, and it evolved further. Each advancement was a big improvement, but these things didn’t happen overnight. In a similar context today, when we look at autonomous agents, it is not that what we do today will be completely gone tomorrow, because we have not reached that maturity stage.
The second thing is the spectrum itself. You start with a copilot kind of agent, with a human in the loop. In the middle is the semi-autonomous stage, where certain rules and guardrails are put in place and the agent conforms to them, working within those boundaries with the right precision and accuracy. If it delivers the right kind of goals and outcomes, there is enough confidence to make it semi-autonomous: there is still a human, but that human has moved from being integral to the loop to observing whether there are boundary violations or conflicts between human and autonomous agent.
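That semi-autonomous pattern, an agent acting freely inside guardrails and escalating to a human observer when it would cross a boundary, can be sketched in a few lines. This is purely an illustrative sketch: the `Action` type, the refund-limit rule and all names are hypothetical assumptions, not Tredence's implementation.

```python
# Illustrative sketch: a semi-autonomous agent step wrapped in guardrails,
# escalating to a human observer on boundary violations. All names and
# rules here are hypothetical, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "refund", "reply"
    amount: float = 0.0

def propose_action(task: str) -> Action:
    # Stand-in for a model/agent call that proposes an action for a task.
    if "refund" in task:
        return Action(kind="refund", amount=120.0)
    return Action(kind="reply")

def within_guardrails(action: Action) -> bool:
    # Hard boundaries the agent must operate within (hypothetical limit).
    if action.kind == "refund" and action.amount > 100.0:
        return False
    return True

def run_step(task: str) -> str:
    action = propose_action(task)
    if within_guardrails(action):
        return f"agent executed: {action.kind}"   # autonomous path
    return f"escalated to human: {action.kind}"   # human observes and decides

print(run_step("send a status reply"))
print(run_step("issue refund for damaged order"))
```

The design point is that the human is no longer approving every step; they are only pulled in when the guardrail check fails.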
These agents, and the technology, are self-learning. They remember your preferences and analyse decision choices on their own, many of the nuances that humans take a long time to train for. Machines will be trained much faster, so there will come a time when semi-autonomous activities become autonomous. And that is when the human has to step out of the loop.
And this is where humans will actually move slightly above the loop, because at that point there will be many agents working together. We can call it an agentic mesh, which will be a multi-agent network of architectures and systems operating in a complex enterprise scenario. The human role also needs to evolve from how we look at a copilot today, because tomorrow it will be an integral part of the workforce. Today it may be a team of 10 people with a leader, but tomorrow it will be a leader and maybe five people of varied skills or expertise alongside five agents that are similarly varied.
In such situations, a human also has to develop algorithmic empathy, because today we say ‘I don’t trust it’, but tomorrow we will have to start trusting it and making decisions based on its inputs. It’s a transformational journey, and maturity levels will vary. Hence there has to be a mechanism of trust and collaboration.
At Tredence, given this kind of spectrum and the newer complexity of emerging skills and behaviour patterns, we are working on various kinds of models and solutions. A question we ask regularly is: what combination of agentic workflow and human expertise can come together to solve this kind of problem? Any solution, at the end of the day, needs to solve a problem and make an impact from a business-outcome perspective. We are also doing persona-based solutioning. Think of a data analyst persona solving a very open-ended, multi-turn, research-hypothesis-oriented problem, which is what every business does, whether for its growth strategy, new product launches or geographic expansion. That requires a lot of algorithmic intervention and a lot of data understanding.
Q. To build these solutions, are you using in-house models or a mix of third-party models, such as those from OpenAI or Anthropic?
SM: We have strong alliance relationships, but that doesn’t dictate the choice. We do a discovery and fit-gap assessment with every client and their environment up front. Our philosophy is that it has to be a composable architecture, meaning the best of an OpenAI model, for instance, and the best of an open-source model coming together to solve some of these very complex problems. We keep it transparent along the way.
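A "composable architecture" of this kind often comes down to a routing layer that sends each task to whichever model fits it best. A minimal sketch, with hypothetical stand-in functions in place of real OpenAI or open-source model calls:

```python
# Illustrative sketch of composable model routing. The two backends below
# are hypothetical stand-ins, not real API calls; the routing table is
# an assumption about how task types might be split.
from typing import Callable, Dict

def proprietary_model(prompt: str) -> str:
    # Stand-in for a large hosted model (broad, multi-turn reasoning).
    return f"[hosted-LLM] {prompt}"

def open_source_model(prompt: str) -> str:
    # Stand-in for a small open-source model (deep-and-narrow, cheaper).
    return f"[local-SLM] {prompt}"

ROUTES: Dict[str, Callable[[str], str]] = {
    "open_ended_reasoning": proprietary_model,
    "narrow_extraction": open_source_model,
}

def route(task_type: str, prompt: str) -> str:
    # Default to the local model when the task type is unrecognised.
    handler = ROUTES.get(task_type, open_source_model)
    return handler(prompt)

print(route("narrow_extraction", "pull invoice totals"))
```

Swapping a backend then only touches the routing table, which is what makes the architecture "composable" rather than tied to one vendor.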
Q. Which specific roles are being increasingly replaced by Agentic AI implementation? How have the results been thus far?
SM: I’ve been contemplating that, and have come to the understanding that earlier, job roles were defined by expertise or driven by experience, and hence there was a hierarchy of things. But today, much of that expertise is actually encapsulated in a particular agent. The learning itself goes into an agent, with a defined input and a defined output.
The job roles and hierarchies are also changing which is gradually changing some of the roles which were earlier about people management.
Now you have to become not only a people manager, but also an agent manager. It’s not that we don’t need people managers, because the human has to be somewhere in the loop, but that is one role I see needing upskilling and redesigning. The second role is primarily about not doing everything yourself: with agentic workflows coming into play, it is about how you can make work more collaborative.
The middle manager and technology-driven roles are evolving, from handling routine tasks to letting those run autonomously while the person becomes a strategic design thinker. I think we end up with a better job to do. Critical thinking is important, and that is what we’re training a lot of our folks for. Today, probably 60% of our time is spent writing lines of code, but you need to evolve into thinking about how this code will work, the critical scenarios where it will fail, and running simulations.
Q. How important does it become for governance frameworks and regulation to dictate AI compliance, and what would your expectations be from such a set of regulations?
SM: There have already been regulations on data privacy and security, always focused on data, its types and biases, and what you can use a user’s data for. That is foundational, and equally important. Now, when you have these algorithms coming into the mix, the other side of these regulations needs to evolve as well: what are the thresholds, what are the boundaries, which application can be used how and in which context, whether the use case is mission critical, and where it is AI-led with human support or the reverse. All of this is a very thin line.
In many cases it is also about cyber security and the threat management side of it. Those policies need to evolve as well. At the end of the day, all these algorithms and deep learning models are data hungry. Once a model is done, it is launched and applications are designed around it. It keeps doing its job, but the data is also changing. So if you trained a model on data from maybe five years back, the scenario is now different, and hence refreshing the model is important. It is the currency of the data; those are the kinds of regulations that need to happen, particularly for pharmaceutical, healthcare, banking or life sciences companies, where you cannot have old or stale data.
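The staleness problem Mohanty describes, a model trained on old data drifting out of step with current reality, is commonly monitored with distribution-shift checks. A minimal sketch using the Population Stability Index (PSI); the bin fractions and the 0.2 threshold here are illustrative assumptions, not a regulatory standard:

```python
# Illustrative sketch: flag when "the currency of the data" has lapsed by
# comparing a feature's training-time distribution with fresh data,
# using the Population Stability Index (PSI).
import math

def psi(train_fracs, live_fracs, eps=1e-6):
    # PSI = sum over bins of (live - train) * ln(live / train).
    # eps avoids log-of-zero for empty bins.
    return sum(
        (l - t) * math.log((l + eps) / (t + eps))
        for t, l in zip(train_fracs, live_fracs)
    )

# Fraction of records per value bin at training time vs. today (made-up data).
train = [0.25, 0.50, 0.25]
live  = [0.10, 0.40, 0.50]

score = psi(train, live)
# A common rule of thumb treats PSI > 0.2 as a major shift.
print("retrain" if score > 0.2 else "model still current")
```

A check like this, run on a schedule, is one concrete way a "model refresh" obligation of the kind he anticipates could actually be audited.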