New Delhi: What if your spouse discovered your deepest secrets by scrolling through your chatbot conversations? Nikesh Arora, CEO of cybersecurity giant Palo Alto Networks, says companies like his are racing to address emerging risks in an increasingly AI-reliant world.

“My fear is that in about six months, if I’m talking to my AI model, it might know more things about me than I’ve told my wife,” he told the audience at the India AI Impact Summit in New Delhi on Thursday. “I don’t want my wife to get her hands on my Gemini prompts because I’m surprised what it might tell her.”

The remark drew laughs but underscored a serious concern: AI systems are fast becoming therapists, nutritionists, financial advisers and confidants. Users are sharing intimate details with machines that promise convenience and insight. As Arora noted, if that data “falls into the wrong hands, it’s not a good idea.”

The broader danger, he argued, is structural. “AI is accelerating faster than our institutions, our governance frameworks, and even our intuition,” he said. At present, “the balance is tilted… not in the favour of trust, inclusion, security; it’s actually tilted in the favour of speed.” Every week brings new models and capabilities, often released before guardrails are fully formed.

As the world moves toward an “agentic” future — where AI systems can act autonomously — the risks multiply. “As soon as you give control to an agent, you have to worry about who’s responsible for the actions of those agents,” he said. If an AI mismanages your investments or transfers money without consent, accountability becomes blurred. The same applies to physical systems: how do you ensure that a robot designed to assist at home cannot be hijacked or manipulated?

Arora was blunt about the limits of prohibition. “AI is not going to go away if you govern it out of existence. It cannot be governed out of existence,” he said. The answer, instead, lies in embedding governance and accountability into the technology itself.

For cybersecurity firms, that means building protection from the outset. AI must be “secure, governed and controlled” — not patched after damage is done. That includes safeguarding vast datasets, monitoring AI-generated code that could be malicious or flawed, and preparing for adversarial AI systems designed to exploit vulnerabilities.

Yet Arora clarified that he remains optimistic — not only that we will navigate this new terrain, but that it will create new opportunities. “I have a conviction that we’re going to need five times the number of technology people in the future than we have today,” he said, arguing that security, governance and oversight will generate new roles rather than eliminate them.


