Deloitte AI Institute chief on AI oversight: Governing AI agents is tougher because…


Businesses are rapidly adopting AI agents, with projections showing adoption surging from 23% of companies to 74% within two years. Robust safety measures lag significantly, however: only 21% of companies report adequate oversight. This gap poses risks as agents gain independence, since without proper governance their errors and security vulnerabilities can go unchecked.

Businesses are ramping up their use of AI agents faster than they're putting adequate guardrails in place. According to Deloitte's recently published State of Generative AI in the Enterprise report, based on a survey of over 3,200 business leaders across 24 countries, 23% of companies currently use AI agents "at least moderately", but that figure is projected to jump to 74% within the next two years.

By comparison, the share of companies that report not using them at all, currently 25%, is expected to shrink to just 5%. However, the rise of agents (AI tools trained to perform multistep tasks with little human supervision) in the workplace isn't accompanied by adequate guardrails. Only around 21% of respondents told Deloitte that their company currently has robust safety and oversight mechanisms in place to prevent possible harms caused by agents.

In a statement to ZDNet, Beena Ammanath, Global Head of Deloitte's AI Institute, said: "Because AI agents are designed to take actions directly, governing [them] requires new approaches beyond traditional oversight. As agents proliferate without governance, you lose the ability to audit decisions, understand why agents behaved a certain way, or defend your actions to regulators or customers."

"Given the technology's rapid adoption trajectory, this could be a significant limitation. As agentic AI scales from pilots to production deployments, establishing robust governance should be essential to capturing value while managing risk," Deloitte also warned in its report.

Why greater dependence on AI agents is risky for companies

Companies like OpenAI, Microsoft, Google, Amazon, and Salesforce have promoted agents as productivity-boosting tools, the main idea being that businesses can hand off repetitive, low-stakes workplace tasks to them while human employees focus on more important work.

However, greater independence brings greater risk. Unlike more limited chatbots, which require careful and constant prompting, agents can interact with various digital tools to, for example, sign documents or make purchases on behalf of organisations.
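Deloitte's report doesn't prescribe a mechanism, but the distinction between low-stakes and high-stakes agent actions can be made concrete in code. Below is a minimal sketch in Python of gating an agent's tool access so that routine actions run unattended while actions such as purchases wait for human approval; all names here (Tool, GatedToolbelt, approve) are hypothetical illustrations, not part of any vendor's API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical illustration: classify agent tools by stakes and
# require a human sign-off before any high-stakes action executes.

@dataclass
class Tool:
    name: str
    action: Callable[..., str]
    high_stakes: bool = False  # e.g. signing documents, making purchases

class GatedToolbelt:
    def __init__(self, tools: list[Tool], approve: Callable[[str], bool]):
        self.tools = {t.name: t for t in tools}
        self.approve = approve  # callback to a human reviewer

    def invoke(self, name: str, **kwargs) -> str:
        tool = self.tools[name]
        if tool.high_stakes and not self.approve(f"{name}({kwargs})"):
            return f"BLOCKED: {name} requires human approval"
        return tool.action(**kwargs)

# Usage: the summary runs unattended; the purchase is held for review.
belt = GatedToolbelt(
    tools=[
        Tool("summarise", lambda text="": f"summary of {len(text)} chars"),
        Tool("purchase", lambda item="", cost=0: f"bought {item} for {cost}",
             high_stakes=True),
    ],
    approve=lambda request: input(f"Approve {request}? [y/N] ").strip().lower() == "y",
)
print(belt.invoke("summarise", text="quarterly report"))
print(belt.invoke("purchase", item="software licence", cost=499))
```

The design choice to make, in whatever form an organisation implements it, is simply where the high-stakes line sits and who answers the approval prompt.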

This leaves more room for error, since agents can behave in unexpected ways (sometimes with serious consequences) and can be vulnerable to prompt injection attacks.

The new Deloitte report isn't the first to point out that AI adoption is moving faster than safety measures. One study published in May 2025 found that the vast majority (84%) of IT professionals surveyed said their employers were already using AI agents, while only 44% said they had policies in place to regulate the activity of those systems, ZDNet reports.

Another study, published in September 2025 by the nonprofit National Cybersecurity Alliance, revealed that while a growing number of people use AI tools like ChatGPT daily, including at work, most do so without having received any safety training from their employers (covering, for example, the privacy risks that come with using chatbots).

In December 2025, Gallup published the results of a poll showing that while the use of AI tools among individual workers had increased since the previous year, almost one-quarter (23%) of respondents said they didn't know whether their employers were using the technology at the organisational level.

Since technology frequently advances more quickly than laws and public understanding, it is unrealistic to expect perfect AI safeguards at this early stage, and intense hype and economic pressure have only accelerated deployment. But early studies like Deloitte's new State of Generative AI in the Enterprise report point to what could become a dangerous gap between deployment and safety as industries scale up their use of agents and other powerful AI tools.

For now, oversight should be the priority: businesses need to be aware of the risks associated with their internal use of agents and have policies and procedures in place to ensure they don't go off the rails (and, if they do, that the resulting harm can be managed).

"Organisations need to establish clear boundaries for agent autonomy, defining which decisions agents can make independently versus which require human approval. Real-time monitoring systems that track agent behaviour and flag anomalies are essential, as are audit trails that capture the full chain of agent actions to help ensure accountability and enable continuous improvement," Deloitte advises in its new report.
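Again, the report describes the pattern rather than an implementation, but its last two recommendations, audit trails and real-time anomaly flagging, amount to a small amount of bookkeeping. The sketch below assumes a simple in-memory log and a naive actions-per-minute threshold; AgentAuditTrail and its limit are invented for illustration, not drawn from the report.

```python
import time
from collections import deque

# Hypothetical sketch of the audit-trail and anomaly-flagging pattern
# Deloitte describes; the threshold and log structure are illustrative.

class AgentAuditTrail:
    def __init__(self, max_actions_per_minute: int = 30):
        self.log: list[dict] = []            # full chain of agent actions
        self.recent: deque[float] = deque()  # timestamps for rate checks
        self.limit = max_actions_per_minute

    def record(self, agent_id: str, action: str, detail: str) -> None:
        now = time.time()
        self.log.append({"ts": now, "agent": agent_id,
                         "action": action, "detail": detail})
        self.recent.append(now)
        # Drop timestamps older than 60 seconds, then check the rate.
        while self.recent and now - self.recent[0] > 60:
            self.recent.popleft()
        if len(self.recent) > self.limit:
            self.flag(agent_id, f"rate anomaly: {len(self.recent)} actions/min")

    def flag(self, agent_id: str, reason: str) -> None:
        # In production this might page an operator or pause the agent.
        print(f"ALERT [{agent_id}]: {reason}")

# Usage: a burst of purchases trips the anomaly threshold.
trail = AgentAuditTrail(max_actions_per_minute=2)
for i in range(4):
    trail.record("procurement-agent", "purchase", f"order #{i}")
```

A real deployment would persist the log (so that, in Ammanath's terms, decisions can be audited and defended to regulators or customers) and flag richer anomalies than a rate spike, but the accountability mechanism is the same: every agent action leaves a record before it takes effect.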
