Shadow AI: Why businesses need better oversight of unsanctioned AI use

By Justin Sharrocks, Managing Director EU/UK.


The pace of AI adoption in the workplace has far outstripped most organisations’ ability to manage it. Tools like ChatGPT and Copilot are now being used across a wide range of job functions, helping teams accelerate repetitive tasks, summarise documents and make sense of complex data. But much of this usage is happening outside formal channels, and in many cases, without IT’s knowledge.

This is what we refer to as Shadow AI: the unsanctioned use of AI tools in a business environment. Unlike traditional shadow IT, which often involves people deliberately bypassing procurement or security protocols, Shadow AI tends to emerge from a lack of policy clarity or technical guardrails. Most employees are not being reckless; they simply don’t realise that using consumer-grade AI tools for work could introduce significant data security, compliance and operational risks.

In some cases, however, IT teams are aware of what’s being used – but simply lack the capacity or in-house security expertise to manage it appropriately. This is particularly common in lean or overstretched teams stuck in reactive mode, constantly putting out fires and unable to take a more proactive stance. We’re increasingly seeing this skills and capacity gap widen, especially in SMBs facing enterprise-level demands without the same resources.

When good intentions meet poor controls

Real-world examples are increasingly common. In one instance, a junior employee at a legal firm used a free AI tool to summarise contract clauses. They weren’t trying to cut corners, but in doing so they pasted confidential client information into an external platform with no data handling agreements in place. When the incident came to light, it prompted a wider internal review, as the firm realised similar usage might have occurred elsewhere.

In another case, a retail store manager used Microsoft Copilot on their personal Microsoft account to automate a large portion of inventory tracking. The AI-generated files proved useful but were not accessible to others when the employee went on leave. This disrupted continuity and raised concerns about where operational data was being stored and who had access to it.

Both examples illustrate how Shadow AI can develop quietly and spread quickly, particularly when employees are encouraged to work efficiently but lack structured guidance on how and when to use AI responsibly.

Visibility is the first priority

To address Shadow AI effectively, organisations need a clear view of how AI tools are entering and being used within the business. Traditional monitoring solutions are not always equipped to detect traffic to public AI platforms, particularly when tools are accessed through personal accounts or on unmanaged devices.

Visibility can be improved through network-level monitoring that flags usage of known AI services, alongside endpoint management solutions such as Microsoft Intune, which can help enforce app access policies across both corporate-owned and bring-your-own devices. Without this level of insight, governance efforts will always be reactive and incomplete.
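To illustrate the network-level side of this, here is a minimal sketch in Python that flags web-proxy log entries pointing at known public AI services. The domain list, log format and column names are assumptions made for the example – not an exhaustive inventory of AI services or a vendor-specific integration.

    # Minimal sketch: flag outbound requests to known public AI services
    # in a web-proxy log. The domain list and log columns below are
    # illustrative assumptions, not an exhaustive inventory.
    import csv

    KNOWN_AI_DOMAINS = {
        "chatgpt.com",
        "copilot.microsoft.com",
        "gemini.google.com",
        "claude.ai",
    }

    def flag_ai_traffic(proxy_log_path):
        """Return log rows whose destination host matches a known AI service."""
        flagged = []
        with open(proxy_log_path, newline="") as f:
            # Assumes a CSV proxy log with 'timestamp', 'user' and
            # 'dest_host' columns (a hypothetical schema).
            for row in csv.DictReader(f):
                host = row.get("dest_host", "").lower()
                if any(host == d or host.endswith("." + d) for d in KNOWN_AI_DOMAINS):
                    flagged.append(row)
        return flagged

    if __name__ == "__main__":
        for hit in flag_ai_traffic("proxy.csv"):
            print(hit["timestamp"], hit["user"], "->", hit["dest_host"])

Even a simple report like this gives IT a starting point for conversations with teams about what is actually in use, rather than a reason to block tools outright.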

AI governance must be embedded in existing IT policies

Most organisations have already established frameworks for governing cloud services, setting out which tools are approved, how data should be stored, and who is accountable for oversight. These same principles should apply to AI.

An effective AI usage policy should explicitly define which tools are permitted, outline the approval process for introducing new ones, and clarify how sensitive or regulated data must be handled. It should also ensure compliance with data protection regulations such as the GDPR and assign clear responsibilities for ongoing monitoring and risk management.
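One way to make such a policy enforceable rather than purely documentary is to express it as data that scripts and onboarding tooling can query. The sketch below does this in Python; the tool names, data classifications and contact details are hypothetical placeholders, not recommendations.

    # Illustrative sketch: an AI usage policy expressed as data so it can
    # be checked programmatically. All tool names, classifications and
    # contacts below are hypothetical placeholders.
    AI_USAGE_POLICY = {
        "approved_tools": {
            # tool -> highest data classification it may handle
            "Microsoft Copilot (corporate tenant)": "confidential",
            "ChatGPT Enterprise": "internal",
        },
        "approval_contact": "it-governance@example.com",
        "monitoring_owner": "Security Operations",
    }

    # Data classifications ordered from least to most sensitive.
    SENSITIVITY = ["public", "internal", "confidential", "regulated"]

    def is_use_permitted(tool, data_class):
        """True if the tool is approved for data of this classification."""
        ceiling = AI_USAGE_POLICY["approved_tools"].get(tool)
        if ceiling is None:
            return False  # unapproved tool: route via the approval process
        return SENSITIVITY.index(data_class) <= SENSITIVITY.index(ceiling)

    # Regulated data may not go into a tool approved only for internal
    # data, while internal data in the corporate Copilot tenant is fine.
    assert not is_use_permitted("ChatGPT Enterprise", "regulated")
    assert is_use_permitted("Microsoft Copilot (corporate tenant)", "internal")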

Importantly, these policies must be accessible and easy to interpret. If employees do not understand what is permitted or where the risks lie, well-meaning efforts to improve productivity can quickly lead to governance gaps.

Training and culture are just as critical as controls

Technical controls can only go so far without user awareness. As with phishing or cybersecurity awareness programmes, AI-related training is becoming a necessary part of enterprise risk management.

At a minimum, employees should understand which AI tools are safe to use, why certain practices – such as pasting sensitive information into public tools – pose risks, and who to approach for guidance. In the case of the legal firm mentioned earlier, the leadership team has since implemented role-specific guidance that includes practical advice on anonymising data and escalation procedures for AI-related questions.

AI governance should not sit in isolation

Shadow AI is not just an AI problem. It is part of a broader need for integrated technology governance that spans IT, security, compliance, and business operations. Once business data leaves a secure environment, there is no reliable way to know how it will be stored or whether it might be used to train external models. That loss of control poses a clear risk, particularly as regulatory requirements around AI usage become more defined.

Rather than creating a new, siloed process for managing AI, organisations should incorporate AI oversight into their existing technology governance frameworks. This allows for shared accountability, a unified risk posture, and policies that can evolve in line with both technology and regulation.

Moving forward

AI is already embedded in day-to-day workflows across most organisations, whether formally acknowledged or not. Shadow AI is a clear signal that existing governance models need to evolve. Employees will continue to explore new tools in the absence of clear guidance, and while the intention may be to work more efficiently, the consequences can be significant: from compliance breaches to operational disruption.

Now is the time for organisations to take proactive steps. That means improving visibility into AI usage, updating governance frameworks, investing in employee education, and ensuring that AI is treated as part of the broader IT landscape – not as a standalone exception.
