Shadow AI

Shadow AI is not a policy problem. It is a supply problem.

Banning unauthorized AI tools does not work. Employees use them anyway, just outside your security perimeter. The companies actually preventing shadow AI are doing it by making approved tools faster to access than unapproved ones, not by writing stricter policies nobody reads.

Your employees are already pasting company data into ChatGPT. Right now. Today.

Not because they’re reckless. Because you gave them a deadline, a task that AI can speed up, and no approved tool to do it with. So they opened a browser tab, used their personal account, and got it done. BlackFog’s research found that 60% of employees would use unsanctioned AI tools to meet deadlines, even knowing the security risks. That number jumps to 69% at the C-suite level.

Think about that for a second. Your executives are the biggest offenders.

Shadow AI is the use of unauthorized AI tools by employees. Personal ChatGPT accounts. Random Chrome extensions promising to summarize emails. API keys paid for with personal credit cards. Free-tier accounts on tools IT has never heard of. A Gartner survey of cybersecurity leaders found that 69% of organizations either suspect or have evidence that employees are using prohibited AI tools. The kicker? Gartner predicts that by 2030, over 40% of enterprises will experience security incidents directly linked to shadow AI.

This is not a hypothetical threat. It is a data exfiltration pipeline you built by accident.

Banning AI is the worst possible response

Companies that respond to shadow AI with blanket bans are making the problem worse. Full stop.

I understand the instinct. Something feels risky, so you prohibit it. But with AI, prohibition does something uniquely destructive. It pushes the usage underground where you can’t see it, can’t monitor it, and can’t control it. Entrepreneur reported on this exact dynamic: bans drive usage to personal devices and accounts, completely outside your security perimeter. The primary reason for the ban was to protect sensitive data; the ban itself makes a leak more likely.

Samsung learned this the hard way when engineers pasted proprietary chip design code into ChatGPT through personal accounts. The company issued a ban afterward, but the data was already gone. Amazon noticed ChatGPT responses that looked suspiciously similar to internal documentation. Those incidents happened because there was no approved alternative fast enough for engineers under pressure to ship.

CIO found that roughly half of all employees are using unsanctioned AI tools, with enterprise leaders being the worst offenders. Among those using unapproved tools, 33% admit to sharing company research or datasets, 27% have shared employee data including payroll information, and 23% have entered financial statements. They’re not doing this maliciously. They’re doing it because your procurement process takes three months and their project deadline is next Tuesday.

The question was never “should employees use AI?” They already are. The question is whether they do it through channels you control or channels you can’t even see.

The three things that actually prevent shadow AI

Forget writing another policy document nobody reads. Prevention requires making the approved path the path of least resistance. Three things make this work.

First, provide approved tools before employees go find their own. This sounds obvious, and yet most companies I talk to still haven’t provisioned enterprise AI accounts for their teams. If you want people to stop using personal ChatGPT accounts, give them a company ChatGPT Enterprise or Claude account with SSO attached. The ISACA guidance on shadow AI is clear: organizations need to establish approved AI tools alongside their governance frameworks, not instead of them.

Second, set clear and short policies. Not a 40-page acceptable use policy. A one-page document: here are the approved tools, here is what you can put into them, here is what you cannot. Done. If your AI policy takes longer to read than it takes to open a ChatGPT tab, you’ve already lost.

Third, make compliance easier than non-compliance. This is where most companies fail entirely. If using the approved tool requires a VPN, a ticket to IT, a manager’s signature, and a two-week provisioning window, people will use the free version that takes ten seconds. Your approved path needs to be faster and simpler than the shadow path. Single sign-on. Pre-provisioned accounts. No friction.

Lock down identity and control what flows in

Enterprise SSO through SAML 2.0 or OIDC is not a nice-to-have for AI tools. It is the single most effective technical control against shadow AI.

Here’s why. When every AI tool your company uses sits behind your identity provider, you get three things simultaneously. First, you eliminate rogue accounts because employees authenticate through corporate credentials. No personal Gmail sign-ups. No free-tier accounts that IT can’t see. Second, you get automatic provisioning and deprovisioning through SCIM, so when someone leaves the company, their AI access disappears with everything else. Third, you get an audit trail. Every login, every session, logged through your existing identity infrastructure.
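To make the deprovisioning piece concrete, here’s a minimal sketch of the SCIM call, assuming a hypothetical AI vendor’s SCIM 2.0 endpoint. The base URL and token are placeholders you’d pull from the vendor’s admin console; in practice your identity provider fires this automatically when HR marks someone as departed.

```python
import requests

# Hypothetical SCIM 2.0 endpoint for an enterprise AI vendor --
# real base URLs and tokens come from the vendor's admin console.
SCIM_BASE = "https://api.example-ai-vendor.com/scim/v2"
TOKEN = "replace-with-vendor-issued-bearer-token"

def deactivate_user(user_id: str) -> None:
    """Flip the SCIM 'active' flag so AI access dies with the IdP account."""
    patch = {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{"op": "replace", "value": {"active": False}}],
    }
    resp = requests.patch(
        f"{SCIM_BASE}/Users/{user_id}",
        json=patch,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
```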

This is the same pattern we see with deploying Claude Desktop in enterprise environments. The deployment itself isn’t complicated. Making it work within your identity and security stack is where the real work happens. But once it’s there, you’ve closed the biggest gap shadow AI exploits.

Most major AI providers now support enterprise SSO. ChatGPT Enterprise, Claude for Enterprise, Gemini for Workspace, Copilot through Microsoft 365. The technology isn’t the bottleneck. The bottleneck is procurement teams taking four months to finalize a contract while employees have already been using personal accounts for three of those months.

But identity is only half the equation. Here’s where I see a disconnect that genuinely irritates me in enterprise security conversations. Companies spend enormous energy evaluating which AI tools to approve while ignoring what’s actually flowing into those tools.

An employee pasting a customer list into an approved, SSO-protected ChatGPT Enterprise account is still a data handling problem. The tool being “approved” doesn’t magically make it safe to dump PII into a prompt. This connects directly to the data privacy implementation challenge that most organizations underestimate. Privacy controls need to be designed into how people use AI, not bolted on afterward.

DLP for the AI era needs to monitor clipboard activity, browser-based inputs, and file uploads to AI interfaces. Traditional DLP tools were built to watch for email attachments and USB drives. Copy-paste into a browser-based AI tool? Completely invisible to most of them. Modern solutions from vendors like Nightfall, Cyberhaven, and Microsoft Purview are building AI-specific DLP capabilities that can detect sensitive content in prompts before they reach AI platforms and block paste operations containing patterns that match PII, source code, or financial data.
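To show the shape of that check, here’s a toy sketch of the pattern-matching step, run on the endpoint before a prompt leaves the device. The patterns are illustrative only; real DLP products layer validated detectors, exact-data matching, and ML classifiers on top of this.

```python
import re

# Illustrative patterns only -- production DLP uses validated detectors
# and ML classifiers, not three regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{20,}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return sensitive-data categories found in a prompt before it leaves."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

hits = screen_prompt("Customer 123-45-6789 disputed a charge last week")
if hits:
    print(f"Blocked paste: matched {hits}")  # -> Blocked paste: matched ['ssn']
```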

The practical approach is classification. Define what data categories can go into which AI tools. Public information, fine for any approved tool. Internal documents, allowed in enterprise-tier tools with data retention guarantees. Customer PII, never in any external AI tool without anonymization. Regulated data under HIPAA or SOX, blocked entirely from external AI.
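One way to keep that matrix enforceable rather than aspirational is to encode it as data your tooling can check. A minimal sketch, with made-up category and tier names; swap in your own taxonomy.

```python
# Hypothetical classification matrix: which data categories may enter
# which tool tiers. Names are illustrative -- use your own taxonomy.
ALLOWED = {
    "public":       {"any_approved", "enterprise"},
    "internal":     {"enterprise"},  # requires retention guarantees
    "customer_pii": set(),           # external AI only after anonymization
    "regulated":    set(),           # HIPAA/SOX data: blocked entirely
}

def may_submit(data_category: str, tool_tier: str) -> bool:
    """True if this data category is cleared for this tool tier."""
    return tool_tier in ALLOWED.get(data_category, set())

assert may_submit("internal", "enterprise")
assert not may_submit("regulated", "enterprise")
```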

If your governance framework doesn’t include data classification rules specific to AI inputs, it’s incomplete. Knowing which tools are approved is table stakes. Controlling what goes into them is the actual game.

Technical enforcement at the endpoint

Policy without enforcement is a suggestion. At some point, you need technical controls that actually prevent shadow AI at the device level.

Browser extension management through Intune or similar MDM platforms is straightforward and surprisingly effective. Chrome’s enterprise policies let you allowlist specific extensions and block everything else. That means employees can’t install random AI Chrome extensions that promise to rewrite their emails. They can only use what IT has approved and pushed through policy.

For managed Windows devices, registry-based controls through Group Policy or Intune can block installation of unapproved applications entirely. DNS-level filtering through your proxy or CASB can block access to AI tool domains you haven’t approved. Network monitoring can flag unusual data flows to known AI API endpoints.
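Those two controls meet in the registry on managed Windows devices. Here’s a sketch of the Chrome extension policy keys that Intune or Group Policy would normally push for you. The extension ID is a placeholder, and writing to HKLM requires admin rights.

```python
# Windows-only sketch: write Chrome's extension allow/block policy directly.
# In practice Intune or Group Policy pushes these keys; the layout is the same.
import winreg

CHROME = r"SOFTWARE\Policies\Google\Chrome"

def set_policy_list(subkey: str, values: list[str]) -> None:
    """Write a Chrome list policy as numbered REG_SZ values (1, 2, ...)."""
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, f"{CHROME}\\{subkey}")
    for i, v in enumerate(values, start=1):
        winreg.SetValueEx(key, str(i), 0, winreg.REG_SZ, v)
    winreg.CloseKey(key)

# Block everything, then allow only IT-approved extensions.
set_policy_list("ExtensionInstallBlocklist", ["*"])
set_policy_list("ExtensionInstallAllowlist",
                ["aaaabbbbccccddddeeeeffffgggghhhh"])  # placeholder extension ID
```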

But here’s the part that makes or breaks technical enforcement: don’t block without providing. If you block access to ChatGPT at the network level but don’t offer an alternative, employees will use their phones. They will tether to mobile data. They will find a way, because the productivity gain from AI is too large to ignore. Every block needs a corresponding “use this instead” message.

The most effective deployments I’ve seen combine allow-listing with automatic provisioning. Block the consumer AI domains at the network level, but simultaneously provision every employee with access to the approved enterprise AI tool. The block and the alternative arrive on the same day.

Build a fast-track approval process or lose the race

The last piece is the one that IT teams resist the most, because it requires changing how procurement works.

Traditional IT procurement cycles run weeks to months. Vendor evaluation, security review, legal contract negotiation, budget approval, pilot period, rollout. That timeline was fine when employees wanted a new project management tool. It is completely broken for AI, where a new capability appears every week and employees can start using it in thirty seconds with a free account.

SPK Associates documented how leading organizations are reimagining their AI tool review process. The best approaches use risk-based tiering. Low-risk tools (text summarization, grammar checking, brainstorming) get a fast lane. Maybe a 48-hour security scan and auto-approval if they pass. Medium-risk tools that touch internal data get a one-week review. High-risk tools that process customer data or make automated decisions get the full treatment.

The submission process matters too. If someone needs to write a business case, get three signatures, and fill out a 20-field form, they won’t bother. They’ll just use the free tool. The best intake processes I’ve seen are a single form: name of tool, what you want to use it for, what kind of data it will touch. That’s it. An automated workflow can route it to the right review track based on those three answers.
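The routing logic really is that simple. Here’s a sketch, with illustrative tiers and data categories.

```python
# Route a tool request to a review track from the three intake answers.
# Tiers and SLAs are illustrative -- mirror your own risk taxonomy.
def route_request(tool_name: str, use_case: str, data_touched: str) -> str:
    if data_touched in {"customer_data", "regulated"}:
        return "full_review"      # security + legal + privacy, weeks
    if data_touched == "internal":
        return "one_week_review"  # scoped security review
    return "fast_lane"            # 48h automated scan, auto-approve on pass

print(route_request("SummarizeBot", "meeting notes", "public"))  # fast_lane
```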

Some organizations are going further, using AI to pre-screen AI tool requests. Feed the vendor’s terms of service and security documentation into an LLM, generate an initial risk assessment automatically, and let the security team focus their time on the edge cases instead of reviewing every request from scratch.
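A minimal sketch of that pre-screening step using Anthropic’s Python SDK. The prompt and model name are illustrative, and the output is triage for a human reviewer, not a verdict.

```python
# Sketch: generate a first-pass risk assessment from vendor docs with an LLM.
# Model name and prompt are illustrative; treat the output as triage only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def prescreen(tool_name: str, terms_of_service: str, security_docs: str) -> str:
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                f"Assess the data-handling risk of the AI tool '{tool_name}'. "
                "Flag data retention, training-on-customer-data, and "
                "subprocessor clauses.\n\n"
                f"Terms of service:\n{terms_of_service}\n\n"
                f"Security docs:\n{security_docs}"
            ),
        }],
    )
    return msg.content[0].text
```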

The goal isn’t to rubber-stamp everything. It’s to remove the friction that makes shadow AI feel necessary. When an employee can request a new AI tool on Monday and have it approved by Wednesday, they stop looking for workarounds. When approval takes until next quarter, they already have three personal accounts by the time you respond.

Shadow AI is a supply problem, not a discipline problem. Your employees want to do their jobs well. Give them the tools to do it safely, and they will. Make them wait, and they’ll find their own way. The only question is whether you want visibility into that process or not.

About the Author

Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.