Two years ago nobody asked me what to do about ChatGPT. Now it shows up in half the meetings I take with CTOs and CISOs. The first move most of them try is blocking it at the corporate firewall. It works for a week. Then people open another tab on another provider, and the hole is still there, just under a new name.
Years ago we wrote on this blog about the Shadow IT threat, the gray zone where people get their work done with tools the IT team never approved. Generative AI reopened that door, with a flow no proxy rule is going to contain. There’s a new name for the same phenomenon: Shadow AI.
What is Shadow AI and how does it differ from Shadow IT?
Shadow AI is the use of generative AI models by employees without approval, oversight, or inventory from the security or technology functions. The name is new. The pattern isn’t.
The difference from classic Shadow IT has three angles worth examining through an architecture lens.
First, the barrier to entry. Adopting a non-approved SaaS tool requires signing up, uploading data, learning a UI. Using a public LLM only takes opening a tab and pasting text. It’s the lowest friction we’ve ever seen on an unmanaged tool.
Second, the nature of the data leaving. Common Shadow IT tools (a personal Trello, a side Notion) receive structured data the person consciously chose to put there. A prompt to ChatGPT gets whatever the person is thinking right now, unfiltered, occasionally including contract fragments, customer data, or credentials pasted by mistake.
Third, the output flows back inside. The model’s response gets pasted into the corporate document, into the code committed to the repo, into the email sent to the customer. The feedback loop between outside and inside is immediate, which means any hallucination or model bias enters the workflow without going through review.
Why blocking ChatGPT doesn’t work in practice
When an organization blocks the ChatGPT domain at the corporate firewall, three things happen almost simultaneously.
People who were using the tool to solve real tasks don’t stop using it; they just shift to a personal phone or a home network. Data exfiltration becomes harder to audit because it no longer flows through the corporate network.
Less visible alternatives appear, offering the same service without sitting on any blocklist. Any reasonably current security playbook lists dozens of equivalent providers, and keeping a deny list up to date is a losing race.
The conversation between IT and the rest of the business breaks down. The person who was using the model for something legitimate (summarizing a long email, drafting a customer reply, debugging a code block) starts seeing IT as an obstacle and stops asking for tools. The organization loses visibility on real needs that a corporate solution could have served.
The instinct to block comes from a good place. I get it, because I’ve lived it in my own career. When a new risk shows up and we can’t yet measure it, the first reaction is to cut the flow. That reflex assumes behavior will stop when the tool stops working. With public LLMs, that’s no longer true.
Three concrete risks of Shadow AI in your organization
Before thinking about controls, it’s worth naming what’s at stake.
Sensitive data leakage. The most obvious one. An analysis of a commercial proposal that includes customer names, a legal-support conversation that pastes in contract clauses, a code snippet with hardcoded credentials someone wants to “clean up” before pushing to the repo. Any of those prompts can end up training a public model or sitting in the provider’s logs.
Decisions based on hallucinations. An LLM can produce answers that look right and aren’t. If the output gets copied without review into a proposal, a regulatory report, or a customer recommendation, the error propagates without traceability. The damage comes from a review chain that wasn’t designed for AI outputs.
Regulatory exposure. GDPR, ISO 27001, NIS2, and each country’s local regulations. All of them require control over the data lifecycle. A serious compliance program needs to answer, in front of an auditor, what data left the organization, where it went, and on what legal basis. When the flow is Shadow AI, the answer is “we don’t know,” and that alone is a finding.

Which controls actually work against Shadow AI?
The question I ask the teams we work with is a different one. Not “how do we block?” but “what channels do we design so AI use happens where we have visibility?”
Accessible, specific policy. A short page, in the employee’s language, stating what can be pasted into an LLM and what can’t. Three concrete examples beat ten pages of legalese. If the policy lives on an intranet nobody reads, the policy doesn’t exist.
Identity and corporate tooling. When the organization offers an approved channel (the enterprise version of a provider with a signed DPA, or a model on its own infrastructure) connected to corporate SSO with usage logs, most of the Shadow AI disappears on its own. People prefer the official tool when the official tool works and respects their flow.
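To make that concrete, here is a minimal sketch of what an approved channel can look like, assuming a gateway you run yourself. The header name, the log format, and the provider endpoint are all illustrative, not any specific product’s API. The shape is what matters: the request arrives with an SSO-verified identity, the gateway records usage metadata, and only then does the prompt leave for the provider.

```python
# Illustrative corporate LLM gateway: identity in, usage log out, then forward.
# All names here (headers, endpoint, log fields) are hypothetical.
import json
import logging
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Enterprise endpoint of an approved provider with a signed DPA (placeholder URL).
UPSTREAM = "https://api.example-llm-provider.com/v1/chat"

logging.basicConfig(filename="llm_usage.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class GatewayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The SSO layer sitting in front of this gateway injects the identity.
        user = self.headers.get("X-Authenticated-User", "unknown")
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))

        # Log metadata only: who, when, how much. Never the prompt itself.
        logging.info(json.dumps({"user": user, "bytes": len(body),
                                 "path": self.path}))

        # Forward to the approved provider and relay the answer back.
        req = urllib.request.Request(UPSTREAM, data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            answer = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(answer)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```

Logging metadata instead of prompt contents is a deliberate choice: a gateway that stores every prompt verbatim becomes a new data-leak surface of its own.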
Awareness at the moment of risk. This is where the awareness platform earns its keep every day. Traditional training, the annual 30-minute module, doesn’t talk to the employee who right now is about to paste a contract fragment into a chat. What works is contextual nudges, the micro-reminders that show up when the risky behavior is about to happen, and specific simulations that train the reflex to pause before pasting. We’ve written before about how that reflex gets trained in Cybersecurity Awareness: why AI cannot replace human guidance and on the role of nudges in cybersecurity.
Detection that talks to behavior. This whole layer works better when SIEM, DLP, and the awareness program signals talk to each other. If your DLP flags a bulk-paste event toward an external domain, that event can trigger a one-minute educational module on Shadow AI delivered to that specific person. Human behavior is the piece your SIEM wasn’t seeing, and contextual awareness closes that loop.
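As a sketch of what that loop can look like in code, assuming a DLP that can POST alerts to a webhook and an awareness platform with an assignment API (both the event schema and the endpoint below are hypothetical, not any vendor’s actual interface):

```python
# Illustrative glue between a DLP alert and a contextual awareness nudge.
# The event schema and the awareness API endpoint are hypothetical.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

AWARENESS_API = "https://awareness.example.com/api/assignments"  # placeholder
MODULE_BY_EVENT = {
    # DLP event type -> one-minute educational module to assign
    "bulk_paste_external": "shadow-ai-before-you-paste",
    "credential_in_prompt": "secrets-never-leave-the-repo",
}

def assign_module(user: str, module: str) -> None:
    """Ask the awareness platform to deliver a micro-module to one person."""
    payload = json.dumps({"user": user, "module": module,
                          "trigger": "dlp_event"}).encode()
    req = urllib.request.Request(AWARENESS_API, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

class DlpWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        event = json.loads(
            self.rfile.read(int(self.headers.get("Content-Length", 0))))
        module = MODULE_BY_EVENT.get(event.get("type"))
        if module:
            # The nudge reaches the person involved, minutes after the behavior.
            assign_module(event["user"], module)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), DlpWebhook).serve_forever()
```

The design decision that matters is the mapping table: the event type selects the content, and the event payload selects an audience of one, which is exactly what makes the nudge contextual rather than another annual module.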
Where a platform like SMARTFENSE fits
What we see in programs that are working is a pattern. The awareness platform doesn’t compete with the firewall or the DLP. It’s the layer that connects to those systems via API, ingests their signals, and translates a technical event into an educational intervention relevant to the person involved.
At SMARTFENSE we design for that stack. Awareness gets assigned by real behavior, not just by role or seniority. When the organization already has catalogs on safe generative AI use (something that in the last year went from nice-to-have to expected by audits), the platform activates them where they’re needed, in the employee’s language, at the moment of risk.
That alone doesn’t solve Shadow AI. What it does is turn each minor incident into a learning point for that person and, in aggregate, for the rest of the program. After a few months, the organization has what was missing before. A real map of which areas use which tools, which data types are at risk, where training is moving behavior, and where it isn’t.
Three steps you can start this week
If your organization doesn’t yet have a formal stance on Shadow AI, there are three things you can sort out without large projects.
Talk to your team without a script. Ask who’s using what, for which tasks, with which data. The picture that comes out of that conversation usually surprises and is the most useful input for the policy that follows.
Write a first version of the policy, short, with examples. Don’t wait for the perfect document, because while the perfect document is being drafted, data is already leaking.
Open the conversation with the awareness platform about where the topic fits in next quarter’s catalog. If you don’t have a platform yet, this is a good starting point to talk to us about how to begin.
Blocking ChatGPT doesn’t solve Shadow AI. Designing the channels does.