Why Your IT Department Is Your Biggest AI Adoption Bottleneck
Ethan Mollick recently noted that an astonishing number of companies he talks to "STILL have AI effectively blocked by IT and legal departments for out-of-date reasons." He's right. And I see it from the training side — I'm the person who shows up to teach a workshop and discovers half my tools are blocked on the corporate network.
The first time it happened, I was furious. Now I understand it.
The IT Director Who Changed My Mind
At a manufacturing client in Hong Kong, the IT director came into our pre-workshop meeting radiating skepticism. He'd seen a failed AI vision project years earlier. Staff were already using free AI tools on personal phones. Company data was leaking through channels he couldn't monitor. And here I was, an external trainer, about to encourage more AI usage.
He told me: "I don't know what to block."
That sentence rewired how I approach every engagement. His team had no AI governance framework. No tiered data policy. No approved tool list. So their default was to block everything. And the result was exactly what you'd predict — employees uploaded internal documents to free AI tools on their personal devices. The security risk IT was trying to prevent was being created by the blocking policy.
But his instinct wasn't wrong. The concern about data leakage was real. What was missing was a framework that gave him control without killing adoption.
What I Do Now Before Every Workshop
I don't start with HR anymore. I start with IT. Every engagement begins with the same conversation:
Tier 1 — public information. Market research, general knowledge queries, competitor analysis using publicly available data. Any AI tool is fine.
Tier 2 — internal non-sensitive. Email drafts, meeting summaries, formatting, code syntax. Enterprise tools within the corporate tenant — Copilot inside Microsoft 365, for example. Safe as long as it stays inside the perimeter.
Tier 3 — confidential. Customer data, financial records, strategic plans. Off-limits to any external AI tool, full stop.
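For teams that want to turn this into something auditable, the three tiers above can be sketched as a simple policy lookup. This is a toy illustration with made-up category names, not a real client policy or a production classifier — the point is that "what's allowed" becomes an explicit table IT can review, and anything unlisted defaults to the most restrictive tier:

```python
# Illustrative sketch of a three-tier AI data policy as a lookup table.
# Category names are hypothetical; a real policy would come from IT and legal.

TIER_POLICY = {
    # Tier 1 — public information: any AI tool is fine
    "market_research": 1,
    "competitor_analysis_public": 1,
    # Tier 2 — internal non-sensitive: enterprise tools inside the tenant
    "email_draft": 2,
    "meeting_summary": 2,
    # Tier 3 — confidential: off-limits to any external AI tool
    "customer_data": 3,
    "financial_records": 3,
    "strategic_plans": 3,
}

ALLOWED_PATH = {
    1: "any approved AI tool",
    2: "enterprise tenant tools only (e.g. Copilot inside Microsoft 365)",
    3: "no external AI tool",
}

def sanctioned_path(category: str) -> str:
    """Return the sanctioned tool path for a data category.

    Unknown categories default to Tier 3 — if nobody has classified the
    data, treat it as confidential rather than leave a gap in the policy.
    """
    tier = TIER_POLICY.get(category, 3)
    return f"Tier {tier}: {ALLOWED_PATH[tier]}"
```

The default-to-Tier-3 choice is the part worth debating with your IT director: it trades convenience for the guarantee that blocking-by-omission never becomes leaking-by-omission.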
When I establish this before training begins, two things happen. The IT director relaxes because there's a boundary he can audit. And employees stop using their phones because they finally have a sanctioned path.
At BOCHK, the security framework was in place before a single employee opened an AI tool. That engagement — 1,530 people across 13 countries — had zero data incidents. Not because we avoided AI, but because the IT team felt in control of the boundary from day one.
The Part I Still Get Wrong
I wish I could say this always works cleanly. It doesn't. At Chow Tai Fook, VPN issues nearly derailed the first session — a problem I should have caught in the IT conversation but didn't, because I focused on data policy and forgot about network infrastructure. At another client, the IT team approved Copilot but forgot to enable it on the workshop's tenant. We lost twenty minutes of a two-hour session troubleshooting licensing.
These are fixable problems. But they remind me that the IT conversation isn't a checkbox. It's an ongoing relationship. The IT director who distrusts you on day one can become your strongest ally by session six — if you treat their concerns as legitimate rather than obstacles to route around.
Your IT department doesn't want to block AI forever. They need someone to tell them what "safe" looks like. That's a communication problem, not a technology one. And if you're the trainer who solves it, you've removed the single biggest blocker to everything else you're trying to do.
If your AI rollout is stuck at the IT conversation, I've been there. See how I approach it or reach out on LinkedIn.
