9% of Copilot Licenses Are Actually Used. I'm Not Surprised.
I was in a meeting this week with someone who ran AI adoption across KPMG in the Nordics. He dropped a number: 9% of all Copilot licenses are activated. Not used productively. Not used daily. Activated.
A pension fund in the UK distributed laptops with Copilot pre-installed and zero training. No onboarding session. No documentation beyond what Microsoft ships by default. The procurement team checked a box. The AI transformation was declared underway.
I've trained over 10,000 professionals across 70+ organizations in six countries. I've seen this pattern so many times that I have my own number for it: roughly 4% of participants sustain meaningful AI use after a one-off workshop. Four percent lasting adoption. Ninety-six percent revert to their old workflows within two weeks.
The 9% activation rate and my 4% adoption rate are measuring different things, but they're pointing at the same problem. The gap between buying AI and using AI is not a technology gap. It's a human gap.
The Procurement-Training Ratio
Here's what the typical AI rollout looks like:
- Executive reads a McKinsey report about AI productivity gains.
- IT department procures enterprise licenses. Six figures. Sometimes seven.
- An email goes out: "Your new AI tools are available."
- A one-page PDF or a 30-minute webinar follows, maybe.
- Three months later, someone asks why usage is flat.
The ratio of money spent on procurement versus training is roughly 100:1 in most organizations I've worked with. They spend 99% of the budget getting the tool into the building and 1% getting it into the workflow.
Then they commission a consultant to write a report about the "adoption gap."
What Actually Works: The KPMG Model
The KPMG Nordics rollout was different. Here's what they did:
Twelve months of sustained investment. Not a workshop. Not a "lunch and learn." A year-long program with dedicated budget, headcount, and executive attention.
AI champion roles with recognition and bonuses. They created formal positions within the organization -- people whose job included driving AI adoption in their teams. These weren't volunteer assignments. They came with performance recognition and financial incentives.
Unwavering executive support. The Nordic CEOs of KPMG experienced AI capabilities firsthand in a training session. Once they understood what was possible, they immediately funded the rollout. Not "we'll consider it next quarter." Immediately.
The result: adoption that stuck. Not because the tool was better. Because the organization invested in the human side of change.
What Actually Works: The Continuous Model
One institution I work with in Hong Kong took a different path to the same destination. Instead of a one-off engagement, they signed a 24-month continuous learning contract. Monthly sessions. A WhatsApp community for real-time support between sessions. A feedback loop where this month's questions shape next month's content.
The difference is structural. In a one-off workshop, I teach you what AI can do. In a continuous engagement, I watch what you're actually struggling with and address it in real time.
Here's what continuous looks like in practice:
- Month 1: Introduction. NotebookLM deep dive. "Here's what's possible."
- Month 2: Advanced features of the tool they're actually using. Address specific blockers from Month 1.
- Months 3-6: Department-specific applications. Real workflows, real data, real problems.
- Ongoing: WhatsApp community where staff ask questions and share wins. I respond. Peers respond. A culture forms.
By month three, people stop asking "how do I use this tool" and start asking "can AI do this thing I'm stuck on." That's the shift. That's adoption.
The Hong Kong Education Problem
This week I also learned something that made my blood boil. The Hong Kong government has allocated HK$2 billion for AI in education. Two billion Hong Kong dollars. That sounds like progress until you read the fine print.
The fund explicitly excludes teacher training. The money can only be spent on "implementation and student-facing activities." Schools can buy systems -- typically ChatGPT wrappers -- but they cannot use the fund to train the teachers who need to operate those systems.
So Hong Kong schools are acquiring AI tools they don't know how to use, funded by a government program that won't pay for the training to use them. The procurement-training gap, codified into policy.
Meanwhile, the international school sector -- roughly 15,000 schools worldwide -- is growing by about 50% per decade. Around 40% belong to networks like ESF and Harrow, which means one successful engagement can cascade across countries. The schools that figure out AI adoption first will have a structural advantage. The ones waiting for a government fund to cover training will be waiting a long time.
The Pattern
After 200+ sessions across six countries, the pattern is always the same:
One-off training: People are excited during the workshop. They try the tool for a week. They hit a friction point. No one is there to help. They revert.
Sustained engagement: People are excited during the first session. They try the tool. They hit a friction point. They ask for help -- in the community, in the next session, in a direct message. Someone responds. They push through. A habit forms.
The difference isn't the content of the training. It's the presence of support after the training ends.
What This Means for Organizations
If you're planning an AI rollout, here's what your budget should look like:
- 40% on tools and infrastructure
- 60% on training, champions, and ongoing support
Most organizations run 95/5 or worse. Then they blame the tool.
The tool isn't the training. The training isn't the adoption. The adoption isn't the transformation. We keep buying the first one and expecting the last.
I write about what actually works in enterprise AI adoption -- not what sounds good in a slide deck. If this resonated, I'm on LinkedIn.
