
The Skill Formation Problem: Is AI Making Us Worse at Our Jobs?

A research paper made the rounds on X recently with a finding that should bother every company rushing into AI adoption: AI can negatively impact skill formation. Not skill replacement — that's the headline everyone fears. Skill formation. The quiet version. People never develop the competence in the first place because AI did it for them from day one.

My first reaction wasn't alarm. It was recognition. I see this happening in my own workshops.

What It Looks Like in the Room

There's a pattern I've started noticing around session three of a multi-week program. The participants who jumped straight to AI-generated outputs in session one — letting Copilot draft the entire email, accepting the first summary without editing — end up less capable by session three than the participants who started slower.

The slower group learned what a good summary looks like by writing bad ones first. They developed an eye for when AI output is wrong because they'd done the work manually and knew what "right" felt like. The fast group skipped that calibration entirely. They can generate output. They can't evaluate it.

I don't have a clean study to cite for this — it's pattern recognition from running 180+ sessions. But the researchers are catching up. Anthropic's AI Fluency Index tracked 11 behaviors across thousands of conversations and found that more fluent users don't use AI more. They use it differently. They prompt more precisely, evaluate more critically, and reject more outputs. Fluency isn't about speed. It's about judgment.

Where I Got the Ratio Wrong

I've been teaching the 70/30 human-AI split for over a year now. The idea: use AI for the repetitive structural parts (30%) and keep human judgment for the rest (70%). For experienced professionals, this works. It accelerates without undermining.

But I've been applying the same ratio to learners, and I think that's a mistake. A junior analyst who uses AI to draft 30% of their first-ever report is skipping the part where they learn what makes a report coherent. A design intern who generates concepts with AI never develops the visual intuition to evaluate what's actually good.

For learners, the ratio should probably be closer to 90/10 — maybe even 95/5. AI as a checker, not a creator. Build the foundation first, then gradually shift the ratio as competence develops. I haven't formalized this into a framework yet; I'm still working out where the breakpoints are. But the principle feels right: the less experience you have, the more you need to do manually before AI becomes helpful rather than harmful.

The Part Nobody Wants to Hear

The skill formation problem isn't going to be solved by better AI tools or smarter prompts. It's a training design problem. And most organizations aren't designing for it at all.

When I trained 400 educators at HKCT, the fear in the room was that AI would make students stop learning. The teachers weren't wrong to worry. They were wrong about the solution — banning AI doesn't work any better than banning calculators did. What works is requiring the manual work first, then introducing AI as acceleration.

The same applies in corporate settings. Teach evaluation before generation. Make people write the first draft themselves before asking AI to improve it. Frame AI as something that challenges your thinking, not something that replaces it.

The irony is that making people better at working with AI requires making them do more work without AI first. That's a harder sell than "AI will make you 10x faster." But it's the version that actually holds up six months later.


If your team is navigating this tension — wanting AI speed without losing the skills underneath — that's exactly what my training programs are designed for.

Sam works with enterprises across banking, retail, engineering, and education in Hong Kong.

Explore Enterprise Training