The Skill Formation Problem: Is AI Making Us Worse at Our Jobs?

A research paper circulated on X recently with a finding that should concern every company rushing into AI adoption: AI can negatively impact skill formation. The author argued that every "AI-first" company should make this research available to employees.

As someone who runs AI safety training and corporate workshops, my first reaction wasn't "AI is dangerous." It was "of course it can -- if you implement it wrong."

The Real Risk: Outsourcing Thinking

The skill formation problem isn't about AI replacing skills. It's about AI preventing people from developing them in the first place. When a junior analyst uses AI to write every report from day one, they never learn the underlying reasoning that makes the report valuable. When a designer uses AI to generate every concept, they never develop the visual intuition that lets them evaluate what's good.

I see this in workshops. Participants who over-rely on AI output without understanding it are less capable than participants who learn to evaluate, edit, and iterate. The AI becomes a crutch instead of a tool.

The 70/30 Framework

This is why I teach the 70/30 human-AI split in every enterprise engagement. When I trained 1,500 bankers at BOCHK, we didn't teach people to hand everything to AI. We taught them to use AI for 30% of the task (the repetitive, structural, drafting parts) and keep 70% human (the judgment, context, quality evaluation).

The 70/30 split isn't a permanent ratio. As you develop expertise, it might shift to 50/50 or even 40/60 human-to-AI. But for learners -- people still building core skills -- keeping the human share high is essential.

What Responsible AI Training Looks Like

1. Teach evaluation before generation. Before showing anyone how to prompt, teach them how to spot when AI output is wrong. Critical evaluation is the meta-skill that prevents the deskilling problem.

2. Require manual work first. At HKCT, where I trained 400 educators, the recurring worry was that AI would make students stop learning. The answer isn't banning AI. It's requiring students to understand the fundamentals before they're allowed to accelerate with AI. The same principle applies in corporate settings.

3. AI as a pair, not a replacement. Frame AI as a collaborator that challenges your thinking, not a machine that does your thinking. When you ask AI to draft a report, don't accept the first output. Push back. Ask it to defend its recommendations. Treat the interaction as a conversation, not a transaction.
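If your team works through an API rather than a chat window, the same pattern is easy to script. Here's a minimal sketch using the Anthropic Python SDK; the model name, prompts, and report topic are placeholders, not a prescribed workflow. The point is the second turn: the human keeps the judgment role by challenging the draft instead of accepting it.

```python
import anthropic

# Client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

MODEL = "claude-sonnet-4-5"  # placeholder; use whatever model your org has approved

# Turn 1: ask for a draft, exactly as you would in a chat UI.
history = [{
    "role": "user",
    "content": "Draft a one-page summary recommending next steps for project X.",
}]
draft = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
history.append({"role": "assistant", "content": draft.content[0].text})

# Turn 2: don't accept the first output -- push back and make it defend itself.
history.append({
    "role": "user",
    "content": "Defend your top recommendation. What evidence would change your mind?",
})
challenge = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
print(challenge.content[0].text)
```

The same two-turn structure works in any chat interface; the code just makes the conversational pattern explicit.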

The Anthropic Fluency Index

Anthropic recently published its AI Fluency Index, tracking 11 behaviors across thousands of conversations. The finding that stood out: more fluent users don't use AI more. They use it differently. They prompt more precisely, evaluate more critically, and reject more outputs.

That's the goal of responsible AI safety training. Not to make people dependent on AI, but to make them better collaborators with it. The skill formation problem is real, but it's a training design problem, not an AI problem.