How 400 Educators Changed Their Minds About AI in 75 Minutes

When HKCT (Hong Kong College of Technology) invited me to introduce AI to their entire staff - 400 people, from teachers to administrators to support staff - the first question I asked was: what's the real goal here?

The answer wasn't "teach them ChatGPT." It was something more fundamental. Many staff had tried AI tools already. Some were enthusiastic, most were skeptical, and nearly all had abandoned them after initial experiments. The tools weren't sticking because people were treating AI as software to learn, not as a new way of working.

[Photo: AI workshop for HKCT's 400 staff members]

We had 75 minutes to shift that perspective. No pressure.

The Misconception That Blocks Adoption

Here's what I've observed across dozens of organizations: people approach AI as a tool category, like spreadsheets or email. They expect to learn specific functions, memorize certain commands, and apply them in predictable situations.

But AI doesn't work that way. It's not a tool with fixed capabilities - it's a thinking partner with emergent ones. The same prompt can produce wildly different results depending on how you frame it. The "skill" isn't knowing which buttons to press; it's knowing how to communicate what you need.

This is why most institutional AI training fails. It teaches features instead of frameworks. People learn that ChatGPT can summarize documents, but they don't learn how to think about document processing differently.

For HKCT, we designed what I call the "AI Three-Part Framework" - not a comprehensive curriculum, but a simple mental model that gives people permission to start.

The Three-Part Framework

Part 1: AI Literacy (Reading and Writing)

The first capability addresses the most universal workplace pain: processing information. Every educator spends hours reading - meeting minutes, student reports, curriculum documents, administrative communications. The cognitive load is relentless.

We demonstrated how AI transforms this process. Upload a document, request a summary, extract action items, generate responses. The examples we used were deliberately mundane - actual meeting notes, actual administrative memos - because that's where the daily time savings accumulate.

The key insight we emphasized: AI isn't about doing spectacular things occasionally. It's about doing routine things faster, constantly.

Part 2: AI Creativity (Ideation and Planning)

The second capability targets a common block: not knowing where to start. Lesson planning, activity design, proposal writing - all creative work that begins with a blank page.

We showed participants how to use AI as a brainstorming partner. Not to generate final products, but to generate starting points. "Give me five different angles on this topic" or "What questions should I be asking about this problem?" The AI produces raw material; the human shapes it.

For educators, this was revelatory. Many had assumed AI-generated content would be generic and useless for their specific contexts. Seeing how targeted prompting could produce genuinely useful starting points shifted their perception.

Part 3: AI Visual and Document Creation

The third capability addressed a practical constraint: time. Teachers need presentation slides, administrators need formatted reports, support staff need visual communications. All of this takes hours that could be spent on higher-value work.

We demonstrated the rapid creation workflow: outline to draft to visual in minutes rather than hours. Canva's AI features, presentation generators, document formatters - tools that don't require design skills to produce professional results.

Why 75 Minutes Works

The time constraint that initially seemed limiting turned out to be an advantage. With 75 minutes, you can't pretend to be comprehensive. You have to focus on what matters most.

We didn't try to make everyone an AI expert. We tried to give everyone a clear starting point - one framework they could remember, one workflow they could try tomorrow, one permission slip to experiment.

The session included live demonstrations but also emphasized something crucial: these tools are free, available right now, and require no institutional approval to try. The barrier to experimentation is purely psychological.

The Ethics Conversation We Didn't Skip

Training educators on AI requires addressing questions that other industries can defer: data privacy, academic integrity, content accuracy. We spent time on each.

Data input: What information is safe to share with AI tools? Student data, internal discussions, unpublished materials - all require careful handling. We introduced a simple framework for categorizing sensitivity.

Originality: When does AI-assisted work become AI-generated work? The answer differs for student assignments and administrative tasks. We discussed where those lines sit and why they exist.

Accuracy: AI outputs can be confidently wrong. For educators, this creates real risk - a factual error in teaching materials has consequences. We emphasized the "human review, AI assist" principle: AI drafts, humans verify.

These weren't comfortable discussions, but they were necessary. Addressing concerns directly builds more trust than pretending they don't exist.

What Happened After

The measure of any training isn't the session itself - it's what happens next. Within the first week, I heard reports of staff experimenting: summarizing meeting notes automatically, generating lesson plan drafts, composing administrative communications faster.

Not everyone. Not immediately. But the percentage who actually tried was significantly higher than what I typically see after AI training sessions. The framework gave people a mental model simple enough to remember and specific enough to apply.

The most encouraging feedback came from skeptics - people who had written off AI as overhyped or irrelevant to their work. Several mentioned that the session changed their understanding of what these tools were actually for.

The Takeaway

Large-scale AI adoption in education won't happen through mandates or curriculum changes. It will happen when individual educators find that AI helps them do their existing jobs better - not differently, just better. Faster document processing, easier planning, quicker content creation.

The framework doesn't matter as much as the mindset. AI isn't a new subject to learn. It's a new way to do the work you're already doing. Once that clicks, adoption becomes self-sustaining.


If you're planning AI introduction for an educational institution and want to discuss approaches, connect with me on LinkedIn.