Why Free AI Isn't Enough: The Case for Paying for AI Tools
On the way to the Club 80 studio, my taxi driver told me AI is a scam. A conspiracy, even. It steals your intelligence and your money. He was genuinely concerned for me — here I was, heading to a show to talk about AI tools, and in his view I was walking into a trap.
I hear versions of this every week. Not just from taxi drivers. From executives, from professionals, from people who tried ChatGPT once and walked away unimpressed. The skepticism is real, and honestly, I get where it comes from. Most people's first experience with AI is underwhelming because they were never using the real product to begin with.
76% Don't Pay. That's the Opportunity.
During the Club 80 livestream, we ran a poll. 498 viewers voted. The question was simple: do you pay for AI tools?
76% said no.
That number didn't surprise me, but it should concern anyone who cares about staying competitive. If three quarters of that audience is using free-tier AI — rate-limited, older models, fewer features — then the 24% who do pay have an asymmetric advantage. And as I'll explain, most of that 24% aren't even using what they paid for properly. The share of people using AI at full capability is vanishingly small.
That's not a problem. That's a window.
Three Myths That Keep People on Free Tier
Myth 1: "AI is useless"
This is the most common one. Someone downloads POE or tries free Grok, asks it to summarize a book chapter, gets a hallucinated answer with wrong details, and concludes AI is not ready. Fair reaction to a bad experience. But the problem isn't AI — it's which AI they used.
POE's default "Assistant" model is not ChatGPT. It's POE's own model, and it's significantly weaker. Most users in Hong Kong don't realize this. They see an interface that looks like ChatGPT, assume they're getting ChatGPT, and judge the entire technology based on an inferior product. It's like test-driving a Yaris and concluding that cars can't go fast.
Myth 2: "I paid but it's no better"
Some people do upgrade. They pay the $20/month for ChatGPT Plus. Then they use it exactly the same way they used the free version — same default model, same basic prompts, no settings changes. They never discover the model selector, never try deep search, never connect it to their email or calendar.
The analogy I used on the show: you bought a three-bedroom house but you've been living in it like it's a studio. You eat, sleep, and work in one room. The other two bedrooms are locked and you forgot the key exists.
Myth 3: "Even the paid version is dumb"
This one is subtler. Users who do pay and do experiment still get disappointed because they're using the wrong model for the wrong task. They ask a creative-writing model to do factual research and get confidently wrong answers. They ask a reasoning model to write casual copy and get stiff, over-structured output. The tool isn't dumb — it's mismatched.
Official Products vs. Aggregators
There are two categories of AI products most people encounter, and the distinction matters more than most realize.
Official products — ChatGPT, Gemini, Claude — are built by the companies that actually trained the underlying models. OpenAI built GPT. Google built Gemini. Anthropic built Claude. When you use their products, you're getting the model as intended, with the full feature set the team designed around it.
Aggregators — POE, Perplexity, and others — redistribute multiple models through a single interface. This is convenient, but it creates confusion. POE's default model is not GPT-4o. It's their own model, labeled "Assistant," and most users never switch away from it. They think they're evaluating ChatGPT when they're evaluating something else entirely.
If you're going to form an opinion about AI, at least form it based on the actual product.
The Distinction That Changes Everything: o3 vs. 4o
This was the key insight I wanted to get across on the show, and it's the one that makes the biggest practical difference.
4o is a text-completion machine. Under the hood, it predicts the next most likely word based on patterns in its training data. I described it on the show as "word relay" — it's playing a sophisticated game of completing sentences. This makes it excellent at writing, brainstorming, translation, and any task where fluency and creativity matter.
o3 is a reasoning engine. It doesn't just predict the next word. It generates multiple chains of thought, evaluates them, and selects the most logically sound path. I compared it to Doctor Strange scanning millions of possible futures before choosing the right one. It's slower — sometimes significantly slower — but it's thinking, not just completing.
The practical rule is simple. Use o3 when you need correct answers: analysis, research, fact-checking, data interpretation. Use 4o when you need creative output: drafting, brainstorming, rewriting, translation.
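That rule of thumb is mechanical enough to write down as a tiny router. This is an illustrative sketch only: the model IDs follow OpenAI's public naming (o3, gpt-4o), but the task categories and the pick_model helper are my own framing, not any official API.

```python
# Toy "which model for which task" router, mirroring the rule of thumb:
# reasoning models when correctness matters, completion models for creativity.
# Model IDs follow OpenAI's public naming; everything else is illustrative.

REASONING_TASKS = {"analysis", "research", "fact-checking", "data interpretation"}
CREATIVE_TASKS = {"drafting", "brainstorming", "rewriting", "translation"}

def pick_model(task: str) -> str:
    """Return a model ID suited to the task type."""
    if task in REASONING_TASKS:
        return "o3"      # slower, deliberate, optimized for correct answers
    if task in CREATIVE_TASKS:
        return "gpt-4o"  # fast, fluent, optimized for creative output
    return "gpt-4o"      # sensible default for general chat

print(pick_model("research"))       # o3
print(pick_model("brainstorming"))  # gpt-4o
```

The point isn't the code — it's that the choice is a lookup, not a mystery. Once you know which bucket your task falls in, the model selector takes two clicks.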
Here's a real example. I was on Victoria Peak, took a photo of an island in the distance, and asked ChatGPT "what island is this?" Using o3, it spent over 40 seconds analyzing the image — examining the coastline shape, the relative position, the vegetation patterns, pixel by pixel. It correctly identified Lamma Island. A text-completion model would have guessed. A reasoning model worked it out.
Most users never switch between models. They don't even know the option exists.
Three Features You're Probably Not Using
If you're paying for ChatGPT Plus (or any premium AI tool) and haven't touched these, you're leaving most of the value on the table.
1. The model selector. It's in the top-left corner of ChatGPT. A simple dropdown. You can switch between 4o, o3, and other models depending on your task. I estimate 7 out of 10 paying users have never touched this. It's the single highest-leverage feature in the product.
2. Deep Search and web browsing. ChatGPT can search the internet in real time, cross-reference sources, and even connect to your Gmail and Google Calendar. This transforms it from a static knowledge base into a live research assistant. Most users don't know this mode exists because they never looked past the default chat interface.
3. The Settings panel. Bottom-left gear icon. This is where you configure memory, custom instructions, connected apps, and data controls. I told the Club 80 audience: all the treasure is hidden in Settings. Nobody opens Settings. It's the digital equivalent of never reading the manual — except this manual multiplies what the tool can do for you.
The Real Competitive Edge
Let's do the math. 76% of our poll don't pay for AI tools at all. Of the 24% who do, most never discover the model selector, deep search, or settings panel. Of those who do find these features, most don't understand which model to use for which task.
The percentage of people actually using AI at its full capability is probably in the low single digits. Maybe lower.
This is the competitive landscape right now. Not in five years. Right now. If you invest the time to learn your tools properly — not just subscribe, but actually learn — you're operating at a level that the vast majority of your peers and competitors haven't reached.
The taxi driver wasn't wrong that AI costs money. He was wrong that it's a scam. The real scam is paying for a tool and never learning to use it.
I discussed all of this on Club 80 (Episode 024), a Cantonese show that covers tech, business, and culture. The full conversation goes deeper into specific use cases and live demonstrations. You can watch it here:
If you want to discuss AI adoption for your team or organization, connect with me on LinkedIn.
