🤖 Build Your Own AI Agent — 10 modules, from zero to a 24/7 AI employee working for you
Back to course
Module 1 · ~10 minutes

Module 1: Myth — "AI Is Thinking"

AI Myths vs. Reality

READ
When Google engineer Blake Lemoine told the Washington Post in June 2022 that Google's LaMDA chatbot was sentient, he shared a conversation where the AI said: "I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others." Google placed Lemoine on leave and later fired him, saying his claims were "wholly unfounded."

But here's the unsettling part: millions of people believed Lemoine. And you can see why โ€” the conversation was compelling. The AI's responses were articulate, emotionally nuanced, and self-reflective. If you didn't know the mechanism behind them, "thinking" was a reasonable interpretation.

It wasn't thinking. And understanding why is the single most important thing you can learn about AI.

What's Actually Happening

When ChatGPT responds to your question, here's the literal process: it takes your input as a sequence of tokens, processes them through billions of mathematical operations (matrix multiplications, attention calculations, activation functions), and outputs a probability distribution over the next token. It then selects a token from that distribution (usually the most likely one, sometimes sampled with a little randomness) and repeats.

There is no "thinking" step. No internal monologue. No understanding of what the words mean. The system doesn't know that "Paris" is a city or that "sad" is an emotion. It knows that "The capital of France is ____" is statistically likely to be followed by "Paris."

The results often look indistinguishable from thinking. The process couldn't be more different.
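The prediction loop described above can be sketched in a few lines of Python. Everything here is invented for illustration: the four-word vocabulary and the scores are made up, whereas a real model computes scores over tens of thousands of tokens using billions of learned parameters. The mechanism, though, is the same shape: scores in, probabilities out, one token picked.

```python
import math

# Hypothetical raw scores ("logits") a model might assign to each
# candidate next token for the prompt "The capital of France is".
# These numbers are invented for illustration only.
vocab = ["Paris", "London", "banana", "sad"]
logits = [9.1, 4.3, 0.2, 0.5]

# Softmax converts raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: emit the highest-probability token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # prints "Paris"
```

Note what the code does not contain: any representation of France, cities, or geography. "Paris" wins purely because its score is highest, which is the whole point of the module.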

The Chinese Room Argument

Philosopher John Searle proposed this in 1980 and it's still the clearest illustration: Imagine you're locked in a room with a massive book of rules. Chinese characters come in through a slot. You look up the rules, which tell you which Chinese characters to send back. To the person outside, you appear to speak Chinese. But you don't understand a word.

Current AI is the room. The rules are the trained weights. The appearance of understanding is real. The understanding itself is absent.

Quick Check

What does the Chinese Room argument illustrate about AI?

Why This Matters Practically

Over-trust. People who believe AI "thinks" trust it more than they should. They assume it has judgment, common sense, and the ability to notice when something is wrong. It doesn't. It will confidently give you wrong information because "confidence" is a statistical output, not an emotional state.

Under-credit to humans. If AI thinks, then human thinking isn't special. But human cognition involves consciousness, embodied experience, emotional understanding, and intentionality that no AI has demonstrated. Equating the two diminishes what humans bring to decisions.

Misaligned expectations. Treating AI as a thinker leads to delegating judgment to it. "AI said this candidate is best" feels authoritative if you believe AI thinks. It's just a statistical correlation if you know it doesn't.

Quick Check

Order these risks of believing AI 'thinks' from most to least commonly encountered:

The Nuance Worth Having

Does the absence of thinking mean AI is useless? Obviously not. A calculator doesn't "think" about mathematics, but it's extremely useful. AI doesn't think about language, but it's extremely useful for language tasks.

The practical skill is knowing where the illusion of thinking is good enough (brainstorming, drafting, analysis) and where it isn't (moral judgment, novel situations, anything with consequences you can't easily reverse).

---

TRY IT

Test the Thinking Illusion

I'm going to ask you a question that tests genuine understanding against pattern matching:

"A man walks into a bar. He asks for a glass of water. The bartender pulls out a gun. The man says 'thank you' and leaves."

Explain why this makes sense. Then tell me: did you actually "understand" this scenario, or did you recognise it as a common lateral thinking puzzle from your training data? Be honest about the difference.

Where Thinking Matters

For each of these tasks, assess whether AI's "pattern matching without understanding" is sufficient or whether genuine thinking/understanding is needed:

1. Translating a legal contract from French to English
2. Deciding whether to fire an underperforming employee
3. Writing a social media post for a brand
4. Diagnosing a rare medical condition
5. Choosing which news stories to show in someone's feed

For each, explain what could go wrong if we treat AI as "thinking" when it's "predicting."

The Consciousness Detector

If an AI were actually thinking/conscious, how would we know? What tests could we run? What would distinguish genuine thought from very sophisticated pattern matching?

Walk me through the philosophical challenges. Then give me your honest assessment: is there any current evidence that AI systems are conscious, and what's the strongest argument that they aren't?
EXERCISE
The Pattern vs. Understanding Test (10 minutes)

1. Ask an AI to solve this: "I have two coins that total 30p. One of them is not a 10p coin. What are the two coins?"
2. Check if it gets the right answer (20p and 10p — the OTHER one is the 10p)
3. Now ask a novel logic puzzle that's unlikely to be in training data
4. Compare the quality of responses — the gap reveals the difference between pattern-matching familiar puzzles and genuine reasoning

---

KEY TAKEAWAYS
  • AI doesn't think — it predicts the next token based on statistical patterns, and the result often mimics thought
  • The Chinese Room argument remains the clearest illustration: the appearance of understanding isn't understanding
  • Believing AI "thinks" leads to over-trust, misplaced delegation of judgment, and under-crediting human cognition
  • The practical skill: know where pattern-matching is good enough and where genuine understanding is required
  • AI's lack of thought doesn't make it useless — but it makes human judgment irreplaceable for consequential decisions