ELIZA
Most people discovering AI chatbots today have never heard of ELIZA, but this modest 1960s program is arguably where the whole story begins. Created by MIT computer scientist Joseph Weizenbaum in the mid-1960s, ELIZA is widely considered the first true chatbot because it could sustain typed dialogue in a way that felt surprisingly human for its time.[nation+1]
What Was ELIZA?
ELIZA ran on mainframe computers at the MIT Artificial Intelligence Laboratory between roughly 1964 and 1967. Weizenbaum named it after Eliza Doolittle from “Pygmalion,” emphasizing how language alone can change how a character (or in this case, a program) is perceived.[wikipedia+2]
Under the hood, ELIZA was not “intelligent” in the modern sense. It did not understand the meaning of words or maintain a rich model of the conversation. Instead, it performed clever pattern matching on user input and applied scripted transformation rules to produce replies that looked relevant on the surface.[onlim+2]
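The mechanism can be sketched in a few lines of Python. The rules below are illustrative stand-ins, not Weizenbaum's original script, but they show the core loop: match a keyword pattern, capture a fragment of the user's input, and slot it into a canned response template.

```python
import re
import random

# Illustrative ELIZA-style rules: (pattern over user input, response templates).
# These are simplified examples, not the original DOCTOR rule set.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
]

# Fallbacks keep the conversation going when no keyword matches.
DEFAULTS = ["Please go on.", "Can you elaborate on that?"]

def respond(user_input: str) -> str:
    """Return a reply by filling the first matching rule's template."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"
```

Note that nothing here models meaning: the program never knows what a “vacation” is. The apparent relevance of the reply comes entirely from recycling the user's own words.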
The Famous “DOCTOR” Script
The most iconic version of ELIZA used a script called DOCTOR, which mimicked a nondirective psychotherapist in the style of Carl Rogers. This kind of therapist responds mostly with open-ended reflections and questions, which maps neatly onto a rule-based system: turn the user’s statements back at them, tweak a few words, and ask them to elaborate.[codecademy+2]
Because of this design, ELIZA could often keep users talking for a long time, even though it had no deeper understanding of their problems or emotions. For many people interacting with a computer conversationally for the first time, the experience felt uncanny and personal.[liacademy+1]
The “ELIZA Effect”
User reactions to ELIZA led to what is now called the “ELIZA effect”: the human tendency to attribute real understanding, emotions, or intelligence to systems that simply manipulate symbols or text in convincing ways. Some users reportedly asked for privacy while “talking” to ELIZA, or assumed it must have access to their personal files, despite the program’s very limited capabilities.[cbc+2]
This effect still matters today. Modern large language models are far more powerful than ELIZA, but the underlying risk is similar: people can easily overestimate how much a system “knows” or “cares” based on fluent language alone.[cbc+1]
Why ELIZA Still Matters
Even though ELIZA’s code is primitive by modern standards, its impact is huge:
- It established the basic idea of a text-based conversational agent, inspiring later systems like PARRY, ALICE, and eventually today’s assistants.[onlim+1]
- It became a key teaching example in natural language processing, cognitive science, and human–computer interaction.[codecademy+1]
- It sparked early ethical debates about whether conversational software should be used in sensitive domains like therapy, given that users might mistake simulations for real understanding.[nation+1]
In other words, ELIZA is not just a historical curiosity; it’s a mirror that shows how easily humans project minds into machines, and a reminder that impressive conversation does not automatically equal genuine comprehension.[liacademy+1]
What do you think: are we still falling for a more sophisticated version of the ELIZA effect today?
Sources
- Nation AI: “ELIZA: The Story of the Pioneering AI Chatbot”
- Wikipedia: “ELIZA”
- CBC Radio: “Does ELIZA, the first chatbot created 60 years ago, hold lessons for modern AI?”
- Onlim: “The History of Chatbots – From ELIZA to ChatGPT”
- Codecademy: “History of Chatbots”
- “The Story of ELIZA: The AI That Fooled the World”
TL;DR: Key Learnings from “The Story of ELIZA”
About ELIZA
- Developed by Joseph Weizenbaum at MIT in the mid-1960s and published in 1966
- World’s first chatbot and early natural language processing program
- Used simple pattern matching and substitution rules rather than true understanding
- Most famous script: “DOCTOR” - simulated a Rogerian psychotherapist
How It Worked
- Recognized keywords in user input
- Reflected statements back as questions
- Encouraged users to continue talking without providing genuine insights
- Example: “I feel sad” → “Why do you feel sad?”
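The “reflect statements back as questions” step relies on a small substitution table that swaps first- and second-person words before echoing the input. The word list below is a simplified illustration, not the original DOCTOR transformation table:

```python
# Simplified pronoun-reflection table (illustrative, not the original DOCTOR table).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def echo_feeling(statement: str) -> str:
    """Turn 'I feel X' into 'Why do you feel X?'; otherwise just prompt for more."""
    prefix = "i feel "
    if statement.lower().startswith(prefix):
        return f"Why do you feel {reflect(statement[len(prefix):])}?"
    return "Please go on."

print(echo_feeling("I feel sad"))  # -> "Why do you feel sad?"
```

Without reflection, echoing “you are ignoring my feelings” would produce a garbled reply; swapping “my” to “your” is what makes the mirrored question grammatical.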
The ELIZA Effect
A psychological phenomenon where people attribute human-like understanding to computers based on superficial behavior, even when they know it’s a machine.
Why it happens:
- Human tendency to anthropomorphize (assign human qualities to non-human entities)
- Surface-level interactions create illusion of deeper understanding
- Desire for connection and empathy makes people suspend disbelief
Limitations
- No genuine comprehension of context or meaning
- Pre-programmed responses only
- Failed with complex or ambiguous statements
- Was essentially a “clever trick” or mirror
Legacy & Impact
- Demonstrated machines could engage humans in dialogue
- Paved the way for modern chatbots and virtual assistants (Siri, Alexa, ChatGPT)
- Highlighted ethical and psychological considerations in human-computer interaction
- The ELIZA effect still applies to modern AI systems today
Modern Relevance
Understanding the ELIZA effect remains important: it helps people avoid overestimating AI capabilities and fosters informed, effective use of the technology.

