In part 1 of this series, I reflected on how the hype surrounding AI mirrors past technological waves, such as the rise and fall of Expert Systems in the 1980s.
Back then, the promise of AI transforming industries and solving complex problems was met with enthusiasm — and eventual disillusionment. Today, we find ourselves in a similar cycle, but the stakes are higher, and the misunderstandings are more widespread. In this article, let’s dig deeper into what has changed since then and explore whether we’re truly learning from history or simply repeating it.
In Part 1, we explored how public fascination with tools like ChatGPT has led to unrealistic expectations about AI’s capabilities. This disconnect stems from the way LLMs function, which often gives the illusion of intelligence while lacking its true essence. To understand why this distinction matters, we need to revisit what makes LLMs so captivating—and so misunderstood.
Pattern Predictors, Not Thinkers
In the first article, I highlighted how people marveled at ChatGPT’s ability to generate coherent responses. However, this “intelligence” is no more than advanced pattern prediction. LLMs don’t “think”; they predict the next word (more precisely, the next token) based on statistical patterns learned from their training data. This is a far cry from the intentional, goal-driven reasoning we associate with human intelligence.
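To make that concrete, here is a minimal sketch of next-token prediction, assuming the openly available GPT-2 model and the Hugging Face transformers library (any small causal language model would illustrate the same point). Given a prompt, the model simply scores every token in its vocabulary by how likely it is to come next; generating a whole answer is nothing more than repeating this step.

```python
# Minimal sketch: what "predicting the next word" actually looks like.
# Assumes the Hugging Face transformers library and the small GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The fire was extremely"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Turn the scores for the *next* position into probabilities
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, 5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```

Nothing in this loop asks whether a continuation is true, helpful, or kind; the only criterion is statistical likelihood, and that is exactly the limitation the rest of this article circles around.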
The Role of Expectations
As discussed earlier, inflated expectations have always plagued AI advancements, from Expert Systems in the 1980s to today’s generative models. Ordinary people, unfamiliar with the nuances of AI, interpret fluency in language as evidence of deep understanding. But LLMs don’t understand—they merely mimic understanding, drawing from pre-existing data without grasping meaning.
Absence of Intentionality and Goals
True intelligence requires intentionality: the ability to form goals, pursue them, and adapt based on experience. When ChatGPT or a similar model provides an answer, it is not because it “wants” to help; it is simply producing the statistically likely continuation of the prompt. This lack of purpose underscores a fundamental limitation that separates these systems from human cognition.
No Understanding of Meaning
In Part 1, I reflected on Marvin Minsky’s insight that intelligence cannot be modeled without first understanding its nature. LLMs embody this limitation — they process symbols without understanding their meaning. For instance, they know that “fire” often appears with words like “hot” or “burn,” but they don’t comprehend the concept of fire as a physical or cultural phenomenon.
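A toy example makes the gap between association and understanding easier to see. The sketch below, using an invented four-sentence corpus, counts which words occur near “fire”; at a vastly larger scale, this kind of co-occurrence statistic is roughly the signal a language model absorbs. It will happily learn that “hot” and “burn” are likely neighbours of “fire” without any notion of heat, danger, or the physical world.

```python
# Toy sketch: co-occurrence statistics, not understanding.
# The corpus is invented purely for illustration.
from collections import Counter

corpus = [
    "the fire was hot",
    "do not touch the fire or it will burn you",
    "the camp fire kept us warm all night",
    "water puts out fire",
]

neighbours = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        if word == "fire":
            # tally up to two words on either side of each occurrence of "fire"
            neighbours.update(words[max(0, i - 2):i] + words[i + 1:i + 3])

print(neighbours.most_common(5))  # words like "hot" and "burn" rank highly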
Dependence on Training Data
We’ve seen how AI mirrors the data it’s trained on, for better or worse. In the same way that the limitations of Expert Systems were tied to hardware constraints in the 1980s, today’s LLMs are bound by the biases, inaccuracies, and gaps in their training datasets. Unlike humans, who can question and adapt, LLMs lack critical thinking and moral reasoning.
No Self-Awareness or Emotion
In Part 1, I questioned whether replicating human intelligence is desirable, given our flaws. LLMs, devoid of self-awareness or emotion, avoid some of our pitfalls but also miss out on the richness that emotions and consciousness bring to decision-making and creativity.
Human intelligence isn’t just about processing information — it’s about understanding, adapting, and creating meaning. In this sense, LLMs are remarkable tools, but they are not intelligent beings. They reflect the strengths and weaknesses of their programming and data, and their “creativity” is limited to recombining existing information in new ways.
To build something truly intelligent, as Marvin Minsky suggested, we must first understand our own minds. And as I posed in the first article: given the state of humanity — with our wars, hunger, and environmental destruction — are we even ready to replicate intelligence?
The limitations of LLMs remind us that AI is still a long way from true intelligence. For now, they remain what they were always meant to be: tools for augmenting human capabilities, not replacing them.
The story of Artificial Intelligence has always been one of both potential and caution. As we marvel at the capabilities of tools like LLMs, we must also remain grounded in their limitations. They are powerful allies but far from the sentient beings we often imagine them to be.
In Part 1, I reflected on how our inflated expectations can lead to disillusionment. In this article, we’ve explored why LLMs, while extraordinary, cannot yet fulfill the dream of true intelligence. Perhaps that’s a good thing. Before we strive to replicate our minds in machines, let us first strive to understand, and improve, our own intelligence.
AI has the potential to amplify our best qualities, but it can just as easily reflect our worst. If we proceed thoughtfully, we might find that AI doesn’t need to replace our intelligence to revolutionize our world. It simply needs to help us use the intelligence we already have, more wisely and compassionately.
Maybe the real question isn’t whether AI is failing, but whether we’re ready to succeed alongside it.
See you in the next article!