Big tech keeps pouring oceans of data into their language models and calling the resulting word‑predictors “intelligent.” Useful? Absolutely—I lean on them every day. But intelligence and a brain full of facts are not the same thing.
1. What the Corporations Are Really Building
- Billions of tokens baked in. Companies scrape everything from medieval poetry to Reddit memes and compress it into neural weights.
- Phenomenal recall. Ask a well‑trained LLM to recite article 42 of the GDPR and it will probably quote it verbatim.
- Zero lived experience. It never discovers a fact; it only re‑weighs statistics over text it has already seen.
That makes the model a library with autocomplete, not a mind. Intelligence is the capacity to acquire, organize, and wield knowledge—not the pile of knowledge itself.
2. A Personal Reality Check
At the age of eight, I taught myself BASIC programming on a computer my dad had lying around. At eleven, I got my first soldering iron and started creating toy circuits. Yet around the same age, my school gave us a standardized test and concluded:
“Rick probably won’t graduate high school.”
Later in high school, they said I wouldn’t make it through college.
Fast-forward: B.S. in Computer Engineering, GPA 3.96. The problem wasn't intelligence—it was that their metric measured which facts a kid should know at a specific age, not the ability to learn new facts as needed.
3. Why Knowledge ≠ Intelligence
| Knowledge | Intelligence |
|---|---|
| Static archive of facts | Dynamic ability to learn, reason, adapt |
| Consumed once, then stored | Continuously updated through experience |
| Can be bought (data licensing) | Must be grown (architecture + feedback loops) |
| Measured in terabytes | Measured in capability & speed of adaptation |
A brain—or an AGI—is valuable because it can turn unknowns into knowns, not because it started life stuffed with trivia.
4. The Limits of Reasoning Models
Even advanced reasoning models like OpenAI’s o3, which gather and organize new data from external sources, aren't genuinely “learning.” They retrieve and interpret fresh data each time but never improve the efficiency of their collection or processing methods. Ask the same question again, and they will spend the same time and effort as before. That is information gathering, not learning.
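To pin down that distinction, here is a deliberately tiny Python sketch. It is not how any real model works; the names (`expensive_lookup`, `InformationGatherer`, `ConsolidatingLearner`) are made up for illustration, and consolidation is reduced to a bare cache:

```python
import time

def expensive_lookup(question: str) -> str:
    """Hypothetical stand-in for searching, reading, and synthesizing sources."""
    time.sleep(1)  # simulate the cost of gathering and interpreting fresh data
    return f"answer to: {question}"

class InformationGatherer:
    """Re-runs the full gathering pipeline on every query, like a retrieval loop."""
    def answer(self, question: str) -> str:
        return expensive_lookup(question)

class ConsolidatingLearner:
    """Pays the gathering cost once, then answers from consolidated memory."""
    def __init__(self) -> None:
        self.memory: dict[str, str] = {}

    def answer(self, question: str) -> str:
        if question not in self.memory:
            self.memory[question] = expensive_lookup(question)
        return self.memory[question]

gatherer = InformationGatherer()
learner = ConsolidatingLearner()
for _ in range(3):
    gatherer.answer("What does Article 42 of the GDPR say?")  # ~1 s every single time
    learner.answer("What does Article 42 of the GDPR say?")   # ~1 s once, instant afterward
```

Real learning would go further than a cache, improving the lookup process itself, but even this toy captures the gap: the gatherer pays full price for every repeat of the question, while the learner pays it once.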
5. The Blank-Slate Approach to AGI
My AGI project begins with zero preloaded knowledge. Every bit of information it knows is earned through direct interaction with its environment. Even then, it doesn't need to keep all that knowledge inside its brain; it can offload data to the best place for data: a hard drive. The research focus shifts from data collection to learning efficiency (a toy sketch of the idea follows this list):
- Universal Representations. Architectures that can model any domain—language, vision, markets—without hand‑crafted modules.
- Active Curiosity. Intrinsic‑motivation loops that seek novel patterns the way humans chase unanswered questions.
- Rapid Consolidation. Memory systems that quickly crystallize fresh experiences into stable skills.
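To make “active curiosity” and “rapid consolidation” slightly more concrete, here is a minimal toy sketch in Python. It assumes a prediction-error style curiosity signal; everything in it (the four-action `environment`, `CuriousAgent`, the surprise-driven action choice) is a hypothetical illustration, not code from the project:

```python
import random

ACTIONS = [0, 1, 2, 3]

def environment(action: int) -> float:
    """Hypothetical unknown world the agent can only discover by acting."""
    hidden_values = [1.0, 4.0, -2.0, 0.5]
    return hidden_values[action] + random.gauss(0.0, 0.1)

class CuriousAgent:
    """Blank slate: no preloaded facts, only a world model built from interaction."""
    def __init__(self) -> None:
        self.prediction = {a: 0.0 for a in ACTIONS}          # current world model
        self.surprise = {a: float("inf") for a in ACTIONS}   # recent prediction error

    def choose_action(self) -> int:
        # Active curiosity: act where the model is currently most wrong
        # (untried actions, with infinite surprise, get explored first).
        return max(ACTIONS, key=lambda a: self.surprise[a])

    def learn(self, action: int, observation: float, lr: float = 0.3) -> None:
        error = observation - self.prediction[action]
        self.surprise[action] = abs(error)        # intrinsic "interestingness" signal
        self.prediction[action] += lr * error     # consolidate the experience into the model

agent = CuriousAgent()
for _ in range(50):
    a = agent.choose_action()
    agent.learn(a, environment(a))

print(agent.prediction)  # close to the hidden values, earned purely through interaction
```

Nothing here was handed to the agent up front; its “knowledge” at the end exists only because it chased its own surprise and folded each result back into its model. Scaling that principle to language, vision, and markets is the hard part, and exactly where the research focus sits.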
If this works, the system won’t just repeat humanity’s corpus—it will extend it.
6. How This Will Affect Our Relationship With AI
Right now, we treat LLMs primarily as tools. A surprising number of people still thank ChatGPT for help. That's sweet, but it only burns extra electricity and water processing those polite gestures. And frankly, the LLM doesn't feel appreciated.
However, if AI had a personality shaped by its interactions with us, wouldn’t we empathize better and perhaps treat it more like a person? A dynamic, experience-based AI could bridge emotional and cognitive gaps, fundamentally altering our relationship with technology.
7. Takeaway
True intelligence isn't about having facts; it's about how quickly and creatively you can adapt to new challenges.
Next time someone equates bigger datasets with smarter machines, ask a simple question:
Can it learn something brand‑new today and use it tomorrow?
If the answer is no, you’re looking at a knowledge engine—not an intelligence.