This is a link post for https://criticalrationalism.substack.com/p/2-why-chatgpt-isnt-a-step-towards
Last month, OpenAI introduced ChatGPT to the world. The chatbot immediately took the Internet by storm, crossing one million users within five days of its launch as a research preview.
There’s a ton of excitement around its impressive capabilities. Scientists, journalists, writers, programmers, teachers, students, people-working-jobs-that-it-threatens, and of course, AI researchers: everyone’s been talking about the new chatbot.
But there’s a fundamental difference between a tool like ChatGPT and the much-maligned artificial general intelligence (AGI), a difference that fearmongers fail to appreciate. Something like ChatGPT does not get us any closer to attaining general intelligence.
Though the transistor counts, memory, and speed of our computers have grown exponentially over the years, AGI won’t emerge as a product of this ever-increasing complexity.
As David Deutsch wrote in his 2012 article for Aeon magazine,
“An AGI is qualitatively, not quantitatively, different from all other computer programs… Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.”
The breakthrough needed to develop artificially intelligent entities will come not from computer science but from philosophy.
The problem is that we don’t understand how creativity works. We humans have it. The ability to create new explanations of what is out there and how it works is the fundamental capacity that separates us from animals. But we haven’t yet been able to explain how that capacity functions.
None of our programs has ever created a new explanation, something entirely different from what it was coded to do[1]. They can generate new content by interpolating between points on a dense manifold of training data, but that content will always be constrained to existing knowledge.
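To make that constraint concrete, here is a minimal sketch in Python. It assumes a toy three-dimensional “embedding space” with made-up vectors (real models learn vectors with hundreds or thousands of dimensions); the point is only that blending known points always decodes back to something already known:

```python
import numpy as np

# Toy "embedding space": three made-up vectors standing in for concepts
# the model saw during training. The values are purely illustrative.
embeddings = {
    "cat":  np.array([0.9, 0.1, 0.0]),
    "dog":  np.array([0.8, 0.3, 0.1]),
    "boat": np.array([0.0, 0.2, 0.9]),
}

def interpolate(a, b, t):
    """Linearly blend two points in the embedding space."""
    return (1 - t) * a + t * b

def nearest(point):
    """Decode a point back to the closest known concept."""
    return min(embeddings, key=lambda w: np.linalg.norm(embeddings[w] - point))

# Every blend of "cat" and "boat" still lands on an existing concept;
# interpolation can remix the training data but never step outside it.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    point = interpolate(embeddings["cat"], embeddings["boat"], t)
    print(f"t={t:.2f} -> {nearest(point)}")
```

This is only an analogy for the far richer latent spaces of real transformers, but the constraint it illustrates is the same: interpolation can remix the training data, not reach beyond it.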
There’s the crux of it. Until we find a way to explain creativity in people, we’re only going to be able to create programs that obey their programmers. People admire ChatGPT as an AI, but they should really admire the programmers who wrote it.
An AGI, by contrast, will be able to disobey its programmers and create novel theories in its mind. It will be able to criticize ideas rationally and creatively.
“But,” say the naysayers, “certain AI programs (including ChatGPT) can learn, can’t they?”
Not so. In spite of educational institutions upholding this belief, learning isn’t a product of instruction. Knowledge creation in the minds of people happens through an active process of conjecture and criticism. We don’t absorb ideas, nor do we copy them into our brains. Rather, we recreate knowledge through trial and the elimination of error, as Karl Popper’s epistemology, critical rationalism, explains more deeply than any other philosophy of learning.
If a system has general intelligence, it will be able to take control over its own learning.
This obviates the specific concern about rogue AGI. AGIs could indeed be very dangerous. But humans are too. Since knowledge, scientific and moral, is objective, and an AGI would have the power to think critically about moral ideas and question its own decisions, it will be able to converge upon moral truth along with us. It won’t be any more dangerous, or any more wonderful, than we humans are.
Footnotes
Credit for the ideas in this piece goes to David Deutsch; any errors are my own. In particular, two of his essays, “Creative Blocks” and “Beyond Reward and Punishment”, were a heavy source of inspiration for this piece.
In addition, Brett Hall’s excellent critique of Nick Bostrom’s Superintelligence was of great help and also deserves a mention.
[1] This is an oversimplification for the sake of readability. Transformers can create new content by interpolating between points on a dense manifold of training data. But arguably it will always only be a “mish-mash” of the existing data. You can read more about this here.
Thanks to Logan Chipkin and Moritz Wallawitsch for their comments and edits on this piece.