The story of how the human mind evolved is almost too good to be true. Unlike most evolutionary changes, this one was not gradual. It was a jump from a cold, automated machine to a sentient, creative mind, and it was, therefore, a momentous change in the history of the universe. For the first time, a part of the world could understand itself – through people. They were the first beings to experience what they saw when they opened their eyes. In front of them was a bleak, hostile world. Yet the very genetic mutation that allowed them to become aware of their surroundings also enabled them to explain the world, and, therefore, to control and improve it.
From this crucial moment in our evolutionary history onward, progress chugged along. Then the Enlightenment sped it up significantly. People have enjoyed rapid progress and technological innovation ever since. And even though hundreds of thousands of years have passed since this seemingly miraculous mutation occurred, we can still trace today’s innovations and abilities back to it. This ability to be intelligent and to create new knowledge is also known as creativity.
One of our most impactful innovations is the computer. Ever since its invention, people have tried to write intelligent programs. When Alan Turing, the father of modern computer science, wrote about machine intelligence, he meant something different from what “AI” or “artificial intelligence” means today: a program with human intelligence. Since then, the term has come to refer to specific types of software with a certain level of sophistication, particularly those that do not need to be “explicitly” programmed, such as self-driving cars, chess engines, and text-prediction systems. Thus the term “AGI” - “artificial general intelligence” - had to be introduced to refer to the original concept of human intelligence. People have put forward other terms, such as “strong AI” and “real intelligence,” but I use “AGI” as a stand-in for all of them because they are all equivalent. Likewise, when I say “artificial,” I do not mean that it is not real intelligence; it is intelligence, only running on a computer. Hardware differences aside, it is still the real deal.
Current AI research is narrow: AI works only for specific applications and cannot have a mind of its own. It cannot learn anything its programmers have not designed it to “learn.” For these reasons, I shall refer to this kind of AI as “narrow AI.” “AGI,” by contrast, refers to human intelligence - or to an instance of that same creative program running on a computer other than the human brain. It is this creative program that makes people what they are. The project of AGI attempts to replicate it, along with all the other cognitive abilities of people that creativity enables, such as consciousness and free will.
Developing an AGI has been a goal for decades - and it would be a momentous achievement. While we have made substantial progress in narrow-AI research, we have made none toward AGI. Why is that? Building AGI has been the toughest problem to crack since the days when Alan Turing wrote his “Computing Machinery and Intelligence,” and it has attracted some of the brightest minds in computer science. Yet they have all failed to build AGI. It is not for lack of trying or thinking; in this book, I argue that it is because they have so far had the wrong philosophy.
There is a widespread prejudice that anything to do with philosophy is hand-wavy, pointless navel-gazing. It is this prejudice that has prevented progress in AGI research. I worry that many software engineers share it, because most people in most fields do. There is indeed much bad philosophy out there, but those who fear that philosophy is generally a waste of time need not worry: there exist real philosophical problems that require solving. How to build an AGI is one of them, and it is soluble.
Philosophy is crucial: it tells us how to think. It determines our every endeavor and our chances for success. Everyone has a philosophy because each of us has a way of going about solving problems. This way is one’s philosophy.
Software engineers should not dismiss philosophy, for the simple reason that they routinely contribute to the philosophical field of epistemology without realizing it: they discover rules governing the improvement of the structure of programs. Our understanding of what makes some implementations of a program better than others has improved dramatically since the physicist and father of quantum computation, David Deutsch, made important epistemological discoveries. This book is an attempt to apply those discoveries to software engineering and, in turn, to apply software-engineering principles to progress in all domains. For the search for good explanations is what drives human progress and flourishing generally, and we can state it as a software-engineering principle.
There is an objective difference between a good program and a bad one, between a program that has reach and one that does not, and between a program that solves a problem well and one that solves it poorly. Progress in software engineering always depends on a single activity: writing good implementations in response to problems. How to write good implementations cannot be known without a good epistemology. It is a programmer’s philosophy that tells him how to write his programs. All software engineers are philosophers, and all people are software engineers, whether they realize this or not.
Epistemology is the study of knowledge. It provides answers to questions such as: how is knowledge created? How does it grow? These questions are crucial because, by definition, an AGI is a program that can create knowledge of any kind: it is a universal explainer. It can explain anything people can; indeed, it is a person. The study of epistemology is the study of AGI. Therefore, any intelligence research not directed toward epistemology is futile.
We know of only one approach to building AGI. Despite appearances, the industry is not pursuing it, because the industry is the victim of bad philosophy. A programmer with the wrong epistemology will eventually fall prey to it, but armed with a good one, he is unstoppable. Thus, I guess that programmers will continue to be significant contributors to the human project - only more so after the invention of AGI.
Where does current AGI research go wrong, how might one build AGI, and how can we tell whether we have succeeded? Are our computers capable of running AGI? Is it safe, or is it instead man’s last invention? Once built, what will the future hold - will it merely make life a little more convenient, or will it take us to the stars?
Software is not just a tool for solving problems: it shapes the universe and exerts causal power on the physical world around us. Many philosophical problems, including how to build AGI, will not be solved until software engineers adopt the requisite philosophy, and until philosophers realize the importance of software engineering. For philosophy and software engineering are two sides of the same coin, and I believe this realization will yield fruit in our attempts to build this amazing thing…