The headline of this article was inspired by an excellent book by Erik Larson I have just finished reading called The Myth of Artificial Intelligence – Why Computers Can’t Think the Way We Do. In this book, the author argues that despite all the current hype about artificial intelligence, or AI, computers will never be able to think like a human or, for that matter, any animal. He points out that the brain is the product of billions of years of evolution, and not only are we a long way from understanding how the brain works, even if we did, we would be incapable of building a replica.
As an example, let’s look at the earthworm, which we can all agree has a far simpler brain than ours. But the earthworm’s brain is perfectly adequate to ensure its propagation as a species. Specifically, it can extract nutrients from the soil to keep living and, just as important, dig deeper into the soil to elude predators such as robins. It does this using a primitive set of eyes that sense only light and darkness. In fact, that is probably how intelligence evolved in the first place: as the ability to move toward food and away from predators.
Climbing the evolutionary ladder, early humans probably spent most of their time trying to catch prey and evade predators. However, they eventually discovered that they had a skill that set them apart from other species: the ability to think abstractly. Although other species create tools and communicate with sounds, our ancestors were able to build advanced tools such as spears and axes, tame fire and, most important, develop complex languages. Initially, language was probably used primarily to indicate predators and prey (“Hey, Og, there is a herd of deer in the next valley, but watch out for the leopard hiding in the trees as you go through the forest!”), but it soon got to the point where complex ideas and creation myths could be communicated.
The next stage of human abstraction was the invention of mathematics. At first, this involved the ability to count objects and utilize shapes (e.g., in building houses and temples), but it gradually became more abstract. In conjunction with the creation of abstract mathematics came the further development of technology and the observation that the world around us is full of unexplained periodicities such as tides and the movement of the Sun, Moon, planets, and other stars. This led to the realization that mathematics and nature were intimately related, culminating in the work of Johannes Kepler (1571-1630), who was able to show that planets move in ellipses around the Sun, and finally Isaac Newton (1643-1727), who proposed the theory of gravitation to explain the mechanism that made this possible.
Newton’s pioneering invention of both his physical laws and the mathematical tool called calculus (co-developed by Gottfried Wilhelm Leibniz, 1646-1716) led to a flowering of scientific and mathematical discovery. Amazingly, every time scientists observed some new phenomenon, a mathematical theory was developed that explained this phenomenon. (We still don’t fully understand why this is, and I refer you to a paper by Eugene Wigner (1902-1995) entitled The Unreasonable Effectiveness of Mathematics in the Natural Sciences.) Probably the most important example of this since Newton is the work done by Michael Faraday (1791-1867), an English experimental physicist, and by James Clerk Maxwell (1831-1879), a Scottish mathematical physicist. Faraday developed devices for creating current electricity, which led eventually to all modern electrical products, and Maxwell combined the theory of electricity and magnetism into a set of four concise equations now known as Maxwell’s equations.
Even more incredible, Maxwell showed that by combining his equations he could predict the existence of electromagnetic waves, all propagating at a single constant speed: the speed of light. These waves include visible light, X-rays, radio waves, and so on. This led to the development of all modern communication devices and inspired Albert Einstein (1879-1955) to develop the theory of relativity, which, as we know, had both positive and negative consequences for our species.
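For readers who want to see the mathematics, here is a sketch, in modern SI notation (which Maxwell himself did not use), of how the wave prediction falls out of the vacuum form of the equations:

```latex
% Maxwell's equations in a vacuum (no charges or currents), SI units:
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}

% Taking the curl of the third equation, substituting the fourth, and using
% \nabla \times (\nabla \times \mathbf{E})
%   = \nabla(\nabla \cdot \mathbf{E}) - \nabla^2 \mathbf{E} with \nabla \cdot \mathbf{E} = 0:
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}

% This is a wave equation, and the wave speed that appears in it is
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \mathrm{m/s}
```

The constants μ₀ and ε₀ were measured in tabletop electrical experiments, so finding the speed of light hiding inside them was a genuine shock.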
So, what has my digression into the history of science got to do with artificial intelligence? Two things. First, these were all constructs of the human mind, and involve real intelligence at its highest level. Second, AI is a direct outgrowth of the work of Faraday and Maxwell, which led to the digital computer and the Internet. Relatively recent human inventions such as the solid-state transistor also played important roles. On the mathematical side, the development of advanced statistical theory led directly to most AI algorithms, again the product of human minds. My point is that AI is a child of mathematics and science, and could not exist without the ideas and devices that have been developed by very smart human beings.
Let’s get back to AI and ask why we are starting to marvel at its “intelligence”. One thing that AI methods can do now is defeat humans in games such as chess. Chess has a set of very rigid rules that are quite easy to learn. What makes it challenging is that the number of possible games is astronomically large. Computers and AI algorithms excel at this, evaluating vast numbers of moves and finding an optimal strategy to win a game. One of the more recent types of AI processes is called reinforcement learning, in which the computer is told the rules of a game and then goes on to play billions of games against itself, thus evolving a winning strategy. But if you asked a computer that had just beaten a Russian grandmaster at chess to suggest a movie and a restaurant for Saturday night, it would be stumped. In fact, as Larson points out, the “smarter” an AI algorithm gets in its particular specialty, the “dumber” it gets at everything outside that specialty.
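To make the reinforcement-learning idea concrete, here is a minimal sketch using nothing beyond the Python standard library. Instead of chess it uses a far smaller game of the Nim family (players alternately take one to three sticks, and whoever takes the last stick wins); the game, the function names, and all the parameters are my illustrative choices, not taken from any particular AI system:

```python
import random

def train_nim(n_start=10, episodes=5000, alpha=0.5, seed=0):
    """Tabular Q-learning for a stick game: players alternately take 1-3
    sticks, and whoever takes the last stick wins.  Q[(s, a)] estimates the
    value, for the player about to move, of taking a sticks when s remain."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(1, n_start + 1)
         for a in (1, 2, 3) if a <= s}
    for _ in range(episodes):
        sticks = n_start
        while sticks > 0:
            # During training the "players" simply explore at random.
            a = rng.choice([x for x in (1, 2, 3) if x <= sticks])
            nxt = sticks - a
            if nxt == 0:
                target = 1.0  # this move takes the last stick: a win
            else:
                # The opponent moves next, so our value is minus their best.
                target = -max(Q[(nxt, b)] for b in (1, 2, 3) if b <= nxt)
            # Nudge the table entry toward the target value.
            Q[(sticks, a)] += alpha * (target - Q[(sticks, a)])
            sticks = nxt
    return Q

def best_move(Q, sticks):
    """Greedy move according to the learned table."""
    return max((a for a in (1, 2, 3) if a <= sticks),
               key=lambda a: Q[(sticks, a)])

Q = train_nim()
```

Optimal play for this game is known (take sticks mod 4 whenever that is non-zero), which makes it easy to verify that the learned table really has evolved a winning strategy. The point stands, though: this table knows nothing at all outside its one tiny game.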
However, as Laurie Weston pointed out in her excellent overview AI: Where are we and where are we going?, there is a lot more to AI than just the algorithms. Current interest in AI stems from the amount of data available to us today and the speed with which AI procedures sort through all this information. In fact, it is quite possible that if you fed an AI algorithm enough data, it could come up with Maxwell’s equations on its own. But the problem with this argument is twofold. First, AI algorithms wouldn’t exist if it were not for the initial human development of the equations and the technology that they generated. Second, and more important, if an AI algorithm developed Maxwell’s equations, it would not have the intelligence to know what to do with them. This would take real intelligence, such as the human who developed the AI algorithm and fed it all the data in the first place.
AI processes are often seen as unknowable “black boxes”, like the brain. But the workings of an AI algorithm are understood. Although we do not fully understand the brain, we do know that it is composed of neurons and that these neurons “fire” on and off using chemical processes. This inspired the early AI researchers to develop a mathematical model of the neuron that is still at the heart of most AI algorithms. The mathematical neuron accepts inputs such as pictures and stock-market data and, during training, adjusts numerical weights so that the output of the algorithm matches some desired output. Mathematically, this is called minimizing the error, and it is the basis of supervised AI methods. There are also unsupervised algorithms that look for patterns in the input data. Either way, we fully understand what is going on inside these algorithms, since they have been programmed in a language such as Python or C++ by a human programmer.

What makes the brain so unknowable is that little “spark” called creativity. I have talked about several mathematical creative geniuses, but there are examples of creative geniuses in every area of human endeavour. Think about William Shakespeare (1564-1616), Ludwig van Beethoven (1770-1827), and Pablo Picasso (1881-1973). From where did the spark come that inspired their work? We do not know, and we probably never will. Crossing the chasm between the simple mathematical model of the neuron and the true processes that go on in the brain is not something I see happening any time soon, if ever.
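To show how unmysterious that supervised recipe is, here is a single mathematical neuron written out in plain Python. The task (learning the logical AND of two inputs), the function names, and the parameters are my own illustrative choices; the structure, a weighted sum squashed by a sigmoid, with weights nudged to reduce the error, is the classic textbook model:

```python
import math
import random

def train_neuron(data, epochs=2000, lr=0.5, seed=0):
    """One 'mathematical neuron': a weighted sum of inputs squashed by a
    sigmoid.  Training repeatedly nudges the weights downhill on the
    squared error between the neuron's output and the desired output."""
    rng = random.Random(seed)
    n_inputs = len(data[0][0])
    w = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]  # random start
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            out = 1.0 / (1.0 + math.exp(-z))      # the neuron "fires" in (0, 1)
            grad = (out - y) * out * (1.0 - out)  # d(error)/dz for squared error
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def predict(w, b, x):
    """Threshold the trained neuron's output at 0.5."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

# Four labelled examples: the logical AND of two binary inputs.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(and_data)
```

Every step here is ordinary arithmetic that a human programmer wrote down; there is no black box, and certainly no spark.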
Don’t get me wrong; I am a great admirer of AI, and I have worked hard to understand it better for more than 30 years. It will be an invaluable aid to human beings as we develop new technology and mathematics in the 21st century. In fact, it makes the perfect executive assistant. But the “I” in AI is not true intelligence and, in my opinion, never will be.