During my student years I was captivated by some very intriguing questions relating to artificial intelligence and the philosophy of mind. Here are some of the books I read then.
I should probably start with Alan Turing: his seminal 1950 paper Computing Machinery and Intelligence is generally considered to have given rise to the field of artificial intelligence (AI). It starts straightforwardly: “I propose to consider the question, ‘Can machines think?’” Turing proposed an operational definition of intelligence, now known as the Turing test. Let a number of judges chat with some other people, and with machines pretending to be human. If an average judge can’t distinguish machine conversation from human conversation, let’s just call the machine intelligent. Turing was highly optimistic about our ability to design such artificially intelligent machines.
Such a definition is entirely satisfactory to engineers. ‘Consciousness’ and ‘intelligence’ are terms that are notoriously hard to define exactly, so they prefer to sidestep the question, and see whether they can engineer a machine that behaves in a way most people would call intelligent. This does not mean they try to rebuild a human brain, just as aerospace engineers do not try to rebuild wing-flapping birds. In fact, most active AI researchers do not even try to rebuild behaviour comparable to human intelligence: instead they’ll break “intelligence” down into some of its ingredients or components — for example, reasoning with abstractions, or learning from experience — and rebuild those computationally. If successful, many of those components can then usefully be applied in highly specific contexts, such as medical diagnosis. For example, a thriving subdomain of AI is machine learning: though inspired by biological learning, few machine learning researchers try to mimic biological entities — why would they?
Pretty soon the AI researchers did run into moral and philosophical questions, though. For example, in the 1960s Joseph Weizenbaum built a language analysis computer program called ELIZA and gave it the ability
to play (I should really say parody) the role of a Rogerian psychotherapist engaged in an interview with a patient. The Rogerian psychotherapist is relatively easy to imitate because much of his technique consists of drawing his patient out by reflecting the patient’s statement back to him.
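For the programmers among us, the reflection trick that quote describes can be sketched in a few lines. This is purely my own toy illustration (not Weizenbaum's actual program, whose pattern-matching script was considerably richer; the word list here is made up):

```python
# A toy ELIZA-style "Rogerian reflection": swap first- and second-person
# words, then mirror the patient's statement back as a question.
# My own minimal illustration, not Weizenbaum's original script.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "i'm": "you're", "mine": "yours",
}

def reflect(statement: str) -> str:
    """Mirror a patient's statement back, therapist-style."""
    words = statement.lower().rstrip(".!?").split()
    swapped = [REFLECTIONS.get(word, word) for word in words]
    return "Why do you say that " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my job."))
# -> Why do you say that you are unhappy with your job?
```

Even this crude version shows why the Rogerian persona was such a clever choice: the therapist is expected to reflect rather than to contribute content of his own.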
Knowing full well the limitations and crude design of his computer program, he was baffled and alarmed by the importance people — including serious psychiatrists — soon ascribed to it, by the emotional bonding people were soon feeling toward it, by how unequivocally they anthropomorphized it, and finally, by the extent to which people overestimated the capabilities and understanding it demonstrated.
Though he had his worries about “an age in which man has finally been recognized as nothing but a clock-work”, his main concerns were with what we ought and ought not to let a future intelligent machine do, or be responsible for. For instance, he shuddered at the idea of computerized psychotherapy.
Such concerns about morality are of course typical of any new technology1, and in the end, morality and technology will adapt to each other. For example, the computerized interpretation of patients’ electrocardiograms was a hot moral topic in the 1970s and 80s, yet it is nowadays routinely left to computers; MDs agree that the computers simply outperform humans on this task.
Can machines think?
But practical AI engineering and moral questions aside, many people are interested in the deeper questions of consciousness and human intelligence, and whether those could be achieved in a computer. Can machines think?
For example, when Deep Blue beat Kasparov, in 1997, most people were not very interested in how it had achieved this remarkable feat. Though it was clear that Deep Blue could do nothing but chess — whereas Kasparov could also talk, read, enjoy art, etc. — the obvious question raised by the occasion was: if a machine can be programmed to be better than humans at this highly intellectual activity, could it also be designed to be more intelligent than humans in general?
This question has generated a lot of polemic. Opinions become especially convoluted when it comes to consciousness. For example, John Searle proposed the famous Chinese room argument. He imagined an AI program that could communicate in Chinese. He himself speaks no Chinese, yet he could lock himself in a room and simply carry out all the computations of the AI program by hand, producing exactly the same behaviour as the program. Since there is no understanding of Chinese in that room, his argument goes, the AI program doesn’t understand what it’s doing and is therefore not conscious.
Quite apart from the physical impossibility for a person to carry out so many computations in a reasonable timeframe, I think it should be obvious that in this argument Searle merely represents the CPU of the computer. So he is simply forgetting about the program itself, and the state it is in. In effect he invites us to imagine that the CPU should have an understanding of what the program is doing. Of course, it is the whole system — CPU, program and data — that is displaying the understanding of this hypothetical AI system.
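To make that concrete, here is a toy sketch of my own (the rule table is entirely hypothetical and absurdly small): the execute function plays Searle's role, mechanically matching symbols against rules without any notion of what they mean.

```python
# Searle-as-CPU, as a toy sketch: execute() blindly looks up symbol
# strings in a rule table. Whatever "understanding" the system shows
# lives in the rules and their state, not in the executor.
# The rules below are hypothetical, not a real Chinese-speaking program.
RULES = {
    "你好": "你好！",            # greeting -> greeting
    "你是谁？": "我是一个程序。",  # "who are you?" -> "I am a program."
}

def execute(rules: dict[str, str], symbols: str) -> str:
    """Mechanically apply the rule book; no meaning is involved."""
    return rules.get(symbols, "？")  # shrug at unknown input

print(execute(RULES, "你好"))  # the *system* produces a fitting reply
```

The executor never changes, no matter how rich the rule book becomes; blaming it for not understanding Chinese is blaming the wrong component.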
Obviously AI has an important role to play in the question Can machines think? If ever AI researchers construct a machine that passes the Turing test, the answer will be there for all to see. But still some people would claim, in spite of the intelligence displayed by the machine, that the machine was not conscious and therefore not truly intelligent. Turing had this to say about that view:
According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and feel oneself thinking. […] It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks but A does not’. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.
To materialists, like myself, mind is composed of matter, and matter must follow the laws of nature. Hence thought, intelligence and consciousness must be explainable using the laws of nature. Then it follows straightforwardly that if those laws are deterministic, it must at least in principle be possible to mimic a brain computationally. In short, most materialists will unequivocally answer “Yes” to the question Are thinking machines conceivable?
But there is an interesting if in there: maybe the laws of nature are actually not deterministic at all. This is the position held by Roger Penrose. He points to quantum physics to make his case, and concludes that a brain, and human thought, cannot be mimicked computationally. I’m skeptical about this line of reasoning: I would certainly be puzzled if somehow our thought processes required non-deterministic laws. But then I am also puzzled by those very laws, notwithstanding their phenomenal and indisputable success in physics — so Penrose may yet be right. Even so, I would consider it only an argument against the possibility of mimicking the human brain, not against the possibility of making an AI in some other way. Planes can fly without mimicking birds.
Philosophy of Mind
Apart from all the discussion related to computers, there are of course some deeply intriguing questions about human intelligence itself. What is it? And how does it work? Where and how is consciousness formed?
Puzzled by that last question, which he formulated as the mind-body problem, Descartes came up with dualism: the idea that the mind — consciousness — is a different type of thing altogether than the body; in other words, that it resides outside the realm of normal physics. Nowadays people don’t usually put it quite so explicitly, yet the belief that there’s a little agent in there2 doing all the hard work is still quite common. Daniel Dennett’s “Consciousness Explained” is mainly aimed at deconstructing that view, and replacing it with a purely materialistic explanation.
Weizenbaum, for one, was clearly uncomfortable with the materialistic view:
Ultimately a line dividing human and machine intelligence must be drawn. If there is no such line, then advocates of computerized psychotherapy may be merely heralds of an age in which man has finally been recognized as nothing but a clock-work.
I find his choice of words interesting. For a materialist like me, it seems obvious that all of nature, including man and his brain, had already been recognized as a clock-work for over a century: ever since Darwin showed us a simple and natural mechanism — evolution — that beautifully accounts for the amazing complexity of nature. It surprises me, furthermore, that Weizenbaum does not appear to be uncomfortable with life itself being “nothing but” a clock-work, only with the human mind.3
Douglas Hofstadter worked out the idea that systems with very simple rules can give rise to incredible complexity in Gödel, Escher, Bach, a bit of a cult book in AI circles. In one chapter he fancifully considers how an anteater has entire conversations with an ant colony — not with the individual ants, who are quite mindless, but with the colony as a whole. The conversation emerges out of the way the ants run, the trails they follow, and so on.
In fact, there is generally no controversy among physicists or philosophers over the idea that big systems with simple components can have complex and surprising emergent properties. Thus, when some people (dualists) say in essence “Consciousness must be something supernatural, because I cannot imagine how consciousness can arise out of atoms,” it is merely a failure of imagination on their part.
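Conway's Game of Life is the standard programmers' illustration of this point (my example, not one from Hofstadter's book, though it is very much in the same spirit): a couple of simple rules about neighbour counts, and yet patterns emerge that seem to crawl across the grid.

```python
from collections import Counter

def life_step(cells: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One generation of Conway's Game of Life on a set of live cells.
    A cell lives next step if it has exactly 3 live neighbours, or
    has 2 live neighbours and is already alive."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# The "glider": after four steps the same five cells reappear shifted
# diagonally by one; motion emerges, though no rule mentions motion.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in the rules refers to gliders, just as nothing in an individual ant refers to the colony's conversation.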
Obviously, the question of how the mind works may to some extent be answered by neuroscientists. Indeed, they are steadily acquiring an ever more detailed understanding of the workings of the brain. But they acknowledge themselves that “the final step”, from a complete description of the brain’s physiology to an understanding of how that gives rise to conscious awareness and intelligent behaviour, is outside their domain. Understanding the brain and understanding the mind are different things.
Some neuroscientists have acquired such a good understanding of the physical details of how the brain works that their mechanistic world view compels them to say “we have no free will”. Curiously, the worry that we might indeed have no free will is what stops some people from accepting the materialistic view on intelligence. Dennett tackles this problem in Freedom Evolves.
If you want to duck when somebody throws a stone at you, of course you can. So in the everyday sense of the word, of course you have free will. The fact that deeper down, your wanting to duck can be explained through an incredibly complex — but ultimately mechanistic — interaction of neurons and synapses and so on, does not subtract anything4 from your everyday experience of your own free will.
There is of course the obvious intrigue of questions like “How is it that my brain can contemplate itself?” But I think that for me, the largest appeal of these questions of artificial intelligence and philosophy of mind lies in their implicit attack on anthropocentrism.
Science has been leading such attacks for centuries. Copernicus and Galileo have long since smashed the idea that the Earth is the center of the universe. A century and a half ago, Darwin made biology into the next frontier attacking human elitism: he crushed not only the idea that humans are special among the animals, but also — perhaps more fundamentally, and still not well understood by many today — the fancy that humans exist for a reason.
Our intelligence is perhaps our last human pedestal. Some day, perhaps not very far in the future, this pedestal too may be brought down, when human intelligence is surpassed by computer intelligence.