
The Soul of a Machine

It is the ‘I’, it is the ‘I’, that is deeply mysterious!

-- Ludwig Wittgenstein

Yet one more turning point in the contest between man and machine: a souped-up IBM computer called Watson recently trounced two top human contestants in the TV quiz show Jeopardy. This followed another widely touted milestone when an IBM supercomputer named Deep Blue defeated world chess champion Garry Kasparov in a six-game match in 1997. Does this mean computers can now outthink humans? It depends on what you mean by “think.” Most computer experts would say these feats, as impressive as they may seem, are really the result of brute processing power rather than any fundamental advance in machine cognition. The room-sized Watson computer demonstrated an impressive grasp of context in the processing of natural language, but beating its Jeopardy opponents to the buzzer depended on its ability to instantly retrieve some 200 million pages of information on every subject imaginable. Similarly, the chess-playing Deep Blue computer was able to evaluate 200 million positions per second, calculating as many as 20 moves ahead, with programming assistance from a team of grandmasters.
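
To make the brute-force point concrete, here is a minimal sketch in Python -- purely illustrative, not Deep Blue’s actual code -- of the exhaustive minimax search that such chess machines are built on. The toy game is invented for the occasion: take one to three counters from a pile, and whoever takes the last counter wins. Chess is astronomically larger, but the principle is the same: grind through every line of play and always choose the move with the best guaranteed outcome.

```python
# A toy stand-in for the brute-force search inside machines like Deep Blue.
# The "game": a pile of counters; each turn a player removes 1-3 of them,
# and whoever takes the last counter wins.

def minimax(pile, my_turn):
    """Exhaustively score a position: +1 if the player at the root can
    force a win from here, -1 if the opponent can."""
    if pile == 0:
        # The previous mover took the last counter and won the game.
        return -1 if my_turn else 1
    scores = [minimax(pile - take, not my_turn)
              for take in (1, 2, 3) if take <= pile]
    # The root player picks the best score; the opponent picks the worst.
    return max(scores) if my_turn else min(scores)

def best_move(pile):
    """Always play the objectively strongest move -- no guile, no traps."""
    return max((take for take in (1, 2, 3) if take <= pile),
               key=lambda take: minimax(pile - take, my_turn=False))

if __name__ == "__main__":
    print(best_move(10))  # -> 2: leaves a pile of 8, a lost position for the opponent
```

Deep Blue’s edge was doing essentially this over a vastly larger game, at 200 million positions per second, with a handcrafted evaluation function standing in wherever the search had to stop short of the end.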

Even today’s most powerful computers fall woefully short of the capabilities artificial intelligence pioneer Marvin Minsky forecast in the 1960s when he served as a technical adviser to movie director Stanley Kubrick during the making of 2001: A Space Odyssey. The HAL 9000 computer installed in the film’s Jupiter spacecraft not only ran everything on board but also kept the crew company during its 18-month voyage. Introduced in the movie as the sixth member of the crew, HAL pointedly referred to himself as a “conscious entity.” 2001’s co-creator, Arthur C. Clarke, later acknowledged the common view that HAL was the only real human character in the movie, as opposed to his fellow crew members, who were clearly modeled after the laconic “Right Stuff” astronauts of the 1960s.

But what is it that made HAL human, in contrast to today’s supercomputers? For that matter, what is it that makes us human, and how will we know the difference between man and machine when the machines can carry on a conversation as skillfully as they play Jeopardy or chess today?

To address these questions we might start with the chess game between HAL and one of 2001’s human crew members, Dr. Frank Poole, played by Gary Lockwood. The scene, which lasted less than a minute on screen, was filmed looking down over Poole’s shoulder, enabling the audience to see every piece on the chessboard. Chess enthusiasts have recognized the game as a re-enactment of an actual match between Roesch and Schlage, played in Hamburg in 1910. Murray S. Campbell, an IBM research scientist who was one of Deep Blue’s designers, has pointed to some key differences between how today’s supercomputers approach the game of chess and HAL’s style of play. Campbell noted that a computer will always go for the optimal move, whereas HAL made an ostensibly inferior move that Poole might have exploited had he been a better player. HAL’s ability to assess the strengths and weaknesses of his opponent enabled him to set a trap that resulted in Poole’s defeat. Campbell concluded that HAL played chess the way humans do, meaning that he played with imagination, ingenuity and guile.

HAL proved to be all too human in at least one other respect: he made mistakes. In laying out for Poole the moves leading to checkmate, he used incorrect chess notation for one of the moves. Chess aficionados have hotly debated whether the error was deliberately inserted into the story or was mere carelessness on Kubrick’s part. The director was known to be something of a chess fanatic, as well as a stickler for detail, making it unlikely that he would have inadvertently made such a blunder, however minor. Furthermore, he had already used a chess match to foreshadow dramatic developments in a previous film (Lolita).

HAL’s notational error was followed shortly by a far more consequential mistake when he falsely reported the impending failure of a component on the spaceship’s antenna. Once it became clear that HAL was unreliable, the astronauts felt they had no choice except to disconnect him, which caused HAL to go on a murderous rampage that resulted in the deaths of four of the five humans on board. Later, when the sole surviving human crew member was about to pull the plug on him, HAL offered this defense: “I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission.” Having just wiped out most of his fellow crew members, HAL demonstrated that he was not only human but a sociopath to boot.

Unlike movie robots before and since, HAL did not look like some variation of the Tin Woodman; in fact, he had no human features at all. In appearance, he was indistinguishable from a sixties-era mainframe, apart from a big red “eye” that operated like a wide-angle lens. His distinguishing characteristic was his voice (supplied by Canadian actor Douglas Rain): soothing, unctuous and more than a little creepy. In a long-distance interview with a BBC reporter back on Earth, one of the crew members noted that HAL had been programmed with emotions to make it easier to talk to him, adding that “whether or not he has real feelings is something I don’t think anyone can truthfully answer.”

This remark points up a basic difficulty when ascribing human qualities to a machine: how can we tell for sure whether an attribute, such as intelligence or emotion, is genuine or whether the computer is merely faking it? Long before computers were capable of anything beyond basic calculation, Alan Turing proposed a simple test to determine whether a machine truly had something on the ball. Ask a human being and a computer anything you want via a terminal, and if you can’t tell by their answers which one is which, the machine is deemed to be intelligent. So far, no computer has passed the Turing test. However, within their own narrow fields, computers are already able to fool experts, as happened to Garry Kasparov when he was preparing for his chess match with Deep Blue. Kasparov’s computer advisers showed him some actual matches between several grandmasters and an earlier IBM chess-playing machine without telling him which player was which. In some instances, he misidentified the computer as the human opponent. Later, after losing his match with Deep Blue, Kasparov announced that the machine had displayed a suspicious level of creativity and intelligence in the second game, suggesting that humans were pulling strings behind the scenes -- an accusation that IBM firmly denied.
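
The protocol Turing proposed -- now known as the imitation game -- is simple enough to sketch in a few lines of code. The Python sketch below is a deliberately crude illustration: the judge types questions at a terminal and sees only labeled answers, one from a person at the keyboard and one from a canned “bot” that is obviously no serious contestant. The labels, the canned replies, and the single-terminal setup are all invented for illustration; in a real test the judge and the human respondent would sit at separate terminals.

```python
# A crude sketch of Turing's imitation game: the judge sees only typed
# answers from two hidden respondents, "A" and "B", and must guess which
# is the machine. The canned bot below is a placeholder, not a contestant.

import random

def bot_reply(question):
    return random.choice([
        "That's an interesting question.",
        "I'd rather hear what you think about that.",
        "Could you rephrase the question?",
    ])

def human_reply(question):
    # In a real test this would come from a person at a separate terminal.
    return input(f"(respondent) {question}\n> ")

def imitation_game(rounds=3):
    players = {"A": bot_reply, "B": human_reply}
    if random.random() < 0.5:            # hide which label is the machine
        players = {"A": human_reply, "B": bot_reply}
    for _ in range(rounds):
        question = input("Judge, your question: ")
        for label in ("A", "B"):
            print(f"{label}: {players[label](question)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    machine = "A" if players["A"] is bot_reply else "B"
    print("Correct." if guess == machine else "Fooled -- the machine passed this time.")

if __name__ == "__main__":
    imitation_game()
```

The blindfold is the whole test: intelligence is judged entirely by the answers, never by a look at what is producing them.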

What are we to make of HAL’s assertion that he was a conscious entity? According to artificial intelligence (AI) experts, a future HAL that can truthfully make such a statement about himself (itself?) would be capable of subjective thought and experience, a level of development they refer to as “strong AI.” But how do you program a machine to do that, assuming that we can even define what consciousness is?

Some theorists maintain that the so-called “hard problem” of consciousness doesn’t really exist. They insist the human brain is just a machine, albeit one that is made of organic matter and far more complex than any computer yet built. Others suggest that machine consciousness will emerge spontaneously as computers grow more powerful, in keeping with the mathematician Kurt Gödel’s demonstration that any sufficiently complex logical system is inherently self-referential. It remains an open question, however, whether a computer that operates by executing a string of commands can ever achieve consciousness. The human brain is made up of some 100 billion neurons organized into networks that do not require programs or algorithms to function. Perhaps what is needed is a machine with a completely different architecture, modeled on these neural networks.
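
The contrast can be made concrete. A conventional program behaves as it does because a programmer wrote explicit commands; a neural network behaves as it does because its connection weights were tuned by experience. The toy below, in Python, is a single artificial “neuron” of the 1950s perceptron variety -- crude beyond comparison with a biological neuron, and invented here purely for illustration -- that learns the logical OR function from examples rather than from any explicit instruction.

```python
# A single artificial "neuron" (a 1950s-style perceptron) that learns the
# logical OR function from examples. No one writes the rule; the weights
# simply settle into values that produce the right behavior.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0

def fire(inputs):
    # The neuron "fires" (outputs 1) if its weighted input exceeds zero.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

for _ in range(10):                      # a few passes over the examples suffice
    for inputs, target in examples:
        error = target - fire(inputs)    # -1, 0, or +1
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
        bias += 0.1 * error

print([fire(x) for x, _ in examples])    # -> [0, 1, 1, 1]: learned, not programmed
```

Nothing in the finished weights resembles a command; the behavior resides in the connections, which is precisely the architectural difference the neural-network approach trades on.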

It is conceivable that a computer will eventually pass the Turing test and may even refer to itself as a conscious entity. But does that make it so? One of the reasons consciousness is referred to as a hard problem is that we don’t really know how to measure it. When we talk about a conscious entity, we are really talking about a self-conscious entity. But there is nothing we can really point to and identify concretely as a self. “The I is not an object,” wrote the philosopher Ludwig Wittgenstein. It is not the image we recognize in the mirror, not the words that come out of our mouths, not even the thoughts that pop into our heads, since there always seems to be a silent observer that stands apart from every thought and perception of self. Or as Wittgenstein put it: “The I that I apprehend is not the I that does the apprehending.”

HAL’s crewmate was on to something when he acknowledged the difficulty of determining whether the computer had “real” feelings. Whether we are talking about men or machines, we can only make inferences about their mental states based on our observations of their behavior. If they act like us and talk like us, we assume they experience the world more or less the same way we do, as conscious entities. But, of course, there is no way to get inside their heads to assess their thoughts and feelings, assuming there are any. As conscious entities, we experience a variety of bodily sensations, such as pleasure, pain, color, aroma and taste, that cannot be fully described in terms of their physical properties. We also know happiness, anxiety, jealousy, exaltation and despair -- all of which are inherently subjective, meaning that they can only be experienced from the inside.

The philosopher David Chalmers, who first identified the hard problem of consciousness, goes on to suggest that consciousness may be a fundamental constituent of the universe, like space-time or mass, which means it cannot be explained in terms of something else. By contrast, classical science attempts to explain the workings of the universe in wholly physical terms, ignoring consciousness altogether or explaining it away as a mere by-product of physical forces. A complete understanding of the world in which we find ourselves must somehow account for both subjective and objective phenomena.

Consciousness is inherently subjective, but is it personal? My thoughts are mine; my feelings are mine. Yet they are still objects of a larger awareness. As Wittgenstein said, the I is not an object, at least not the I that does the apprehending. Within the frame of this larger awareness is all mind and matter -- a universe, in fact. It is the ground of being. Rather than call it mine, I would call it God.

Murray S. Campbell, “‘An Enjoyable Game’: How HAL Plays Chess,” in HAL’s Legacy: 2001’s Computer as Dream and Reality, ed. David G. Stork (MIT Press, 1997)
Ludwig Wittgenstein, Notebooks 1914-1916
David Chalmers, The Conscious Mind (Oxford University Press, 1996)

© Copyright 2004-2019 by Eric Rennie
All Rights Reserved