FROM THE ARCHIVES
Will Robots Ever Achieve Genuine Consciousness? How Will We Know?
by
Paul Nunez
Consciousness is the most familiar aspect of our lives, but it is also the most mysterious. One mystery is the question of whether we can recognize genuine consciousness when it occurs outside of ourselves. This issue has gained more attention over the past few decades due to advances in artificial intelligence (AI) that impact our lives
on a regular basis. Consider this little fable. A future society employs androids (robots with human-like exteriors) that appear to have achieved genuine consciousness. One day an assertive group of androids applies for citizenship. In response, skeptical humans pass a law listing genuine consciousness as a citizenship requirement, thereby challenging the androids to prove that they actually possess consciousness. In reply, the astute androids demand that their human inquisitors prove their own consciousness. How would the humans respond?
The challenge of identifying genuine consciousness in androids is nicely demonstrated in several recent films such as Her (2013) and Ex Machina (2015), and TV series like Westworld (2016) and Humans (2015–). In these stories, each android is without gender at the outset but seems to become male or female quite convincingly as the story progresses. But in the real world, how could we determine if such advanced robots possess genuine consciousness? Does it make sense to assign a consciousness rating to each robot (say “20% conscious”) in a manner similar to graded states of consciousness observed in various stages of normal sleep or Alzheimer’s disease?
These kinds of basic questions have been hotly debated for many years. In 1950, the mathematician Alan Turing, who is often described as the father of modern computer science, proposed a now famous test to identify human-level intelligence. The Turing test employs two sealed rooms, one occupied by a human and the other by a computer. An observer sends questions to both rooms; answers are received on a monitor. If, after a very large number of answers have been received, the observer cannot tell which room holds the computer, Turing proposed that the computer should be regarded as having human-level intelligence. While some have interpreted this experiment as a genuine test for consciousness, many see no compelling reason to equate consciousness with human-level intelligence. Consciousness demands awareness of one's environment; human-level intelligence may not.
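To make the protocol concrete, here is a minimal sketch of the imitation game in Python. Nothing here comes from Turing's paper or from this essay; the names human_respond, machine_respond, and judge are hypothetical stand-ins for the two hidden respondents and the questioner.

```python
import random

def imitation_game_round(human_respond, machine_respond, questions, judge):
    """One round of a toy Turing test.

    human_respond, machine_respond: callables mapping a question string
    to a reply string. judge: callable that reads the full transcript
    and returns the room label ("A" or "B") it believes holds the human.
    """
    # Randomly hide the two respondents behind anonymous room labels.
    rooms = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        rooms = {"A": machine_respond, "B": human_respond}

    # The judge questions both rooms and sees only the text that comes back.
    transcript = [(q, rooms["A"](q), rooms["B"](q)) for q in questions]
    guess = judge(transcript)

    human_room = "A" if rooms["A"] is human_respond else "B"
    return guess == human_room  # True if the judge identified the human

# Turing's criterion: over many rounds, if the judge's success rate stays
# near chance (50%), the machine's answers are indistinguishable from a
# human's, and the machine passes the test.
```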
The Turing test judges consciousness based only on external behavior. Neuroscience studies, which include observations of brain imaging, injury, and disease, demonstrate that brains and minds are closely correlated. These data reveal signatures of consciousness at several different levels of organization (spatial scales), including single neurons, large-scale space averages over millions of neurons, and the intermediate scales of cortical columns. Such studies have convinced many that the brain creates the mind or that the mind emerges from certain dynamic (ever-changing) brain patterns, analogous to the musical patterns played by orchestras. But just what is so special about the human brain that it can produce these patterns of consciousness? This intriguing question has been approached from many directions, but one major focus is brain complexity—a feature that seems to be necessary, although not sufficient, for consciousness to occur.
Some scientists view the brain as a complex system in the sense defined in complexity science, producing dynamic patterns and sub-patterns of electrical and chemical activity. Complexity science investigates how relationships between the small parts of some entity give rise to the collective behavior of large-scale systems, and how these emergent global systems interact and form relationships with lower levels of organization and with the surrounding environment. The field often employs novel cross-disciplinary partnerships between subfields of ecology, economics, information science, physics, sociology, and more. A single cell (e.g., a neuron) may contain a hundred billion interacting molecules. It seems to act much like a natural supercomputer—an information-processing and replicating system of enormous complexity that can both act down on the molecular scale and act up on the higher levels of emergent systems (such as organ systems and living beings). Let a lot of cells interact in the right manner and in the right environment, and a human being will emerge. In this case the emergent system is a person whose brain can produce top-down influences on the lower levels, when he eats a pizza, runs a race, visits his local bar, or hops in bed with his partner.
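A standard toy model from complexity science, offered here as an illustration rather than anything the essay itself cites, makes this bottom-up emergence concrete. In Conway's Game of Life, every cell follows the same simple local rule, yet coherent structures such as "gliders" appear and travel at a scale no single cell occupies:

```python
import numpy as np

def life_step(grid):
    """Advance Conway's Game of Life by one step on a wrap-around grid."""
    # Each cell counts its eight neighbors; this local rule is the only
    # thing any cell "knows": there is no blueprint for global patterns.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Live on with 2-3 neighbors, be born with exactly 3, die otherwise.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

# Seed a "glider": five live cells whose collective pattern travels
# diagonally across the grid, an entity that exists only above the
# single-cell scale.
grid = np.zeros((20, 20), dtype=np.uint8)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1
for step in range(8):
    grid = life_step(grid)  # after 4 steps the glider has shifted by one cell
```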
The familiar TV maps of temperature, rainfall, and other weather patterns over the earth’s surface provide useful metaphors that help us to imagine dynamic patterns within the brain. This picture raises a basic question: if we were somehow able to simulate these dynamic patterns in robots, could they become conscious or perhaps even self-aware? Strong AI supporters may find this idea plausible. But let’s look into the issue a little more deeply. Philosophers refer to the principle of organizational invariance, roughly meaning that any two systems with the same functional organization must have qualitatively identical experiences. Suppose our silicon-based robots employ a hundred billion or so computer-like processing units, superficially mimicking the neurons of human brains. Assume also that the principle of organizational invariance is valid. Does it follow logically that these “silicon isomorphs” (the robots) must be conscious? Many apparently believe that connecting billions of neuron-like artificial elements following appropriate input–output rules can produce a conscious entity. On the other hand, maybe the answer depends critically on just how fine-grained a correspondence between human and robot brains is achieved. Maybe we should only trust a much stricter version of the principle of organizational invariance, in which the robot brains are required to yield a one-to-one functional correspondence of all parts at many (or all) spatial scales.
If so, construction of true artificial isomorphs may be impossible due to fundamental physical limits on information processing in our universe. A discussion of such cosmic information limits is beyond the scope of this short essay, but note that even single neurons are incredibly complex systems containing billions of molecules and involving fine-grained interactions down to (at least) quantum scales.
One convenient analogue of this multi-scale process is the human social system, which involves interactions and patterns occurring at individual, neighborhood, city, nation, and other scales. Two competing interpretations of brain patterns measured at different scales are evident. First, perhaps consciousness is encoded in dynamic patterns at some special consciousness scale (the C-scale), with the conscious signatures observed at other scales being mere by-products of the mind-creating C-scale dynamics. For example, the C-scale might be the single-neuron level, a view seemingly embraced by some neuroscientists and AI scientists. From this perspective, neuroscience takes on a strong reductionist flavor: the single-neuron C-scale is the level where consciousness resides or is encoded, implying that a robot brain consisting of some hundred billion or so artificial neurons, if appropriately interconnected, might achieve genuine consciousness.
An alternative interpretation is that no special C-scale actually exists; in that case, consciousness is fundamentally a multi-scale phenomenon. We may call this premise the multi-scale conjecture: consciousness is encoded by the dynamic patterns occurring at multiple scales. This premise further implies that consciousness is critically associated with cross-scale interactions, both bottom-up and top-down. By analogy, human social systems involve top-down and bottom-up interactions between social networks and cultures occurring at multiple scales. The multi-scale conjecture argues forcefully against philosophical positions that trivialize the complexity of consciousness. In essence, consciousness seems to require systems that are at least as complex as ordinary life, which consists of interacting multi-scale structures. The multi-scale conjecture may operate in at least two distinct intellectual domains. First, it may be taken seriously as a stand-alone idea, independent of questions about materialism, dualism, and the hard problem. Second, it may provide a tentative bridge connecting brain information patterns to minimally materialistic or perhaps even nonmaterialistic conceptual frameworks underlying the hard problem. In either case, our basic premise is that genuine consciousness requires systems at least as complex as nonconscious life. If this premise is correct, we should not expect the appearance of conscious robots in the near or even distant future.
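A simple way to get a feel for these cross-scale loops, offered here only as a toy analogy and not as the author's model of the brain, is the Kuramoto model of coupled oscillators: each unit contributes to a large-scale mean field (bottom-up), and that same field feeds back to pull on every unit (top-down), so global order and local behavior shape each other continuously.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt = 1000, 2.0, 0.01              # units, coupling strength, time step
omega = rng.normal(0.0, 0.5, N)          # each unit's own natural frequency
theta = rng.uniform(0.0, 2 * np.pi, N)   # initial phases (small-scale state)

for _ in range(5000):
    # Bottom-up: the units jointly create a large-scale mean field,
    # summarized by the order parameter r (0 = disorder, 1 = full sync).
    z = np.mean(np.exp(1j * theta))
    r, psi = np.abs(z), np.angle(z)
    # Top-down: the global field pulls on every unit, so large-scale
    # order constrains the small-scale dynamics that produced it.
    theta += (omega + K * r * np.sin(psi - theta)) * dt

print(f"final order parameter r = {np.abs(np.mean(np.exp(1j * theta))):.2f}")
# With K = 2.0 the population locks into widespread synchrony (r near 1);
# lower K below roughly 0.8 and the collective pattern dissolves.
```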
Paul L. Nunez is an Emeritus Professor and currently runs a consulting firm that works on problems involving brain physics and cognitive science. His latest book is The New Science of Consciousness: Exploring the Complexity of Brain, Mind, and Self.