The nature of consciousness is truly one of the great mysteries of the universe because, for each of us, consciousness is all there is. Without it, there is no world, no self, no interior and no exterior. There is nothing at all.
The subjective nature of consciousness makes it difficult even to define. The closest we have to a consensus is that there is “something it is like to be conscious”. There is something it is like to be me or you, and probably something it is like to be a dolphin or a mouse. But there is – presumably – nothing it is like to be a bacterium or a toy robot. The challenge is to understand how and why this can be true. How do conscious experiences relate to the cells and molecules and atoms inside brains and bodies? Why should physical matter give rise to an inner life at all?
Some people fear science may not be up to the task. They point out that you can’t precisely control or observe felt experiences. Some even question the idea that physical mechanisms can ever explain consciousness.
I disagree. I believe that science is capable of explaining consciousness, but only if we stop treating it as a single big mystery requiring a humdinger solution. Instead, we must break it down into its various related properties and address each in turn. As we progressively explain why particular patterns of brain activity map to particular kinds of conscious experience, we will find that the deeper mystery of consciousness itself begins to fade away.
Humans have pondered the relationship between physical matter and conscious experience for a long time. In the 1600s, René Descartes divided the universe into “mind stuff” (res cogitans) and “matter stuff” (res extensa), raising the conundrum of how the two might ever interact. His dualist perspective is now accompanied by a bewildering array of alternatives, ranging from illusionism – the proposal that consciousness doesn’t exist, at least not in the sense that we normally think of consciousness – to panpsychism, which posits that consciousness is fundamental and ubiquitous. The relatively orthodox scientific view of physicalism – the view that I find most appealing – is that consciousness is related to or emerges from physical matter. The question is how.
In the 1990s, the philosopher David Chalmers made an influential distinction between the “easy” and “hard” problems of consciousness. The easy problem is really a series of problems. These involve understanding how the brain, in concert with the rest of the body, gives rise to functions like perception, cognition, learning and behaviour. These problems aren’t actually easy at all; the point is that there is no conceptual mystery, that these problems can be solved in terms of physical mechanisms, however complex and hard to discern these mechanisms may be. Chalmers’s hard problem, on the other hand, is the enigma of why and how any of this should be accompanied by conscious experience at all: why aren’t we just meaty robots, without an inner universe? The temptation is to think that solving the easy problems would get us nowhere at all in solving the hard problem, leaving the brain basis of consciousness a total mystery.
Fortunately, there is an alternative, which I’ve come to call the “real problem” of consciousness: how to explain, predict and control the various properties of consciousness in terms of physical processes in the brain and body. The real problem is distinct from the hard problem, because it isn’t – at least not in the first instance – about explaining why and how consciousness is part of our universe. It is also distinct from the easy problems, because it doesn’t sweep the subjective, experiential properties under the carpet. The real problem isn’t a completely new way of thinking but, for me, putting things in terms of explanation, prediction and control has helped to crystallise what a successful science of consciousness should look like, since these are the criteria that are applied in most other fields of science.
Take, for example, the visual experience of redness. The hard problem asks why there is such an experience at all, while the easy problems encompass all the processes and outcomes associated with light of a particular wavelength entering the eye. From the perspective of the real problem, we want to know what it is about specific patterns of activity in the brain that explains (and predicts and controls) why the experience of redness is the way it is. Why isn’t it like blueness, or toothache, or jealousy?
This approach has something of a pedigree. Not so long ago, biologists and chemists doubted that the property of “being alive” could ever be explained mechanistically. Nowadays, although there are still many things about life that remain unknown, the idea that being alive requires some special sauce has long been retired. The hard problem of life wasn’t so much solved as dissolved.
Now, it is true that life isn’t the same thing as consciousness. Most conspicuously, the properties of life are objectively describable, whereas the properties of consciousness exist only in the first person. But this isn’t an insurmountable barrier; it mostly means that the relevant information, being subjective, is harder to collect.
Part of the strategy in dissolving the hard problem of life was to stop treating it as one big scary mystery in need of a eureka solution, and instead as a collection of related properties, each addressable somewhat separately. For consciousness, there are many ways to cut the cake. In my book Being You: A new science of consciousness, I distinguish between conscious level (how conscious you are, as in the difference between general anaesthesia and normal wakeful awareness), conscious content (what you are conscious of) and conscious self (the experience of “being you” – or “being me”). By developing accounts that explain, predict and control these different aspects of consciousness, my belief is that a fulfilling picture of all conscious experience will come to light.
The bedrock idea is simple. Imagine that you are your brain, locked inside the bony vault of the skull, trying to figure out what’s out there in the world – a world that, from the perspective of the brain, also includes the body. All you have to go on are noisy and ambiguous sensory signals, which are only indirectly related to what’s out there, and which certainly don’t come with labels attached (“I’m from a cup of coffee! I’m from a tree!”). Perception, in this view, has to be a process of inference – of neurally implemented probabilistic guesswork. When I see a red coffee cup on the table in front of me, this is because “red coffee cup” is the brain’s best guess of the hidden, and ultimately unknowable, causes of the corresponding sensory signals. When I experience the glow of a sunset, or the sharp taste of an adventurous cheese, that too is a perceptual best guess.
How are these perceptual best guesses arrived at? According to predictive processing, the brain is constantly calibrating its perceptual predictions using data from the senses. The theory has it that perception involves two counterflowing streams of signals. There is an “inside-out” stream, cascading downwards through the brain’s perceptual hierarchies, that conveys predictions about the causes of sensory inputs. Then there are “outside-in” prediction errors – the sensory signals – which report the differences between what the brain expects and what it gets. By continually updating its predictions to minimise sensory prediction errors, the brain will settle and resettle on an evolving best guess of the causes of its sensory signals, and this is what we consciously perceive.
Perception, in this view, isn’t a passive registration of an external reality. It is an active construction, a kind of “controlled hallucination”, in which the brain’s best guesses are tied to the world – and the body – through a continuous process of prediction error minimisation.
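The update loop at the heart of this idea can be sketched in a toy model. Suppose the “brain” must infer a hidden cause from a sensory signal, armed only with a generative model that predicts what sensation a given cause would produce. Iteratively nudging the guess to shrink the prediction error is a crude, one-variable caricature of predictive processing – the function names, the squaring model and the learning rate here are illustrative assumptions, not anything from the article or from real neural models, which are hierarchical and probabilistic:

```python
# Toy predictive-processing loop: a guess about a hidden cause is refined
# by minimising the error between predicted and actual sensory signals.

def predict(cause):
    # "Inside-out" stream: a generative model mapping a guessed cause
    # to an expected sensory signal (here, simply squaring it).
    return cause ** 2

def update_guess(guess, sensation, lr=0.05, steps=200):
    for _ in range(steps):
        # "Outside-in" stream: the prediction error.
        error = sensation - predict(guess)
        # Nudge the guess along the gradient that reduces the error
        # (d(predict)/d(guess) = 2 * guess for this toy model).
        guess += lr * error * 2 * guess
    return guess

# The hidden cause is 3.0; the system only ever receives the sensation 9.0.
best_guess = update_guess(guess=1.0, sensation=9.0)
print(round(best_guess, 2))  # converges towards 3.0
```

The point of the sketch is only that perception-as-inference needs no oracle: repeatedly comparing prediction against signal, and adjusting, is enough to settle on a best guess of an unobserved cause.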
Predictive processing isn’t a theory of consciousness, in the sense that solving Chalmers’s hard problem would require (see “Theories of consciousness”). Instead, at least at first, it is best thought of as a theory for consciousness science, in the real problem sense. It provides a method for building explanatory bridges between neural mechanisms and aspects of what conscious experiences are like, from the perspective of the person experiencing them.
This is what my colleagues and I have been doing in my laboratory at the University of Sussex, UK. Some of our experiments are very simple – for example, finding that people consciously perceive expected images more quickly, and more accurately, than unexpected images. But the really exciting work is taking us deeper into the phenomenology – the “what-it-is-like”-ness – of conscious experiences. In one example, we are investigating the phenomenology of different varieties of visual hallucination in terms of their dependence on different kinds of perceptual prediction.
Some hallucinations, such as those due to psychosis or neurodegenerative disease, can be complex, featuring rich perceptual scenes experienced as being real. Others, such as those arising from progressive visual loss, can be relatively simple and aren’t experienced as being continuous with the real world. These differences can be simulated by novel neural network architectures that deploy perceptual predictions in different ways. These neural networks serve as computational models of the brain basis of visual experiences, and we can check their output by asking people who experience hallucinations to rate what they come up with. We are now extending this approach – which we call “computational phenomenology” – to other, more fundamental aspects of perceptual experience, like the sensation of time passing and the three-dimensional structure of visual space.
By progressively accounting for the deep structure of perceptual experiences – not only the specific contents they present, like a cat or a coffee cup, but how they unfold over time and space – my belief is that the apparent mystery of the hard problem is already beginning to dissolve. And this process gathers momentum when we ask who, or what, is doing all this perceiving – and consider the experience of being a conscious self from the same “real problem” perspective.
Contrary to how things might seem, selves aren’t essences-of-you that peer out through the windows of the senses from somewhere inside the skull. Instead, the self is a perception too. Experiences of “being you” are collections of brain-based best guesses, and understanding this further erodes the dualistic intuitions on which the hard problem rests.
Just as consciousness has many aspects, there are also many ways we experience being a self. These can be arranged in a loose hierarchy, beginning with low-level experiences of being a physical body, through to experiences of seeing the world from a particular first-person perspective, conscious intentions to do things (what we might call experiences of free will), all the way up to experiences of being a continuous person over time within a rich social and cultural environment – a self with a name, an identity and a set of memories. As I argue in my book, each of these aspects of selfhood can be understood as a distinctive form of controlled hallucination.
For now, let’s drill down to the most basic aspect of conscious selfhood: the experience of being a body. I think of this as a rudimentary feeling of simply being a living organism – partly expressed through emotions and moods, but at its deepest layers without any describable content at all.
It is here that the perception of the body from within, known as interoception, comes to the fore. Interoceptive sensations tell the brain about the internal state of the body – blood pressure, say, or how the heart is doing – and therefore enable the brain to perform its most important task: keeping the body alive.
Like all sensory signals, interoceptive signals are only indirectly related to their causes, and so interoception must also involve a process of best-guessing. And just as inference about the causes of visual signals underpins visual experience, my proposal is that interoceptive inferences underpin other kinds of experiences: in this case, bodily experiences, like emotions and moods.
This proposal might seem nothing more than a modern gloss on some old ideas that emotion involves perception of changes in the physiological condition of the body. But there’s more to it than that. Following the real problem strategy, the differences between emotional experiences and visual experiences can now be understood in terms of the different kinds of perceptual prediction at play. Visual experiences of objects are generally concerned with figuring out what’s there, so it makes sense for the corresponding perceptual experiences to have the character of things with specific locations and physical extents. Emotional experiences, by contrast, are generally concerned with the organism’s physiological condition and prospects of staying alive. These experiences, and experiences of being a body more generally, don’t have shapes and locations in space – they instead have valence, which in psychology means that things are good or bad now, or are likely to be good or bad in the future. That is what it is like to feel an emotion.
The upshot of all this is that the deepest layers of selfhood are intimately tied to our material nature as living creatures. The interoceptive predictions that underpin all self-related experiences are there to regulate our internal bodily state – to keep us alive. And from this, everything else follows. All our perceptions and experiences, whether of the self or of the world, are inside-out controlled hallucinations deeply rooted in the flesh-and-blood machinery that evolved, develops and operates from moment to moment in light of a fundamental biological drive to stay alive. We perceive the world, and ourselves, with, through and because of our living bodies. To repurpose another term from Descartes, we are conscious “beast machines” through and through.
This way of thinking about consciousness and self transforms predictive processing from being a theory for consciousness science into a theory of consciousness – of why we are what we are and of the nature of all our experiences. There are many implications that follow from this. Most importantly, the connection between mind and life isn’t only that apparent mysteries may dissolve when approached in the right way. There is a much deeper continuity to be found here, which implies in turn that consciousness may be more widespread among other forms of life than we think, and is less likely to shimmer into existence in the fleshless circuits of artificial intelligences, however advanced they may be.
The question I keep asking myself is where all this will lead. Will the hard problem dissolve entirely, vanishing in a puff of metaphysical smoke? Or will residues of mystery stubbornly remain? Either way, addressing the real problem of consciousness is likely to deliver much more progress than venerating it as a magical mystery or dismissing it as an illusory non-problem. This is the real promise of the real problem. Wherever it eventually takes us, the journey will transform our understanding of our conscious experiences of the world around us, and of our selves within it.
Theories of consciousness

In the early days of modern consciousness science, back in the 1990s, researchers focused on identifying empirical correlations between aspects of conscious experience and properties of brain activity. This search for the “neural correlates of consciousness” was motivated in part as a response to concerns, pervasive through much of the 20th century, that consciousness lay beyond the remit of science altogether. But despite its many successes, this approach is limited because correlations aren’t explanations, no matter how many you identify.
In recent years, however, there has been a blossoming of neurobiological theories of consciousness. This is a sign of the field’s growing maturity, because it is only when couched in terms of a theory that experimental findings can deliver a satisfactory understanding of consciousness.
There are currently four main theoretical approaches in consciousness science. According to higher-order theories, a mental state is conscious when another mental state – higher up in a hierarchy – says that it is. For these theories, the devil is in the detail about what kinds of “higher-order” representations count for consciousness. Global workspace theories propose that mental states are conscious when they are broadcast widely throughout the brain, so that they can be used to flexibly guide behaviour. A good way to think about global workspace theory is that consciousness depends on “fame in the brain” – conscious mental states have access to a wide range of cognitive processes in ways that unconscious mental states don’t. These two theories focus on functional aspects of consciousness and emphasise frontal (at the front) and parietal (towards the back and off to the sides) brain regions.
By contrast, integrated information theory focuses on phenomenological aspects of consciousness – what it is like, experientially – and proposes that consciousness is associated with a posterior cortical “hot zone”, located towards the back of the brain and including parts of the parietal, temporal and occipital lobes. According to this theory, consciousness depends on the ability of a system to generate integrated information.
The fourth approach, known as predictive processing, is more indirect, being used primarily to build explanatory bridges between aspects of consciousness and their underlying neural mechanisms. There are several examples of this approach, ranging from those which associate consciousness with top-down signalling in the brain to my own “beast machine” theory (see main story). For these theories, the idea is that by progressively accounting for diverse aspects of consciousness, they can ultimately shift from being theories for consciousness science towards being theories of consciousness itself.