A New Science of Consciousness w/ Anil Seth
EPISODE #56
Apple Podcasts | Spotify | Google Podcasts
Neuroscientist Anil Seth shares his thoughts on the role of neuroscience in explaining human consciousness, why our perception of reality might be a controlled hallucination, and how psychedelics are challenging our understanding of the mind.
Anil Seth is a Professor of Cognitive and Computational Neuroscience at the University of Sussex, where he is also the Co-Director of the Sackler Centre for Consciousness Science. Anil is also a Wellcome Trust Engagement Fellow, Co-Director of the Canadian Institute for Advanced Research (CIFAR) Program on Brain, Mind, and Consciousness, and Co-Director of the Leverhulme Doctoral Scholarship Programme: From Sensation and Perception to Awareness. Anil edited and co-authored the best-selling 30 Second Brain (Ivy Press, 2014), was consultant for Eye Benders (Ivy Press, 2013; winner of the Royal Society Young People’s Book Prize 2014) and contributes to a variety of media including the New Scientist, The Guardian, and the BBC. Anil also writes the blog NeuroBanter.
Show Notes
YouTube
SoundCloud
Transcript
Luke Robert Mason: You're listening to the FUTURES Podcast with me, Luke Robert Mason.
On this episode, I speak to neuroscientist, Anil Seth.
"The experience of being a self is probably the most important aspect of any of our conscious experiences. This is not a single thing, either. There are many ways in which we experience being who we are" - Anil Seth, excerpt from the interview.
Anil shared his thoughts on the role of neuroscience in explaining human consciousness, why our perception of reality might be a controlled hallucination, and how psychedelics are challenging our understanding of the mind.
Mason: So, your new book focuses on a brain-based physicalist understanding of this thing called ‘consciousness’. With that in mind, what do you think is our best explanation for what consciousness is?
Anil Seth: We don’t have the best explanation, yet, of what consciousness is. That’s one of those metaphysical questions that try to figure out what the place of consciousness is in the universe. I take much more of a pragmatic view on this. Consciousness exists. We each know what it is to have conscious experiences. Consciousness is what you lose when you go under general anaesthesia and what comes back when you come around again. When we are conscious, we have specific conscious experiences. There are experiences of the world around us and of being a ‘self’ - of being ‘me’ or of being ‘you’ - within it.
When thinking about this grand mystery of how we understand the place of consciousness in the universe, I think there are two broad options. You can try and face the problem head-on and say, what is it about matter that generates conscious experience? Or maybe consciousness is somehow fundamental and ubiquitous to the universe. These are very grand statements.
Or, you can be a little bit more, I think, humble and pragmatic about it and say, okay, we know that conscious experiences in humans and other animals depend in very intimate ways on the brain and the body. The more we can explain the character of conscious experiences in terms of processes in the brain and the body, the more we’re making progress in the science of consciousness. It may not be necessary in the end to explain the big ‘why’ and ‘how’ consciousness is part of the universe in the first place.
Mason: What I was going to ask is, if the question isn’t ‘what?’ then should the question really be ‘why?’ Why does consciousness itself just happen, or I guess in some cases just emerge?
Seth: Right, so there’s this philosophical position of what’s often called epiphenomenalism. This is the idea that consciousness really doesn’t have a function. It just somehow comes along for the ride, like the whistle of a steam engine which is a classic example of this. There’s also this very well-known thought experiment in philosophy of the philosophical zombie. This could be a person like you or me, who from the outside is completely indistinguishable from a normal you or me, but has no consciousness. There is no conscious experience going on. The plausibility of something like a philosophical zombie is taken as one of the arguments against physicalism. I think it’s a very weak argument - we might get into that.
But why is consciousness present? What good is it? I think there are very many reasons why we have consciousness. If you think about what consciousness does for us, in practice - forget about any principle for a moment, but in practice - what are conscious experiences good for? Well, they bring together a large amount of information in a unified scene that emphasises the opportunities for action that are relevant to our survival. We see things in ways that help us interact with them and help us stay alive. We experience ourselves and our bodies in terms of emotions and moods, whether things are going well or badly, and whether we should run away from something or run towards it. The number of possible situations that an organism like a human being could be in is just absolutely vast. The number of ways we could respond to any particular situation is also absolutely enormous.
If you multiply that over time, what could we do now? What could we do a few seconds from now? And a few minutes? And a few months? You get into these enormous possibility spaces. It seems to me that conscious experiences are a very effective and efficient way of compressing a lot of organism-relevant information in a unified, almost goal-directed - or at least survival-related - format. That’s what I think consciousness is ultimately for. It’s to help us stay alive better.
Mason: Do you really believe, then, that there is an evolutionary basis for consciousness? It’s that self-preservation urge if nothing else. The idea that we perceive ourselves as conscious means we suddenly have a reason to keep ourselves alive.
Seth: I think that’s part of it, yes. I do think there’s an evolutionary history. This is almost an article of faith, I would have to say. There’s very little evidence, because consciousness doesn’t leave a fossil record, but almost everything in biology can only be understood in the light of evolution - at least to some extent.
Consciousness is such a central feature of human existence - and, I would say, of the existence of many other animals - that it seems incredibly unlikely that it has no evolutionary history and so wasn’t shaped by selection pressure. I think there absolutely was. Especially when you do think about what conscious experiences offer us, they’re clearly very adaptive things. They’re clearly very useful for the organism in all sorts of ways.
Mason: Your latest book, what it really looks at is the relationship between consciousness and the self. What is that relationship and why is that so important for our understanding of both ourselves and each other, I guess?
Seth: When we think about the nature of conscious experiences, there are two key aspects to it, I think. There’s the experience of the world around us. A lot of the science of consciousness has very reasonably focused on that. How do visual experiences come about? What’s the difference between conscious and unconscious visual perception?
There’s also the experience of being a self. It’s a bit harder to study because we can’t manipulate these experiences in the same way that I can, for instance, manipulate visual perception in the laboratory. When it comes down to it, the experience of being a self is probably the most important aspect of any of our conscious experiences. This is not a single thing, either. There are very many ways in which we experience being who we are. There’s the experience of being identified with a body; having a body. There’s the experience of how well the body is doing at staying alive, back to that old thing again. Then there are higher levels of self that come to be associated with a name and a set of memories. Also things like free will and agency - all of these are parts of being a self.
The first critical thing to recognise here - certainly this is not a new idea in my book, it’s been around for a long time - is that the self is not the thing that does the perceiving. The self is not an essence of conscious observation or subjectivity that is perched somewhere inside the skull. The self is a perception too. The self is something that arises in the ongoing flow of conscious experience. When you put it like that, then it becomes necessary to account for why and how we experience self the way we do. What do the mechanisms underlying experiences of self have in common with those that underlie our experiences of the world?
Ultimately, I think it’s important because one of the reasons we’re interested in consciousness in the first place is that whilst it is a grand scientific mystery, there’s also a very personal aspect to it. Certainly in my case and I think for many others, we all want to understand ourselves better and how we fit into the wider tapestry of science and of nature.
Mason: I mean, you playfully suggest that being you is a controlled hallucination. What do you mean by that, when you say that being me is just a hallucination?
Seth: It’s true. It is playful. I’m glad you called it that. I don’t mean it completely literally. It’s very very difficult to find the right words to describe these sorts of things. Why do I use this term? By the way, it wasn’t my invention. I heard this term from Chris Frith in London who heard it from someone else, who heard it from someone else. It’s got a long history.
The central idea of using that phrase is that perception, whether it’s of the self or of the world, is always an act of construction. It’s not just a passive, transparent readout of what’s already there. Whether it’s the perception of a blue sky or a red coffee cup, of an emotion that might arise in my experience, or a memory that I might have from earlier in the day. All of these obey certain common principles. In my view, they are all forms of perception, and all perception is as much an act of writing as it is of reading.
The idea is that the brain is always throwing out predictions about what’s going on - whether in the body or in the world - and it’s using sensory data to update and calibrate those predictions. The content of what we perceive is conveyed not by the sensory input but by these top-down, inside-out conceptual predictions. There is, here, a continuity with what we typically think of as a hallucination. We typically think of hallucination as a false perception. You perceive something that other people don’t. It’s still coming from the inside out, but for a hallucination, it’s no longer controlled by the body or the world. It’s a kind of uncontrolled perception.
Normal perception is still coming from the inside out but it is now absolutely controlled. It’s geared to what’s out there in the body and the world. That’s why I think it’s a useful term. It certainly does not mean that the brain makes up reality or that any perception is equally valid. No. Control is absolutely critical.
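One rough way to picture this prediction-and-correction loop - purely an illustrative sketch, not a model from Seth's lab - is a precision-weighted update in which a top-down prediction is repeatedly nudged by sensory prediction errors. If the sensory precision is set to zero, the prediction is never corrected and the "percept" stays wherever the prior put it - an uncontrolled hallucination:

```python
# Toy sketch of perception as "controlled hallucination": a top-down prediction
# is repeatedly corrected by precision-weighted sensory prediction errors.
# Illustrative only - not the specific model used in Seth's research.

import random

def perceive(true_value, steps=50, prior_mean=0.0, prior_precision=1.0,
             sensory_noise=1.0, sensory_precision=1.0):
    """Return the final 'percept' after repeatedly updating a prediction."""
    mean, precision = prior_mean, prior_precision
    for _ in range(steps):
        sample = true_value + random.gauss(0.0, sensory_noise)  # noisy sensory signal
        error = sample - mean                                    # prediction error
        # Precision weighting: the more reliable the senses, the more they move the prediction.
        mean += (sensory_precision / (precision + sensory_precision)) * error
        precision += sensory_precision                           # the belief sharpens over time
    return mean

print(perceive(3.0))                         # controlled: percept settles near the true value, 3.0
print(perceive(3.0, sensory_precision=0.0))  # uncontrolled: percept stays at the prior, 0.0
```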
Mason: It’s not just control, is it? It’s how it’s informed by everything else. Perception certainly feels like - in a very subjective way - an assemblage of a multitude of things. Yes, the data input from our sense organs is one of those things. Equally language and the way in which we then translate that, whether we’re talking to ourselves in some form to help us understand those sensory inputs. Then equally, memory plays a part in that. The way in which we’ve had previous relationships with the things that we’re sensing - whether it’s the perception of red or a memory of red - all of these things contribute to this conscious perception.
Is the brain organising this stuff into some sort of hierarchy to give us a conscious experience? Or, are we at the stage of scientific understanding where we just don’t know the relationship between language, the senses, and this equally mysterious thing called memory?
Seth: Right, that’s the business of neuroscience, basically.
Mason: Yep.
Seth: That’s the day job of what needs to be done. The views on this have spanned from one extreme to the other. From extreme views that everything in the brain is distributed and there’s no particular organisation into different functions. Then the other extreme is that everything is completely modular. You have one bit of the brain that does language, one bit of the brain that does perception and one bit of the brain that does memory, and so on.
The truth is going to be somewhere in the middle. Everything that the brain is capable of doing - whether it’s language, perception or action - is going to take a network. It’s going to be distributed across many different brain regions, probably in some loosely hierarchical form, although the amount of hierarchy is going to depend on what the specific function is. Something like vision might be more hierarchical than something like smell, for instance. The visual world is naturally more hierarchical in its organisation.
Then, not every part of the brain is going to contribute to conscious perception either. A lot of what the brain does may be just necessary for us to be conscious organisms but it may not shape the particular contents of what we perceive at any one time. This is the beauty of taking a pragmatic, neuroscientific perspective on it. These are all addressable questions.
It’s not easy, but we can basically go in and do experiments to try and figure out which bits of the brain or which processes and interactions in the brain contribute to our conscious experiences and which don’t. We can see what’s the best way to understand its overall organisation. What kind of network? Is it like the world wide web? Is it like a very strict hierarchical network? What innovations can we use from areas like machine learning and AI to think about the relationship between artificial neural networks and brain organisation? It’s a very exciting field to be in because the tools that we have are getting better all the time.
Mason: You are largely agnostic about where consciousness comes from but you do lean towards this idea of taking a physicalist approach to consciousness. In other words, it emerges from matter. Could you help explain to our audience what is the physicalist approach and how does it differ from some of the other theories and approaches to this thing called consciousness?
Seth: Right. Physicalism or materialism - I use the two relatively synonymously - is the view that there is a physical world out there. It’s made of stuff. We may not know exactly what that stuff is, whether it’s superstrings or quarks or whatever. Matter of some sort exists, and then other things that we observe in the universe are properties of that matter, organised in some way or other. We are quite comfortable with life being a property of matter organised in particular ways. Physicalism applied to consciousness is that consciousness, too, is a property of that physical universe or matter, organised in some way. It’s a very liberal perspective, really. It’s not making any specific claim about ‘this level of matter’ being important or ‘this particular kind of interaction’ being important, but just that this is a very useful picture in science, in general, to understand complex phenomena in terms of interactions in matter.
There are some other perspectives though. These come about because of the apparent mystery of how you could ever really explain consciousness this way. This goes right back to Descartes, and probably further back still - certainly to the Greeks and so on. Conscious experiences, because they are intrinsically subjective and private to us, just don’t seem to be the kinds of things that could ever be explained this way. There’s this intuition that consciousness is not and cannot be any kind of property of physical material interactions. This motivated Descartes and it also motivates the so-called ‘hard problem’ of David Chalmers - this idea that we all know that consciousness depends on physical processing in some way, but it’s just rather unclear and mysterious how and why that should be so.
In the face of that mystery, you can do a number of things. You can either stick with the physicalist programme and try to explain the various properties of consciousness in terms of material interactions in brains and bodies, seeing whether this sense of mystery begins to dissolve and eventually evaporate. That’s my bet.
Or, you can say, no. We need another perspective on how consciousness fits into our picture of the universe altogether. Here you have these other options such as idealism - which is that only conscious experiences exist and the problem isn’t how you get mind from matter but rather how you get matter from mind. Or you can have panpsychism which is the idea that consciousness is part of the fundamental structure of the universe. It’s everywhere, it’s ubiquitous and it has the same status as something like mass, energy or charge. If you assume that, there’s no longer any hard problem to solve because you’ve built consciousness in from the bottom. The problem with that, for me, is that it really doesn’t explain anything and it doesn’t lead to any testable predictions. There’s not much you can do with it.
Then there’s a whole variety of other interesting options on this menu of metaphysics. I think my favourite is mysterianism. I say it’s my favourite, I think it’s my favourite in the sense that it’s the weirdest. It’s the idea that there is a physicalist understanding of consciousness out there, but we humans are just too unintelligent and will always be too unintelligent to know what it is. We’re cognitively closed to this solution that nonetheless does exist.
Mason: Yeah, the wonderful thing about mysterianism is that if we were ever able to fully understand consciousness, would that confront us with the big question that our consciousness isn’t necessarily that complex? If our human brains can understand it, surely our human brains aren’t complex enough. There’s that weird paradox there, isn’t there?
Seth: I don’t think there is, actually. I think it’s already the case that our understanding of phenomena is not the property of any individual brain. It hasn’t been that way for a long time. I think it was this guy, Athanasius Kircher, in I think maybe the 17th century or 18th century. He was renowned for being the last person who knew everything. After him, it just became impossible for anybody to justifiably say they knew everything.
With the things we understand in science now, we use the word ‘we’ very literally to say that the scientific enterprise or the enterprise of science with philosophy and humanities, or science and culture in general have just reached an understanding that exceeds the capabilities of any individual. Certainly, individuals can understand a solution once it exists - there are people who understand quantum mechanics, potentially. Maybe they don’t. Maybe that’s the one counter-example. But certainly, we understand evolution. The development of that theory, although we associate it with an individual - with Darwin and sometimes with Wallace - of course, it was much more than just two individuals. I don’t think there is that paradox, actually. I think we might collectively reach an understanding of consciousness that will then make some sense to all of us.
Equally, and here’s another possible outcome that I do find a bit weird to think about, we may reach an understanding and explanation of consciousness that is consistent with what science should do in the sense that it allows us to explain the properties of consciousness, predict their occurrence, and control them through intervention. But we might not have any sort of intuitive sense of, oh yeah, that’s right. It has to be that way. I think that’s entirely possible, but that also wouldn’t be that odd.
Quantum mechanics, again, is extremely good at explaining, predicting, and controlling phenomena but no one has a good intuition about really, what the hell is going on. Of course, we can ignore it in our daily lives because we’re not generally dealing with quantum mechanical phenomena. But we are every day, dealing with the fact that we are conscious. If we have a scientific explanation that lacks that intuitive feel, we may find it dissatisfying. That may just be an unfair criticism of what scientific explanation should be expected to provide us with.
Mason: It certainly feels like physicalism does have a challenge that’s captured by this word ‘emergence’. If we assume that consciousness arises from complex matter, then we have to ask the question of at what point did matter become complex enough for - ping - consciousness to suddenly appear in reality. It’s the same thing with life. The environment of this Earth created the ideal circumstances under which suddenly - ping - life appears, it emerges. The same thing happens with consciousness. It’s almost captured by…I think it was Terence McKenna, who said “Give us one free miracle and we’ll explain the rest.” That one free miracle is that emergent piece. The ‘ping’ of consciousness into existence.
How do we confront that? How do we deal with that? How do we, I guess, tackle this thing called emergence?
Seth: I think you’ve hit a very important nail on the head there, by bringing up this concept of emergence. Yes, colloquially, that’s what a lot of people who take a physicalist perspective would say. They would say that consciousness emerges from physical interactions in some way and the question is how.
There’s a lot riding on this word ‘emergence’ there, as you rightly highlight. I will say two things about it. The first thing is that almost as with life, there will have been some point where you put a dividing line between things that are not living and things that are living. It does become a little bit arbitrary where you put that line. The same might go for consciousness, if we can think of consciousness as lying along a dimension, or along several dimensions. Of course, there will be things that are unambiguously unconscious, like a dead salmon. There will be things that are unambiguously conscious like you or me. We can imagine interpolating along that line and there may be no good reason to say any one particular place is the bright line between conscious and unconscious. There may also be a justifiable reason to say that, but I don’t think looking for that bright line is necessarily the right thing to do.
I think the more important thing to do, as you suggested, is to get clear on what we mean by emergence. Here, there’s the emergence over time in the sense of things emerging over evolution or over development of a particular organism. There’s also emergence in the moment. Here, I think, is where the most interesting questions are to be asked. In physicalism, that’s usually the sense that is meant. You have all of these neurons firing and synapses doing whatever synapses do, connecting and reconnecting. Conscious experience somehow emerges out of that soup of electrochemical activity at a particular time, and at other times, under anaesthesia, it doesn’t. This is a different meaning of the word ‘emergence’.
Again, there’s a sort of spooky way to think of emergence which is that there is some higher-level property at which new laws of physics come in or something, that is in principle not explainable by what’s happening at lower levels. We can think of this as a ‘strong emergence’.
Then there’s also a more scientifically legitimate form of emergence called ‘weak emergence’ which just tries to characterise how the whole of a complex system relates to its parts. If you think about flocks of birds…where I live in Brighton, on a winter's evening you often get these amazing murmurations of starlings; these huge flocks of starlings that wheel around the ruins of the Old West Pier. Then at a certain point, they settle down to roost for the night. If you look at these wheeling flocks, the flock seems to have a life of its own. It seems to have an organisation, autonomy, and dynamics that exceed the dynamics of the sum of the individual birds. Of course, we know that really, there are just individual birds. There’s nothing spooky going on here, it’s just interactions between the birds playing out in an interesting way that makes the whole more than the sum of the parts.
One of the things in my research group that we’ve been doing much more lately is trying to find a good mathematical way of understanding these systems. If we can characterise how a flock of birds appears to emerge from the individual birds, maybe we can use the same approach to try to understand when we are conscious, what kind of global brain-state is emerging from the firings of individual neurons in a conscious state but not in an unconscious state. Getting more precise about emergence in this sense, I think, will give us a lot more clarity on how that word is used in this physicalist picture of consciousness in the universe.
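As a rough illustration of weak emergence in this sense - a toy sketch only, not the mathematics Seth's group actually uses - here is a minimal Vicsek-style flocking model. Each simulated bird follows a purely local rule (copy the average heading of nearby birds, plus a little noise), yet a global order parameter measuring how aligned the whole flock is rises over time:

```python
# Toy Vicsek-style flocking model: each "bird" only copies the average heading of
# its neighbours (plus noise), yet a global order parameter - how aligned the whole
# flock is - rises from these purely local rules. Illustrative sketch only.

import cmath
import math
import random

N, RADIUS, SPEED, NOISE = 100, 0.15, 0.02, 0.3

birds = [{"x": random.random(), "y": random.random(),
          "theta": random.uniform(-math.pi, math.pi)} for _ in range(N)]

def alignment(flock):
    """1.0 means every bird flies the same way; near 0.0 means headings are random."""
    return abs(sum(cmath.exp(1j * b["theta"]) for b in flock)) / len(flock)

for step in range(201):
    if step % 50 == 0:
        print(f"step {step:3d}  flock alignment = {alignment(birds):.2f}")
    new_thetas = []
    for b in birds:
        neighbours = [o for o in birds
                      if (o["x"] - b["x"]) ** 2 + (o["y"] - b["y"]) ** 2 < RADIUS ** 2]
        # Local rule: adopt the neighbours' mean heading, plus a small random wobble.
        mean_heading = cmath.phase(sum(cmath.exp(1j * o["theta"]) for o in neighbours))
        new_thetas.append(mean_heading + random.uniform(-NOISE, NOISE))
    for b, theta in zip(birds, new_thetas):
        b["theta"] = theta
        b["x"] = (b["x"] + SPEED * math.cos(theta)) % 1.0   # wrap around a unit box
        b["y"] = (b["y"] + SPEED * math.sin(theta)) % 1.0
```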
Mason: Hearing you say that makes me think that we shouldn’t be talking about consciousness in the singular, but perhaps talking about consciousnesses. In other words, maybe there’s not just one form of conscious experience that deals with everything that becomes us. Maybe there’s a multitude of consciousnesses. One that deals with this memory thing, one that deals with how we process senses from the outside physical world, and they all interact to create this singular - or hallucination, at least - of a singular perception of this thing called reality. Would that be fair? Or do you think that, in actual fact, there’s one integrated thing; it’s all coming from one single point?
Seth: The disappointing answer is of course that it depends. Again, it’s a very interesting question. There’s an assumption among many that consciousness is necessarily unified. There’s one sense in which you’re absolutely right, and I think it’s less controversial, which is that there’s more than one way of being conscious. The sort of adult human example of consciousness is just one point in a much larger space of possible conscious minds, whether they’re other humans or other species, or maybe other artificial systems - who knows?
There are definitely many ways of being conscious, but it gets more interesting and more tricky when we think about the apparent unity of consciousness as it unfolds for every one of us. Is that a necessary condition for consciousness; for a conscious agent?
There’s been a lot of debate about this over the decades. Of course, experiments on so-called split-brain patients really challenge this idea. This was an operation that was never common, but it was done not infrequently in the 50s, 60s and 70s - maybe up to the 80s - for people who had very very intractable epilepsy. For repeated epileptic seizures, one of the treatments, when medication failed, would be to chop most of the brain in half. Usually not all of the brain. You just chop through this corpus callosum which is this band of fibres that connects the two cortical hemispheres. Observations from people like Roger Sperry, Michael Gazzaniga, Joe Bogen and others seemed to suggest that after this operation, you now had two independent conscious agents, but only one of them, typically, would have access to language and be able to tell you stuff. If you showed visual input to one hemisphere, it seemed to determine the behaviour of that hemisphere independently of what was going on in the other.
This literature has been developing and it is, unsurprisingly, more complicated than that, but it’s still very much a live debate. Are there in practice examples where the conscious unity of a human being is split? Even then, even if you take that on board and say, “Okay, let’s accept that that can happen,” then each remaining consciousness might still be integrated itself. It might have its own unity. There’s a deeper philosophical question here which has been discussed in super interesting detail by people like my colleague, Tim Bayne, in Australia, about the unity of consciousness. What do we mean by this? Is it contingent just on our own human experience? Or is it a more fundamental property of experience in general?
One of the more popular, but also controversial theories of consciousness that’s out there - the integrated information theory - builds this in from the bottom up. It says this is an axiom of consciousness. It is integrated. It is unified. Then you have questions about things like, what’s the granularity of that unification? If my whole brain is supporting a unified conscious experience, does that mean that smaller parts of my brain can’t support independent conscious experiences?
These questions are all difficult and at the moment, probably intractable to answer with experiments, but I do think they’re interesting to contemplate. When you think about consciousness, it’s always worth challenging our intuitions about what’s necessary for consciousness. What are the possible ways in which consciousness could be expressed, without assuming that how things seem to us is how they actually are?
Mason: Is that partly the great problem with brain scans? The wonderful thing about a brain scan is that you can provoke certain inputs into the human body and you can see parts of the brain light up. You can cross-correlate that this sensory input triggers that part of the brain. In reality, we don’t actually know the location of anything. We know where the bits light up but we don’t know the location of a singular memory, for example. I can’t scan your brain and go, “Oh yeah, there’s the location of Anil’s memory of eating his breakfast this morning, located exactly in that part of the brain.” Brain scans are metaphorical in so many ways, aren’t they? How do they confuse our understanding of what’s really going on when we’re looking at things like brain scans?
Seth: Well, you’re right there. It’s interesting to think about whether they’re metaphorical. They’re certainly very very indirect. They can be misleading, but they’re also amazing tools.
Mason: Of course.
Seth: Let’s not undersell them. One of the things that has really catalysed a lot of neuroscience in general, but certainly our understanding of consciousness, is this ability to look inside a living human brain while people are having experiences and reporting them. This is just a game-changer, or it was certainly a game changer in the 90s.
The technology is limited. It’s limited in the sense that we don’t have a good brain imaging technology that gives us the three things that we would like to have at the same time, these being spatial resolution - where things are happening; temporal resolution - when they are happening; and coverage - we want to be able to look at pretty much the whole brain at once. We can have different combinations of these things but not all three of them together. That makes it almost inevitable that our analyses based on brain imaging data are going to be oversimplified. That’s really no surprise. The brain is incredibly complex as a physical system. It has 86 billion neurons and a thousand times more connections. We’re not going to be able to record an image of the activity of every relevant part of the brain at one time. I think the challenge here is both to develop improved imaging technologies that allow us to get closer to this combination of seeing what, when and where in the brain, and also to improve our analysis methods.
With most brain imaging - certainly when you see brain scans in the media - usually you just see a little hotspot. There’s a red patch in your frontal cortex, or in your amygdala if you’re afraid of something. Of course, what you’re really seeing there is not a bunch of neurons firing and everything else quiet. You’re seeing a very small percentage change in the oxygenation of blood flowing around that particular region, which is related to underlying neural activity in complicated and still not completely understood ways. The temptation to be overly localisationist and say, “This whole property of the organism relates to just activity in this one area because it shows a bit more blood flow” is definitely an oversimplification. There’s now much more attention being paid to networks in the brain: how we measure things like information flow between different brain regions, rather than just the activity of this region or that region.
Mason: I guess what I’m pushing towards is, is consciousness like a property, or is it closer to something like gravity? Is it some form of field?
Seth: I think we’re again in danger of trying to address the question of what consciousness actually is. Is it a field? Is it a property of a small bundle of neurons, or a property of a network? I think it’s just, honestly, a bit too early to make those kinds of claims.
What we know is that human consciousness depends in specific ways on the brain. Some of these facts are really striking. The cerebellum, which is this mini-brain that hangs off the back of your cortex, seems to have nothing to do with conscious experiences, yet it has about three-quarters of all the neurons in your brain. I always find that surprising. Three-quarters of your brain cells have basically nothing to do with consciousness. The methodology that people have been following for a long time is to look for the so-called ‘neural correlates’ of consciousness. This is a very, again, pragmatic strategy. It just tries to identify relationships between things happening in the brain and the kinds of conscious experiences that people report having. It’s limited because we all know those correlations are neither causal nor are they necessarily explanatory.
There’s this fantastic website - I forget the name now - which has weird correlations between things like the price of cheese in Wisconsin and the divorce rate in France in the 1960s. They correlate perfectly - something like that. Of course, the fact that there’s this correlation tells us nothing about the real world. The challenge in the neuroscience of consciousness is to identify not only brute correlations but correlations that have predictive and explanatory power. We want to know why this brain region or pattern of activity goes along with this particular kind of conscious experience. That’s the direction this field is moving in. I think that’s a very productive way for it to develop.
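As a toy demonstration of that point - with made-up numbers standing in for the cheese prices and divorce rates - any two unrelated quantities that both happen to trend over time will correlate almost perfectly, which is exactly why brute correlation, on its own, explains nothing:

```python
# Two made-up, unrelated time series that both drift upward over the 1960s end up
# almost perfectly correlated. Correlation alone carries no explanatory power.

import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

years = list(range(1960, 1970))
cheese_price = [2.0 + 0.30 * t + random.gauss(0, 0.10) for t in range(len(years))]  # fictional
divorce_rate = [1.0 + 0.10 * t + random.gauss(0, 0.05) for t in range(len(years))]  # fictional

print(f"correlation = {pearson(cheese_price, divorce_rate):.2f}")  # typically around 0.98
```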
Mason: Historically speaking, the brain - not necessarily consciousness - but the brain has always been understood in relation to human metaphors. We used to talk about the ‘cogs turning’ when we were thinking. Now we talk about the ‘processing of ideas’. I’m processing what you’re saying, Anil. It’s a machinic metaphor. We’ve got to this stage where the dominant way in which to understand the brain is like some form of computer. That three pounds of grey gloop inside of the skull is a computer. Whatever consciousness is must be some form of a software programme running on that computer. Does that sort of metaphor limit our understanding of the possibility of what both the brain and consciousness could really be?
Seth: Yes, I think it does. We can’t escape the use of metaphors. There’s a lovely book by Matthew Cobb called ‘The Idea of the Brain’ which gives a beautiful history of the metaphors that people have used to try to understand this incredibly complex and recalcitrant lump of tofu-like stuff inside our skulls. The metaphor of the brain as a computer has been fairly dominant for certainly the last half of the 20th century and into the 21st.
I do feel that it’s on the way out. It’s been on the way out in some communities and some philosophical perspectives for a very long time. There are ways in which the brain is very unlike a computer, at least the sort of computers that we have surrounding us now. We talked about memory earlier. One reason you can’t find the part of my brain that has the memory of what I had for breakfast is that biological memory is not like computer memory. It’s not like a file system, where the point is, if you save a file and then load it back up again, it’s the same file. With memory for humans, the more often we remember something, the less accurate it becomes. Memory is always an act of recreation and regeneration. There are just endless differences between what computers do and what brains do. Some of these get very fundamental. Here’s where the metaphor can start to be limiting.
I think the main reason it can be limiting is with this easy thing to say that you hear all the time, that the brain is, of course, processing information. The brain is an information-processing device of some sort. This is said very freely as if it’s entirely obvious. Computers are also information processing devices. That much is true. That is what they were built to do and that’s how to understand them. Is the brain an information processing system? That’s actually smuggling a lot of strong assumptions in about what brains do. What is information? What is processing?
When people talk about information processing, they usually also imply that you can, in principle, build a computer - an actual information processing system - out of anything. We build them out of silicon now because it works. It’s kind of cheap and we know how to do it. The principles of computation apply to anything. You could build a computer out of tin cans and bits of string if you took enough time. Of course, Babbage built his computer out of cogs.
The human brain is made of neurons and synapses. It’s this chemical machine as much as it’s an electrical machine. It’s a very open question, I think, of whether consciousness depends on the stuff that brains are made out of or whether it’s substrate independent in the way that the information-processing metaphor encourages us to assume.
It’s just not clear to me that consciousness is either one or the other. My intuition is that it actually does depend on the stuff. In the brain, unlike in a laptop computer, there is no sharp distinction between what we might call mindware and wetware.
A very fundamental principle of most computers - this can be blurred too - is that the hardware is what it is and then you can run different software programmes on it. That determines what the computer does and how it processes information. In the brain, there is just not that clear separation. Neurons fire. Every time a neuron fires, the network changes a bit. Neurons themselves are incredibly complicated little biological machines, made out of other things within neurons that are also very complicated. Where do you draw the line? It’s not clear there’s a good reason to draw the line anywhere in particular.
What metaphor do we move onto? The computer metaphor has done a lot of great work, too. It allowed us to think, most importantly, about the reality of internal operations. The brain is not just a stimulus-response device where there’s a pre-canned response to every input. A lot of interesting dynamics happen in the middle. It’s really unclear whether we should think of it in terms of input - stuff happening - output, at all. It’s a very coupled, embodied, embedded system. The computer metaphor at least moved psychology and neuroscience to the stage where we think about internal dynamics as very very relevant.
What’s next? The more we now think about cognition, perception, and brain function as a property of networks and interactions between different brain regions, metaphors like the internet, cloud computing and edge computing might actually be quite useful. They allow us to think about how functions can be realised by distributed networks rather than by very localisable processing elements.
Mason: That word ‘function’ reminds me of another ‘ism’ - ‘functionalism’ - which informs a lot of this understanding of how we can see the brain metaphorically. By hearing you say that, I know you’re going to upset, maybe, 50 percent of our audience who are hoping that they can have substrate-independent minds. In other words, they can upload their mind. What is your thought on Nick Bostrom’s idea of mind uploading or substrate-independent consciousness, or minds that can live in some form of server rack somewhere, or in silicon?
Seth: Right, these things go together, don’t they? There’s a whole collection of ideas that all hang together in a way that a lot of people do find appealing. If you think of the brain as a computer, this underwrites this view of functionalism that you’ve just mentioned. Functionalism is a sort of a subset of physicalism. It says that consciousness is a property of physical systems, but really it’s a property of the functions implemented by physical systems, in a substrate-independent way. That’s what functionalism says in the philosophy of mind. It doesn’t matter what the brain is made out of. It matters what it does. How it transforms inputs into outputs. If you buy really strongly into the computer metaphor then you can buy into functionalism and then you can even buy into this idea that, well, if I get the software or mindware right, then I can upload my conscious experience into the cloud and live forever in some sort of digital immortality.
What’s interesting to me is that these ideas also get bound up with things like the singularity - this idea that we’re right at this point in time, at this exponential tipping point where artificial intelligence and computational technology are just going to get us towards general AI. Then artificial consciousness might happen as well because there’s this other assumption that consciousness is to do with intelligence - which I think is wrong, too. I think it’s much more to do with being alive than being smart. At that point, yeah, we’ll be able to upload ourselves.
I’m very very sceptical of all this. I think this will not come as a surprise. It’s not that I think it’s in principle impossible or wrong. I just think there’s no good reason to think that it is possible. There’s certainly no good reason to think that it’s just around the corner. In some sense, if I transplanted or reproduced every molecule of your brain somewhere else, then I’d reproduce your consciousness, too. That’s just a statement, again, that follows from my belief in materialism: that consciousness is fundamentally a property of what there is in the universe, organised in a particular way.
But if I scanned your brain in enormous detail and then uploaded its structure to a very fast computer, and ran it as a simulation, would there be a conscious Luke inside that server rack? I don’t know. It depends on so many assumptions about what’s necessary for consciousness. That it is substrate independent; that it is purely a matter of information processing. I see no good reason to believe that that will definitely be the case. It may well just be that you have a very very good simulation of Luke’s brain running on a computer, but there’s nothing subjective or intrinsically conscious going on for that simulation.
Mason: Yeah. It also relies on an assumption that all of those metaphorical things happen to also be true things. The question is, we still don’t necessarily know how to locate - as we said previously - a lot of these things that give rise to consciousness. There is another burgeoning idea that what consciousness is or what our perception of consciousness is doesn’t exist necessarily in the brain but it could exist out there.
I know your friend, David Eagleman, has a wonderful story, or metaphor, for this, which is similar to finding an FM radio. Say a prehistoric man finds an FM radio on the floor and they pick the FM radio up, and they turn it over and take off the back panel, and mess with the wires inside the radio - they understand that, oh, by messing with the wires the voices coming from the radio change. If I turn the dials, the voices also change. So I assume that whatever I’m doing, physically, to this FM radio - this object - is causing this change. At no point would they assume that in actual fact, the voices are being transmitted by radio waves that are being received by the radio.
What’s your relationship, I guess, with some of these other ideas that perhaps consciousness isn’t something that comes from the physical human brain, but is actually transmitted and received by the human brain? The brain is not so much a computer, but in actual fact, the brain is some form of antenna. What we’re going to be looking at as our new metaphor isn’t machinic, but has more relationship to quantum; some form of entanglement with something out there that we don’t fully understand yet. I know we’re touching on some problematic issues here, but I just wondered about your initial response to that burgeoning idea.
Seth: I think I have two responses. The first is that metaphors are useful in terms of their explanatory power.
Mason: Yep.
Seth: How they help us understand the system. That’s why the computational metaphor has been historically quite useful, up until a point. This metaphor of the brain as a sort of antenna that’s receiving consciousness from some broader field of consciousness that’s out there doesn’t, for me, explain very much. It doesn’t add a lot to making sense of the relations that we can see between brains and consciousness.
The other initial reaction is that this is why, from the more standard perspective - in which the brain is where the action is - it’s the importance of going from correlation to explanation. The people who are really puzzled by the FM radio and start pulling it apart, if they reach the conclusion that the voice is in the radio, it’s because they haven’t developed a very good explanation for how radios work. They clearly don’t understand what the bits and pieces in radios are actually doing.
Even in neuroscience, there is an interesting and difficult issue about which parts of the brain are involved in consciousness, purely as enabling functions; what you might call ‘sophisticated on/off switches’. Broadly, your heart needs to be beating in order for you to go on having conscious experiences for very much longer. Your brain needs to have oxygen, but that doesn’t mean that’s where consciousness is happening. There are parts of the brain such as deep in the brain stem where if you have damage, you’ll lose consciousness forever. Does that mean that your consciousness - or the generating mechanisms - are located down in the brain stem? Maybe not. It just means that part of the brain is necessary for the rest of the brain to enter the states of activity that are generating or are identical to conscious experiences. We need to go from correlation to explanation. I think that will tell us which metaphors are the most useful.
Mason: Yeah. The title of your book is ‘Being You’. That’s really about an exploration of you or me, and what our self-experience of consciousness is. But you mention the word ‘relational’ there. Could conscious experience actually be relational? I.e. between people. Should the title of the book be less ‘Being You’ and more ‘Being Us’? Is the only reason I can understand and acknowledge that I have a form of self-consciousness because I am able to recognise consciousness within you? In the same way that when we anthropomorphise animals, we can recognise something that looks like consciousness in an animal. We have a relational experience with them.
My ability to, at least, have the empathy to understand you as a conscious being means that that gives me the confidence that I am also a conscious being. There’s almost a feedback loop between you and me right now that is giving rise to our understanding of self-consciousness.
Seth: There’s a lot there. I think that in some ways, I agree with that, but maybe not in all ways. Certainly, our normal experience of being a human being is partly dependent, I would think, on our social experience and social interactions. I experience myself as a distinct individual with my memories and my actions, partly because my brain has the ability to infer your mental state and your actions; what might be in your mind. In that sense, that aspect of what it is to be me is refracted through the perceived minds of others. It may even be that without that social dimension to human consciousness, it may have never arisen as a question for us as humans. Why am I conscious? Who am I? To even ask those questions may require this kind of social immersion or social context.
There are two ways in which I think this idea can be overextended. One is to say that there’s an expression of consciousness that is somehow collective. It’s somehow supervening over many individuals at the same time, in the same way that my conscious experience is unified - back to our earlier conversation, I think it’s unified - based on the activity of many neurons. Could there be a group consciousness? Is there some conscious entity now that is somehow the sum of you and I both together? I think probably not, though both your consciousness and mine are, of course, being affected very substantially by this interaction.
The second way I think the idea can be overextended is that it may not go all the way down. There are certain aspects of our conscious selves that I think are dependent on this social context, but not all of them. It seems to me that the experience of, say, pain or fear may not depend on us having evolved or developed in a social context. There might be some experiences that we can have, whether they’re of the self or of the world, that do not require a social component.
Mason: Yeah, I was just thinking about when you experience pain, how that feeling of pain can sometimes be projected back onto me. You cringe when you see someone hurt or you have a visceral response to the hurt of another person.
Seth: Yeah, it can be. So this is also true. If it’s true that I can experience pain or there can be an experience of pain without a social component, it can also be the case - and I think it is the case - that experiences of pain, when they happen, can still be affected by social interaction. Indeed they are.
There is this phenomenon, I think you just described it, called ‘vicarious pain’. If you show people videos of somebody accidentally smashing their thumb with a hammer when they’re trying to hammer a nail into a piece of wood - even just describing it - some people will just feel something like pain. This varies quite a lot across individuals. It certainly seems that we can, in some circumstances and to some extent, feel the pain of others. My contention is only that we may also be able to feel pain without a social component - and that non-social organisms will likely also be able to experience pain.
Mason: Yeah. The reason I’m asking those questions is that I’m slowly but surely bringing us to this idea of whether we’ll be able to acknowledge consciousness in certain forms of non-human entities. Not so much animals - because we’ve certainly updated our understanding of what animal consciousness is - but more to do with robots. Will we ever understand that there is consciousness that has the possibility to emerge from complex systems of data?
I know you look quite heavily in the last chapters of your book at what a robotic consciousness may or may not look like to us. What is your understanding of how we should see artificial intelligence through the lens of something like human consciousness?
Seth: I think there’s a lot of danger in this area.
Mason: Yeah.
Seth: And a lot of biases that we humans bring to the table.
Mason: Yeah.
Seth: In what we think is important for consciousness, for ethics and so on. The first bias is that consciousness is a function of intelligence.
Mason: Yep.
Seth: You see this quite a lot in the artificial intelligence community. There’s this assumption that once AI reaches a threshold, the lights come on and you have a sentient system as well as an intelligent system.
Mason: That emergence piece, again. If we just have enough silicon - ping - consciousness again.
Seth: That’s right. That’s right. Why do we think that? I think there’s definitely a residue of this anthropocentrism where we think we’re smart, we’re intelligent and we’re conscious, so the two have to go together. I think this is very dangerous on both sides.
Firstly, there’s no reason to believe that simply making something smarter will just make it conscious. We may overestimate artificial consciousness that way, and then we may underestimate it in other living systems that don’t seem to be that smart by our questionable human standards of smartness.
Another - I think for me, a very important issue - is that we don’t know what it would take to build a conscious robot or a conscious machine. There’s just no consensus on what the minimally sufficient conditions would be. It really does depend on where you stand on a lot of these irresolvable, currently metaphysical debates about substrate independence or functionalism. Anyone who claims to be really confident about the answer to those questions is just being a little bit overconfident.
Since we don’t know what it would take, we also don’t know what it would not take. My view, as we’ve been talking about, is that I do think substrate matters. My intuition is that being conscious depends, in some way, on the stuff we’re made out of, because we don’t have this sharp distinction between the mindware and the wetware.
But I might be wrong about that. Maybe you can build a conscious machine out of silicon. I would worry that we might build a conscious machine by accident. We don’t know what it wouldn’t take and so it might just happen, despite the currently dominant philosophical view. That would be an absolute ethical catastrophe because we’d have built things capable of having experiences. Those experiences might be very aversive experiences. They might be negative experiences.
Furthermore, they might be negative experiences that we human observers cannot even recognise. When we look at another animal, if it’s sufficiently close to us on the tree of evolution, we can usually detect - though sometimes we choose to ignore - whether it’s a positive or negative experience going on for that animal. If we’ve just got a robot or a whirring box on a table, we have no intuitive guidelines for understanding what kind of experience might be going on. To then be building these machines for whatever purpose is just an extremely unethical thing to do. It’s a very undesirable situation to be in. I’ve yet to hear a good reason for attempting to build a conscious machine. I really don’t think we should be doing it. You just hear, “Oh, wouldn’t it be cool to build a conscious machine?” No. Building things because they’re cool is generally a very bad reason for doing anything.
We may also slip up on the other side, that we build robots that give us the strong impression of being conscious but which may just not be at all. These are anthropomorphic or anthropomimetic robots that give the strong impression that they’re conscious, but we just know they aren’t. That’s going to equally distort our moral and ethical perspectives because we find that our human instinct is very much driven by the appearance of something that is similar to us.
The TV series ‘Westworld’ deals with this in a very dark way. You have these artificial machines or creatures that the guests of this park are told not to worry about because they’re just machines. But of course, you’re still engaging with something that gives the appearance of having experiences. Of course - without giving too much away for people who haven’t seen ‘Westworld’ - whether they really do have experiences or not becomes a very interesting aspect of it. In ‘Ex Machina’, again, there is a very similar plotline going on through that film.
The dangers are in all these different directions. We don’t know what it wouldn’t take to build a conscious machine. It’s certainly not just a function of intelligence, and we’ll end up in complicated situations where we in human society are finding it increasingly difficult to distinguish the artificial from the real.
Mason: Despite that, there’s something very comforting in what you’re saying there. The sorts of artificial intelligence that we commonly understand - server racks, computers, algorithms, data and information - might not give rise to something that’s like consciousness. But artificial life - lab-grown biological robots or entities - maybe that’s the thing that will have some emergent property that has similarity to what we commonly understand as consciousness.
Seth: I think that’s much more likely. I mean almost a priori, it has to be more likely. When you have a neurotechnology, like we have brain organoids now that are emerging technologies, they are at the moment quite simple. But they are made out of the same stuff. There’s one big difference that you already don’t have to worry about - or actually, rather, it gives us something else to worry about.
In the short term, I’m more concerned about brain organoids developing some primitive level of awareness than I am about some complex AI in a server rack developing artificial consciousness. Organoids are made out of neurons, and the technology is developing rapidly. It’s an interesting technology, but again, I don’t think we should be that gung-ho about developing more and more humanlike and more and more complex organoids, even if there are some good medical reasons for doing it.
I think we just need to be mindful of whether we’ve crossed a line, or are approaching one. We want to think about these things pre-emptively. Are we approaching a point where we might seriously have to be concerned about whether these neurotechnologies - synthetic lifeforms or synthetic neural systems - have enough similarity to the systems related to consciousness in humans that we should worry?
I was at a panel of the US National Academies of Sciences, Engineering, and Medicine last year, which was trying to develop some regulatory and ethical frameworks for research on organoids. The conclusion was that in their current state, there’s nothing to worry about. The organoids being developed at the moment are very simple collections of cells and so on. I think that might be right, but I think it’s worth revisiting that really quite frequently, given the rapid development of this kind of technology.
Mason: The last time I heard people dismiss certain things because they’re just assemblages of cells was in the burgeoning and ongoing debate, especially in the US, about zygotic personhood. When does life begin?
Currently, the debate is moving towards how much cellular material has formed as a foetus, and at what point the cut-off lies. You’re right that with organoids, in some cases, there’s more cellular material there than in zygotes that can be aborted at certain stages of pregnancy. If organoids did develop consciousness, wouldn’t that teach us so much? Isn’t that kind of the hope, in a weird sort of way? If this amalgam of cells were able to express some kind of self - whatever sort of self that would be - and if we were able to understand that expression, wouldn’t that teach us, and you as a scientist, so much about consciousness?
Seth: I think potentially, yes. There’s a massive problem here though, which is, how do we know?
Mason: Yeah. Why does it matter?
Seth: Well it certainly matters because when we’re just in this state of almost guaranteed epistemic uncertainty of not knowing, I think we should err on the side of caution with respect to all this. As soon as something is a subject and is having conscious experiences, we have an ethical responsibility to it as well. That changes the game.
Neuroscientists have, since the beginning of the field, been doing experiments on animals - other conscious subjects. There is a lot of, I think, really important debate about whether this should be done, and about the limits, restrictions and conditions under which it is or is not justifiable. But of course, we can learn a lot from non-human animal experiments.
The same would be true of organoids, but with this big proviso: because they’re just bundles of cells - maybe eventually equipped with sensations and possibilities for action - there’s going to be, intrinsically, a great deal more uncertainty about the conscious state of an organoid than there is for another organism.
Mason: The interesting thing about trying to scientifically understand consciousness is that there’s a multitude of ways to do so. It’s not necessarily by studying human individuals as they are now. Sometimes it’s by looking at individuals who are either on drugs, like psychedelics, or in a vegetative state. What can we learn about consciousness from individuals in these very unique positions?
Seth: I think there’s a lot to be learnt in these conditions compared to something like organoids or artificial intelligence as it is now. The two examples that you picked are particularly relevant, I think, for understanding consciousness.
To deal with the vegetative state first, this is a very unfortunate, difficult condition that can happen after severe brain damage - either after a blow to the head or through loss of oxygen to the brain. People in a vegetative state still go through sleep and wake cycles. They still wake up and go to sleep again, but there just doesn’t seem to be anybody there. There seems to be no consciousness going on. This is usually assessed from the outside by neurologists. They’ll see whether people respond to commands or make any kind of voluntary movements or actions. When these things are missing, a vegetative state is diagnosed.
There are two reasons this is very very relevant. One is that in terms of medical practice, it’s of course critical to diagnose this condition accurately. You wouldn’t want to miss people who are having conscious experiences but are diagnosed as not having them because they can’t express themselves outwardly through behaviour. This is a domain where consciousness science has already made a massive contribution to medical practice. By looking inside the brain with different kinds of brain imaging methods, we can detect individual cases where consciousness remains. Cases of so-called ‘residual consciousness’ that are just not apparent when looking from the outside. This is already changing clinical medical practice.
In terms of understanding consciousness, what you have in the vegetative state is a separation of what we would call physiological arousal from conscious state. Normally, when we wake up, we are conscious - the two go together. When we are asleep, interestingly, we can be either unconscious or conscious, as when we’re having dreams. But normally, when we’re awake we are conscious, apart from in this condition of a vegetative state. It gives us an opportunity to figure out which brain and bodily mechanisms are specifically involved in being aware, as opposed to merely being awake.
The other case is psychedelics - these are two examples among many manipulations of consciousness that are very informative. Psychedelics, of course, spent a long time in the wilderness as a legitimate area of scientific research, or of clinical practice, too. There has recently been a lot of excitement about the psychedelic space and its potential for treating psychiatric and mental health problems such as post-traumatic stress disorder, depression and so on.
As with the vegetative state, there’s also a way of using psychedelics as a tool for understanding consciousness. Briefly, you give somebody a small pharmacological nudge. All the classic psychedelics act through a common pathway on the so-called ‘serotonin 2A receptor system’. You change how that behaves, and you radically change people’s conscious experiences. This is informative in very many ways. Firstly, it shows that the normal way in which we have our experiences of the world and the self is not the only way.
There are experiences of ego dissolution, where people’s experience of selfhood becomes much more blended with the rest of their experiences. Sometimes people will report similar things in really focused states of meditation, sensory deprivation and so on. But psychedelics are a very reliable way to turn these experiences on and off. Then, of course, we can look at what’s happening in the brain that might explain these changes in conscious experience. Because it’s something that is really quite controllable - you can give somebody a substance and then just track what happens - it’s a very powerful method, I think, for understanding the brain basis of consciousness.
Mason: As we’ve been speaking, throughout this entire conversation, the one thing that’s incredibly clear to me is that consciousness is a real problem. Some people have called it a ‘soft problem’, some people have called it a ‘hard problem’, but to you, Anil Seth, it is a ‘real problem’. What do you mean by that, and why do you feel that’s ultimately the best framework through which to understand consciousness?
Seth: Ah, thank you for raising that term. It brings us back to where we started, I think, about what is the best way to understand consciousness and what is the best way to develop an understanding of consciousness.
Mason: Yep.
Seth: There’s this distinction between the ‘hard problem’ of consciousness and the ‘easy problems’ - though I quite like what you said, the ‘soft problems’; the ‘hard and soft problems’ is another way to put it. David Chalmers very influentially distinguished the ‘hard problem’, which is the problem of how and why any physical interactions should give rise to, or be identical to, any conscious experience whatsoever. That’s the problem in the face of which we may leap to these radical alternatives like panpsychism, idealism, or mysterianism.
The ‘easy problems’ are all those problems about how brains do what they do for which you just don’t have to mention consciousness. They’re not easy in the sense of being easy to solve, but they’re conceptually easy in that there’s no big mystery: a complex mechanism of some sort should be up to the job. This is a very useful distinction, I think, in a lot of work on consciousness. It really isolates this sense of mystery about consciousness. But ultimately, I’ve found it not the most useful way for me to think about the problem.
Consciousness exists. Conscious experiences are real, and they depend in intimate ways on the brain and body. For me, the real problem is what we’ve been talking about throughout: the challenge of how to explain, predict and control the properties of consciousness. Why experiences have the character they do. Why an emotion is the way it is. Why an experience of redness is the way it is, and not some other way. Why experiences of free will are the way they are and not some other way - in terms of things happening in brains and bodies.
My intuition is that we make progress by addressing this real problem. Really, it’s not a new idea, either. This is what neuroscientists have been doing for a long time. There’s a whole branch of work called neurophenomenology which is very much along the same lines, trying to build explanatory bridges between neural mechanisms and aspects of consciousness.
The more we do this, my hope is that the apparent mystery of the ‘hard problem’ begins to fade away and dissolve. The question is, will it dissolve entirely, so that there is just no remaining sense of mystery about consciousness being a property of particular kinds of physical systems? Or will there still remain this residue of mystery, this piece of the hard problem that we can’t quite get rid of? If that is the case, then why is that? Is it because there indeed is some big metaphysical gap between consciousness and the rest of the world? Or is it because we somehow expect more of an explanation of consciousness than we expect of science in other domains? I’m interested to see how this plays out.
The exciting thing for me is that the view is changing. I’ve been in this game for 25 years now. The way I think about consciousness - and I think the way the field, my colleagues, inspirations, mentors and students think about consciousness - is changing. As for where we’ll be in another 20 years: I really don’t think we’ll have had this eureka moment where suddenly it’s like, “Oh yeah. I’ve discovered the structure of consciousness and here it is. Here’s the answer.”
But I do think we will see the problem in a different way. Even if some residue of the ‘hard problem’ remains, we’ll have understood so much more about how and why we experience the world and the self the way we do, and we’ll have had the opportunity to develop new technologies, and new interventions in neurology and psychiatry. The side benefits, if you like, of the ‘real problem’ approach are just enormous. It’s a good place to be. There’s a lot of excitement, we’re not trying to prematurely come up with a grand answer just to get rid of this uncomfortable sense of mystery. Live with the sense of mystery. It’s fine. It’s alright.
Mason: Well Anil Seth, on that incredibly hopeful and, I guess, mysterious note, I want to thank you for being a guest on the FUTURES Podcast.
Seth: Oh thank you, Luke, it’s been a real pleasure talking to you. Thanks very much.
Mason: Thank you to Anil for sharing his thoughts on the range of philosophical debates related to our understanding of human consciousness. You can find out more by purchasing his new book, ‘Being You: A New Science of Consciousness’, available now.
If you like what you've heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.
More episodes, live events, transcripts and show notes can be found at FUTURES Podcast dot net.
Thank you for listening to the FUTURES Podcast.
Credits
Produced by FUTURES Podcast
Recorded, Mixed & Edited by Luke Robert Mason
Assistant Audio Editing by Ramzan Bashir
Social Media
Twitter: @FUTURESPodcast | #FUTURESPodcast
Instagram: @futurespodcast
Facebook: @FUTURESPodcast