Future Superhuman w/ Elise Bohan
EPISODE #66
Episode Recorded on 22 September 2022
Summary
Macrohistorian Elise Bohan shares her thoughts on the importance of adopting a transhumanist worldview, why we live in a make-or-break century, and what is worth preserving about humanity.
Guest Bio
Elise Bohan is a Senior Research Scholar at the University of Oxford’s Future of Humanity Institute (FHI). She holds a PhD in evolutionary macrohistory, wrote the world’s first book-length history of transhumanism as a doctoral student, and recently launched her debut book Future Superhuman: Our transhuman lives in a make-or-break century (NewSouth, 2022).
Transcript
Luke Robert Mason: You're listening to the FUTURES Podcast with me, Luke Robert Mason.
On this episode, I speak to macrohistorian, Elise Bohan.
"I'm going to keep - as we're doing now - entertaining multiple possible future trajectories. There have to be multiple maps of reality in play for the future so that we're not taken aback when something novel emerges."
- Elise Bohan, excerpt from the interview.
Elise shared her thoughts on the importance of adopting a transhumanist worldview, why we live in a make-or-break century, and what is worth preserving about humanity.
Luke Robert Mason: My thoughts on transhumanism have changed a lot over the last decade and a half. My first encounter with the movement was when I was 19 years old. If I'm completely honest, I'd been reading a lot about the ancient Mayan prophecy that the world would end in 2012. I was an impressionable undergraduate student with an interest in ideas related to the future. Naturally, I gravitated towards a doomsday vision that looked set to have a massive impact on my life in the next few years.
This investigation took me through the work of Daniel Pinchbeck, Terence McKenna, Peter Russell and James Lovelock. I don't exactly know how I arrived at transhumanism from this motley crew of new-age original thinkers, but I suspect it may have something to do with the fact they were all loosely interconnected with the LSD advocate, Timothy Leary. During his time in prison, Leary had developed a futurist philosophy, 'SMI2LE', which stood for Space Migration, Increased Intelligence and Life Extension. What's more, he was quite optimistic that many of these things would occur within his lifetime. They didn't.
Leary wrote his futurist philosophy in 1976, but I found out that it would go on to inspire many of the core tenets of the 90s transhumanist movement, whose adherents believed that a combination of radical technologies - including cryonics, biotechnology, nanotechnology and artificial intelligence - would be the key to humanity's future.
With this new interest, I abandoned my interrogation of the 2012 phenomenon and started to explore how technology might actually help us delay the end of the world. I got to meet my first real transhumanists in 2010, after I discovered that 'The World Transhumanist Association' - recently renamed 'Humanity+' - was still in existence.
David Wood - a self-styled futurist famous for having developed the Symbian operating system that ran on the majority of Nokia phones at the time - had organised the Humanity+ UK Conference at The Humanist Centre, Conway Hall - an event I eagerly attended. I'd been given a small grant from the University of Warwick, where I was an undergraduate, to shoot a documentary about some of these folk. In a weird sort of way, that documentary was the precursor of the FUTURES Podcast. Thankfully it never saw the light of day. One day I may consider releasing some of the audio from those interviews as a bonus episode, but we'll see.
The great thing about the project is that it gave me the excuse to interview some of the leading figures of the transhumanist movement, including Natasha Vita-More, and Max More. Max was kind enough to give me an hour of his time, to share his thoughts on cryonics. It was the first time I'd ever heard someone speak passionately and seriously about the concept of human popsicles. I was hooked.
I think that's what I've always loved and moreover respected about transhumanists - their steadfast commitment to their personal visions of the future. It doesn't matter if they might ultimately be proved wrong. There is also a small possibility that they might actually be right.
The older I've gotten, the more critical I've become of the more outlandish ideas that are associated with transhumanism such as mind uploading. My podcast episode with Edward A. Lee goes into some depth about why I believe this might not be possible. But it's still always a joy to be in the presence of folk who truly believe that science and technology will guide our next stage of evolution. Transhumanists truly believe.
One believer is Elise Bohan. She represents a next generation of the transhumanist movement, one that isn't afraid to confront the trickier aspects of its past. As you'll hear in this episode, we don't always agree on the direction of travel a transhumanist future will take us in. But the value is in interrogating these possibilities as if there is a real chance they may come to pass. Doing so will help us to develop the foresight we will need to avoid the unintended consequences of pursuing such a superhuman future.
So, spoiler alert, the world didn't end on December 21st 2012, but, as Elise points out in this episode, it's more important than ever to explore all the options we have available to us in this make-or-break century.
Doomsday is still constantly around the corner, but that doesn't mean it needs to be inevitable. So it's with great pleasure that I get to introduce you to my friend, Elise Bohan.
Your new book, Future Superhuman, really tackles this concept of transhumanism. I feel like I have to ask the question: what is transhumanism?
Elise Bohan: Yeah, sure. So transhumanism is basically a philosophy and a social movement that goes back formally to roughly 1990. It's all about extending those core principles of enlightenment humanism, where we want to use science, reason, education and cultural tools to improve the human condition and to bring about the better angels of our nature.
Transhumanists have a bolder vision. They want to take it one step further and use the best of modern science and technology to radically extend the parameters of what it means to be human. To push up against all of those biological boundaries, push back against things like ageing, disease, and mortality, extend human healthspans and enhance human intelligence. It's a really, really big facet of those aspirations.
Luke Robert Mason: Would you happily describe yourself as a transhumanist?
Elise Bohan: Yeah. I think that basically the core ideals of trying to bring about a good life for as many people as possible - and particularly when that centres around enhancing intelligence and enhancing our ability to understand and solve problems in the world - that's a core philosophy that I'm really on board with.
In terms of the diversity of the transhumanist movement, there are so many people with so many different goals, aspirations and projects. It's not that I would necessarily subscribe to every project, whether that might be cryonic suspension - freezing your head in a vat of liquid nitrogen, for example - but yeah, basically the idea of being the best that we can be and ameliorating as much suffering in the world as possible.
Luke Robert Mason: Does it always necessarily have to be about being better and enhancing yourself? Or could a transhumanist project be about living in a differentiated way in this world?
Elise Bohan: Yeah, that's a question that comes up a lot. I've thought about that really deeply. It doesn't necessarily have to be about being better. Of course, we have the question of: by what definition of 'better' and who gets to decide? But there are some objective things that we can pinpoint. Basically, if we strip away the idea of radical enhancement and radical transcendence, almost everybody sort of agrees that it would be good not to fall apart and die prematurely. It would be good not to get cancer or heart disease, for example. It'd be really great not to watch your parents or your grandparents succumb to things like Alzheimer's and dementia, spending a lot of time in life not in a state of good health - suffering more from sick care and heavily medicalised aged care. I think most of us think there probably is a better - when we're thinking about that condition of existence - and less suffering and less disease would be good.
I think there are a bunch of aspects of life where you could kind of go, well would more abundance be good in terms of economic growth? Would it be good for people to have more resources to be able to work less? Again, I think that that's something that a lot of people basically nod along with and think, yeah, it would be terrific. The idea of extending very humanistic projects like disease amelioration and solving for poverty, and really being open to how far you can push that in the modern world.
Luke Robert Mason: I mean, no one would argue against the sorts of technologies that would allow us to live healthier lives. In 'Future Superhuman', you flirt with some of the more radical visions for what a potential human being could be. If we were to look at some of the wilder fringes of transhumanism and argue for some of those, what are some of the visions that really excite you?
Elise Bohan: Anything that involves radical augmentation of intelligence. The title of my book is 'Future Superhuman'. Any sort of posthuman, superhuman visions. AI is really at the crux of a lot of those visions. The idea that we could potentially design machines that match and then exceed human capabilities and then put that intelligence to work solving complex problems in the modern world - that excites me beyond belief, in no small part.
The modern world is so replete with an unprecedented number of accidental risks and challenges. We've never had more human-generated risks and dangers on our plate to juggle in this complex global civilisation. I'm talking about things like nuclear weapons, the risk of bioengineered pathogens, AI itself, obviously being one of the emerging risk categories, and of course climate change. That's a big one where we can see this iceberg on the horizon for humanity. We've had decades of information kind of flooding in, nudging us to kind of go, right, we have a finite time here. We need to take action. We need to be pre-emptive on this issue. We've seen humanity fail time and time again to pick up that ball - which doesn't bode well, I think, for our long-term thinking and our long-term problem-solving skills.
In a world where we had not only more intelligence - because I think that can be a really crude vision where we just imagine these very utilitarian, soulless bots making spreadsheet-type decisions that lack all compassion and nuance - the idea that you've got greater than human levels of intelligence but that it can be put to use solving the deepest challenges of our age, is, I think, something not only to be looked forward to but something that's going to be necessary for the human future.
Luke Robert Mason: I do have to ask what the obsession with intelligence is. It's always when I come to somewhere like the University of Oxford, where we're recording this now, when you have these future-looking conversations, intelligence is always the one that comes up. Famously, Anders Sandberg who is sitting in the office next door said, "Most of the folks who talk about the enhancement of intelligence see their bodies purely as the transportation systems for their brains. All they really care about is the thing that is contained within their skull and enhancing that because they've spent a life inside of academia that values intelligence." Surely the question we need to ask is: what sort of values do we need in the 21st century? Then based on that values set, decide what we should enhance or amplify.
Elise Bohan: Yeah. The problem is, you still need the capacity to make sound decisions. Not with an emphasis on heads in jars here, but with an emphasis on real-world outcomes. Again, I think we can all concretely agree that challenges like climate change and challenges like the risk of nuclear proliferation and nuclear war do require intelligence with which to juggle those very complex, very thorny issues.
In terms of values being relative, yes, to some extent they are. As for whitewashing intelligence away as this thing that's only prized by ivory tower academics because that's where they get their kicks or ego boost - I do think there's some truth to that narrative. I do think there's some danger of over-associating your identity with intelligence to the extent that you become siloed and unaware of broader facets of life, and broader aspects of the richness of human experience.
I think in particular, sensory embodied forms of experience have deep beauty. They profoundly enrich human experience and are deeply enmeshed in our capacity to bond, communicate and love in ways that cannot be whitewashed by someone saying, "I'm a [inaudible: 15:59], only my brain is divorced from my body." I think it is all part of this very complex system that drives the better and worse angels of our nature. The two sides of humanity.
We have seen in recent history that humans struggle to be on the same page. We're probably struggling more than ever to have a common sense of a core reality, an origin story, and a common set of beliefs and cultural values. I think it is - I hesitate to say, "a bridge too far" - but the idea that eight billion people are going to cohere on a core vision for the future, I don't think is realistic. That said, I do think that there need to be democratic elements to the process.
It's one of the reasons why I do think we need better stories about some of the more challenging ideas of our age. To even talk cogently about things like AI reaching and exceeding human intelligence, and maybe disrupting lots and lots of facets of our lives - whether that be our employment prospects, our dating and mating lives, the global demographic situation, or global fertility rates. There are so many forms of disruption that we could see AI impacting our world with. We're not going to be able to make any decision about any facet of that disruption if we're still at the stage, as a collective, where we sort of seize up at the mention of it. We either laugh it off, throw it in the sci-fi bin and have that relieved laughter of, "That's really silly. That's really far out. That's never going to happen." - or we have the more defensive reaction of, "Oh, I don't like this. If you're talking about this, you must be one of those baddies. Uh-oh, I'm very confronted. I'd like you to please stop talking now."
I think it's really important that we keep talking but are able to acknowledge that this is one of the most confronting stories and potential realities that humanity has ever faced. The idea is that within our lifetimes - within the 21st century - humanity will reach a level of technological maturity where they are able to potentially seed their own successors, or what the roboticists tend to call 'mind-children'. That is not what any flesh and blood human being wants to hear, because we've evolved on the African savannah to live lives that were the same as our parents' and grandparents' lives. Where technological change happened really, really slowly. Where your expectations were kind of baked in from day one and you didn't have the rug pulled from under your feet. You didn't face this sudden cascade of threats to your sense of identity, your sense of purpose, or your livelihood - everything that you've kind of invested in, in life.
It behoves us, really, to acknowledge that that rug-pull, psychologically, is one of the toughest things that all of us stand to face in our lifetimes. I'm not saying here that it's guaranteed that AI is going to do this or that, or become super-intelligent by this date or in this century. I'm saying it's a strong enough possibility that we need to be able to narrow divides. We need to be able to talk about it. We need to be able to entertain policies that allow for some of this disruption. Particularly in the short term, things like universal basic income. Things that speak to what people are going to do if they are displaced en masse from the workforce. Not only how they'll be remunerated, but how they'll find a sense of meaning or purpose.
Luke Robert Mason: Speaking with you, Elise, it always feels like it's transhumanism or bust. Either we pick transhumanism, or we're all fundamentally doomed. I think that's encapsulated by a phrase that you mention in your book which is the idea that the 21st century is "a make-or-break century." What do you mean by that, and what implications does that have for the sorts of decisions that we make as we travel through the 21st century, hopefully into the 22nd?
Elise Bohan: I do think give or take a century or two - we're talking orders of magnitude here - within the realm of a few hundred years but very plausibly this century, we are juggling enough of those big existential risks that we talked about that we are, in all likelihood, not going to be able to see our way through, just sort of maintaining a 2022 level of status-quo reality.
The fact that we've sat on nuclear weapons for roughly 70 years sort of seems to reassure some people. It's like, "Yeah, we can put that in the back of our mind. That's all fine now." The reality is that we've had so many near-misses. We're in a geo-political situation right now of resurgent cold-war tensions. The idea that humanity can sit on nukes alone for hundreds or thousands of years without anything going particularly wrong is, I think, optimistic.
What makes this century unique is that we're adding new and more powerful technologies at a faster rate. They're the kind of technologies that are different to nuclear weapons. Nuclear weapons are really expensive, and hard to make, and only state actors have ever produced them. The same is not true of bioengineered pathogens and bioweapons. Particularly with the aid of more computing power and AI, we're on the threshold of an age where someone with PhD-level training and a basic lab kit can bring back smallpox or tweak the bubonic plague. To bring us pathogens that spread as readily as something like measles - which infects maybe 16 people for every person who is infectious - that are airborne, but maybe have the lethality of something like Ebola or HIV. This is a new dawn in biotechnology.
Of course, we're adding more and more into the mix. We're adding artificial intelligence which is a whole other can of worms in terms of incredible promise. To do all of those things like cure diseases, solve problems, and allow humanity to create more abundance by producing more goods through efficient manufacturing and farming. It could definitely stave off issues around famine and resource scarcity. That would be a wonderful boon for the species. But of course, keeping it under control is really, really difficult.
Luke Robert Mason: Well it does sound like we need technological progress to mitigate against the negative effects of technological progress. It's kind of the devil's bargain that we're dealing with. I guess it's best encapsulated by Elon Musk, who is so scared of AI and yet at the same time has his own humanoid AI Tesla project. Whether we know that that's vapourware or not, he's still creating the future that he fears, which is incredible marketing in many ways. That self-fulfilling prophecy is a bitch. How do we deal with the fact that we have individuals like Elon Musk who are able to operate at scale over the sorts of futures that might be available to us?
Elise Bohan: I think the phrase I'd love to signal-boost here is 'perverse incentives'. Not targeted specifically at Elon here, but targeted at...the big thing I foreground in the book is the idea of the ape brain. Our environment has almost outgrown us. We're not really evolved for this situation of eight billion humans with these complex global supply chains, with these incredibly complex technologies, with these thinking machines that we're building. What's happening with AI - Elon's one example but there are many others - what's happening is basically that the incentives to double down on rapid R&D that can generate billions if not trillions of dollars in global wealth - that incentive is so strong.
It is such a bankable technology and it's so bankable because it's a general-purpose technology, very much like electricity. Electricity is something that almost became banal because we've put it everywhere. We've infused it in every gadget, and every technology. It's in our built environments. That's the way that AI is going. You can imagine the huge economic boons that could be unleashed if we apply it to healthcare, education, computer science, supply chains, and logistics. You name it, we'll pack AI in there.
Luke Robert Mason: It's ironic with many of the conversations we have on the FUTURES Podcast, specifically around AI, that it's seen as this massive opportunity and yet also at the same time, this massive threat. The solution for it being a massive threat to - the famous one is our jobs - is that we'll just integrate AI into the human. We'll make a techno-human hybrid - the cyborg with the Neuralink attached to their brain - so that they can download information directly from the internet.
That presupposes that the values we have are purely tied to the job market. The only way we would want an AI directly drilled into us is because it gives us some sort of advantage. Not in the world, but in the market, which is an artificial construction in its own right. It goes back to that question of, what is it we truly are amplifying? Are we amplifying the sorts of things we think will be valuable in the future, based on an environment and a value set that we've created which we've based on market economics? Or is it truly about amplifying these things that make human beings so special? It feels like the more we technologically enable the human being, the less we see the human as this special entity.
Elise Bohan: The problem with AI is that we know so little about how it reaches the conclusions it does when we're using deep-learning neural nets. How it autonomously operates in the world, and how it may or may not misinterpret human commands, particularly as it gets more intelligent and has more autonomy.
This is exactly the double-edged sword that we face. We have enough of it in our world to have created these big problems that we can't seem to solve with our ape brains, so what do we need? We need more minds. We need cleverer, more objective, more rational minds than our own. We've built them, but of course, they're an outgrowth of our own minds. We may very much encode some profound errors into those mind children.
Luke Robert Mason: That's the great irony in what you said there. We have these ape-brained humans creating the sorts of technologies that are going to bring us into the future, and yet the fundamental flaw is that it's ape-brained humans that are creating these technologies. What makes you believe that we're on the correct trajectory? Surely we need to enhance our intelligence first before we find the technologies that are going to create future intelligence. We're basing this on a human that has so many flaws.
Elise Bohan: Yeah, to be clear I don't necessarily think we're on the right trajectory. We're in a make-or-break century. We're in a danger zone here. If we dwell too much on the danger zone, we're at risk of paralysing everybody around us. We're at risk of the same problem we've had with climate change: if we go too hard down the doom and gloom story, then people feel hopeless and inept. We shut down. I don't think that's where we're at. I do think there are things we can do, but the landscape is changing so fast. The technology is so complex. Keeping on top of it as it evolves and knowing what to do next is a really daunting challenge.
I don't think we should lie about that. I don't think we should lie about the fact that some of the smartest minds of the world are working on this issue. They are kind of, at times, beating their heads against a brick wall, going, "Oh shit. This is terrifying. I don't know what's coming next." But the next step isn't to shut down and have a meltdown. The next step is to go, okay, I'm going to keep studying this phenomenon. I'm going to keep thinking about where it might go. I'm going to, as we're doing now, entertain multiple possible future trajectories. There have to be multiple maps of reality in play for the future so that we're not taken aback when something novel emerges.
Luke Robert Mason: My challenges with...and look, I'm pro-progression in many ways. I'm very much pro-transhumanism. My challenge very much becomes, it all feels like it's a byproduct of sitting within the Anthropocene. It used to be that environment and biology would define humanity. We're in this unique moment in the 21st century - this moment that you describe as this potential tipping point in the 21st century - where for the first time ever, humanity has the ability to define environment and biology.
To turn a phrase, 'with great power comes great responsibility'. But it feels like we don't have - again, to your point - the intelligence to make the right sorts of decisions over what kinds of environments and what sorts of biology we want to create. If everything is available, if there are a multitude of possibilities, our struggle is about which road to go down. Often, what ends up happening is that someone has the financial means to create their own individualist version of what they believe should be the trajectory we go down, and it's not something that we agree with collectively. For example, we wouldn't need to enhance our intelligence through AI if we didn't believe that the human workforce of the future is going to become obsolete because of AI. The only reason we have that bedtime story about being able to enhance our intelligence with AI and becoming these techno-human hybrids is that we believe that's the only way to stay competitive.
Elise Bohan: I don't think that's true at all.
Luke Robert Mason: No?
Elise Bohan: No. No, particularly at the vanguard of the folks who are really at the helm of developing this technology and garnering funding for this research, I don't think they think...I mean they are often champions of universal basic income and say the last thing humanity needs is to cleave its identity to jobs.
Luke Robert Mason: Yeah.
Elise Bohan: The idea that we need to plug in so that we can compete in an economic landscape where AI stands to generate trillions of dollars of GDP and render it more possible to live in a world of abundance that is decoupled, where our standards of living are decoupled from the amount of labour that we bring to the global economy - I don't think that's it at all. I think the idea, certainly my reason for thinking that AI is something that humanity is going to need in spite of the risks - and I think that's still uncomfortable to me but that's where I'm sitting at the moment - is those existential risks that we talked about before. Without something smarter, without something helping us to do geo-engineering or carbon capture, or whatever it happens to be, and developing more robust global systems of diplomacy and democracy, then I don't think we're going to sit on the technologies of today and certainly not tomorrow for hundreds or thousands of years into the future. It's just not realistic.
But the other reason that many people are excited about...I suppose taking the Neuralink example of plugging our brains into AI - there are many ways that we could have an AI-saturated world and that's one of the most popular signal-boosted ones in the media. The idea that - again, it's a sort of classic transhumanist idea of solving diseases, allowing people to have more eudaimonia, live longer in a state of good health, and use those years to do all of the things we love most about being human. To connect with our loved ones, not in a state of infirmity. To have our faculties about us, to be present, to be able to enjoy the rich sensory experiences of the world.
I think a lot of people who are bullish on AI are excited about it for its potential to allow us to live longer, to get more out of human existence, in a very age-old human way.
Luke Robert Mason: So in your mind, the only way is through.
Elise Bohan: I definitely say that in the postscript of the book. The only way out is through. But again, I'm very uncomfortable with that declaration. That is not a zealous, "Yeah, throw humanity in the bin! We're going to upgrade it, it's all going to be terrific. Don't even worry." My message is: we should definitely worry. Keep our wits about us. Think very carefully about the problems. The only way we're going to solve that is with the collective intelligence of a bunch of humans really thinking carefully about the confronting risks and challenges, and also about the opportunities on our plate with these advanced technologies. Utilising them to try to solve complex problems.
I think one of the reasons we hate that story and we hate that prospect is because it sounds like really hard work. We've got to think really hard, we might fail, and the challenges are really complex. Can't I just go back to watching reality TV, kicking back and just doing a bit of slacktivism on Twitter? Not really, no. The only way out is through.
Luke Robert Mason: Listening to you speak there, it sounds like you're not just a transhumanist, but you're an accelerationist. In your mind, this stuff has to happen now and it has to happen as soon as possible. Would you agree with that statement?
Elise Bohan: No. I mean, no. I'm definitely not an accelerationist, but I am torn, I am very much in two minds. There is that question in a lot of the AI safety communities mulling over the same question: do we slow it down or do we speed it up? I think it's dubious that we're really in control of either. Again, when we signal boost the idea of who's deciding what the future is, the billionaires will just go on and invent the AI according to their standards and their desires. I think that gives humanity way too much credit and humans way too much agency in the story of history and evolution. We know that it's microbes. It's plate tectonics. It's the availability of domesticable crops that has determined so much of the course of history over time. Things that humans haven't really been in control of.
If we think about other big revolutions in human history - particularly the transition from Paleolithic lifeways to Agrarian lifeways - it wasn't like all of our hunter-gatherer ancestors had a tribal council meeting, sat down and said, "Right, see those crops over there? What if we collect the seeds, plant them and build huts here? Instead of being nomadic, we'll be sedentary. We'll do this thing called farming. It'll be terrific." It wasn't like some guy was really trying to sell it to them and get them on board. They did it because their environment changed. Their niche changed. It became the more adaptive way of life for people in lots of regions of the world.
Luke Robert Mason: It was an adaptive way of life and then, you know, a couple of thousand years later, enter Monsanto. They genetically modified crops and you can only use their forms of crops. It makes it so that the soil gets all its nutrients sucked from it and there's no such thing as circular farming anymore. We deal with these climate crises. It's all good intentions until suddenly it's not. To your point, the perverse incentives suddenly come into play. What's stopping us as we continue down that path of progress, let's call it, from polluting these ambitions with certain forms of perverse incentive?
Elise Bohan: Well I have to push back first of all on pesticides. Modern agriculture has enabled the eight billion humans on the planet to be here today. Yes, there are perverse incentives in corporate realities. There has been a lot of pollution. There have been negative consequences of agriculture and industrialisation, very famously. We do know this. As I say in the book, in many ways we are all the children of the Industrial Revolution. The air was shrouded in smog, people were working in the mines. Lives were upended in ways that were a rug-pull for the people of the 19th century, as much as our rug-pulls are felt that way for us today, in different ways. All these trade-offs.
I think as you're rightly highlighting there, there are always unforeseen consequences of these big paradigm shifts. Again, I don't think our hunter-gatherer ancestors had a clear concept of 'we're transitioning to agriculture now’, the way we have a clear concept of the information age or a transhuman era.
My message is a hard one to swallow. We're not going to decide in advance how this plays out. We don't know enough, and it's happening too fast. This is one of the things that makes this transition even scarier and even more make-or-break than any of the other transitions in human history.
Luke Robert Mason: This is the weird inconsistency I find in the transhumanist movement when they describe transhumanism. For some, it's about fixing things. Fixing, repairing, and ensuring humans can live longer or fixing the limitations of our intelligence.
For others, it's about fixing things. Holding what the human is in this special moment in time and space. Looking at the 21st-century human being and going, "You know what? More of that, please. I'm going to cryogenically freeze the human as it exists now in the assumption that it'll continue ad infinitum. Or I'm going to continuously replace this human being with different biological inputs or prosthetics and body parts so that it maintains some form of consistency. Or I'm going to take that human brain as it exists now and upload it into a computer so that this form of human being can live on ad infinitum."
It's less about being open to the multitude of possibilities for what the human could become because that's when posthumanism comes into the frame. It's like, well, this could be radically different from the human we have now.
Of those two different visions of transhumanism, which one do you feel most at home with? The idea that the 21st-century human is a thing that's worth preserving, but preserving indefinitely into the future, or that the 21st-century human is something that we should kind of use as the prototype for whatever we may become in the near future?
Elise Bohan: Definitely the latter. The prototype for what we become. I don't think fixing in rigid terms is realistic. I don't think many transhumanists subscribe to that, but I do know the flavour of transhumanism that you're talking about there. I think you put it really beautifully. I love the idea of fixing and fixing. It's a terrific way of putting it.
I think when I was younger, when I first encountered transhumanist ideas, you find more of this in media clickbait and things like that. There is more of a signal boosting of the razzle-dazzle stories that are like, "Upload your brain! Live forever! Transcend!" But you know, the idea that you're still you but you get to bootstrap and come along for the ride.
I don't think it's even remotely plausible that a post-human being - whether maybe we have integrated in some sense with artificial super-intelligence - that thing is so far beyond us, it would have altered all aspects of our cognitive landscape to a point where it's utterly unrecognisable. You can't add orders of magnitude more intelligence into the system and interface with silicon intelligence and still be you.
In the same way - not quite in the same way - but in a cruder way, I don't think five-year-old Elise is still with us. I think we die many times over. There is continuity. We know philosophers talk about patternism and the fact that memories and patterns of experience survive within the brain over time. I think that's a really interesting idea because I still have memories - whether they're accurate or not - from five-year-old Elise. There's still a sense that I was once her, but I'm very clearly also not her. She's not here in this room today, in every sense. In a literal sense. The atoms in my body are different.
Transhumanism is often accused - is it a quasi-religion? Is it for tech geeks? Is it really all just about bringing back this vision of immortality? But not a more poetic immortality where some nebulous aspects survive, but the idea that 'I', the ego, survives. I find that idea less interesting than ever, and kind of distracting, because I just don't think that that can happen or will happen. The idea of whole brain emulation - for anyone who's familiar with that - the idea of using an electron microscope to precisely scan every aspect of a human brain and replicate it...the minute that that consciousness wakes up and starts engaging in thought and with the world, its experience has diverged from the original.
You can talk about destructive uploading where the original doesn't survive, and I fear that's a bit of a minefield, so I won't deep-dive too hard into that. The future that I think...it's not necessarily what I'm the most interested in, but I think we should be talking about the things that are most probable and most practical. With that mindset of, as you say, fixing things and solving problems in the world, the values that we have today - we've talked about values a bunch - I think again, there are some really common values that basically all of humanity shares. The idea is that we want some sense of sustainability, we don't want extinction, and we don't want the things we know and love about humanity to be wiped out. I don't think any of us are particularly keen on the idea of there not being intelligent life on this planet anymore, or in the universe anymore. The idea that we fix the very real, very immediate challenges that we have - I'm not saying that'll be done and dusted easily, but we turn our minds to it and we make that the priority of our age - that's the transhumanism I'm interested in.
I don't think the label is necessarily helpful there. It's really about using the best modern tools, whatever they happen to be, to deal with the challenges that are on your plate. I think what's useful as a mindset is to be the kind of thinker that can look at the short, medium and long-range horizons in concert. Humans are really apt to pick something they like - usually it's a short-term horizon, roughly the next ten years but often the next few hours or the next few months: am I getting promoted this year? I think we need to be able to balance our needs in the immediate present with the needs of the next few decades, the next few centuries, and then ideally the next few millennia. The amount of intelligent life that could exist if we get this right, if we make it through this century, we're talking trillions of possible people in the future.
Luke Robert Mason: Is that even possible, to live in those three different time horizons simultaneously?
Elise Bohan: It's hard.
Luke Robert Mason: Well it's hard, but we've already proven the fact that we clearly as a society care more about what's going to happen tomorrow and today than we do about what's going to happen in a millennium. That's just evidenced by our consumption habits. We're sitting here with plastic water bottles in front of us. We made those decisions today. Hopefully there will be some technology in the future that will deal with that plastic problem that we have. This idea that we can discount our decision making today in the hope that some form of future technology will save us from the stupid decisions we've made in our present seems both empowering and deeply disempowering. What if there isn't something that's going to help us in the future and provide us with the safety net that we so desperately think we're going to be able to build once we just get slightly more intelligent?
Elise Bohan: Yeah, there may not be. Let's be real. I think the belief that we are going to be able to build forms of intelligence that will help us solve the problem, as we've been talking about, that is a really open question. There is lots that could go wrong. Again, I'm more interested in foregrounding what is likely to be true and what challenges we are going to have to face, realistically.
You're dead right - and I argue this quite strongly in the book - that the biggest impediment to our solving global sustainability challenges in the 21st century is our ape brains. It's humanity itself, which again is not to say we loathe humanity and want to put it in the bin. It's that we love the best of humanity so much, but look at us struggling in this complex system that we've built, that's kind of run ahead of our ability to map it and understand it, and rein it in.
Realistically, we have to talk about the fact that the ape brain is almost the next risk category in itself. It doesn't follow from there, right? The ape brain is struggling. We've got to build new brains. We've got technology so we'll definitely use that to cobble something together and it'll be fine. It doesn't follow at all. It may very well play out - again, make-or-break century - that humanity faffs about, we get mired in our short-term thinking and our perverse incentives, and we fail to build safe and robust AI whether that's because we've failed to get to strong AI at all or whether that's because we succeed and we do it in a really dangerous fashion. This is a high-stakes juggling act, but again, I just don't think we can afford at this late hour to be lying to ourselves about the gravity of the challenge that we face.
Luke Robert Mason: Two things make me slightly sad. Firstly, your inner child is dead. That's the first concern that I have for you. The second concern I have is when you talk about the human in that way - and this is where a lot of transhumanists start - they go, "We're weak, we've got these ape brains. We'd be so much better if we were enabled by technology." - we so quickly fall into this self-esteem crisis.
Human beings are absolutely awesome in so many ways. Yet to argue for a transhumanist future you have to begin with the premise that in actual fact, we're not that great. Something is wrong with us, whether it's the fact that we die or the fact that we're not strong enough, or the fact that we're not intelligent enough. We end up beating ourselves up and putting ourselves in this self-esteem crisis. Then we end up in the 21st century with a mindset of, "You know what, it would just be better off if we weren't here in the first place. It'd be a blessing if we just blitz off of planet Earth and that some other form of intelligence came and replaced us." You have to argue just a little bit about what's wonderful about humanity. I think the question I'm asking, Elise, is, “What is worth preserving?”
Elise Bohan: There's a risk of making humanity seem like a kind of emo teenager in the basement. Like we're thinking about 21st-century lifeways in this really ineffectual way and that we're just hiding under the blankets because it's all too hard.
Luke Robert Mason: Well we're told we cause the problem.
Elise Bohan: Yeah, but then think about what you...
Luke Robert Mason: We are the problem.
Elise Bohan: Right. But maybe we can be both, right? Maybe we can be part of the problem in a system that's really, really complex. We've got flaws that make up some aspects of our human nature and are a liability, but we've also got incredible attributes and assets that can help us be part of the solution, too. I think that both of those portraits of humanity can coexist very, very comfortably. Think about what you were saying earlier, about human short-termism. Look at all these perverse incentives. Look at this spiral into vapid forms of capitalism that suck people into less-than-ideal behaviours.
You've already signal boosted in this conversation that there are aspects of humanity that are kind of failing us. When we talk about the human ability to build these machines, you're sort of foregrounding, yeah, but look at us. What if we trip up? What if we fail? What if we're not even smart enough to build these machines we think are going to be so useful? You're absolutely right that our propensity for short-termism is dangerous. I don't think it's having low self-esteem to admit that. I think it's being honest. I think it's just having an even-handed evolutionary view of how humanity fits into the broader complex system that it's created. In terms of what we go on and do with that, I think there's incredible hope.
To bring it around to what you were really driving at: what's so great about humanity? What do we want to preserve? Some of the things are things that we share with other conscious creatures, starting with consciousness. Our ability to have a sense of our existence in the world; to be self-reflexive. That allows us to have experiences that I think at least within our anthropocentric perspective, we would consider richer, deeper and more beautiful than, say, what a cockroach or a bacterium experiences - at least what we expect they would experience.
It is a remarkable privilege to be a human being, dancing around on this planet in the sun for however long we get. To be able to unravel some of the workings of the natural world; to again signal boost intelligence here; to be given the journey of knowing myself and through knowing myself, knowing others; the exploration of psychology, of cognitive science; how we are able to commune with each other in ways that can be at the best transcendently beautiful. That can be instantiated in the physical world when we make love to a partner or when we have a beautiful conversation with our deepest friends and family members, or when we hold our children in our arms. It can also be instantiated through space and time because we are storytelling animals. We have this incredible ability to travel through time, to tell stories that connect us to minds long dead. We, at least, think that this is a really, really beautiful thing and that it could be the seed, I think, of even more beautiful forms of experience, conscious connection, and exploration in the future.
Luke Robert Mason: So if we're so great, why are you so obsessed with hoping that we find out we're just like machines? If all of that is true and we are these special creatures, distinct from anything else in nature or anything else that we can make, why is so much of the focus in some of the transhumanist or posthumanist technologies that you're mentioning on seeing the human as not special, as just some other container for intelligence? We can create this thing externally from the human being. Why are we so quick to discount all those things when it comes to talking about future technologies or techno-human hybrids? Why can't we just sit back and enjoy what it means to be human as opposed to being like, "Yeah, you know all of these wonderful things. This storytelling thing is great but it would be cooler if I could story-tell with a prosthetic limb."
Elise Bohan: Once again, two things can both be true.
Luke Robert Mason: Oh right.
Elise Bohan: So we can have incredible abilities that are really, really beautiful, that are enriching and that we revere, and we can say that this is unlike anything that's ever emerged on Earth, and that's remarkable. It doesn't follow - as any evolutionist will tell you - it doesn't follow that means we kick back and go, "Right, clearly we've reached the pinnacle of evolutionary potential. Job done. We really can't improve on this. This is where it's at, guys." Again, as I say in the book, I stand by all the beautiful things that I've highlighted about human nature. Yet we are the species that enslaves each other by the millions. As we speak, we've got seven billion chickens living in factory farms in conditions of abject torture, right now. We are the species that mutilates, by the hundreds of millions, the genitals of young girls. The idea that we've got it sorted and that we don't need to think about self-improvement is patently absurd to me.
Again, to the real question that brings it around to transhumanism: why can't we just kick back? Why can't we just enjoy what we've got? It's a nice idea. I think there's a romantic part of me and so many people in the world today as we face all of these complex global challenges, there is this resurgence of nostalgia for the past. This looking back to an age where we had things we feel like we're losing in an age of digitisation and atomisation. Community, family, connection. I think people are hungry for, at the very least, a romantic vision of what they think it was like in the past.
I think there still were some positive instantiations of that. Some things just came easier by virtue of the fact that you had larger families. There was less atomisation. There are always trade-offs that come with that; a lot of suffering is attached to some of those arrangements. But the reason that we can't just go, "Right, freeze it all here. Let's re-engage with the most beautiful aspects of ancient cultures and human traditions that really do help us feel fulfilled in our engrained biological impulses that are being stymied in various ways by our modern built environments." Again, the reason why even if we want to - in many ways it's sad - even if we want to, we can't freeze it here, is again that rising ante of risk.
Luke Robert Mason: We have put progress on pause previously. You just look at the proliferation of nuclear weapons. The fact we haven't used them means we've put that form of progress on pause. Partly because the stakes are so incredibly high. That's game over.
Elise Bohan: I mean the stockpile of nuclear weapons globally has declined, but we've gone from a handful of state actors possessing them to nine state actors possessing them, and counting. Again, I have to remind you that 70 years with multiple major near-misses and two detonations of nuclear weapons is not a great track record. It doesn't inspire a lot of confidence. As we've said, nukes are one risk category that we have to sit on. We're quickly racking up multiple other risk categories that we need to keep in check. Not just for another 70 years but in perpetuity. That is just no longer a realistic prospect.
Luke Robert Mason: It feels like as we're speaking, we're talking about two things here. One which is the human level, the human story; the story of us. The second thing is the big picture. Us in time and space, and the environment around us. When we're thinking about technological progress and transhumanism, how do we hold those two in balance? How can we think about both at the same time without going crazy or spending hours on a podcast?
Elise Bohan: It's really hard. I think that's why I wrote the second half of my book as the human story and really brought it back to Earth. Really brought it back to, okay, forget all the make-or-break stuff - it's there, but I know what you most want to hear about is...
Luke Robert Mason: Sex robots.
Elise Bohan: Well, often anything to do with sex. But how advanced technologies are directly standing to impact or are already impacting the stuff you care about in your everyday life. Work, healthcare, sex, dating, mating, and procreation. All the bread and butter human stuff. My hope is that by talking honestly, and trying to talk through some of the less explored future prospects there, we can start mapping out stories of possible futures that seem more tangible, close to home, and more realistic.
Maybe they're not always comfortable, but I think it's through the lens of those stories that people can start to make the links between the technologies that are maybe changing their job prospects or hacking their children's brains on Instagram or whatever it may be, to: okay, that AI thing is also doing this other stuff that you're talking about. I see how it is all part of this larger idea of this species in transition, these technologies that are simultaneously upending multiple different facets of our lives.
How to hold both of those stories? I don't think we can really hold them front of mind all the time. I think our default mode is short-term. But to try and prise open that time window just a little bit further and encourage people to look a little bit farther. I think also the idea of a make-or-break century helps. It's the idea that a lot of this stuff stands to happen in your lifetime. It isn't that your grandchildren have to worry about climate change or nuclear war. Actually no, you do. This could change your life, and your world, and everything you care about. Once you have a personal buy-in and skin in the game, I'd like to think - there is the risk of paralysis - but I do like to think that some people will at least mobilise in the face of that.
Luke Robert Mason: It does feel like, individually, we don't have a lot of agency over the sorts of futures we're going to get. The future is not a thing that we do. It's a thing that happens to us. Is there anything - unless we end up in an AI research institute or in some form of nuclear arms decommissioning company, or advocacy or something - is there anything we can do to generate or at least try to create some of the futures that we may want? Or should we just leave it up to the folks who are at the forefront of these industries, technologies or scientific innovations?
Elise Bohan: I think potentially one of the most important things is keeping some of those folks in check, some of the time. Some of them do wonderful work whilst some of them go rogue. It's kind of hard to know who's who and what's what sometimes. The idea of trying to be informed. We say this about a lot of issues - it can be a somewhat ineffectual nudge sometimes because we're suffering from a deluge of information and it's very hard to be informed about all of this stuff - but being informed enough to understand the correct place of existential risks in policy and in democratic systems is of real value. A more informed populace that values existential risk mitigation and that truly understands how vast the potential of human and post-human futures can be - and votes in that direction. I think that really can make a difference.
Luke Robert Mason: That's tricky about this idea of existential risk. It's both a noun and it's a verb. Existential risk as a noun is the list of things that could happen to us. Whether it's a nuclear war, AI, nanotech, or asteroid - those are the existential risks that threaten humanity. Also, there's the existential risks worth taking; existential risk as a verb. The things we should do as humanity to mitigate against negative consequences.
I worry that we don't get the chance to propagandise the sorts of things that we should do. We spend a lot of time worrying about the things that may happen to us. Again, we're victims of circumstance as opposed to allowing ourselves to feel like we have any stake in the game. This then leads to that feeling of hopelessness and a self-esteem crisis in humanity. But you said something quite revealing in your previous answer, which is this idea of a species in transition. In other words, we are already in a transhumanist era whether we know it or not. What does that look and feel like? How do we know that we're floating within this transhumanist era and what do we need to do to stay on that trajectory, as we're currently in this transhumanist era?
Elise Bohan: There are a lot of arguments that humanity has always been a transhuman species ever since we picked up the first stone tool and invented symbolic language, did cave painting and engaged in storytelling. In some sense, we've always been augmenting our natural abilities by using tools to expand our power and reach over the natural world. That's basically true, but there's a big difference between stone tools and the modern technologies we have today.
I think the thing which makes the 21st century a markedly transhuman era, a point where we're actually in transition - and I mean we're always in transition in some sense but I mean paradigmatic phase change on the horizon here - is the idea that for the first time, post-Industrial Revolution, we've invented the technologies of the Information Age. We have augmented humanity with what is now, effectively, a global digital brain. We have five billion humans and counting connected to this augmented intelligence, I suppose you could call it. It's getting more and more intelligent as time goes by. I think that Google is the biggest AI company in the world, has the most data, and is mining all of these insights into human psychology, human behaviour, and cognition. That is getting more and more powerful at a really astonishing clip.
We've already felt the effects of this in ways that - again, to bring it back to that electricity comparison - we kind of eye-roll at as being really banal, now. If I hear another futurist get up on stage and say, "Do you know the smartphone in your pocket has more computing power than something that sent astronauts to the moon?"...it's become such a dull factoid, and yet the true implications of it are mindblowing. The fact that if 10 or 15 years ago you said to pretty much anyone alive today, "In 10 years you'll be walking around with a super-computer in your pocket and it'll be glued to your palm. You'll be crossing the road looking at this thing, scrolling through these digital, virtual, social landscapes where you're curating your identity and beaming it into cyberspace into these Uncanny Valley visions of your ideal personhood." Not only would we not have believed it, but so many people would have said, "Some weirdos might be into that but not me. Never me. I'll resist. I won't be interested."
I think we can see abundant evidence - the same with the internet - that once we had high bandwidth internet and it was democratised and became incredibly cheap, the lure of these technologies and the speed at which they become integrated into every part of the human global system - including now interfacing in much deeper ways with our cognition, we have AI that's built into our search engines and our emails and all the rest of it - that is now pre-empting and predicting our thoughts. It's writing them before we type them ourselves. The idea that this is not going to continue to accelerate over the coming years and decades is frankly implausible to me.
Luke Robert Mason: Now, that can sound extremely exciting with the right cadence, and it can also sound extremely terrifying with the wrong cadence. It's always fascinated me that individuals like Steve Bannon with his online platform, The War Room - war room dot org - they have a tab on his website dedicated to transhumanism. It's all the sorts of things you've been saying, but emphasising the negative consequences of those.
There's a large percentage of the US and the Western world who see transhumanism not as this wonderful, egalitarian thing, but as a massive threat to our freedoms. The idea of having something tracking you as you're walking down the street as you interconnect and go anywhere with this device is amazing, but also it can be used as a system of control.
Again, it feels like the thing we keep coming back to in this episode which is, there's the good and there's the bad. Ultimately, transhumanism feels like it's at the cusp - and it's constantly been at this cusp - of this PR crisis. Is there a way we can save transhumanism from co-option by individuals like Steve Bannon, who want to communicate that it's the most evil, great reset, control mechanism over humanity that we are ever going to see? Is there a way we can reclaim it for collective good?
Elise Bohan: I don't think so. But I don't think it matters. I'm not really interested in the meme wars. Steve Bannon can publish whatever he likes. I think it was you, Luke...
Luke Robert Mason: That makes you an evil, globalist transhumanist. That makes you one of them, Elise.
Elise Bohan: Come at me, sure. Sure. I think Steve Bannon knows very well what he's doing. I don't think he believes a word of what he's saying. He's very much tapping into a zeitgeist in America at this time of incredible middle and working-class status anxiety. Anxiety about their future livelihoods and the cost of living in a more and more fractured society. What he's speaking to are incredibly legitimate grievances and concerns. Concerns that we should all be taking extremely seriously.
The idea that we need to go in defending this meme, this word, this thing called transhumanism - or we should be afraid of even saying the word because there are a bunch of people that associate it with a kind of globalist, New World Order mindset - I just think is a waste of headspace. There is such a long, rich, intellectual tradition and history attached to the word. When it comes to academic scholarship and delving into those scholarly ideas, I am going to keep using the word. If people want to brand me with pejoratives, they may. That's fine.
The grievances are hugely important. I do think this is something that a lot of people who are in the big tech world, for example, often don't do very well. They have these larger visions for the future and they're maybe in some sense comfortable with a degree of collateral damage there.
To be clear, I am a macro-historian. If you look back on the arc of human history, there is no point in human history at which everyone was doing fine. In fact, for most of human history, no one was doing particularly well. There is always, in a sense, 'collateral damage'. There are always people who suffer, often in particularly acute ways, at times of revolution or transition. This should motivate any and all of us who care about building a bright future for humanity or its successors.
The idea is that we want to make sure that these transition times, however they play out - and they may play out...I might be wrong about a posthuman future. They might play out with humans surviving hundreds of years into the future or more, sort of roughly as we are today, with a few setbacks. Then we claw our way back to where we are now and it's kind of this plateau phase. Maybe that's how it goes. But whatever the arc of the story, we should always be trying to mitigate as much of the damage and suffering of conscious human beings as we can, anywhere in the world.
The fact that there are swathes of people - and it's obviously the people in Western developed nations who are not the people who are suffering most in the world, by and large - yet the fact they're suffering in such profound ways, with suicide rates increasing, with a mental health crisis emerging across developed nations, is a very serious thing. I think to speak brazenly in a way that makes people feel utterly steamrolled by the future, like they don't matter, and like the loss of their jobs and livelihoods - to which they may have attached identity and a sense of purpose, and also self-esteem and the ability to support their families, for example - doesn't matter either...these are losses that we need to strive to mitigate.
I think one of the ways that we are dropping the ball like crazy, particularly politicians and policymakers, is in the absolute imperative to overhaul education systems. The idea that we are still pumping children through a factory farm model and engaging in this now quite ludicrous arms race of higher education credentialism, of over-credentialed people who are spending 25 or 30 years before they enter the workforce at all. This is then pushing out delayed ages of first marriage, childbearing, and lots of other things that are interfering in a lot of the life scripts that people find fulfilment and meaning in. This is not good. The prominence that exams take in the lives of young children today, the stress and pressure that is attached to that, and the sense that meaning, identity and purpose is cleaved to getting good grades, getting into one of the top colleges...I'm not saying there's no value to doing that, but the message that it's a one-size-fits-all model and that that is the only way to live and matter or be someone, or live with dignity - that model has to go. That's where our politicians and policymakers are letting us down.
Luke Robert Mason: Well look, it feels like if we're going to pursue a transhumanist future, then transhumanists need to learn to speak to the disenfranchised communities that feel like certain transhumanist technologies might leave them by the wayside. AI is the prime example. "It's going to steal your jobs." "Sorry, we don't have a solution for you." Apart from UBI. "Have you heard of UBI?"
Elise Bohan: Not just feel like - they're right.
Luke Robert Mason: Yeah.
Elise Bohan: They're right.
Luke Robert Mason: Then it needs to not just be a PR exercise. It also needs to be a communications exercise, in many ways, to realise that there are collective advantages that may not be seen for decades. That's the other problem. It's like, I'm sorry that you were born within this period of time - which is, to your point, the transitional period of time. The future has the potential to be a lot better but unfortunately, the present that you're existing in right now is one of those transitional moments where we're still working out all the kinks from what we did in the past, so sorry, grit your teeth and bear it. As you just said there, this has been the history of humanity in so many ways. There are both winners and losers and a pathway to hell is paved with good intentions.
I guess, what you were saying there about reclaiming words, so there's 'transhumanism' that you're quite happy to reclaim. There's also another word you flirt with: 'eugenics'. Are you not just a transhumanist, but are you also a eugenicist, Elise?
Elise Bohan: It depends on what other pejoratives you want to slap onto that word at the same time. I think this has become an untouchable word, which is why it's a word that interests me. Like so many writers and like so many thinkers, I'm interested in and love language. I don't believe that there are words that we should ever really stick in the bin or make unmentionable. It kind of irks me that that is what has occurred with eugenics. I absolutely understand the reasoning for it, and all the very valid emotional forces behind that.
Obviously, eugenics dates back a few hundred years. The Greek meaning of the word is 'good births'. It's the idea that, basically, we want children to be happy, healthy, and as unencumbered by forms of suffering, disease and ill health as possible. This is an aspiration that almost all of us live by in the modern world. Think about the prevalence of prenatal genetic screening and the fact you can just do a simple blood test now to screen for lots of heritable conditions. The effort, the care, and the money that modern parents and parents-to-be throw at anything and everything that might give their child an advantage in the world - whether that's teaching them sign language as kids, or teaching them Mandarin, or making sure they can play the violin - it's the idea that in all sorts of ways, we are constantly trying to enhance the skills, the abilities and the prowess of our children in the world. That's very much what we're programmed to do. They are a package of our genetic potential, toddling around. We normally want them to survive and we want them to thrive, so we're kind of wired for that.
Around the time of the World Wars - particularly after the First World War - there was a lot of concern amongst humanist thinkers in Britain and Europe. Particularly thinkers like H.G. Wells, Julian Huxley, J.B.S. Haldane - people who were working for UNESCO and who were thinking about the prospect of a Second World War. They were really, really concerned with the idea that there seemed to be a demographic issue, where there were, I think as Huxley put it, too many children being born in the slums in these conditions of abject poverty. There was rising damp, more children than they could feed, terrible living conditions, and these cycles of intergenerational poverty. Not enough children were being born to well-to-do families, who, once they got rich, pared back on their breeding endeavours. The slow life strategy is where you invest heavily in the one or two children that you do have.
He thought this was setting us up for a condition where there were so many people being raised in underprivileged conditions who would struggle to pull themselves out of those conditions, struggle to get an education, and struggle to become the kind of informed citizens who would not allow another World War to occur. These thinkers meant so, so well.
Of course, there is a distinction between liberal eugenics and coercive eugenics. The idea is that we want to encourage people to have the freedom to choose great pre-natal conditions so that they can gestate a healthy baby and give it the best advantages. Then there's coercive eugenics, which we now associate with Nazi Germany and the idea that we're, in a very authoritarian way, declaring from on high what sort of people there should be. Saying, "These are the in-group and these are the out-group."
The after-effects, I think...It's also, in a sense, been associated with Nietzsche. Erroneously, the idea that Nietzsche's ideas were co-opted by Hitler, in a sense, and reframed really just as a maniacal pogrom and anti-Semitic campaign that has nothing to do with improving the lot of humanity.
There was enough propaganda around the idea of, in a sense, making Germany great again, that the word 'eugenics' has been - it seems forever in the Western liberal imagination - cleaved to this idea of Nazi atrocities and to the Holocaust.
I understand why. There has been such a push, particularly throughout the 20th century and particularly in academia, to want to signal boost, rightly, the disgust at it. To affirm that we want to ensure that we live in a world where such an outrageous violation of human rights can never happen again. But I don't think we need to throw the baby out with the bath water. Hitler wasn't a man who was about good births. Hitler was a man about scapegoating a minority in order to boost his own political popularity and be an authoritarian leader. Most of us want good births for our children.
I'm totally receptive to the idea that many people would prefer just to use other terms and just talk about pre-natal nutrition or whatever it happens to be. Some people like the more futuristic 'designer babies' idea when talking about genetic technologies coming into play in the future. I don't see a need to throw words in the bin. I think we ought to be adult enough to distinguish the ways in which words have been co-opted and propagandised from their true meaning and true intent, and be able to apply them in different contexts, in nuanced ways.
Luke Robert Mason: So let me ask that question another way. What sort of people do you think there should be?
Elise Bohan: Not any type of person that I circumscribe as a model from on high, let's start there.
Luke Robert Mason: Alright, you got around that question. When we mention this idea of genetically modifying the human for the future, a lot of people argue that we need human diversity. We don't need homogenised humans. I'm not just talking about the look of human beings but also their ability to have or not have certain diseases. We say that certain things are bad, obviously, because of the way they inhibit living a healthy life. But again, we don't know what sort of human environment we'll be living in, and therefore we don't know if the human we're designing for today is going to be suited to the environment of the future.
Elise Bohan: Yes, which is another reason to make these bioethical questions front and centre of the modern human imagination. You're absolutely right. We will have an enhanced potential to make choices, make tweaks. At least for single gene disorders, that's the first thing on the horizon, right? Editing out genes for Huntington's or cystic fibrosis. I think most of us can converge on the ideal that it would be pretty good to have the ability to do that, or at least have the option to do it. I think that's always important - that nobody commands from on high that you must engage in any genetic intervention, but you do have the option if you so desire.
What's really interesting about that is again, as you say, the evolution of the environment, of the niche. But also the ape brains overlaid with that. Once humans start to get a very clever, grand, unifying theory or map in their heads, they can get very stuck on that and think, look at how clever this is. Look, I'll tweak this and I'll tweak that and it'll be terrific. Nothing will possibly go wrong. It will all be beautiful. We usually do see that there are unintended consequences from tweaking complex systems.
What I really worry about is that one of the things I could imagine - this is maybe a long shot in the near future - but if we could start to tweak personality traits, I think that's introversion done and dusted. I think most parents-to-be will select for high extroversion and high agreeableness. Some version of your most prevalent human traits and most pro-social, most socially revered, most conventionally rewarded traits.
What we miss is that I think the lion's share of the greatest inventions and innovations in human history have come from very introverted, very weird, sometimes slightly autistic people.
Luke Robert Mason: If everybody's an extrovert, who's going to design the AI?
Elise Bohan: Correct.
Luke Robert Mason: You know? Who's going to be the engineer at the forefront of saving us in the near future? That is the key issue. Another example I've heard is about cystic fibrosis which is obviously a challenging disease to live with, if you have it. There's too much phlegm in the lungs. You have to lie on your back and have your back pummelled to expel the phlegm in your chest and in your lungs. Say, for example, Yellowstone National Park exploded and we had all this ash and pollution in the atmosphere, all these human beings with hyper-efficient lungs would take it deeply down into their lungs and choke and die. Potentially, those with the ability to capture some of that pollution in their throat and expel that pollution will be the future model for an idealised human.
If we start designing and get Darwinist evolution out of the way, it presupposes a certain set of values about what we think it means to live a good life in the current environment we're in.
Elise Bohan: Yeah, I think that's exactly right. I think it's a far-out example but I definitely take the point. As the niche changes, new things become adaptive and they go on to survive and thrive, and become normative. Why not, then, translate that very principle to silicon intelligence? Could that not be the thing that goes on to survive and thrive, also?
Luke Robert Mason: Well this is where I wonder where you actually stand. Whether it's that we're going to become some sort of cyborg and be some techno-human hybrid, or if you believe that human beings are just the progenitor of some form of future entity - whether that's some form of disembodied intelligence that exists on silicon - or if you think it's going to be the case that we continue to maintain our biological existence and just try and push the boundaries of death a little bit further and further down the road. Do you have a preferential future, I guess?
Elise Bohan: Yeah, audiences are always really keen to be like, "Give me the story. Characterise it for me. What's it going to look like? Tell us!"
Luke Robert Mason: "What's gonna happen?"
Elise Bohan: "What's gonna happen?" It can be really tempting at times to just lead with your dominant picture and overly signal boost that. I have to pull back a little bit and say multiple pathways are plausible. There are so many possible instantiations of post-humanity. There are many possible pathways to human level AI or to super-intelligence. That could come through biological avenues or it could come through digital avenues.
Having said that, I will play ball. I tend to err on the side that digital intelligence will evolve faster than a digital-biological hybrid. I just think that tweaking the biological system of the human is such a complex feat. Brain uploading is theoretically possible but very speculative, and the precision that would be required for that is kind of mindbending. Whereas you can just start with a system that's got its own architecture and its own ecosystem, as it were. All the classic clichés apply: unlike us, it doesn't need to eat. It doesn't need to sleep. You are just packing computation in there. I suspect that if we reach and exceed human levels of intelligence in non-human forms, silicon intelligence will be the dominant pathway.
But some of those examples you raised - they raise the question of thinking on different scales. I think that's really, really important. The idea of whether we could be in this mode where we're just pushing out the human lifespan a little bit more. We're just adding better medical care, contracting that period of ill-health at the end of life. I think that's eminently plausible because there's a really wide range of estimates for when we might see AI or super-intelligence. Yeah, maybe we see it in 10 years. Maybe we see it in 300 years. In the meantime, there's ample capacity for our current technological tools to intervene in their own small feedback loops, and push at the limits of human cognition and human biological endurance, without catapulting us into this kind of event horizon, singularity [inaudible: 7:33] from the world.
Luke Robert Mason: The more of these sorts of conversations I have, the more I start to believe it's going to be some sort of forking future, whereby it's all of the possibilities all at once. The challenge is going to be having to deal with a society that has cyborgs with additional appendages, at the same time as people who've chosen to live in silicon, at the same time as we have people frozen in cryonics - which we do have today. We live quite happily with the fact that they've frozen themselves. All the multitude of possibilities on the spectrum from digital existence all the way to some sort of elongated physical, biological existence. That, I think, is going to be the true challenge, because none of this happens in a vacuum and none of this will be prescribed top-down. At least that's the hope. It won't be a case of, "Alright, we're all going to jump to the silicon now, well done for the last couple of thousand years. Line up, off we go into cyberspace." That's the challenge. That's the political challenge. That's the societal challenge - to not just flirt with a multitude of possibilities but to take seriously the fact that they may come to pass. It's no longer science-fiction. It's a proper experiment in the potentialities of world-building.
Elise Bohan: With that vision, I'm desperately sad that you don't write science-fiction. To be able to hold all of those things in your head gives me the sense that you may be able to actually characterise such a vision. I think those stories are really, really important for humans to hold, as you say, multiple possibilities and multiple futures - these gradients of change - in mind. I would love to see somebody render that in a convincing way.
Luke Robert Mason: There's the challenge to any pioneering listener. I do have to ask - because it's the FUTURES Podcast - how much do you think the decisions we make today really matter? After everything you've said about us being unintelligent meatsacks with ape brains, are we ever going to be the generation that creates a meaningful future, that will ensure our survival? Or are we going to have to rely on future generations or something completely alien from us?
Elise Bohan: Our decisions really matter. The hard part to focus on, I suppose, is the reality that it's the aggregate of all these tiny decisions happening in parallel. Again, the complex systems dilemma. It's so hard for us to be this overarching strategist, playing God and moving the pieces around on the chessboard. We don't have that level of omniscience. So we are nudging the future in all sorts of different directions. We're doing it, I would say, semi-consciously. It's not totally unconscious. It's not, "Right, don't worry about it. We're drifting." We're not totally drifting, but the complexity of the decision making we're engaged in is too much for any single human to keep front of mind, which reinforces that we need really, really robust institutions in this day and age.
Luke Robert Mason: You mention the 'G word' there. Is the God-like thing not human individuals acting like Gods, but the fact that when human beings work collectively together, they can create something which ensures a collective future, that feels a little bit like intelligent design, even if it doesn't have the same aesthetics as religion with a capital 'R'?
Elise Bohan: I don't think it feels a little bit like intelligent design. I think it is intelligent design. There's no evidence of any intelligent design up to this point in human history.
Luke Robert Mason: I mean, we're here. We might be the evidence, we just refuse to accept it.
Elise Bohan: If we're in a simulation then we would have had intelligent designers but if not, what form of intelligence designed you? It was a blind force of evolution.
Luke Robert Mason: That's the thing that I think scares transhumanists. The idea that there's something blindly groping around, that accidentally led to us. Then we say that us is such a wonderful thing, and we'll take it from here. It's like, well, the whole blind groping at different forms of biology and mashing different chemicals together got us this far. Why don't we allow for the continuation of that blind process? Because it did alright so far. Why do we feel like we have the human hubris to now take the reins?
Elise Bohan: In a sense, this is the continuation of the same process. If you think about it as the evolution of information in the universe, Ray Kurzweil's version of this is that you have these six epochs of evolution, and information is encoded in atomic structures, in simple forms, in the early universe. Then you have biology. You have single-celled organisms. You have brains. Then you get big human brains, big old human brains. That's just another way of storing and processing information: these meat computers that we've got in our heads. Then these meat computers seed computer computers. That's another way of processing and storing information. That system starts evolving.
You can say that the technology is subject to evolutionary laws as much as biology is and that the most adaptive forms of technology get selected for. In a sense, we're just the biological machines tinkering away. The gene-machines that are assembling the techno-machines - and evolution carries on.
Luke Robert Mason: So you heard it here first. AI is completely natural because it was designed by human beings that came from nature.
Elise Bohan: Well I mean there's nothing in existence that's not natural.
Luke Robert Mason: So it's all meant to go this way anyway, so let's just grit our teeth and bear it. As you so wonderfully say, the only way is through. Is that the case?
Elise Bohan: I wouldn't say grit our teeth and bear it. Again, the only way out is through, and we will only make it through if we try really, really hard and we focus in a really concerted way in this tight timeframe of this make-or-break century.
Luke Robert Mason: Depending on where you put the emphasis, that is either deeply scary or wonderfully egalitarian.
Elise Bohan: Both in the same breath.
Luke Robert Mason: Both in the same breath. A multitude of possibilities. And on that wonderful, weird, exciting and terrifying note, I just want to say thank you, Elise Bohan, for being a guest on the FUTURES Podcast.
Elise Bohan: Thank you so much for having me Luke.
Luke Robert Mason: Thank you to Elise for sharing her vision for the future of humanity. You can find out more by purchasing her new book, 'Future Superhuman: Our Transhuman Lives in a Make-or-Break Century', available now.
If you like what you've heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.
More episodes, transcripts and show notes can be found at FUTURES Podcast dot net.
Thank you for listening to the FUTURES Podcast.
Credits
If you enjoyed listening to this episode of the FUTURES Podcast you can help support the show by doing the following:
Subscribe on Apple Podcasts | Spotify | Google Podcasts | YouTube | SoundCloud | CastBox | RSS Feed
Write us a review on Apple Podcasts or Spotify
Subscribe to our mailing list through Substack
Producer & Host: Luke Robert Mason
Assistant Audio Editor: Ramzan Bashir
Transcription: Beth Colquhoun
Join the conversation on Facebook, Instagram, and Twitter at @FUTURESPodcast
Follow Luke Robert Mason on Twitter at @LukeRobertMason
Subscribe & Support the Podcast at http://futurespodcast.net