Moral Enhancement Technologies w/ James Hughes

EPISODE #63


Recorded on 26 May 2022

Summary

Sociologist James Hughes shares his thoughts on how libertarian transhumanism allows for cognitive liberty and bodily autonomy, the ethical implications of using enhancement technologies to amplify human virtues, and the challenge of being a techno-optimist.

Guest Bio

James Hughes, the Executive Director of the Institute for Ethics and Emerging Technologies, is a bioethicist and sociologist who serves as the Associate Provost for Institutional Research, Assessment and Planning for the University of Massachusetts Boston (UMB), and as Senior Research Fellow at UMB’s Center for Applied Ethics. He holds a doctorate in Sociology from the University of Chicago where he taught bioethics at the MacLean Center for Clinical Medical Ethics.

Dr. Hughes has taught health policy, bioethics, medical sociology and research methods at Northwestern University, the University of Connecticut, and Trinity College. Dr. Hughes is the author of Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future (2004) and co-editor of Surviving the Machine Age: Intelligent Technology and the Transformation of Human Work (2017). In 2004 Dr. Hughes co-founded the Institute for Ethics and Emerging Technologies (IEET) with Oxford philosopher Nick Bostrom, and has since served as its Executive Director. Dr. Hughes serves as Associate Editor of the Journal of Evolution and Technology, and is a co-founder of the Journal of Posthuman Studies.

Show Notes

The Institute for Ethics and Emerging Technologies Website


Transcript 

Luke Robert Mason: You're listening to the FUTURES Podcast with me, Luke Robert Mason.

On this episode, I speak to sociologist James Hughes.

"If we all have maximal control over our brain and our body, then what we want is to have the option to experiment with all of the different kinds of personalities that we could have and find the ones that work best for us." - James Hughes, excerpt from the interview.

James shared his thoughts on how libertarian transhumanism allows for cognitive liberty and bodily autonomy, the ethical implications of using enhancement technologies to amplify human virtues, and the challenge of being a techno-optimist.

James, you're known for being a techno-optimist. What are some of the challenges of being a techno-optimist in the 21st century?

James Hughes: Well it used to be - in the 19th century, 18th century - that if you were an optimist or an empiricist - if you were optimistic about the future of technology and the future evolution of mankind - then you were probably a political reformer of some kind. You were perhaps on the liberal wing of the enlightenment, advocating for the end of monarchies but also free markets. You might be on the left wing of things like 19th-century or early 20th-century Marxists who believed that having a socialist society would unlock human potential.

After World War II though, the attitudes towards technology shifted in the political landscape. The purview of techno-optimism was mostly on the right, and the left was critical of technology for a lot of different reasons. When I entered into these debates in the 90s, I began to realise that although I had left-leaning political views, I was a staunch techno-optimist, at least within the debates. I acknowledged the risks of technology; I'm not a Pollyanna person.

The techno-progressive perspective - which is something that we've been trying to shape and describe from the IEET - has been an effort to say, "Look, there are issues in public policy that, in the general political debate, people still consider too science-fictional to take seriously and to argue about what kinds of policies we need to have." We're clearly on the side of saying, "Look, if something's going to happen in 10 or 20 years, the time to start talking about what to do about it is now." That's the techno part of it. The progressive part of it is that there are lots of different kinds of political views in futurism and transhumanism.

There are various communities. Only some of the folks in those communities take seriously questions about, for instance, ensuring that technology creates an equitable future, that we protect people's privacy, or that the technologies we roll out are going to be safe and well-regulated. Arguing with those folks and trying to carve out this space is a matter of arguing about particular policy directions and the progressive critique of technology.

In both of those communities, I wouldn't say we're a minority. I think we're a plurality, actually, in the Futures community. Most futurists actually lean left but the kind of dominant, hegemonic discourse is from people like Musk, Peter Thiel and people like that who are definitely not on the left.

I think we are in this kind of unusual position, and I think it's an absolutely vital job because one of the real challenges that we have in the world today is the lack of a vision of what a sexy, high-tech, attractive future might be. We're so inundated with dystopian expectations at this point, and with the collapse of all the old narratives about what progress might look like. We need to really carve out a new one, and I think that's what the techno-progressive project is about.

Mason: That's the challenge as I see it, as well. To be utopian in this day and age almost seems to be naive. To want a better future seems like such a tragic thing to wish for because of the potential negative consequences or the unexpected negative consequences that some of these technologies can wreak. Is it even possible to write a utopian narrative for the future, from a Western perspective, in the 21st century? Are we struggling there because we're so quick to critique the future and future possibilities?

Hughes: I think as soon as you describe yourself as a utopian or advocating for utopian visions, you're automatically saying that you're leaving a lot of things off the table. That's not really where we're coming from. In the context of the debates that are being had, if you say, "Hey, look. I think there are problems with human genetic engineering. We definitely need to address these problems, but those problems can be addressed and we could end up with a future with less disease and more capabilities", people call you utopian, even though you're acknowledging all of the critiques. You're saying yes, we need to address equality, we need to address safety and unintended consequences. There are ways to do that - we've done it with lots of different things - but they still call you utopian. I'm ready to be called that, but I don't think it's technically a utopian stance. I think there are utopians out there who aren't taking those things seriously, and they should.

Mason: So in your mind, to be utopian means that you're almost blind to the possibility of negative consequences that could arise due to new forms of science and technology?

Hughes: I think to be a futurist in the last couple of decades is to acknowledge that the progress that we expect is going to be chaotic and unpredictable. There are going to be black swan events, and therefore it doesn't really make any sense to have a straight, linear projection. I have very much liked some of the linear - or actually non-linear - logarithmic curves that Kurzweil and others have come up with. I think there are some general trends in human history. Demography will shape a lot of history in an inescapable way, at least for the next century. Between technology, demography and maybe ecology - the inevitability of climate change - there are some fundamental drivers of change that you can talk about trends with. In terms of whether we end up with fascism or democracy, for example, I think that's a pretty open question right now, and that's an important question.

Mason: Would you see yourself as a futurist? The challenge with the term 'futurist' is that it seems very passive. A futurist just looks at the trends and tells us individuals what may or may not happen. Douglas Rushkoff has argued this. He goes, "The problem with futurists is they're not propagandists. They don't look at a future that they would like to see emerge and then try to drive the narrative towards that." I see your work, James, as more that of a propagandist for the future. You have a vision for a positive future, and you're not just writing the trend graphs or the trend reports to show that this future may occur. You're actually talking in a way that will allow that future to come to be.

Hughes: That's right. I think some of the critical reception of my work in the past has been that people see that kind of argument as Pollyanna-ish. It's not at all. It's saying: look, we could end up in this really bad place, we could end up in this really good place - why don't we try ending up in the good place? Absolutely. I see a lot of it as a political struggle. I think one of the reasons why I've focused more on politics in the last five years - and am just now reorienting my attention back to my original focus on futurism and emerging technologies - is that the whole situation has gotten pretty dark because of the global decline of democracy. When Trump got elected here, it kind of underlined that in the United States.

I think we have to take a serious look at things like the Ukraine war - how transformative it is of the global situation and of technological progress, the delinking of the global economy. All these things are happening right now and they have to be included in our analysis of how we get to the future that we want.

Mason: It does feel like when we talk about the future - it's a critique of this podcast, in actual fact - it's done from a very Western perspective. It certainly feels that countries like Britain and America have lost the spark that we once had for controlling the narrative around what the future might look like because it's so difficult for us to create generalised, grand narratives that sit across both left and right. Everybody picks their tribe and then their narrative is driven by their politics, rather than picking the destination in the future and collectively driving towards that.

It feels like a country that has perhaps done much better at owning the future in the 21st century - at least owning a unified vision for the future in the 21st century - is somewhere like China, which for all of its problems doesn't have a four-year plan, because they're not looking at electoral cycles. They have 10-year or 50-year plans. Do you think that looking at certain forms of Eastern politics, and applying some of the methodologies they're using to think about the future narratives they're creating for their nation-state, is useful for the West?

Hughes: Well, I think at the highest possible level, some of the successes that China has had do validate the inescapable need for guided technological development, infrastructure planning, climate change mitigation efforts and so forth. I don't think it requires the Chinese level of totalitarianism to accomplish those goals. You can see that totalitarianism can also go insanely off the rails, as you could see in Shanghai and other cities under the Chinese lockdown, where they'll go in and spray people's homes with disinfectant. Nowhere else in the world do epidemiologists think that you need to be spraying down the streets, spraying people, spraying their homes, and spraying their bedsheets with disinfectant to control Covid. But they got locked into this because of the dynamic of totalitarianism.

We saw with the Lysenko affair in the Soviet Union that Stalin got locked into the vision that Darwinism was reactionary, and that the correct biological science therefore had to be Lamarckian. Lysenko was his favourite biologist, and they led Soviet science astray for 30 or 40 years. I think there is a complicated relationship between being in a democratic society, freedom of speech and the progress of science. It doesn't always work the way it should.

There are lots of other competing forces that can deform science - even in democratic societies. But in general, I think that the development of science and the application of science to public policy works a lot better in democratic societies than in totalitarian ones. That's why I think the model should be more of a Nordic model - one where there is strong political coordination in society and strong state-level initiatives - but it's still a democratic society.

Mason: The only reason I mention China was because for me, the moment where I suddenly realised that perhaps we don't control the narrative around the future in the West anymore was when, in the same week that Jeff Bezos sent William Shatner to the edge of space for 11 minutes, China sent their own spacecraft to their own space station. They sent three astronauts there. Whilst America was running 11-minute tourist trips with a fake Star Captain, China was essentially building the future in space. It feels like that's what we should be doing. Those are the agendas that we should have on the table. We've given those agendas to the sorts of individuals you've mentioned very briefly, like Elon Musk, where it's an agenda not driven by large government projects anymore. It's driven by American capitalism and free-market forms of politics. It feels like there are some limiting factors to doing things that way.

Hughes: Right. Especially in space, I've never been attracted to the vision of a private sector-led space exploration model. I guess if we could have some sort of international coordination about who gets to own an asteroid, and then Musk or Bezos sends out robots to mine it, I'm okay with that kind of thing. But the idea of a Musk-controlled corporate colony on Mars - leaving aside whether Mars is the right thing to do at all - is not too attractive to me. I much prefer state-level, state-guided adventures. Partly for the reason you mentioned: they are far more likely to be successful and to have the resources that are necessary. But I think there are megaprojects that we need to be focusing on here on planet Earth. Again, it's easier to imagine China committing to, for instance, spending a hundred billion dollars to build carbon capture technologies that would start sucking carbon out of the air, or engaging in some kind of geoengineering. I would far prefer governments to do geoengineering than corporations.

Mason: It always feels like the techno-libertarians on the transhumanist side are the ones arguing for rampant technological innovation; the freedom - really, it's about freedom - to do exactly the sorts of things they wanted to do, whether to the environment, to each other, or to their own bodies. Yet the government was always seen as the barrier to doing those kinds of proactionary things. Do you believe that we can have a techno-libertarian approach to technological development whilst also having the checks and balances that something like government allows? Do we need those forces in equal measure to allow us to navigate carefully and considerately into the future?

Hughes: One of the things I'm very interested in is how to understand innovation ecosystems. If you think about the constraints that human civilisation has been under throughout its history, the capacity to innovate new technologies is what has the most dramatic effect when pushing back those constraints - ecological constraints, demographic constraints, and competition from other societies.

I'm very interested in how we create a society that maximally produces innovation. Within that, I think you do have to recognise that, basically, planned economies lost the argument in the 20th century. To the extent that the Soviet Union or other societies had planned economies, they were not so great at innovation. They were relatively good at frog-marching a bunch of peasants into factories, building those factories, and building tanks. The early stages of industrialisation were what planned economies were relatively good at. But what's called the socialist calculation debate basically established that when you get to a certain level of complexity, you need market mechanisms in order to progress.

Then there's the kind of geopolitical angle which is that if the innovation's happening outside your society, the only way that you can get that talent, the experience and the technologies - unless you steal them - is to invite outside investment. That was China's strategy, and Russia's, too, in a more catastrophic sense. China invited people to come in and invest, and then they began to build up their own indigenous industrial infrastructure. I think in the 21st century, one of the questions that's on the table is whether the many different kinds of planning that we do in our democratic and non-democratic societies will become increasingly sophisticated with artificial intelligence, in ways that will overcome the defects of planned economies in the past.

The only place where I don't think that's possible is innovation. I think artificial intelligence is going to get better and better at helping us distribute stuff. One of the examples of this is Walmart. There's a great book out called 'The People's Republic of Walmart' about how Walmart is a world economy unto itself. It pioneered the internet of things by having RFID tracking on all of its products, and it does anticipatory distribution. It knows that there's going to be a game in a particular town and that it therefore needs to ship more toilet paper to that particular Walmart. All of that, you can imagine being rolled up to the societal level to do more of that kind of thing.

What that kind of planning is not so good at is figuring out the next thing that people might want. That's where the capitalists have to come in. I think there's always going to be a role for capital investment at the venture and angel level. I don't foresee it being desirable to get rid of stock markets, for instance. There's a certain argument for more widespread stock ownership. Perhaps weaving stock ownership of some kind, or ownership of large mutual funds, into a UBI - all of those are kind of interesting ways of democratising our existing capitalist infrastructure.

If we want societies to be innovative, we do have to accept that, so far, there's going to be a capitalist component to that. The place where that argument is so weak from the libertarian side is that there are many, many other things that you need to have an innovative society. You need to have a well-educated workforce. You need to have a workforce that's free. There's the argument about the creative class - I wouldn't push it too far - but it's certainly true that if you want to attract young, interesting workers from around the world to your state or to your city, you need to have a certain kind of infrastructure and a certain kind of policy environment that will be attractive to them. You need to have higher education. You need to have investments in basic research. You need to have a regulatory system that works functionally well. Without those things, at many different levels, you could have as many venture capitalists as you want and you wouldn't have the same high level of productivity that we do.

Mason: Well then I guess the question is, what sorts of innovations should we direct our attention to? There's so much we could do, but then there's the question of what sorts of agendas drive the paths that we go down, and which paths we direct our talent - our creative class, as you've called them - towards. To some degree, the best minds of my generation are working at large technology companies trying to work out how to optimise advertising dollars around digital ads - whether it's Facebook or Google, or pick your poison there.

When we're taking that sort of talent out of the collective pool and putting it into these corporations that are doing a very specific thing, we're losing that talent's capacity to focus on the larger issues and problems in society - whether those are directed by certain forms of AI or otherwise. How do we deal with that talent piece?

Hughes: Well I think you have to create the incentives. If we're going to have a labour market where people get to choose their jobs - and one of the things is how well it pays - you need to have an incentive structure in your society so that not all of your intellectual talent goes to Wall Street instead of to innovation; real innovation.

Just to go back to China for a second. I'm a political opponent of Xi - he's a barbaric dictator. But one of the things that China has been doing over the last two years is cracking down on technology. They have looked abroad and internally at what's happening in China. They see the growth of tech titans and their politically distortive effects. They see the stress caused by kids having to go to a couple of hours of after-school training to prepare for their exams, and the exploitation and inequality that arises when some kids can afford to do that and others can't. They've been trying to crack down on all of that. I think there are some interesting parallels with the kind of technology regulation that China is doing.

Part of that is that Xi is worried about all of the intellectual talent in China going into game production in particular - that's one of the areas he's cracking down on - instead of real stuff. They're very explicit about that. They say they want kids to start doing real stuff instead of just messing around on the internet.

At any rate, I think in a democratic society, the way you change those kinds of incentive structures is that you provide good funding for your higher education institutions and programmes of basic research, so that scientists like my daughter are guaranteed a reasonable living from their work and get grants to do important things. Then you need to have a transitional programme in your society - not so much to pick winners, but to say, in broad areas, we're going to have more nanotechnology, and so we want to encourage particular industries to start collaborating with us around a technology. In China, again, some of it's actually led by the People's Liberation Army. They have their own research initiatives and their own firms. The PLA, for 50 years, has been funding itself by doing commercial activity, not all of it legal.

One of the approaches in China is to have state-directed coordination for things like quantum computing, artificial intelligence, and genetic engineering. The PLA does some of that. At any rate, I think we do need to have state initiatives that create the incentive structures to do the kinds of innovation we need.

One of the classic examples of this is pharmaceutical research. If you look at the big pharmaceutical firms, they don't have big-budget research programmes on the major killers in the world - cholera, malaria and things like that. They do have them on erectile dysfunction, you know. The reason for that is that the people with the money to buy drugs worry more about those kinds of things. This is a big issue with biosecurity and biosafety. We've just gone through two years of Covid policy where we saw the relative efficacy of state guarantees: "You five firms come up with some vaccines, we'll test them, and we'll guarantee that we'll pay for X, Y and Z." Otherwise, why would anyone...pharmaceutical firms don't have much incentive to engage in a 20-year smallpox [vaccine programme] - or monkeypox, now that monkeypox is spreading. There is a monkeypox vaccine out there, but we don't have enough of it. To have the kind of biosecurity programme that would say we need 200 million doses of monkeypox vaccine - that is only something that governments would do.

Mason: It seems like that only happens when the incentives arise. As you're saying there, it's not until monkeypox is a real problem that we drive the funding, the allocation and the attention towards those sorts of things. To have a truly long-term view of the future, surely you must get ahead of those things before they occur. Do you feel like those doing the work to identify the possible future trends and possible future problems that may come over the horizon are shouting into the void when it comes to some of these things? Are they still effective in helping at least nudge some of these big government organisations, or even corporations, towards focusing on what may be important - not today, but in the future?

Hughes: Well, this goes back to the IEET's raison d'être, which is to try to make that argument and to say, "Look, I know what you're concerned about right now is cybersecurity, but we expect that within 5 or 10 years, certain kinds of tools are going to become increasingly intelligent in these ways, and that would create potential problems of these sorts." It's to try to orient the existing policy debates to include that emerging technology prospect, and to create policies which would also prepare for it. In the bio arena, one of the examples is the longevity dividend argument, which is: if you're concerned about diseases of ageing - and Covid is also a disease of ageing, because of its age-differential effects - what we should really be focusing on is slowing down the ageing process. That would be a far more effective way to control cancer, heart disease, stroke, and infectious diseases that are differentially bad for old people, as well as reducing the amount our society spends on nursing homes and caring for the disabled.

There are many, many ways, in every technological sphere...I once talked to an Obama official about whether they had had any discussions. I was excited that the Obama administration had had some initial discussions about geoengineering, so I thought maybe there were some interesting futurists involved in the Obama administration. I asked someone who was working in the health and human services domain, "Has the Obama administration ever talked about human longevity and what effect it might have on, for instance, social security, the pension debate and things like that?" Nothing. Washington is very insular to those kinds of arguments.

The one place where you can get a decent conversation is in the defence department. They do have folks...they bring in science fiction authors to talk to them about the things they should be thinking about. The CIA and the defence department issue reports on the world in 2030 and the world in 2040 - the kind of defence environment they're going to face. They include things like human enhancement, nanotechnology and artificial intelligence in their projections. I just wish it was more widespread.

The Trump administration set us back a long way in terms of the capacity to address emerging challenges. Now we're re-litigating the right to abortion and the right to interracial marriage instead of the things we should be debating.

Mason: It certainly feels like there has been some form of cultural kick-back. If we look at the Trump administration and some of the few good things they did do, they did allow 'right to try'. In other words, if you have a terminal illness, they would allow you to try highly experimental drugs. That sort of policy helped some of the human enhancement agendas that you're thinking about.

Hughes: Can I just stop on the 'right to try'? I'm very interested in this debate. I think the technoprogressive answer to it is what I call open clinical trials, which is to say: we don't want to just tell people to go and do stuff, and not collect any information about it. Especially if they don't actually have the condition that is being targeted by this particular drug. We saw with Covid people taking horse pills and all kinds of things that had no evidence behind them. They were poisoning themselves. People forget all the time why we have a Food and Drug Administration, why we have a clinical trial regime and why we have prescriptive control over certain kinds of drugs. I don't want to throw all of that out the window in a libertarian move.

I think the intermediary and much more progressive position is to say: look, we have the capacity now...I have four medical devices right here on my desk. I have my blood pressure monitor and my Fitbit watch, and I have blood sugar testing equipment. We have the capacity for anyone who wants to try an investigatory substance to have home health monitoring, and to make a quick arrangement with them to say, "Look, if you have the diagnosis and you want to try this, this is what you have to do. To keep getting this drug, you're going to have to give us all this information and come in for a monthly health checkup," or whatever. Then we get the information about what's actually going on with that. Why this is innovative is that it's not a randomised clinical trial. It allows everybody in, and then it uses statistical methods to ask, "Is this more effective for African Americans than Whites, or for women than men?" Only at the level of a population study with 10,000 participants, as opposed to 100, can you do that kind of analysis. I think that's a far better way to go.

I think that relates to something you said earlier about libertarianism. I think it's important to distinguish between the libertarianism of the market and the libertarianism of the body. For my kind of politics - social-democratic politics - with the libertarianism of the body, the state has to meet a very high bar to tell you what to do with your body. For that reason, I understand and appreciate the argument about the right to try. If you have a terminal diagnosis and society just hasn't gotten around to testing or validating any particular substance, why the hell not, if you only have three months to live? That's why I would like the information to come back, but I understand that, and I don't have a problem with just saying, "Okay, they should be able to get it."

The right to control your own brain - psychoactive substances. The right to control your own reproduction - to choose the timing of your children, and to have full access to abortion and contraception. Also to be able to control the kind of children that you're having: to be able to have prenatal testing, and eventually genetic engineering. Those are the three kinds of personal privacies or rights that I think have to be respected by people of the social-democratic left, as well as people on the Republican or far right. That is a distinct kind of libertarianism from saying that you should never be taxed, or that there should never be any regulation of technology. I make a very strong distinction between those.

Mason: I think that's an incredibly important point. With regard to the liberation of the body, the assumption is that people can make informed decisions for themselves. As we've seen with the emergence of things like the anti-vax movement, it's difficult for people to make scientifically informed decisions about what to do and when to do it with their bodies. Surely the thing we need to fix first is scientific literacy - that education piece - before people are allowed to have this full bodily libertarian approach.

Hughes: Dude, I'm an educator, so anybody who says we need more education, I'm down with that. But I don't think the research on cognitive biases, confirmation bias and things like that has really demonstrated that scientific literacy is a solution to all the irrationalities and conspiracy theories. Especially when you're talking about medicine. One of the huge problems has always been that you do an informed consent for a phase 1 cancer investigatory substance. You tell them, "Look, the only thing we know about this right now is that it might work. We're trying to figure out whether this dose will poison you or not." They say, "Okay, great. Thanks for the informed consent. Give me the drug." Then you say to them, "Well, what do you think you just enrolled in?" They say, "It's trying to cure my cancer." Well, kind of, but it's also true that we're basically using vulnerable cancer patients as fodder for clinical medicine, and making their last three months more miserable than they had to be.

There's just a huge problem with the whole idea of informed consent. At any rate, in general, of course I think we need more scientific literacy, and over the course of the last 100 years - with the spread of literacy and so forth - there's a lot less magical thinking than there used to be. But the last two years have revealed an awful lot of magical thinking. I don't know if giving people more science books is going to help.

Mason: Is one of the opportunities there for the futurist community to generate alternative - I've got to be careful with the word 'alternative' - counter-narratives to the sorts of conspiracy narratives that we saw? For example, "There are going to be nanobots in your vaccines." The assumption there is that it's a negative thing: that the government is tracking you, that Bill Gates has developed this thing.

Hughes: What if they're good nanobots? Exactly.

Mason: That's the opportunity. What about the futurist community turning around and going, "Look, if we were able to develop nanotech that could be flowing constantly through your bloodstream, here's the list of incredible and wonderful things that we could do to improve an individual's health." Do you think, perhaps, that's the way to counter some of these negative assumptions that come with technologically guided innovation that people don't fully understand?

Hughes: Yes. Let me just mention something else I've been working on recently, which is why there is such a politicisation of science and why we're seeing this rise of conspiracy theories. I think it has a lot to do with the changing structure of industrial society, where college-educated people have become a significant plurality - not a majority, in most cases, but a plurality - of the population. As a consequence, at least in the United States but to a certain degree in the OECD countries in general, the wage gap between college-educated and non-college-educated workers has been growing. Certainly during Covid, if you had a college education you were far more likely to be in a job where you could work from home. The college-educated take science seriously, mask, get vaccinated and everything.

There's a long tradition of populist scepticism about the elitism of the educated. We college-educated people used to be, 100 years ago, only 3 or 4 percent of the population. Now we are dominating all the major institutions - the media, the universities, and the political parties. Part of what the blue-collar far right is responding to is that inequality. You see the rejection of expertise and the rejection of...Fauci became the litmus test, or the poster boy, for this in the United States. If you believed Fauci, you probably had a college education. If you didn't, you were probably high school educated. Same with masking and so forth.

I think we have to confront the fact that there are a lot of people who feel left behind in society. Their frustration at being left behind is manifesting in all of these counterproductive ways, including rejecting science and having conspiracy theories about it. You see that reflected in, for instance, the great polls Pew has been doing about human enhancement technology, artificial intelligence and so forth. They've been asking, for instance, "Do you think it'd be good for society if we had brain-computer implants? Do you think you would use one?" In general, education is one of the drivers here. The more education you have, the more optimistic you are about these technologies and their effects on society - the more optimistic about using them yourself, about riding in a self-driving, autonomous car and so forth.

We really need to confront that. If our political vision of a fully enhanced future is going to leave behind half the population, we may not get there at all. Those people are a serious roadblock right now.

Mason: You have to take into consideration how they're discovering or engaging with these forms of technology. It was fascinating to me to discover that Steve Bannon's news platform - War Room, at warroom.org - has a tab dedicated to transhumanism. It's rather pithy, but some of the negative arguments they're making about the sorts of transhumanist technologies that people like you and I would see as exciting, positive futures - they have legitimate critiques of some of that stuff. They're critiques that we're both familiar with, but it's about the language in which those critiques are told. The technologies are seen as forms of surveillance; they're cast as weapons of control as opposed to things that could liberate us from our current circumstances.

Really, it's the same technology with slightly different communication strategies, which is what I'm finding so fascinating about this new political realm that transhumanism has to navigate through. I really think the movement isn't ready to realise the scale of the potential kickback - from the last, say, 10 years of Twitter, and of individuals like Steve Bannon having a very fixed perspective on the negative aspects of transhumanism - and what it will do to the movement.

Hughes: Well, part of my background is religious sociology. One of the things I've written about in the past is how narratives about the long term - about what in religious sociology is called 'eschatology' - can lead very easily to violence. If you think about Terminator 2's Sarah Connor: she knows that this one scientist builds the robot that ends the world. All you have to do is just kill that one guy. It's usually not that simple, but ever since the Unabomber, there have been people who believe that.

The Unabomber Manifesto was pretty on the nose about some of the things that we talk about - genetic engineering and so forth. He thought all he had to do was start waging this bombing campaign and kill the right people. Fortunately, the level of political violence is still very low in the United States, even though we're incredibly polarised. We are killing each other with handguns all the time, but the political violence is still very low.

I have been very worried for decades that these kinds of narratives about the potential catastrophic risks that emerging technologies pose will get welded to political and religious agendas that will cause political violence. I don't see much of that. Eliezer Yudkowsky and people like that have been talking about friendly AI for a long time. When they came to the conclusion that unfriendly AI could be an existential risk, none of them picked up a gun. There are lots of people in the United States who are worrying about that now, who have lots of guns. I think it is an ongoing concern.

Mason: One of the ways we can resolve that, I guess, is through the sorts of things that you're interested in currently, which is this idea of enhancing our virtues. I guess if you enhance these people's virtues, they won't care so much about this stuff. I'm being facetious, of course. Could you tell me just a little bit about this idea of using enhancement technologies to change or manipulate certain virtues that we have as human beings?

Hughes: Part of this is a debate in the academic bioethics literature about the concept of moral enhancement. People like Julian Savulescu and John Harris have been talking about the ways that we could use technologies to change our moral emotions, cognitions and behaviour. My particular axe to grind in this debate is that most of these people are completely unaware of a certain history of religious discussion about what it takes to have a good moral character. They come out of a secular bioethics background - consequentialism. They latch onto something, like saying, "Oh, we'll just make everybody nicer. Wouldn't that be a better world? If we just make everybody smarter, wouldn't it be a better world?" You know what, if you go back to Aristotle or the Buddha, they were very clear - Aristotle in particular - that every virtue has a downside. People should be somewhat fearless, but if you make them too fearless, they're foolish. If you make them too compassionate and open-hearted, they get trampled on by other people.

For a long time - for thousands of years - people have been trying to enumerate what the virtues might be, how they relate to each other, and what a well-balanced moral character looks like. When we try to talk about writing morality into robots, this has been part of the problem. Do you want a perfectly consequentialist robot? Think of the trolley problems that people propose: a perfectly consequentialist robot would have no problem pushing the fat man onto the tracks. There's a train, there's a fat man, let's push him onto the tracks.

We want, maybe, that outcome, but we want a moral creature - in this case, a robot - to have some qualms about it. To have some competing considerations: make it feel bad, and make it think for a second. That comes from this kind of complex moral character building. In Wendell Wallach's 'Moral Machines', they basically make the argument that the best way to build a moral machine is to put it through the same moral character-building process that we put human beings through, ideally.

What are the characters or virtues that we might choose? I don't believe that there's an absolute ideal human character. I'm not a perfectionist. I'm not trying to get everybody to become like X. But I do think if you look around the world, you can find certain virtues that have been very common in many different cultures, and kind of distill the essence of what they're trying to get at. Then you have to have that conversation with moral psychology, with personality psychology and with neuroscience. What I've been trying to do with this project is to build a model of how the virtue conversation in the philosophical religious literature relates to the moral neuroscience. That's what everyone's been trying to do, to some extent.

Out of that, I've been making the argument for a decade - I shall finally get the book finished, but I've written lots about it; it's out there. The book is basically structured around a set of virtues. Happiness or positivity as a virtue: again, if you have too much of it you can become [inaudible: 46:51], so don't get too much, but we could probably all use a little bit more. Self-control, which is one of the few virtues where having too much doesn't seem to have much of a downside; if you don't have self-control and you have addictive behaviours and so forth, then we need to overcome that. Niceness: caring about other people, empathy. Intelligence, which is decision-making capacity - the capacity to remember things, to learn, and to make good decisions. Fairness, which is being able to be dispassionate about your own biases and to understand a concept of justice that goes beyond your own needs. Then transcendence, which is the ability to step out of yourself and your ego cage. With each of these, the moral neuroscience has shown us the chemicals, the parts of the brain, and the genetic heritability of different kinds of traits.

They've also made important distinctions. In Buddhism, there are four terms for different kinds of compassion. Neuroscience really supports two basic kinds: cognitive empathy and visceral empathy. Visceral empathy is when you see somebody step on a nail and you jerk and feel it, from your mirror neurons. Cognitive empathy is having a theory of mind, and understanding that other people might have the same kinds of experiences and feelings that you do. When you understand that someone has had an experience, you feel sorry for them because you might have had a similar reaction to it. At any rate, I think neuroscience informs and can shape the way that we think about virtue at a philosophical level as well. In between, for me, is personality theory - the evidence that certain kinds of personality traits are heritable, largely set at birth, and seem to be related to certain genetic variants controlling serotonin, dopamine or whatever.

I think the ultimate goal is for us to have a kind of control panel for our brain, so that we can say: when I go to a party, I want to be happier; when I go to work, all I need is a seven instead of an eight. Self-control, again: turn it down for the party and turn it up for work. We already do that, in the sense that you don't drink alcohol before you go to work - you drink caffeine - and then you can drink alcohol when you go to a party. We are already trying to control our moral sentiments and behaviour in that way. We will just have increasingly refined methods for doing it.

Finally, in the project I'm writing, drugs are not the only thing we're talking about. There's also genetic engineering of humans - somatic genetic engineering, where people genetically engineer themselves. It's not that attractive an option in most cases, because you would like to have more control: if it doesn't work very well, you'd want to be able to turn it off. There are theoretical ways to do genetic engineering so that it would be turn-off-able. Then there are brain-computer interfaces, external brain-computer relationships, and what I call the exocortex, which is to surround yourself with an electronic superego that reminds you, guides you and tries to keep you on the right path. The development of those kinds of technologies is pretty inevitable.

The political question, I think, has always been: how do we regulate those? Or do we even want to go down this path, given that there will inevitably be totalitarian cults, parties, businesses and countries that will try to use these technologies to control workers, citizens, cult members or whoever, in ways that we don't want them to? The ultimate fantasy of this is the Borg. In a lesser sense, I've joked with Catholics: if you could have had testosterone suppression 200 years ago, imagine how many Catholic kids would have been saved from predatory priests. Don't you think that would have been a good idea - to be able to give them a shot and reduce the instances of sexual predation? I'm not opposed to cults using these kinds of things; it's just that I think we need to think very carefully about the political context within which we want them to be regulated, to make sure that they're safe, that people understand the consequences of using them, and that there are going to be forms of moral enhancement that we don't want people to use - certain kinds that need to be banned.

For instance - I don't think this is going to be true, but - if you found that you'd be a better soldier if you completely suppressed empathy... I think empathy actually plays a very important role for soldiers on the battlefield: empathy with each other, at any rate, and in not committing war crimes and so forth. But if you found, theoretically, that suppressing a soldier's empathy would be good for their battlefield performance, would that be a good thing to do? Those are the kinds of questions we have to answer.

Mason: To what extent can we already modulate these things - our moral sentiments, our emotions, our cognitions and our behaviours? What human enhancement technologies are already available to us? Apart, I guess, from things like education, which could be classed as a form of human enhancement.

Hughes: Sure. That's usually where I start the conversation. The best thing that you can do for your virtues right now is get enough sleep. On almost every one of the things that I mentioned, not having enough sleep means you're not as nice, you don't have as much self-control, you're not as happy and you're not as fair. There's a nutritional component to our brain, so diet has an influence in many of these areas, as does exercise.

Then there's a behavioural and environmental component as well. It's very hard to be a virtuous person in an unvirtuous occupation or an unvirtuous society. For Buddhists like myself, the eightfold path includes right livelihood: you should try to pursue a job where you're not constantly struggling to be a good person in a really terrible role, and similarly with the society that you're in. Not to be overly political, but think of the images of Russian kids participating in fascist rallies in the context of this war: how difficult would it be to be a conscientious objector to Russian fascism in that context, or to not want to believe that your society is committing the atrocities that it is committing? At any rate, I think we need to situate these kinds of moral enhancements within the context of a programme for a better society as well.

Mason: Do you think there's a danger, though, of putting certain value judgements on certain personality traits? Many of the things that you listed there - those six virtues - seem like very important things, but by putting a value judgement on them as preferential virtues, does that negate all the other virtues that we have, that fundamentally make us human?

Hughes: There are going to be debates about what kinds of virtue models we should aspire to. Take faith, hope and charity, the classic formula from Catholicism - the faith part of it illustrates that for a lot of virtue models, there are components that we would accept in secular ethical frameworks and components that we wouldn't. After the Enlightenment, we said you should be critical of the world: you should have an empirical, critical approach to the world, and find things out for yourself. That's not in every virtue system; usually it's the opposite.

Sexual prescriptions. I think our contemporary secular ethics are not...self-control, yes: don't be a nymphomaniac, don't rape people and things like that. It's not a lack of self-control if you're gay, trans or whatever, but for some systems obviously it is. We're going to have debates over what the ideal moral character should be. I think from a liberal democratic or social-democratic point of view, it's still possible to say that there is a scope of kinds of people that we think are the best kinds of people for our society: not to be a flaming racist, or not to be a sexual predator.

Usually, our definitions of the ideal personality are so broad that the boundaries are at psychiatry and criminal justice. I wouldn't want to be more specific than that. I wouldn't want to say that the ideal person is the person who prays to Allah five times a day, or whatever. I want there to be a huge diversity of possible life choices. But at the boundaries, we have to say: when do we pick someone up off the street and put them in involuntary psychiatric confinement or treatment, and when do we put them in a prison? We are all doing this all the time. It's not as if I'm proposing something new.

My mother was part of the campaign to deinstitutionalise mental illness and create group homes for the mentally ill. One of the consequences of that in the 1970s was that we let a lot of people out of mental hospitals and then didn't build the group homes that they were meant to be living in afterwards. A lot of people ended up on the streets. For a certain kind of libertarian mindset, that's fine: you decide to live on the street, that's fine. I would like to live in a society where, if you're so mentally ill that you're living on the street, we step in as a society and try to help you.

But anyway, we are constantly renegotiating these boundaries of what is just difference that should be acceptable, and what is so different that you're harming yourself, others and society as a whole. I think this is going to be part of this discussion, for instance, with self-control. Self-control enhancement - it's hard to argue, I think, against the idea that people in the criminal justice system who have had problems with violence in particular should be offered the technologies of self-control enhancement.

For instance, ADD drugs and stimulant medications. People with severe ADD have a huge problem with self-control, and there's a much higher incidence of ADD in the criminal population. If we were able to effectively treat ADD before people become involved with the criminal justice system, they'd be less likely to get into it. Once they're in it, they should be evaluated and treated. Some people consider that unacceptable enhancement, blah blah blah. I just consider it a normal part of what a caring society should do: if you've got a problem, you will not be able to live in our society effectively unless you change your brain in this particular way.

I think there are other places where we do that. As I mentioned, testosterone suppression for sexual violence. I think you can also make that argument for racism - this is where I think it gets a lot more controversial. Take the evidence on extreme misogyny: the incel murders, or mass murders where guys who can't get a date one day decide to blame all women and go and shoot a bunch of college students or whatever. Extreme racism and extreme misogyny are implicated in violence in our societies, so there's a case for the ability to screen for and treat them. This is where we get into saying we don't want to live in a police state that is constantly evaluating us for whether we're going to be thought criminals, and we don't want to have a Soviet psychiatry system that decides that every political dissident needs to get electric shock treatment. But we do want a society where, if a kid is fantasising all the time about guns, doodling swastikas in his notebook, and talking about how all women are evil and should be killed, those kinds of signs can be taken seriously, and you could then offer him something - something better than talking therapy, being expelled or whatever.

Mason: Doesn't this sit at the heart of the nature-nurture debate? What you're talking about is that the environment - or, even better, the society in which we live - will eventually define the types of people that are allowed to be. Surely it should be the case that the environment should create the circumstances under which people don't exhibit these problematic behaviours - and I say 'problematic' so as not to give it a value judgement. Do you understand what I'm saying? It feels like if you don't fit into society, we will change you, as opposed to dealing with the societal issues that gave rise to the sort of person who wants to be an incel or a school shooter.

Hughes: We've been doing this for a long time. From an evolutionary perspective - both by killing the people in your tribe who weren't behaving the way you wanted them to, and through differential mating - we've been producing a new social outcome. We've been selecting for the people who best fit into the social project, however it's defined. Just imagine the amount of selection that we applied to dogs. We've been doing the same selection on ourselves, through our own behaviour. It's had demonstrable genetic consequences, such as the evolution of various kinds of food allergies, for instance.

From an evolutionary perspective, I think this is already something that we're doing. From the perspective of risk - this is where Persson and Savulescu went with their book, 'Unfit for the Future' - they argued that the existential risks of the future are going to become increasingly enabled by technologies that individuals or small groups will be able to control. As a consequence, their argument was that we need to ramp up mandatory universal moral enhancement in order to address those risks. I think there are a lot of missteps in the logic there, and I'm not prepared to go to their conclusion of universal moral enhancement.

I think the more general point, however, is that people will try to use these technologies to become less moral in the capacities that our society generally agrees are necessary. We want to discourage that, even though - because of the high bar that I think we should be trying to meet on cognitive liberty and bodily autonomy - we may allow people to, say, turn off all of their empathy and see what it's like. But we wouldn't want a lot of people doing it. We may want to allow certain kinds of psychotropic drugs - methamphetamine legalisation or whatever - but as a public policy matter, we don't want everybody taking methamphetamine.

I think it's an inescapable question for any society that wants to continue to reproduce itself: it has to define certain boundaries on human behaviour and say, "If discouraging these behaviours doesn't work, then we're going to ban them. Let's try discouraging them first and see if we can keep everybody broadly in this ball of wax."

Mason: Again, it goes back to what society is encouraging. Rutger Bregman said in his book 'Humankind' that it's unfortunate that traits like psychopathy and lack of empathy are often found in our political leaders - Trump being a prime example. The fact that he was so utterly shameless unfortunately made him an effective candidate for President of the United States. Could you have a highly empathetic US President when the stakes are incredibly high?

Hughes: I think your example is right on. One of the places where I see the applicability of a moral enhancement regulatory regime is a case from a few years ago, where they found that the guys who controlled the nuclear weapons in the United States - in those bunkers where they have to turn the two keys and all of that - had cheated on the exams to get there, and some of them had drug problems as well, like cocaine. If there's a job where your cognitive capacities and moral decision-making are vital to the lives of millions... well, let's start with bus drivers. You don't want bus drivers to be drunk at work. That's why you lose your driver's license if you're caught drinking and driving.

Mason: Or airline pilots. I think pilots are a prime example.

Hughes: Right. With politicians, what do we do? We say, "Tell us you've gone to see a doctor." Then Trump comes up with some guy who says, "He's the healthiest person I've ever seen." We put pressure on Trump, saying, "We think you're losing your mind," and he says, "No, I've taken a cognitive test. I could say that it was an elephant and a camel." Okay. He took one of those mini cognitive tests, which are like five questions. These are terrible ways to figure out whether a person who has their finger on the nuclear trigger is competent enough - not to mention moral enough - to actually be a leader.

The proxies we use for morality, for the most part, are religion and whether or not you beat your wife. I think you can be a very immoral and incompetent person and be very religious. Unfortunately, I think you could probably beat your wife and also be a good leader - though we don't want you to beat your wife either. What we may evolve towards - it's hard to predict how this comes about - are increasingly sophisticated ways of testing. Drug testing, for example, is widespread. I don't support it for most workers; I don't think it's necessary for most workers. But for airline pilots, for instance, I think drug testing can work.

I think what we also need is cognitive and moral testing. One of the ways some DUI technologies are being rolled out is to require you to solve a reasonably sophisticated number puzzle before you can turn on your car, just to make sure you're not intoxicated. You could also test for whether you're too distracted by rage, or too sleepy to drive, if you can't pass that particular test. That's the direction I imagine things going in, especially with these high-stakes occupations. The ultimate end of that, I joke, is that you show: I've installed Morality 8.42, I'm running the current version, and I've got all of the updates to my moral code in my head. Then the question is: who's deciding the moral code, and does it work as well as we expect? That's what I would like Trump to have been required to show.
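
As an illustration of the kind of pre-ignition gate described above, here is a minimal sketch in Python. The puzzle type, question count and time limit are illustrative guesses, not details of any real DUI-interlock product:

```python
import random
import time

# A minimal sketch of a pre-ignition cognitive gate: the driver must answer
# a few timed arithmetic puzzles before the ignition unlocks. All thresholds
# here are illustrative guesses, not taken from any real interlock device.

def cognitive_gate(problems=3, time_limit_s=10.0):
    """Return True (unlock ignition) only if every puzzle is answered correctly in time."""
    for _ in range(problems):
        a, b = random.randint(12, 49), random.randint(12, 49)
        start = time.monotonic()
        answer = input(f"What is {a} + {b}? ")
        # Too slow or wrong: plausibly impaired, so keep the car locked.
        if time.monotonic() - start > time_limit_s or answer.strip() != str(a + b):
            return False
    return True

if __name__ == "__main__":
    print("Ignition enabled." if cognitive_gate() else "Ignition locked out.")
```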

Mason: I guess it presupposes a certain type of society - one that is more collaborative, where being empathetic to our fellow humans is a great thing. If you've got a threat like Putin, who is, to a degree, psychopathic in his own right or has certain problematic social traits, and you're in competition with other nation states or other individuals, surely you don't want to heighten virtues that would stop you operating effectively in a competitive capitalist environment. The things we're so obsessed with enhancing in the 21st century are usually around intelligence, simply because it's assumed that our level of intelligence is what gets us a better job, for example.

Hughes: You didn't say this explicitly, but one of the things that often gets brought up here is that if we were to make everybody more compassionate, we would be more vulnerable to predation from less compassionate, more militaristic societies. I don't think the evidence actually supports that. We can test, for instance, how common trust in your neighbour is in a society, and for the most part, ramping up compassion has a beneficial effect on social coordination. That is one of the reasons why this capacity is so central to social reproduction: tribes and groups who had more compassion for each other were able to fight on each other's behalf more effectively.

I think there are other forms of moral enhancement that do raise that kind of issue. One is fairness: the degree to which you can separate yourself from the amygdala and hindbrain impulses to only care about people like you. That's part of our ancient monkey brain, and it's part of the oxytocin research. Oxytocin - also called the cuddle drug - drives the impulse to trust people. But the more oxytocin you get, the more it wakes up the hindbrain so that you only trust people like you. It tends to actually increase outgroup hostility, suspicion and aggression. In the case of fairness, you really want to tamp down the amygdala and have more of a cognitive process of deciding who you should care about.

If you want animal rights, for instance, it can't just be on the basis of caring about people like you. It has to happen at a more cognitive level, where you're generalising the concept of compassion to others. On those grounds, you could imagine that for a militaristic society, or a society facing a military threat - say, Ukraine - giving everyone a fairness shot might have a negative influence. People might be more inclined to argue with a Russian soldier than to hide from him, or something like that. I think you could imagine that.

That gets to the issue of reversibility and the fact that not all virtues, or settings of virtues, are appropriate for every situation. Soldiers would need different ones than partygoers. Countries under threat would need different virtues than countries at peace. Of course, we want to create a world at peace where virtues are not constrained in that way.

Mason: That's what I guess you mean by your virtues control panel: you can dial these things up and dial them down. As you said earlier, we kind of already do that. We live in societies with a spectrum of differentiated humans; some make effective leaders and some don't, and everybody eventually finds their role. Don't we need a mixed range of virtues, expressed across a multitude of individuals, for an effective society? I know you're not talking about dialling everybody up in the same way - this is not David Pearce's hedonistic imperative in any way, shape or form, where he wants every human and every species to be blissed out. My question, I guess, is: why should we dial things for ourselves individually when we can just find our place within society based on our current virtue set? Surely that sort of virtues control panel only works for the outliers you're talking about - those with an extreme lack, or an extreme overabundance, of certain virtues that may cause unintended negative consequences in their lives.

Hughes: You make me think about a couple of things. One is that there's a longstanding question in evolutionary psychology and evolutionary biology about why certain kinds of variants get preserved - why the evolutionary process doesn't converge on one ideal type of personality, for instance. In human beings you've got, for instance, ADD, or non-standard sexual behaviour - a number of things that seem to have a genetic component - and the question is: how did that get preserved? How did non-reproductive sexual behaviour get preserved as a minority tendency within the gene pool?

One of the answers is to take a group selection approach as opposed to an individual selection approach. In a group selection approach, the groups who survive best are the ones with the internal division of labour, or functional differentiation, that is ideal for their environment. If it's ideal for survival to have a lawyer caste, a priest caste and a farmer caste, so that people specialise in these various ways, that may be the dominant social form that emerges - and those castes may have different psychologies. It's not very well supported, but this is one of the hypotheses about ADD personalities: you want people in your society who have that kind of non-linear openness to experience, or whatever you attribute to ADD, and you also want people who are good rule followers. The two work in some kind of complementarity. If that's the case, then maybe that will be a public policy question in the future. I would like to live in a liberal society where everyone gets to choose and experiment with what works best for them and their life.

Now, we know that when a bunch of people start making decisions for themselves in their own interests, that can lead to negative outcomes. That's why we have collective discussions: we all decided to invest in housing, then we created a housing bubble, and now it's collapsing; that was bad for our economy, so how do we stop doing that in the future? We can slow down some of those individual processes that lead to negative externalities. In the case of moral enhancement, if everyone decided to become, for instance, a little bit less nice but super intelligent - if they started to drift towards the Lex Luthor end of things - we'd say, that's great that you're inventing lots of new technologies, but we need people to actually care for the sick and the elderly. You'd then need a public policy discussion about whether things need to be tweaked.

I think the starting point for the discussion should be that if we all have maximal control over our brain and our body, then what we want is to have the option to experiment with all the different kinds of personalities that we could have and find the ones that work best for us. My hunch is that that will not lead to one ideal type; it's going to lead in a myriad of different directions. We're going to have a lot more fights about, "Hey, you're going in some crazy direction that we can't agree with," than about, "Hey, why is everybody adopting the Justin Bieber moral personality?"

Mason: Doesn't that go back to what we were saying earlier about the ability to test these sorts of things at scale? If we did have the ability to dial our virtues and to discover what that may look like, surely we'd want to know beforehand. One of the ways we could potentially do that is through other forms of technology such as AI or virtual reality. We could create avatars which are versions of ourselves with hyper-empathy or perhaps create AIs that have certain moral and personality traits that are so cranked up that we can see what the unexpected consequences of that sort of mindset are. Do you think those sorts of technologies are going to help us at least understand better how these virtues affect us as a society?

Hughes: You probably know about Robert Axelrod's evolution of cooperation research, from about 40 years ago now. He had a computer run tournaments of iterated prisoner's dilemma games: are you more likely to end up with a positive outcome if you always respond cooperatively, always respond with hostility, or mirror whatever your opponent did last? His tournaments showed that tit-for-tat was the best strategy: if they give you negativity, give them negativity until they give you positivity; then give them positivity.
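
For readers who want to see the mechanics, here is a minimal sketch of an Axelrod-style round-robin in Python. The payoff values are the standard prisoner's dilemma numbers, but the three strategies are illustrative stand-ins rather than Axelrod's actual entrants:

```python
import itertools

# Toy iterated prisoner's dilemma in the spirit of Axelrod's tournaments.
# 'C' = cooperate, 'D' = defect. Payoffs: mutual cooperation 3/3, mutual
# defection 1/1, unilateral defection 5/0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(opponent_moves):
    return 'C'

def always_defect(opponent_moves):
    return 'D'

def tit_for_tat(opponent_moves):
    # Cooperate on the first move, then mirror the opponent's last move.
    return opponent_moves[-1] if opponent_moves else 'C'

def play(strat_a, strat_b, rounds=200):
    """Play two strategies against each other and return their total scores."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(moves_b)  # each side sees only the other's past moves
        b = strat_b(moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

strategies = {'cooperate': always_cooperate,
              'defect': always_defect,
              'tit_for_tat': tit_for_tat}

# Round-robin: every strategy plays every other strategy once.
totals = dict.fromkeys(strategies, 0)
for (name_a, fa), (name_b, fb) in itertools.combinations(strategies.items(), 2):
    sa, sb = play(fa, fb)
    totals[name_a] += sa
    totals[name_b] += sb

# Note: in a pool this small and hostile, always-defect can still come out
# ahead; tit-for-tat's win in Axelrod's tournaments emerged from a much
# richer ecology of mostly 'nice' strategies.
print(totals)
```

Even this toy pool shows the mechanics Hughes describes - tit-for-tat punishes a defection exactly once and then forgives - though, as the final comment notes, its famous tournament victory depended on a much larger field of mostly cooperative strategies.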

I think we've already been doing computer modelling of moral enhancement, or at least of moral outcomes. Our models are always far less sophisticated than the mess of human cognition, and that's true in biology as well. I have written about this, and I do expect that in silico biology is going to become increasingly important: the ability to simulate the myriad interactions between drugs and biological systems, and to say, if we make this particular gene tweak, here is where the cascade of effects will potentially go.

We know from animal studies that a lot of things that work in animals don't work in humans, in ways we still don't understand. We know that modelling everything in the human body is impossible; at every level of modelling sophistication, there is going to be trial and error. We are already using in silico models to try to find potential drug candidates to test, and I think that's the appropriate use at this point: of all the drugs out there, which kinds and classes of drugs should we be looking at for this particular kind of efficacy? I wouldn't want to go too far quite yet with saying, "We tested this drug in our in silico human and it worked fine, so here you go."
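
A toy sketch of that triage use of in silico models - scoring a large library of hypothetical compounds with a cheap surrogate and shortlisting only the best for real-world testing. The feature vectors and the fit_score function are made-up placeholders, not a real docking or affinity model:

```python
import random

# In-silico triage, sketched: rank many hypothetical candidates with a cheap
# surrogate model, then pass only a shortlist on to wet-lab testing.
random.seed(0)

def fit_score(features):
    # Stand-in for a docking/affinity simulation; real pipelines use
    # physics-based or learned models whose predictions are noisy.
    return sum(features) / len(features)

# Pretend each candidate compound is a vector of molecular descriptors.
library = {f"compound_{i}": [random.random() for _ in range(8)]
           for i in range(10_000)}

ranked = sorted(library, key=lambda name: fit_score(library[name]), reverse=True)
shortlist = ranked[:20]  # only these proceed to animal and then human testing
print(shortlist[:5])
```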

In fact, there's a Star Trek episode about that: 'Ethics', from 'Star Trek: The Next Generation', where Worf breaks his back and wants to commit suicide. It's got 50 different ethical issues in it, but one of them is that a research scientist comes on board and says, "I want to try this thing on him. It's worked brilliantly in all my simulations." Then his actual doctor says, "That's not good enough. The simulations are not good enough."

Mason: Well then, I guess the flipside of that is: do we need technological intervention in any of these things? You're quite publicly a Buddhist - you wrote a book called 'Cyborg Buddha'. Surely something like meditation is a form of human enhancement, one that doesn't require any external influence: no plug, no external drugs, or whatever else it may be. It's an internal process that can radically change our virtues, our morals, and the way we present ourselves in the world. It can have a massive effect, and yet it's not seen as a form of technology. It's seen as something on a spiritual spectrum, and for that reason it's often dismissed too quickly as a form of human enhancement.

Hughes: Absolutely. Half of the discussion I have with most people about these topics is to point to the history of what human beings have already been doing - to say, look, the thing you're worried about may be new, but it's not necessarily different. The cyborg argument and the extended mind argument say, for instance, that we became cyborgs when we developed literacy and were able to write things down - when we were able to download the contents of our brains into external storage media and upload them again later. Literacy has a comprehensive effect on the structure of our cognition and on the shape and functioning of our brain.

If you go back further, to the mastery of fire between one and two million years ago: it was only after the mastery of fire that we hominids had the capacity to eat enough food to feed an increasingly hungry brain. Our prefrontal cortex only grew to its current size - and appetite - after we mastered fire. We've been coevolving with technology and enhancing ourselves biologically for a very long time. Once you show those continuities to people, it can ease the conversation: how is this new case any different from what came before?

And meditation, as you said, is a technology. It's treated as a technology in the Buddhist tradition. A third of the Buddhist canon is devoted to psychological analysis; it's called the Abhidharma, and it's incredibly dry material. There are 30 different kinds of mental poisons, 20 different blisses and 15 different [inaudible: 1:22:27]. Basically, it was a self-diagnostic manual. You would study these materials, figure out what your particular issues were, and then particular meditations were prescribed for that particular kind of problem, to get you where you needed to go.

I've written a lot about Buddhism and enhancement as well, and this is one of the reasons why I think Buddhism is very compatible with and complementary to the enhancement literature - moral enhancement especially, but human enhancement in general. In the Abrahamic tradition there is this idea that 5,000 years ago God created the world and then created this special creature called humans, and we've always been like this. If we meet Jesus in the future, we don't want to have wings and green skin, because then we won't look like God or Jesus.

There's this idea of human essentialism and unchangeability in the Abrahamic traditions - Islam, Christianity, Judaism - that you don't have in the Hindu, Buddhist or Confucian traditions. There, animals, humans and gods are always changing into each other; there are permeable boundaries between them. Now, Hindu nationalists - and that doesn't mean these are good arguments - are arguing that because Ganesha had an elephant head, ancient Vedic society must obviously have mastered biotechnology and genetic engineering and invented it all first. You can be silly with these things as well.

The more general point I try to make is that if you're in a tradition that accepts that human beings are on a multi-billion-year evolution from point A to point B - and that along the way they're going to pass through animal, human and god forms, with various kinds of superpowers, green skin, eight arms and things like that - you're probably going to be a lot better prepared for the future we're entering than if you don't think in any of those terms.

Mason: That's a fascinating perspective. I think we undervalue the technologies available to us that don't require us to purchase them, plug them in, or wait for some large corporation to develop them. We have superpowers innately inside ourselves that we can draw on when we use things like meditation.

Moving on from that, I just want to ask you the final question that we always ask our guests here on the FUTURES Podcast, which is about how to develop inspiring visions for the future. So much has changed since you wrote 'Citizen Cyborg' back in 2004. Seeing how our future has emerged over the past few years, how do you think we can collectively develop the best strategies for moving towards the sorts of visions you've shared with us today?

Hughes: Hmm, the best strategies. I've always been political, so I do think this has to be part of our political movements, dialogues and structures. There has been a debate in futurist circles about whether there should be a transhumanist party. There was a brief experiment with a transhumanist party in the UK, there have been some European experiments, and there's an American transhumanist party. For the most part, I don't think that's the way to advance this kind of vision. There are countries - Israel or Italy, say - where a very small party can get into parliament and advance a vision or agenda, but there are very few countries like that. Certainly in the United States or the UK, a tiny party - and you've had the Monster Raving Loony Party running for a long time - doesn't have that kind of influence; minor parties don't in most societies. I think the better way to do it is through NGOs, think tanks and journals.

In terms of how to have an inspiring vision, I think one of the big problems on the left is that you're either a governance leftist - constantly trying to figure out how to get closer to the centre and articulate the perfectly crafted technocratic solution that won't lose you votes on either side, so you end up like Albanese, who just got elected in Australia; I'm glad Labor won, but their platform was like two percent more of this, one percent more of that - or you're a protest leftist: "I'm against everything. Don't ask me what I'm for, but I'm against that." Neither of those is where we need to be. As I said, and as you've agreed, we need an inspiring vision of the future. It can't be communism, because that didn't work, and it's certainly not fascism. For me, liberal democracy looks better and better; if we just preserve liberal democracy, I'd be fine.

If we had a vision of, for instance, a society with UBI and universal healthcare that included access to anti-ageing medicine, a programme for space exploration and a serious technological programme for climate remediation - a vision both technologically forward-looking and politically inspirational - I think it could be the basis of a new politics in our future.

Mason: James Hughes, on that important note, I just want to thank you for being a guest on the FUTURES Podcast.

Hughes: Thank you.

Mason: Thank you to James for showing us how moral enhancement technologies may dramatically alter society. You can find out more about James' work by visiting the Institute for Ethics and Emerging Technologies at IEET dot org.

If you like what you've heard, then you can download the FUTURES Podcast on all of your favourite podcasting apps. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.

More episodes, live events, transcripts and show notes can be found at FUTURES Podcast dot net.

Thank you for listening to the FUTURES Podcast.


Credits

If you enjoyed listening to this episode of the FUTURES Podcast you can help support the show by doing the following:

Subscribe on Apple Podcasts | Spotify | Google Podcasts | YouTube | SoundCloud | CastBox | RSS Feed

Write us a review on Apple Podcasts or Spotify

Subscribe to our mailing list through Substack

Producer & Host: Luke Robert Mason

Assistant Audio Editor: Ramzan Bashir

Transcription: Beth Colquhoun

Join the conversation on Facebook, Instagram, and Twitter at @FUTURESPodcast

Follow Luke Robert Mason on Twitter at @LukeRobertMason

Subscribe & Support the Podcast at http://futurespodcast.net
