The Psychoanalysis of Artificial Intelligence w/ Isabel Millar

EPISODE #61

Listen on Apple Podcasts | Spotify | Google Podcasts | YouTube | SoundCloud

Recorded on 15 December 2021

Summary

Psychoanalytic theorist Isabel Millar explains the role psychoanalysis can play in helping us to understand what artificial intelligence means for humanity, what modern science fiction reveals about our fascination with sex-robots, and what is driving a desire to replicate human attributes in silicon.

Guest Bio

Dr Isabel Millar is a philosopher and cultural critic from London. Her work focuses on AI, sex, the body, film and the future. Her book The Psychoanalysis of Artificial Intelligence was published with the Palgrave Lacan Series in 2021. As well as extensive international academic speaking and publishing across philosophy, psychoanalysis and cultural theory, Isabel has made numerous TV, documentary and podcast appearances including for BBC2 (Frankie Boyle's New World Order), Russia Today (Entrevista), Tomorrow Unlocked (Build me Somebody to Love), Schizotopia, Machinic Unconscious Happy Hour and Parallax Views among others. Isabel has recently been a psychoanalytic script consultant for BBC Drama and interviewed for a book by Ai-Da Robot, the world's first AI artist. She has contributed to the forthcoming AI Glossary, Chimeras: Inventory of Synthetic Cognition for Onassis Publications and is one of 50 global thinkers writing Manifesto - A Struggle of Universalities edited by Nicol A. Barria-Asenjo and Slavoj Zizek. She is a research fellow at The Centre for Critical Thought, the University of Kent and is currently writing her next book Patipolitics.

Show Notes

Isabel Millar’s Website


Transcript 

Luke Robert Mason: You're listening to the FUTURES Podcast with me, Luke Robert Mason. 

On this episode, I speak to philosopher and psychoanalytic theorist, Isabel Millar.

"Humans are very complicated things and very delicate things, and very powerful things. We need to understand them before we can understand how AI will interact with them." - Isabel Millar, excerpt from the interview.

Isabel explained the role psychoanalysis can play in helping us to understand what artificial intelligence means for humanity, what modern science fiction reveals about our fascination with sex robots, and what is driving a desire to replicate human attributes in silicon.

So, Isabel, your new book, 'The Psychoanalysis of Artificial Intelligence' is a provocative text that looks at AI through a psychoanalytical lens. What is the relationship between AI and psychoanalysis, and why have you decided to take this approach?

Isabel Millar: The question of the psychoanalysis of AI came about through various different journeys into thinking about what AI meant for us as speaking sex subjects. When I say 'speaking sex subjects', of course, as a psychoanalytic thinker, the fundamental question that we're always thinking about is the question of speech and the question of sexuality. These are subjects which cross over within the history of philosophy and which also define the crux between the paradigms of psychoanalysis and philosophy, and the way they think about being at large, or consciousness, or what subjectivity means.

What I was really interested in was trying to think about how we can take the question of subjectivity and the question of artificial intelligence, and think about that through a psychoanalytic perspective as opposed to what, from my perspective, has previously been the approach, which has always been a philosophical one. One which attempts to think about the self-present, thinking, conscious subject in relation to this big monolithic question of AI.

As you can see, there are already a million different things going on there. It was a question of trying to find a way into putting two disciplines into conversation with each other.

Mason: It is certainly a provocative title. It was the reason that I wanted to have you on this podcast. When I first read it, I thought, what is Isabel doing here? Is she attempting to take algorithms to therapy? Are we planning to psychoanalyse AI? If so, what would that mean? Or, in some cases, are you considering AI to become a psychoanalyst? What were some of the playful ways in which you were trying to introduce psychoanalytical theory to AI?

Millar: I mean, I think first of all, the sort of task is to establish what I'm talking about when I'm talking about 'psychoanalysis'. First of all, I'll say I'm a theorist and a philosopher. I'm not a psychoanalyst in the sense that I don't have a clinic; I don't practice clinical psychoanalysis. But I have trained within the arena of clinical psychoanalysis and it's very much part and parcel of the theory that I invoke in the book.

I shall say up front: I'm a Lacanian. My psychoanalysis is from the Lacanian heritage, as it were, but I'm sure anyone who knows anything about Lacan will immediately know that this is an intellectual figure who crosses over a huge section of intellectual history, both in philosophy and psychoanalysis, and social theory, and political theory. For a lot of people, he's a controversial figure. For other people, he's the only psychoanalyst who exists. Of course, I would side with the latter. He's somebody who truly contributes something extremely subversive and radical to the thinking of the psychoanalytic subject. He's somebody who really questions the place of psychoanalysis within the sciences and within philosophy itself. It's because of his rigour and radical approach to the question of the psychoanalytic subject that to me, he was the ideal thinker to think with for this question.

Immediately, the question of artificial intelligence makes you think: what are you talking about when you're talking about intelligence? It's very much this problematisation of the question of intelligence which is at the heart of psychoanalytic thinking itself. This is one of the first things that I try to engage with in the book, to unpack this question: what is intelligence? What do we mean when we're talking about intelligence?

Mason: Let's spend some time unpacking that idea of intelligence. You focus less on the artifice of artificial intelligence and more on that key term, 'intelligence'. The only way we currently are able to understand what intelligence is, is through the lens of human subjectivity and our own understanding of human intelligence. How is a radical understanding of the human key to understanding how something like AI might actually develop?

Millar: First of all, if we just take this concept of intelligence, we'll very quickly see that it's a problematic concept in itself, one which entails within it a whole genealogy of political, scientific and social shifts in the ways that we have understood intelligence and the ways we have instrumentalised it.

The first thing to say is that intelligence is not just a scientific category. It's almost not even a scientific category. It's a philosophical category. It's one which has been at the heart of philosophical thinking for centuries and millennia. We can think of the distinction between episteme and techne that Plato refers to in the Meno and Lacan refers to in Seminar 17; the distinction between theoretical knowledge and practical know-how.

In fact, one of the ways that Lacan first engages with the question of knowledge, following on from Alexandre Kojève, is to draw attention to this extraction of knowledge from the slave by the master. He thinks of the little slave boy in the Meno drawing symbols in the sand, which are actually extracted by the master for the use of Euclidean geometry, to create a body of theoretical knowledge that the slave himself doesn't know he has, but which is extracted via the master in order to be put to use. He uses this very much as a paradigm of the extraction of technical knowledge into epistemic knowledge.

We can of course see how you could take that quite basic philosophical idea...It seems basic but it's an idea that a lot forget. It is actually at the heart of lots of the ways that we think about artificial intelligence and the distinctions between, for example, sensory knowledge, haptic knowledge, unconscious knowledge and all of the different ways we think about what intelligence is.

All of those different types of knowledge are already within this question of intelligence before we even start the question of AI. I would be very surprised if there are many AI people out there who would even have known that distinction between episteme and techne. Maybe there are. Of course, these traditions already exist within philosophy but they're often not applied to, for example, the creation of algorithms. You see that there's such a gulf between these two fields and yet they should be talking to each other because they have a lot to learn from each other.

Mason: You say provocatively again in the book that there's always something stupid about the way in which we currently think and talk about intelligence. What is stupid about intelligence?

Millar: The stupid element of intelligence is sort of the heart of psychoanalysis, in a way. Psychoanalysis acknowledges the fact that our own blind spot is very much subjectivity itself. We can't really ever know ourselves. We can't ever grasp all of the knowledge that we have because we are our own void. We occupy a nothingness. We have to cast out into the world some sense of identity by virtue of language. As soon as we've said, "I am this person who is doing this," we've already detached ourselves from the 'I' which speaks.

Lacan famously plays on Descartes' cogito - 'I think, therefore I am' - and says no, it's not 'I think therefore I am'. It's 'there is thinking, therefore something exists'. This is not the 'I'. The 'I' is in language. The 'I' exists in language. There is something behind that which is a nothingness and which is a sort of negativity that will always persist. It's this negativity that persists which is also in scientific discourse, which is what he was very interested in. What is this negativity which persists in scientific discourse but is foreclosed to science itself?

Mason: I guess in that case, that negativity that you're talking about there - the ability to acknowledge that there is thinking - is what we're trying to do when we recognise something we consider to be artificially intelligent. We're almost obsessed with the thinking piece. As you prove in your book, Isabel, there's so much more to who we are, what we are and what artificial intelligence could be than just merely thought.

Millar: Exactly. For my book, the central concept that of course emerges as the key question that I'm trying to ask is not 'can it think?', which is a sort of philosophical question about AI, but 'does it enjoy?' For psychoanalysis, enjoyment - or to use Lacan's term, 'jouissance' - is a sort of paradoxical combination of pleasure and pain but more fundamentally it is a mode of enjoyment or a mode of existence. It's what all human beings suffer from. When I say 'suffer', I mean that in the existential sense. That is what it is to be a human. We all operate according to some sort of mode of jouissance; some sort of mode of pleasurable suffering.

It's really this question which is at the heart of subjectivity, which is at the heart of being, which is at the heart of consciousness, which you can't really get to just by scientific ways of trying to understand artificial intelligence. You can't really understand consciousness and all of these very delineated ideas of what an individual subject is without thinking about the question of enjoyment and sex. Of course, for psychoanalysis, when I say 'sex' I don't necessarily mean actual sex. I mean sex as a philosophical concept and a philosophical problem. When I say 'problem', it doesn't have a solution. That's what makes it philosophical. It doesn't have an answer to it. It's an impasse. It's an existential quandary for all humans. It's this nugget of the sexual question which sits at the heart of the book, I should say.

Mason: It certainly does feel like there's something procreative about the way in which we currently talk about artificial intelligence. We're creating these entities, often in the image and likeness of us. Because we perceive that the future of this stuff will look in some way like us, we project our fears onto AI and onto robots. We project these paranoid fears about how we're going to end up in this horrific future where AI will treat us - as you said previously there - as these slaves, almost. Do you think the way in which we envisage artificial intelligence is really just a presentation of our own human psyche?

Millar: I think part of the interesting thing about AI is that the discourses surrounding AI involve a lot of confusion around what really exists out there and what is a product of fantasy, science fiction literature and cinema. People kind of get all those things confused. Having said that, that actually makes sense, because part of the way these actual products of AI are produced is through fantasy. They are products of fiction.

That's also kind of key, I suppose, in my thinking about the book. The history of scientific paradigms leading up to the creation of AI is also built on metaphysical bases which are themselves influenced by different historical, social, and political factors. Of course, they manifest themselves in the way we think and talk about AI. At the moment, for example, the very obvious one we can think of is the very phallic, masculine manifestations exemplified in the trifecta of Elon Musk, Ray Kurzweil and the metaverse. All of these very masculinist - perhaps I could say - visions of what the brave new future of striding forth into this amazing universe will be, are very naive. They're very naive and they're very un-nuanced about what human beings actually are.

Mason: Hearing you talk about individuals like Musk and Kurzweil there, they have become the sort of faces for this potential AI future. In Kurzweil's case, there's the possibility of uploading our minds to computers. In Musk's case, it's this fear of AI and yet he has his own ambitions to create this thing called Tesla AI which will be, again, some form of a humanoid robot. That's the promise that was shown to the public a couple of months ago. In being so dogmatic about these singular visions for what AI might be, how it might present itself or how it might look, what are they missing? What is not being discussed about AI?

Millar: Yeah, I mean they're missing a lot. First of all, the sort of common criticism which should be the main criticism is the political and social question of power; the fact that these people are people with enormous amounts of power, money and influence. They are interested in keeping that money and that influence, as are the many interested people around them. Of course, it can't be without danger. It can't be without exploitation. It can't be without prejudice. All of the critiques that surround that are very important. We need to keep doing those critiques.

Also, it's not just the critique of capitalism, for example, which I'm interested in. It's also the critique of, like I say, looking at the ways in which the concept of AI is one which is really neglected and it's not properly thought about in relation to what we think about when we think about embodied consciousness or a form of thought that isn't human. I think that what happens is we have the sort of fantasy of the idea of the Gestalt of a thinking thing outside of the body. There are fantasies of cyborgs, sex robots and stuff like this.

Actually, the question is what types of thought are going on already that aren't embodied, that aren't within recognisable humanoid forms or recognisable types of non-human intelligence? We can say, "Oh look. The robots are coming." Well no, there are lots of more subtle ways in which this is happening that influence all of the ways that we interact with each other and all of the very micro-processes that we go through in becoming subjects.

Artificial intelligence is doing work for us that we don't even know about. We're already immersed in such insidious networks that we probably don't even know what forms of thinking are happening. We don't necessarily recognise them as human types of thinking because we still don't recognise how human thinking works anyway. We still don't understand, I should say. We still don't properly understand that. We use very poor methods of trying to model computers on humans when actually, we had already begun by modelling the human mind on a computer. Then we're trying to reverse that and do it the other way around, which doesn't make any sense. You're sort of reversing a metaphor back on itself when you were the one who invented the metaphor. That's what happens in cognitive science.

Mason: That's what's interesting about your work. It's the difference between what psychoanalysis allows versus what cognitive science or psychology permits, let's say. Just to help our audience understand what those differences are, could you help explain how they look at the human in very differentiated ways?

Millar: I suppose the difference between the psychoanalytic as opposed to the cognitive or psychologised way of thinking about the human mind is that there's no sort of normal human. For psychoanalysis, it's not a question of a mind onto which you can map certain ideas of sanity or insanity, health and unhealth, or prediction - 'If this person does this, then it means that.'

Very basically, in psychoanalysis, we don't talk about trying to cure a symptom because a symptom by definition is a clue towards something else. It's pointing you towards something else. In cognitive science, it's, "You've got a foible with this, let me get rid of that foible. Then you'll be fine because you no longer do this." There's a lot of very basic understanding of the mechanistic process that people go through in just going about the world.

It's normalising because it's a tool for trying to fit people into a system, whether a system of work, capital, or society - which by definition is ideological. Psychoanalysis at its most radical is the complete opposite. What it wants to do is not make you better - it's not trying to cure you of anything - it's allowing you to come to terms with the knowledge that you already have. That knowledge comes through language, discourse and all of the things that have come together to make you a human being. You can't just simplify that by reading some notes to try and tick off - "Is this person doing this? Okay, they're that sort of person. Here's your diagnosis. You're an obsessive-compulsive person who needs to..."

For psychoanalysis, all you have is a structure which orients the analysis and via which you find out exactly the singularity of that person's particular form of suffering. It's very much more singular - not singular, but it's much freer - because you're not trying to assimilate somebody into society. That's a very basic way of drawing attention to the sort of clinical question between psychoanalysis and cognitive science, at least.

Mason: It certainly feels like the trick is in the language there. We talk about human intelligence as if it was a singular thing. We talk about artificial intelligence as if it's going to express itself all in the same way. We don't talk about human intelligences or artificial intelligences. We just make these grand assumptions that these things will present in similar ways and therefore can have a singular category.

Millar: Yeah, exactly. For Lacan, he very much built on the Freudian edifice of psychoanalysis but re-interpreted Freud using the paradigms of his day which were structural linguistics, anthropology, literature, philosophy and all of the other tools he had at his disposal.

What he was primarily interested in - the most important thing - was to try and look at the unconscious as structured like a language. The material of psychoanalysis is language and nothing else. When you're in an analysis, that's what is worked with. It's the language. It's very much there that you will find the stories that people tell themselves, and within these stories, there are always patterns. There is always a structure that can be found to do with the positioning of oneself in relation to an object.

Mason: It feels like language does have its function in regards to how we identify artificial intelligence as intelligent. I'm thinking specifically of the Turing test. It uses language as the way in which we can understand something to be artificially intelligent or humanly intelligent. In what way is language so important for helping us to navigate these new territories?

Millar: It's fundamental. For psychoanalysis, language is actually the definition of subjectivity. Language is the entrance into the symbolic. For psychoanalysts, the speaking subject is a being who is trapped in language and who suffers from language. It's through language that our whole subjectivity is determined and how our sexuality is determined. Whether we identify ourselves as masculine or feminine is through language. I don't mean in the banal sense of whether you call yourself a man or a woman. I mean the structure of your language and discourse is one which determines your mode of enjoyment.

This is a fundamental idea in psychoanalysis - this idea of the mode of enjoyment - of how language and discourse themselves become a mode of being in the world. The whole edifice of, in a sense, abstract thought, language and philosophy is built on this mode of enjoyment. The moment the first being said something that signified something - which then led to the creation of language - is really where the whole messy thing of being a human starts. The whole question of speculative thought drops out of that, and you have this new question of what it means to be a living being. That's really a question that isn't often understood by, for example, evolutionary biology. People tend to think that humans are just part of nature, which they really aren't.

Mason: That's a fascinating assertion. You just drop that in at the end. Human beings aren't really part of nature. You can't do that. I was going to ask you about language. That's a tricky statement because then the question is: is artificial intelligence from nature by virtue of being created by human beings, or is it completely artificial by virtue of being created by something that you consider not of nature, the human?

Millar: When I say not of nature, of course evolutionarily we're all from nature. We've all evolved. The denaturing of the human - not necessarily the human body but the idea of a human as a thinking thing - is something that is only possible by the virtue of symbolisation and the virtue of the existence of abstraction.

It's this abstraction that allows any form of postulating about the nature of the human and its place in nature itself. It becomes this Möbius strip. You cross over into, "Am I part of nature?" I can only be part of nature by taking myself outside of nature and asking myself that question.

You talk there about humans getting themselves to a certain point where they can then hand over the reins, as it were, to superior forms of intelligence. You remind me of James Lovelock and his idea of the Novacene, which is his idea that we've passed the Anthropocene and now we're in the Novacene, which holds the possibility of species that can think ten thousand times faster than humans. We need to preserve the environmental conditions to allow these forms of intelligence to survive. Once they survive, we're kind of obsolete. It's the parable of giving birth to the next form of thinking. I think it's funny. I find it quite sweet. I'm not saying I don't think that's possible, maybe. It's interesting because it's such a fantastical and philosophical gambit you're making. You're basically banking on this question of the possibility of abstraction.

The idea that you can think about this supposed species that can think ten thousand times faster than the human - what does that mean? Computers think ten thousand times faster about some things. For other things, they can't because human intelligence and human consciousness are not like that. It's not like this one whole thing that you can place and say, "This is a thinking being." It's made up of lots of different things. Actually, when you detach it from this idea of having to be within a body, then the question of thinking and consciousness is a lot more complex than that.

Mason: In that case, it does feel like the direction of travel for the creation of artificial intelligence is really about taking something which is wonderful because it's fundamentally beyond language, and finding ways to trap it inside of language. For us to make it intelligible to the human, we need to either trap it in a body or trap it in language. It needs to present in a way in which we will fundamentally understand it. We hear these ideas of the black box algorithm. What is that algorithm doing? No one really knows. It's beyond our human understanding. It's beyond our human understanding because it's ones and zeroes. It's beyond the language discourse that we use to subjectively communicate with each other.

Millar: Yeah. Exactly. This idea of the outside of intelligence, the outside of thinking, or the mysterious unknown is part of science. It's necessary for the generation of true science. Lacan would say that the true hysteric's discourse is the discourse of science, in the sense that theoretical physics, for example, doesn't have the answer to everything. It's constantly looking for new ways of thinking and understanding the universe. There's no fully comprehensive way of understanding it.

That's often where scientific reductionists go wrong. They think one day, science is going to give them the answers to everything, which of course it's not. Science, when it does its job, is never going to be satisfied. It's always looking for the outside and is constantly voraciously chewing up new ideas and then throwing out old ones.

Part of the fear of AI is positing a form of intelligence that is beyond us. Not only beyond us, but one that also wants to torture or hurt us. It's all-knowing. It's omnipotent. That's why I begin the book with the Roko's basilisk example. I think it's a real fantasy that scares a lot of people in the world of AI.

Mason: For those that may not know, what is Roko's basilisk?

Millar: It was a thought experiment that appeared on the 'Less Wrong' forum. Basically, it was this idea that in a future world, there would be an AI that could be infinitely intelligent. Because this AI was possible to imagine, just the very thought of it would compel you to have to try and do everything in your power to bring it into existence. If you didn't, it meant that you were doing something against the greatest being that could be imagined.

It very much follows Saint Anselm's ontological argument for the existence of God, which is, what is the definition of God? It's that than which nothing greater can be conceived. Therefore if you could conceive something greater than that, then you've conceived of God. It's this self-fulfilling prophecy about what is God. It's a bit like Pascal's wager as well. It's better to believe in it because if I don't believe in it, what if it comes and kills me?

The thing is, these people that were thinking about this idea of a terrible, all-powerful AI were intelligent boys who knew a lot about Bayesian theory and probability. They were thinking very complex thoughts about how they might end up getting themselves into serious shit if they didn't do the right things and bring Roko's basilisk into existence. Basically, it became this urban legend sort of thing. It had to be taken down because people were having psychological breakdowns. They thought that once they thought it, they couldn't unthink it. This AI could come back and torture them for eternity.

What I found really interesting about that apart from the obvious fantastical elements of it was the paradox between, on the one hand, the idea of the flesh and bones of the human that is just this bag of physical pain that could be tortured. Then also the question of infinite simulation. You're trapped between your body and also the possibility of never being able to die. If you can be tortured infinitely, you can't die. Your body is going on and on and on. This is a very human problem. This is a problem of philosophy. We've got these bodies that we can suffer in but we also can't imagine our own death. Because we can't imagine our own death, it's very terrifying to think that our suffering may never end. This, to me, kind of articulates a fundamental fantasy about the problem of AI; the fears of AI.

Mason: Listening to you there makes me realise that we have to really be careful what we wish for. It may just come true. If we do project all these paranoid fears associated with AI onto the development of this sort of technology, are we in a tricky situation where we're going to end up basically generating the circumstances for this horrific future to occur anyway? The 'hyperstition' that you talk about in the book. If we wish it into existence then it may actually happen, on the proviso that we give those sorts of narratives enough libidinal power.

Millar: Yeah. I think that at the moment, so much power and influence are put into the hands of the wrong people. Time's Person of the Year, as we know, is going to be Elon Musk. This is a person who is driving the narrative. Yes, he has resistance but actually how much resistance can you have when you have that much money and power?

What's important is to get more thinking from more different disciplines around these questions. I think at the moment, the direction of travel is all skewed towards the hard sciences, entrepreneurialism and expansionism when actually we need to think about the very nuanced nitty-gritty of what we're talking about here, which is humans. Humans are very complicated things and very delicate things, and very powerful things. We need to understand them before we can understand how AI will interact with them.

Mason: Now I'm far from being an Elon Musk fanboy, but hearing you say 'the wrong person' there, I mean, is it the wrong person? By which I mean, he's a unique individual insofar as he's able to play those games with the future whereby he can speak things into existence, even if they're purely fictional. We see that with the examples of how he can make proclamations about certain clean futures and that affects his stock market valuation for Tesla in the present, therefore opening the Overton window to create the future that he has envisioned at that present moment in time. Or simply with things like Bitcoin, he can make some sort of proclamation on Twitter and affect markets in real-time and in the future. Having that memetic power to speak the future and then for it to occur in the way in which you envision...I mean, it's certainly power.

Millar: It is.

Mason: It's a rare power. Does it necessarily mean it's a wrong thing?

Millar: Yeah. I think billionaires are wrong, first of all.

Mason: We can agree on that, but I'm saying the fundamental ability. The circumstances through which the world creates billionaires are extremely exploitative, but I'm saying the ability to think futures and by thinking and expressing those futures, generating circumstances through which those futures also come to pass. That's a power we all need to really learn, isn't it?

Millar: Well yes, but the idea that we can all learn that is impossible. The question of the existence of billionaires and the question of a person like Elon Musk being able to exist and project these ideas into existence are actually inextricably linked. In fact, you could also say that they're one and the same thing. I would say that without billionaires, Elon Musk couldn't exist. Without capitalism, Elon Musk wouldn't exist. He's a hugely talented person for what he does and he's obviously very intelligent and does amazing things.

That doesn't mean to say that a person such as Elon Musk - it could be Elon Musk or it could have been somebody else - someone with that profile which is only made possible through this enormous beast which is capitalism. This is very much part of the question of accelerationist ideas which is, to what extent is accelerationism bringing about the conditions for human emancipation, human greatness, human happiness - let's not even say 'human', let's say 'for great things'. At the moment, it's not doing that. It's patently obvious that it's not bringing about wonderful conditions for lots of human beings to thrive. The fantasy that these small - I won't call them 'little' - but this small group of men have such power and influence and they're the people who are literally able to create the discourse and set the tone, seems to me to be skewed. When I say he's the wrong person, any one person would be the wrong person.

Mason: Okay, that I can agree on. Though it does feel like it's only impossible because you say it's impossible. You've got to be careful about those sorts of language traps. It may be possible if you believe it to be so!

Millar: Yeah. I think what you were saying was that he kind of is a good sort of role model for this type of entrepreneurial spirit, I guess. I think that's what you were saying.

Mason: No. I disagree that he's a good role model, but I do find it fascinating that he has a rare ability to speak futures that then come to pass.

Millar: Don't you think that that's hyperstition? I would say that that's hyperstition.

Mason: I mean, it is hyperstition. It one hundred percent is hyperstition. He is accelerationism...

Millar: Embodied

Mason:...acting through hyperstition. I know!

Millar: He is, he is.

Mason: Doesn't that make him a fascinating character to study in that respect?

Millar: He is fascinating. He is fascinating. There's no doubt about it. He's almost like a parody of himself. Everything that's happened that he's done, you could say, "Wow, how did that happen?" It is a self-fulfilling prophecy in a way because of the conditions in which he has been engineered as this figure. If anyone presaged it, it was Nick Land. Again, a controversial figure, but the idea of AI and capitalism being one and the same is very well exemplified by the figure and life history of somebody like Elon Musk. For good or ill, his ideas are very important to think about in terms of the relationship between the instrumentalisation of thinking, intelligence and all of the infrastructure around the digital age, and the creation of wealth and the billionaire as a figure.

I think that those ideas are ones which we should really examine and return to when we're thinking about who these people are, who are leading the future of what's going to happen in the next ten, fifteen, or twenty years - if we last that long as a species.

Mason: Yeah, if we last that long as a species. That's why we need to return to this idea of, I guess, collective intelligences. The very fact that a multitude of individuals contributes not to a singular notion of human intelligence but to human culture itself. We all have a contribution to human culture. If anything is worth preserving, it is human culture. It's not one individual brain. I think Ray Kurzweil's got things deeply, deeply wrong with this idea that it's my intelligence that should be preserved - my brain, my being, my singular desire to have me continue ad infinitum and throughout time.

As you reveal in the book, that comes with some dangers, doesn't it? A lot of these longevity guys assume that hey, if I get to live forever, then I've overcome my suffering. That's it. It's all over. I've done what I needed to do. If anything, as you say, it's a prolonging of their suffering, especially for the male proponents of longevity who actually find it very difficult to do things like have sex, procreate and be with other human beings. They're still going to have that challenge. I've always found it fascinating that a lot of those men who are arguing for longevity very rarely have partners or want to have children or have the ability to have children. There's something deeply psychologically tied in with some of these visions, which is what I love about your work, actually.

Millar: Well thank you. I think that's a really good point, because the question of melancholy and the question of really being able to see psychoanalytic structures at work in the way that people relate to - things such as immortality, for example - you're right, having come across and seen stuff about those types of people who are interested in cryogenics, there's so much melancholy there. There are people who are really suffering, who are trying to find a way of holding onto something which they've clearly lost. The idea that somehow, being able to live forever would be something nice when you'd live and the people who you love would be dead. In itself, it's a very strange idea.

We're back to the classic philosophical problem of being - being-towards-death, as Heidegger would think of it. Of course, this is an idea that is very familiar to most people. You don't need to read Heidegger. The idea is that death is something that conditions humans and something that is very much part of how we project ourselves into the future.

The question of reproduction, of children and of all of these ideas of what it means to persist as a human makes a difference when you're thinking about, what do I mean in the grand scheme of the world. Am I going to be remembered because I have a child and they'll have children? Or am I going to be remembered because I built a building or wrote a book, or am I in a film? Everyone really has a different way of trying to find themselves etched onto the face of the earth, and to claim themselves as actually, really being here. Immortality is another one of those difficult questions around AI that people think is going to solve all of their problems but it really is probably not.

Mason: Yeah. The tricky thing is the 'I'. It doesn't necessarily need to be the continuation of a singular individual's contribution. The thing that is worth preserving - or at least it feels like the thing that's worth preserving when you really drill down with a lot of these guys who are worried about existential risk - is human culture itself. Sometimes, the reason why they're arguing so heavily for artificial intelligence is they just see it as the container for human culture. It's an easier thing to get off planet than human beings are.

Hey, if we want to preserve all of our knowledge in some way, shape or form, we've got to do it on a biological entity. This process of creating and preserving cultures by handing them down from one generation of humans to the next has a fault in it. This whole project could come to a halt if there is something that affects the human biology of the entire species. Whereas if we can port all of the knowledge that we've created throughout this human project into something else and get that off-world, at least it was worth something. At least this whole human suffering thing, for a couple of hundreds of thousands of years - at least it was worth something. I think that's what scares us so much. It's not our own individual death. It's the idea that human collective knowledge is lost in its entirety.

Millar: Yeah. It's the idea that we'd just be gone. Yeah. I mean, this is just lost. This is the very problem of...this was the beginning of art. It was somebody trying to blow some ink onto a wall to show that they were there. That was the first thing, just to mark. How would anyone know in the future that I was here and this is how it began? To abstract yourself from your immediate temporal and spatial position.

This is also a topic taken up by Bernard Stiegler, who sadly died recently, but his work was really interested in the question of technics: the relationship of humans to the technical object, and how the human evolved with technology. He was interested in this pharmacological process which was both poison and cure; the mnemotechnical outsourcing of human capacities to technological objects.

This process is inevitably one which goes off in a very exponential direction once the digital age hits. We've had gradual, gradual movements through technical objects over history. Suddenly, what happens is digitisation. The outsourcing of capacities that human beings are now dealing with is working on a whole other scale. That becomes scary because this human that you're trying to preserve, acting harmoniously along with technology, is now suddenly cut off from it - in some people's minds. It's like suddenly, we're a different type of human. We were this type of human; now we're a different kind of human. The question is whether that's true, or whether it's just a different form of abstraction into technology than we were previously used to. These are fascinating questions. We're only just beginning to come to grips with them, I think.

Mason: Luckily, one of the ways we can investigate some of those questions is through the fiction that we create - science fiction. These questions present themselves in a multitude of works that you mention in your book, from the work of Charlie Brooker to Alex Garland and Spike Jonze. What is the importance of science fiction in allowing us to, I guess, prototype some of these visions for the future and then understand how they fundamentally make us feel as human beings?

Millar: I think science fiction is almost the best way to try and think about the future. It's one of the most potentially exciting and conceptually rich ways of imagining different forms of humans and different ways that we can imagine our future with AI, for example.

Film, for example, has always been a brilliant medium for exploring these different questions - one that, in a sense, sits outside of textual mediums, because it doesn't always have to be in the written word that we're thinking about these questions. That's why I'm really interested in using film. Film allows us to explore the different types of things at work when we're thinking about AI, i.e. not just thinking about the brain but thinking about the body, the way that we interact sexually with other beings, and the way that we conceptualise ourselves as body subjects who also have the capacity to separate ourselves off from our brain - this fantasy of brains that you can remove from your body and then put into another body, which is a very sci-fi theme that we play around with.

In the book, I use different films to go through different iterations of my questions. As you know, the book uses Kant's three enlightenment questions - What can I know? What should I do? What can I hope for? - as frameworks for examining the question of the psychoanalysis of AI. I use three films to explore these questions and unpack them in a psychoanalytic way.

The first question - What can I know? - is really a question of epistemology and the question of the foundation of knowledge. I use the film 'Ex Machina' as a means to explore this question. It's a film which very beautifully enacts the Turing test, which is of course the test invented by Alan Turing in order to see if one could be fooled into thinking an AI was human. Interestingly, the test invokes the question of gender and the question of whether one could tell what gender AI was - not by anything they particularly said but by the way the language was constructed. At the time, this wasn't really picked up on. I found it very interesting in relation to my question because, of course, for psychoanalysis, the question of language is the fundamental question of whether one identifies as a feminine or a masculine subject. In this film, the human masculine subject talking to the feminine AI subject is really trying to discern her humanness. Actually, what he's trying to discern is her femininity. She proves herself a woman to him and by the end of it, he's in love with her because she has become a woman for him.

There's a very nice quote by Jacques-Alain Miller which is, "To what length man will go to make woman exist." This idea of woman having to exist for man - the idea of a woman. There is another famous quote from Lacan, "Woman doesn't exist", which people have taken to mean lots of things over the years. Essentially what he's saying is that this idea of the woman is not a real thing. It's a construction of language that is necessitated by masculinity. Masculinity is a position that necessitates the existence of woman. The epistemic question here in that film is really nicely articulated by the human need to make the feminine AI exist and come into existence via his interrogations of her.

Mason: The thing I find fascinating about the way in which you use science fiction is that it almost feels like we need to take a moment to psychoanalyse the creators of some of these shows. We have a very visceral effect when we see some of the stories they're telling us. It always turns out in recent times that these new sorts of science fiction end up in a situation whereby the AI in the image and likeness of us will either kill us, fuck us, or in the case of Ex Machina will do both. What is the reason for the drive towards these very populist narratives around the hypersexualisation or hyperviolence that will emerge from these forms of technologies?

Millar: Well, the banal answer would be to say because they're men that create them. I'm not going to say that because that's too easy. Of course, these are masculine fantasies about the potential of a sort of infinitely fuckable woman. In a sense, it's more interesting to think that what's going on there isn't just something we can easily put down as misogyny or sexism. Actually, this question of the unkillable body is a theme which runs through a lot of sci-fi and particularly in the form of female bodies as well.

It's one that I pick up in the chapter on 'What should I do?', which is a question of the ethics of psychoanalysis - one for which I use the film 'Ghost in the Shell'. I think what's interesting here is this idea of the invincible woman; this body that you can do anything to. The sub-text is that you can't kill her but you can also keep fucking her. She'll be fine. It's the 'Westworld' idea of the sexbots that you can do horrible things to and they'll forget the next day.

This question of suffering and being able to either enact fantasies upon something or someone and not have any redress is obviously a very human, horrible reality. There's no point in us pretending it doesn't exist because it's everywhere. These things are not just in fiction, they're real things. They're things that humans do to each other - especially male humans to female humans, as we know. The idea that it only exists in fiction is ludicrous. Of course, it's going to be right there in the centre of our fantasies of AI. These fantasies are of us creating species that are essentially there at our beck and call. They're just patriarchal, colonial fantasies really.

That's why we're so scared of them. People are always scared of subjugated people. People are scared of groups of people who have traditionally been treated very badly. It's obvious what I'm talking about. This is no different from racism and misogyny; it's the same fear of something fighting back after you've had it at your mercy. Of course, it's going to be a theme that is going to rear its ugly head. Yeah. Definitely.

Mason: Is there a normative relationship that we should have with AI? In all of these sorts of fiction, it ends up in a relationship that's either neurotic, psychotic or in some way perverse. Is there a way we can move towards a relationship with AI that doesn't amplify some of these negative aspects of human beings, I guess?

Millar: It's interesting that you said it's either neurotic, psychotic or perverse, and can't we just have a normal one? For psychoanalysis, that's all there is. You're either neurotic, psychotic or perverse. That's the only choice other than arguably the autistic subject which is another controversial subject within psychoanalytic discourse, on whether that's a feature of psychotic structure or something altogether different.

I digress, but the point is that in psychoanalysis, there is no normal subject. All you have is different modes of navigating the Oedipal drama and making up for the fact that we are essentially all fucked. We're all mad in different ways. The structure of your madness can either be neurotic, psychotic, or perverse. It's just different ways of navigating your relationship to signification. The fact that you said, "Oh, can't we just have a normal way?" Well actually, the only normal way is via these methods. We have to understand what we're working with when we talk about AI.

I think it's actually very useful to think about the neurotic subject or the psychotic subject, hence why a big part of the book is dedicated to the question of ordinary psychosis, which is a methodological shift within clinical psychoanalysis as a way of looking at human suffering in a slightly different model than has previously been thought. Instead of assuming that most people are neurotic and there's going to be the odd psychotic person, instead, it's more like the psychotic is the generalised state of most humans, and actually, it's less normal to have a neurotic subject nowadays than a psychotic subject. That's another interesting way to try and approach how we're thinking about AI in this day and age.

Mason: In that case, it's that realisation that we're all in some way fucked. Is that the thing that is driving our desire to create artificial versions of ourselves? Baked into that possibility is the idea that we can actually fix all those problems that we have identified and now have to acknowledge. We don't want to pass on those sorts of characteristics through, say, our genes or through traditional procreation.

The best way to start over or do over is to just build something from scratch that doesn't have all of those tricky, fuzzy things built into it, that is a hyperrational being that has this very clear and clean understanding of how it wants to navigate the world, rejecting all of those neurotic, psychotic and perverse things.

One of the reasons to do that is to make them, in many ways, ideal workers - if they're not worried about all of those things that can exist within that framework. The other way to do that, as they used to do with slaves, is to castrate them - to make these future robots non-sexual beings.

Millar: That's true but here's the problem. No matter if you castrate someone, they're still sexual. The organ is not what makes the sexual being.

Mason: Of course, yeah.

Millar: What I was trying to say is that, to correspond with neurosis, psychosis and perversion, we can say neurosis is repression, perversion is disavowal and psychosis is foreclosure. Those terms are essential terms in psychoanalysis to denote the mode in which you have dealt with the name of the father. The 'name of the father' is the incursion of the primary signifier in your early Oedipal development; the way that you have been initiated into language.

When we say 'repression', this is the normal neurotic way of being in the world and repressing the name of the father.

For foreclosure, the scary idea of not having this signifier in your purview at all means that you are basically lost in language and at any moment, your whole symbolising framework can fall apart. It's only held together by some sort of compensatory mechanism - something which, if lost, can bring on a psychotic episode, for example. This is the classic idea of psychosis.

Of course, for the pervert, disavowal is that yes, I know there is such a thing as the name of the father, but I'm not going to acknowledge it. For psychoanalysis, people always say that perverts never come to the clinic because they're very happy with their symptoms. People think of a pervert as being someone who has had weird sexual activity. Everyone has weird sexual activity but a pervert is someone who perhaps may be the stereotype of somebody who has a fetish for a particular object, for example. The reason why this is a form of disavowal is that this particular object - say, for example, a stiletto shoe - stands in as this compensatory object which is hiding the nothingness that is very scary and consuming. It has become a way to survive for the pervert.

The point is, everybody has their way of being in the world. For the neurotic, these can take the form of obsessions for some people, and particular routines that they need to do. Hysterical questioning of one's position - of whether you are a man or a woman, for example - is another neurotic symptom. Essentially, these are all just humans. We all do some form of these things.

The autistic subject is a very interesting subject because there are people that are currently looking at the ways in which autistic subjects are being very much honed in on and recruited for Silicon Valley. Often, autistic people are people who are very good with numbers, and processing large amounts of information but not people who are necessarily very good at human interaction. They're people who just want to get on with a job and are very hard-working. They're perfect subjects for capitalism.

Mason: Let's not forget that Elon Musk also recently came out as being on the autistic spectrum. Maybe there is something in that as the future model of how humanity could present itself. Very quickly, you mention these different fetishes that people have. That becomes the issue with sex robots - whether they're a compensating object: whether they're compensating for relations with human beings, or whether they in and of themselves are the fetish. Whether what the individual wants from a sex robot is the very fact that it's a robot; that it is not a human. That is the thing that they want it for, because that fundamentally changes the debate. It's Trudy Barber who's been quite clear that in actual fact, people sometimes want to have relationships with these dolls and with these objects. It's not about using these dolls and objects as compensation for real humans.

Millar: No, certainly. I don't think they are compensation for humans. I think the idea of having a sexual object as a fetish object in the form of a perfunctory silicone thing is kind of a quite basic way of interacting in a masculine way with the female body. It's not much different from how porn works anyway. I wouldn't say it's massively different and a massive change.

There are so many questions surrounding what's happening with the sorts of people who are interested in sex robots. All of the politics around it and all of the questions intersect with, for example, sex work. It's not my area and there are lots of people doing much more on that specific question.

It's interesting to think, what is it that you want when you want to have sex with something that you know is an inanimate object? It can't be that you're pretending that they're real. Of course, you're not. There's something about this undead creature. Again, that's something that I find interesting to think about. In the book, there is the question of the undead female body which I think is particularly interesting for men; this question of what happens after you die.

To be horrible and gruesome about it, the history of serial killers shows us that men are often very fascinated by this question of the dead body and the dead woman - what it means and what you can do with it - much more than women. I'm sure women have done terrible, horrible things as well. There is obviously something about men's and women's bodies that concerns them; that they're interested in. This is a deeply psychoanalytic question. It's not just a question of the sex robot industry. It's a question of male and female sexuality, fundamentally.

Mason: Ultimately, are you actually saying that the way in which we are striving to understand each other now isn't by studying each other through something like psychoanalysis; it's by building versions of each other? It used to be that to understand humans, you'd study humans. Now it feels like our ability to fully understand humans is predicated on our ability to recreate the human in the form of artificial intelligence.

As you mentioned right at the beginning of this show, there's this kind of weird dichotomy of trying to understand artificial intelligence by using the model of the human brain, versus using artificial intelligence to understand the human brain. There's all of this desire to understand who we are that is occurring through this process of not procreation, but creation.

Millar: Yeah. That's really interesting. I think that there's always this theme running through the literature on AI as well - who are the new Gods? Are we creating AI or is AI going to be the thing that is ultimately creative, and then itself creates something new? This idea of recursion is, again, central to the philosophy of mind and the philosophy of thinking about consciousness.

How do you extract yourself from the process of thinking about human beings - being a dispassionate observer, able to make models of the mind? There's this whole idea in neuroscience of being able to model the brain, as in the Blue Brain Project: make an accurate enough model of what is going on in the brain and eventually you'll have a human brain. Of course, by that time, God knows. The idea that that would be tantamount to creating a human subject is one that I think couldn't possibly be. We've already got these things and they're up there doing all their shit.

What's interesting about humans is we all potentially have this software. We all potentially could think, be and do all these things. Actually, most of us don't and most of us can't. Why? Because it's much more to do with what opportunities you're given, what society allows you to do, and how human beings are fostered.

The question which we should be thinking about when we're worried about the future of human thought and human intelligence is how to make the conditions for as many human beings as possible to do wonderful things with their brains; to be able to really thrive from the capacities of the human mind and the human brain. This whole nonsense about, "Let a thousand Beethovens or Mozarts bloom" - who said that? Bezos or something. It's like yes, but actually they're all over the place. The reason why they're not thriving and not blooming is because people are poor and people are in horrible situations. People are exploited and they have to go and do shit jobs in offices. Get AI to do the shit jobs and let humans become Mozart. Perhaps that should be what we should aim for, I don't know.

Mason: That's it. That's the fundamental joke in the work. What it feels like is that we're moving towards a society where it's valuable not to create AI but just to get humans to act robotically. There's a move towards human behaviour becoming more mechanical. We've mistaken the machinic metaphor through which we understand human biology and the human brain for the actual. It's what's driving all of these narratives. What we're ending up with is a human being that only feels normal if it is able to express itself in very binary ways.

Partly, that's because of the digitally-mediated environment in which the human being is living, but also because of the way in which society sees that as inherently valuable. Whether it's that you don't think creatively and you just do the job at hand, in which case you're an ideal worker, or if you do have some sort of neurosis or psychosis, we'll provide you with the drugs to reset you towards a baseline. We'll treat you like a machine. It feels like that's what we've created: a society whereby artificial entities - or entities who are thinking in an artificial way - are here, and they just happen to be human beings.

Millar: Absolutely, yeah I completely agree with that. You're quite right. Humans are just becoming more programmable. We've created the conditions for making human beings the model of AI, because there is no space for humans to think for themselves. There are no conditions for them to thrive in that way, creatively. All the ways that we're supposed to be creative that we're told by neoliberalism are ways to be creative to make money. Obviously never to make money for yourself, but to make money for a corporation. If you do make money for yourself, it's because you've become the perfect neoliberal subject and you've monetised yourself. Conditions don't exist for people to just think and to think outside of all of these strictures of the algorithm.

Mason: In that case, how do we create the conditions to level up the human, or increase the resilience of the human as opposed to allowing them to slip towards a more machinic way of being? How do we use the tools that you mentioned in this podcast? How do we use tools like hyperstition to rewrite new narratives for what it means to be human in the 21st century? Do you believe, Isabel, that it's even possible?

Millar: It's a very big, important question, and a massive political question, isn't it? I don't think that it's just merely a question of big, bold thinkers coming and telling people what to do. I think it's a question of infrastructures, and things like universities needing to have proper humanities departments that are funded, which allow people to go and study philosophy, for example. It's basically impossible nowadays for people to study for PhDs without having a job. It's impossible for people to become academics without also working in other industries and having to pimp themselves out to every person who comes along. You can't just make a living out of writing and thinking, which is something that you once could do. People in the past - the great thinkers and great artists - okay, it's not good to be romantic, but the idea that these people could live and thrive was because they could have a roof over their head, a place to work, a place to think, and a decent quality of life. That's becoming less and less possible. The people who are afforded those opportunities are rich people, or the odd person who is able to sort of slither out of it. These are very few and far between.

I would say that the only hope for human beings is giving people the space to be creative and think. Also, to put a premium on how important it is to slow down thinking, to come off social media and to not be constantly looking at the next thing. To read books - you can't do thinking without reading books. That's just the bottom line. The thing about social media is that it makes you so anxious, it's very difficult to find the time to concentrate and slow your mind down enough to just follow a thought. I think that's one of the main problems for the next generation coming up. They're not going to be able to do that because they don't have the concentration span. We're constantly bombarded with symbols all of the time. I think that's very anxiety-provoking.

Mason: There's a little part of me that wants to be hopeful for that next generation. The wonderful thing about the A-level grading fiasco here in the UK was that it started a small campaign among that next generation called 'Fuck the Algorithm'. They felt so disenfranchised by the way in which the UK government assigned them their A-level grades - predicted results defined by an algorithm. Little kernels of hope like that give me hope.

It pains me to believe that the accelerationists with all of their nihilism are right. Yet what we see is that artificial persons already exist. They aren't artificial intelligences in the form of androids and robots. They're in the form of corporations.

Millar: Exactly.

Mason: They're causing a great deal of harm to other humans and to the planet. If we are unable to find tools to mitigate the negative effects of artificial persons i.e. corporations, then it feels like we're never going to find the tools to mitigate the negative effects for any coming form of artificial intelligences.

Millar: Absolutely. I mean, capitalism is the bottom line. It always comes down to it. Whilst that dictates our behaviour, it's very difficult to see out of it, and it's very difficult to see how we're going to have the time and space to think past this horrible impasse. We're just sort of creating these artificially stupid people, not even artificially intelligent. We're just going through the motions. People throw out the fantasy of the scary robot coming and taking over, but actually the scary robots are already here, and the people you need to be really scared of are the humans. They're already doing pretty bad shit.

Mason: It's not just about being scared of the humans. It's about being empathetic to humans. By understanding, through a process of psychoanalysis, why human beings are sometimes a little fucked up, we might be able to forgive ourselves.

Millar: Yes, correct. Well, we're all mad. We're all fucked up. That's certainly true. There is no such thing as a healthy, happy human. All we have is ways of dealing with that fucked upness. Obviously, in my dictatorship, I would enforce psychoanalysis for everybody and that would be the way forward to a more functional, happy, and harmonious life.

Mason: Well, on that highly challenging note, Isabel Millar, I just want to...I guess I want to thank you for being on the FUTURES Podcast.

Millar: Thank you so much for having me.

Mason: Thank you to Isabel for revealing the relationship between psychoanalysis and artificial intelligence. You can find out more by purchasing her new book, 'The Psychoanalysis of Artificial Intelligence', available now.

If you like what you've heard, then you can download the FUTURES Podcast on all of your favourite podcasting apps. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.

More episodes, live events, transcripts and show notes can be found at FUTURES Podcast dot net.

Thank you for listening to the FUTURES Podcast.


Credits

If you enjoyed listening to this episode of the FUTURES Podcast you can help support the show by doing the following:

Subscribe on Apple Podcasts | Spotify | Google Podcasts | YouTube | SoundCloud | CastBox | RSS Feed

Write us a review on Apple Podcasts or Spotify

Subscribe to our mailing list through Substack

Producer & Host: Luke Robert Mason

Assistant Audio Editor: Ramzan Bashir

Transcription: Beth Colquhoun

Join the conversation on Facebook, Instagram, and Twitter at @FUTURESPodcast

Follow Luke Robert Mason on Twitter at @LukeRobertMason

Subscribe & Support the Podcast at http://futurespodcast.net

Previous: We Have Always Been Cyborgs w/ Stefan Lorenz Sorgner

Next: Virtual Reality is Genuine Reality w/ David Chalmers