God in the Machine w/ Dr. Beth Singler
EPISODE #58
Summary
Anthropologist Dr. Beth Singler shares her thoughts on the misconceptions surrounding artificial intelligence, the dangers of treating humans like machines, and whether virtual reality could provide us with quasi-religious experiences.
Guest Bio
Dr Beth Singler is the Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge, where she is exploring the social, ethical, philosophical and religious implications of AI. As an associate fellow at the Leverhulme Centre for the Future of Intelligence she is collaborating on the AI Narratives and Global AI Narratives projects, as well as co-organising a series of Faith and AI workshops as a part of the AI: Trust and Society programme.
She has also produced a series of short films on the questions raised by AI, and the first, Pain in the Machine, won the AHRC Best Research Film of the Year Award in 2017. Beth has appeared on Radio 4’s Today, Sunday and Start the Week, spoken at the Hay Festival as one of the ‘Hay 30’, the 30 best speakers to watch, as well as speaking at New Scientist Live, Edinburgh Science Festival, the Science Museum, Cheltenham Science Festival, and Ars Electronica. She was also one of the Evening Standard’s Progress 1000, a list of the most influential people, in both 2017 and 2018.
Show Notes
Transcript
Luke Robert Mason: You're listening to the FUTURES Podcast with me, Luke Robert Mason.
On this episode, I speak to anthropologist, Beth Singler.
"The more we personify AI and that is fed into by science fiction, the less we see the humans in the system. We see the AI more and more as the anthropomorphic entity and we don’t see the robomorphism of the humans being placed in the system." - Beth Singler, excerpt from the interview.
Beth shared her thoughts on the misconceptions surrounding artificial intelligence, the dangers of treating humans like machines, and whether virtual reality can provide us with quasi-religious experiences.
So, Beth, your work deals with the relationship between human beings and emerging technology, and questions whether there are similarities in how we engage with theology. As an anthropologist, how do you think our behaviour has already changed as technology has developed?
Beth Singler: Yeah, so I think a lot of my interest in this area comes from our assumptions and stories of what we tell ourselves about what AI is, where it’s going, and what it already is, as well; the misconceptions between actual practical uses of AI, and the dreams and imaginaries we have around technology even as it is now, as well as into the future.
When it comes to the sort of religious studies aspect of my work, I’m very interested in the continuities of narratives, tropes and imagery that we use to describe AI and represent it in various different spaces - even in the most overtly secular, business-like spaces. It’s to really unpack that distinction that we’ve already made there between the secular and the religious. It’s to see where this blending and merging of ideas from both sides - that we actually also see in religious groups around the adoption of technology and adoption of AI, and adoption of metaphors about machines. There’s a two-way interaction. As an anthropologist, I’m observing and, to a certain extent, participating in by getting into the spaces where that’s happening.
Mason: You mention ‘metaphor’ there. It certainly feels like, especially on this podcast, we talk more and more about machinic metaphors and how we apply them to human beings. Do you think we are, every day, engaging in mechanomorphism and robomorphism? Do you think we’re so willing to see the human being as some form of a machine?
Singler: Yeah. Obviously, this has a very long history. There are periods of time when we used whatever technological metaphors were around us. We thought of the human being as a factory. Before that, we looked to the stars. If we believed in astrology we used astrological systems to see the mechanistic view of the human with the metaphor of the age.
For now, our current metaphor is the AI brain. We blur the boundaries between the human and the machine by using that metaphorical language, absolutely.
Mason: But I think what’s different about the machinic metaphor and what’s different about the digital metaphor is that it has a feedback loop. I mean, that’s kind of built into cybernetics anyway. There’s an algorithmic feedback loop where if we assume that we are some form of machine and we behave on these digital technologies in a very robotic way, then the algorithmic biases within these technologies feed that back onto us. What is that doing for the conception of the human being in the 21st century?
Singler: Yeah, absolutely. In that feedback loop, though, there is another stage where we treat others as machine-like. This robomorphisation - it’s a word I really like using and I want more people to pick it up - is the treating of others as though they’re machines as well. You see it in the response of workers at Amazon in the factories who are not getting the breaks they need and are not getting the respect for being physical human beings. They’re actually protesting with signs to say, “I’m not a robot. You don’t need to treat me this way.”
The metaphor has an impact. It’s very important that we don’t treat this as entirely a level of metaphorical discussion. It actually does have an impact on people’s lives. If we see people as completely quantifiable and treatable as machines then we lose something. I’m not necessarily a human essentialist but I still think there’s a certain level of respect that needs to be involved in our own interactions with human beings.
Mason: As an anthropologist, where do you think this move towards seeing the human as a machine came from?
Singler: It’s a combination of factors. Capitalism is at play there; the neoliberal approach to community and society; distance from each other that increases with social media. I will caveat all this by saying I try to be as agnostic about the benefits and negatives of technology as possible and see actual use cases rather than doing a broad-stroke, “Social media is bad. Get off Twitter.” If you know anything about me, you know I’m on Twitter an awful lot. Yeah. That’s one of my failings.
This interaction we have with other human beings through that one-step removal, that’s only increasing, I think, is having an effect on how we view others. It comes from so many different sources. I was looking recently again at the concept of philosophical zombies, which is something that comes up in more high-level academic discussions but is now filtering out to public discourse. It’s having an impact on how people treat each other. Some people say ‘philosophical zombies’, or they talk about people being ‘NPCs’ - non-player characters in a game. This perhaps links to simulation theory which is the idea that we all, in some way, are in a simulated universe. Those sorts of ideas do have an impact on how we work and treat each other.
Mason: Well it’s certainly having an impact on how we develop technology. There has been a lot of buzz in the past two weeks over this thing called the ‘metaverse’, a place in which we can both express our humanity and yet express our robotic tendencies as avatars. Do you have any initial thoughts on this concept of the metaverse?
Singler: I love, in so many ways, how un-novel this is if you’ve spent any time thinking about digital culture and looking at history. I started being online - I guess I’m going to show how old I am - in the late 90s, with chatrooms and first stumblings into things like ‘Second Life’, and massively multiplayer online roleplaying games. I was a big ‘Wowhead’ for a while. These spaces have already existed. Arguably, they exist in social media formats as well.
What concerns me is not so much the human move into metaverse space. This is Facebook becoming the metaverse. The openness of some of those early attempts at creating some form of the metaverse is not necessarily there when one corporate entity tries to lay claim to the ‘Meta’. Obviously, with the big name change, there are a few companies out there going, “We were called ‘meta’ first.” It becomes proprietary. That’s not what the first people on the electronic frontier of the internet were really about. Is this the direction? Everyone keeps talking about ‘Web3’, as well. Is this the direction we want something like Web3 to go into? Or is this just another way of bounding a space to sell it? That concerns me.
Mason: It’s not just about the proprietary nature of these virtual spaces and how they’re being locked down by large corporations. It’s also the nature of how we engage with our identity in these spaces. As you’ve teased there, originally the idea is that you’d be able to create an avatar and have it be fundamentally different from your verifiable identity in the ‘real world’. A lot of these platforms rely on the fact that you bring yourself, the version of yourself in IRL - in real life, for want of a better phrase - you digitise that and bring that with you into the metaverse.
The promise of the metaverse is that we could have complete freedom to explore a multitude of personalities, a multitude of genders and perhaps a multitude of species. In some cases, people went into these metaverse spaces, especially in ‘Second Life’, as animals rather than humans.
Singler: Yeah, absolutely. That freedom, as well, can be curtailed by the microeconomics or microtransactions that will necessarily end up being involved. Even that clip we saw of Zuckerberg saying, “I’ve seen this great graffiti artist. Let’s pay so that we can keep seeing this graffiti artist’s VR work.”...or AR, I suppose. I don’t know. Yeah. That sense of, what identity are you enabled to put on because you can purchase it? What is excluded from you because you can’t purchase it, or because the platform itself decides that this is not an identity that you’re allowed to put on?
I also find it quite amusing; the representations in some forms. I’ve seen the avatars sometimes don’t have legs. I’m not going to try and do a deep academic dive on that but I think it’s just interesting that a decision was made at some point that legs don’t matter in the metaverse. There are variations in form, obviously.
Mason: That’s partly because the form of engagement is you’re usually sitting at a shiny glowing rectangle. You’re usually sitting at a machine as we are right now, conversing through these shiny glowing screens, I guess. The legs really don’t matter in this form of communication. I guess the legs are just an unneeded aspect of engaging with the metaverse whereby the user is still going to choose to at least be sitting. A lot of people don’t have these large spaces where they can run around or even these suits that they can wear that allow them to control all of their limbs within the metaverse.
Singler: I can see that as an argument, but I suppose - I was saying this to you before we started - I tend to be quite an embodied speaker. I’ve got the Italian hands. I don’t know, I think I have Italian ancestors, several generations back. But also my legs are going, too. There’s more going on in the interaction. I suppose you’re right - until we get to the haptic suit and the ‘Ready Player One’ scenario where it’s on a treadmill.
I think that’s an interesting illustration of the distinction we choose to make between ‘IRL’ and the digital realm. Our representations are partly chosen by us but they will partly be chosen by platforms. They will make decisions on our behalf about who we can be as an avatar. That’s an interesting area to get into the whys and the wherefores.
Mason: Part of the adoption of Web 3.0 within the metaverse is problematic. The promise was it would be decentralised and would allow for full creative freedom, yet really it’s being platformised again. They’re creating the stacks through which we will engage with Web 3.0 and we don’t truly have the freedom that is already being explored by individuals who are in the metaverse. There are VTubers, for example, Virtual YouTubers. Individuals like CodeMiko, who are donning these form-fitting cyberwear suits and are able to engage with their audiences through these Twitch channels. Yet, they come through as an avatar. A fully embodied, beautifully created, artistically performative avatar. It feels like all of that is going to be lost to the lowest common denominator here.
Singler: Yes, absolutely. I mean also, some of my previous work, I suppose, you could describe as being around problematic identities. My Masters work was on the pro-ana community - the pro-anorexia community - and how they personified and deified anorexia.
Just thinking about it in terms of who gets to form community, there would be prohibitions on certain representations that could be deemed harmful. There’s a whole conversation to be had there, as well, about who gets to choose identities that are acceptable. Where are the limits of how we allow people to present themselves? That’s a really difficult conversation. I wouldn’t want to be one of the people who is coming down on one position or another there. There are people’s choices that, to other people, seem like a bad idea. Yeah, it’s a whole area that’s going to be explored ethically. I think that’s such an exciting time for the ethical conversation, with the ethical technology developing in particular ways.
Mason: I guess we’ve already been creating those sorts of avatars through Facebook’s differentiated platforms such as Instagram. People create these versions of themselves whereby they don’t just Photoshop themselves; they Photoshop the entire reality around them. I guess that’s kind of been preparing the ground for being able to create the idealised version of you that’s representative of you, in virtual form.
Singler: There are some of these ideas of using Google Glass AR to create mirrors that are like the Harry Potter mirrors. It shows you a more perfected version of yourself, if only you make certain changes to your behaviour - you could put those into the algorithm and it will demonstrate what will happen to your physical form as an outcome. It’s a whole area of technology there, around the idealised self, as well.
Mason: Now, Beth, despite your incredible insights into technology, your background is actually theology. I just want to ask, how did you come to see that there was a theological basis through which we can understand emerging tech?
Singler: My degrees are in theology and religious studies but I’m a religious studies person. I’m more from the anthropological perspective. I’m not coming at this with any particular theological stance but I do pay attention to when people have theological approaches to technology. It’s a distinction that can be subtle, I suppose, but it’s the difference between being within the community and outside of the community, observing the community. I tend to be, as an anthropologist, a participant observer of peoples’ religious ideas and how they apply them to technology.
In that sense, as a religious studies scholar, I’m very interested in the stories that we tell ourselves. I include within those, the religious ideas. As a sort of agnostic researcher, I include within those the religious ideas and also where ideas about technology end up shaping themselves into familiar guises that we recognise as religious. There’s a whole debate about the ethics of calling something a religion or not. In some cases, some of the groups I look at would specifically see that as a harm. If I said that what they are doing is religious, they are so antithetical to religion that they would dislike that. They would see that as calling them ‘irrational’. It would be very problematic. It’s a difficult area to navigate. You want to say we’re doing things that look like religion, but with that pushback from that secular stance, or strongly atheist or new-atheist stance as well.
But there are so many occasions where the ways in which we talk about AI in particular take on the language, the narratives, the images and the tropes from religion. We have to pay attention to that, because it’s as much a distraction from what’s actually happening as some of the more speculative science-fiction ways we think of AI as well. We need to try and dig down into the actuality so people have a way to respond to technology as it changes.
Mason: In some ways, that’s very obvious. When we talk about creating robots we often talk about creating them in the image and likeness of us. The turn of phrase borrowed from the Bible - “things created in the image and likeness of us.” Stewart Brand famously said that “Technology allows us to become God so we’d better become good at being these Gods.” Are these, again, just metaphorical turns? Or is there something very tangibly similar to the way in which we engage with Religion - capital ‘R’?
Singler: So I’ve looked specifically at some of the illustrations we use of AI where you’ll have a version of the Sistine Chapel and the creation of Adam by God; the moment of divine creation as represented predominantly from a Christian perspective. I’ve looked at how that’s played out again and again with the human hand and the robot hand, or the ‘AI hand’ where it’s some sort of vector of lights. How we illustrate AI is very interesting. The colours we use, the shapes. There’s really interesting work being done on our representations of AI. I looked quite specifically at this.
It comes with a lot of cultural and emotional baggage. We know what this image is. It’s been remixed so many times. I call it the ‘AI creation meme’ because the Sistine Chapel imagery has been memed in so many different ways. I found some of my favourite ones, recently. I saw one that was created out of segments of a satsuma. You’re probably more familiar with pop culture versions, like Homer as God, reaching for a remote control, or various different versions of it. The flying spaghetti monster.
When it comes to AI and our specific ideas about this moment of creation, some illustrations also have the spark of life moment between the human hand and the robot hand. Something is being transmitted that we could call ‘life’. With that cultural baggage, therefore our ideas about what AI will be follow on in continuity with our ideas from a theistic interpretation of the creation of humankind. There’s something that we get from our creator, if you believe in one. There’s something that AI and robots will get from us as creators. It tells a story, already, about what that’s going to be like, and the connection that we should feel to our creation, as well.
All of that is bundled together into something that, as I said, often turns up in business spaces and finance spaces. The EU Commission used a version of this. It turns up in spaces you wouldn’t necessarily describe as religious. Again, I want to unpack that distinction between ‘religion’ and ‘secular’. It’s there. It’s permeating our public consciousness. It’s permeating the discourse. It leads people to interpret AI as a creation that’s here - or nearly here - in a way that reflects how humans are here and currently here. We’re not nearly here. We’re definitely here. That needs to be understood, I think; the illustrations that we use.
Mason: Do you think that in some way, it’s kind of good marketing? I always wonder - with these guys who get so concerned about the possibilities that robots could overtake humanity - if they are saying these things to drive a certain narrative that this stuff will be as good in the future. It’s almost like a self-fulfilling prophecy. If they say this thing is going to be smarter than us and is going to kill us then maybe it will. In actual fact, it’s still doing basic stuff.
Singler: There’s a lot of emotion and there’s a lot of affective impact that’s being drawn on when a company or corporation chooses to use that kind of imagery. It’s not limited to that. I see the word ‘faith’ and ‘belief’ being employed in corporate spaces. Obviously, the benefits of religious imagery and tropes want to be brought in so that people get this sense of trust, security, and that this is something we can build in a particular direction, absolutely. It has an effect that leads people into thinking that these things are inevitable.
The creation story is a story of rebellion, as well. I can link this to some of our robo-apocalypse fears. Some of my digital ethnographic work is about looking at the moments in which there’s an advance in AI or robotics, and how that is shared, disseminated or responded to on social media. People are immediately jumping to the big tropes - Skynet and Terminator. A Boston Dynamics dancing robot means we’re all going to die. That leap is also informed by those sorts of narratives and imagery that we get from the more religious ideas as well.
If we create something that’s just like us, won’t it be as bad as us, as well? The mistakes that we’ve made in some of the theistic discourse. Obviously, this is a very specific monotheistic interpretation. There are other religious communities and cultures that have a different conception of creation. Those images don’t get disseminated quite as much.
Mason: Largely, this is influenced by the fact that we’re swimming in the hangover of 2000 years of Judeo-Christian religion. I also know that you’re slightly critical of the idea that in the West, we think of it in a certain way but in the East they don’t have Judeo-Christian religion quite as ingrained in culture. Therefore they must have some sort of different relationship with robots, where robots have souls, for example. Could you explain why that common misconception exists and how you’ve come to critique it?
Singler: Yeah, so I’m really fascinated by our blindness in the so-called ‘Western world’. I’m doing quotation marks again which really works very well on an audio podcast. In the so-called ‘Western world’, our blindness to animistic history and our own animistic culture still informs how we think. A lot of the time, the stereotype is, as you say, in the East people have Shintoism and Buddhism. That will lead, naturally, to an idea of everything having spirits and souls.
We can demonstrate this. There are examples where temples have done ceremonies for technology that has stopped working. People will point to the differences between utopianism and dystopianism. In the ‘West’, we’re more dystopic. We have Terminator. We’re more scared of robots. In the ‘East’ they love robots. It’s a binary that doesn’t hold up. If you ever interact with those cultures, as I say, it ignores the animistic traditions we have in the so-called ‘West’. It ignores the points at which we quite happily have utopian views in the ‘West’. It ignores the ability to draw entities in the ‘West’ into our wider cosmology of beings. The number of people who name their cars and have conversations with their laptops when they go wrong. We personify and anthropomorphise all around the world.
There are cultural elements where we can look at different religions to see what they’ve done to inform the imagery and the narratives. Just as I’m saying with the AI creation meme, it gives you a story about the future of AI that involves personification and the possibility of being good. I mentioned the possibility of rebellion, but there’s also the possibility of being a faithful descendant of the human creator. We can look at some of the really impactful science fiction, longstanding narratives like Lieutenant Commander Data, in Star Trek, which gives you the vision of the aspirational robot who wants to be more and more human.
It’s more complex. It’s a hangover, I think, from the Enlightenment era onwards - a meta-narrative of how we became more rational in the ‘West’. We became more intellectual and more othering of people in the so-called ‘East’. It’s a techno-orientalism that I want to push back against. I want to do it, as I say, through recognising what aspects of those characteristics we put on other people that we actually have here in the ‘West’, including animism, personification, anthropomorphism, and deification as well.
Mason: That drive towards increasing rationality - part of that is captured in technology. I know you’re critical of the idea that we’re becoming a more secular society on the whole because many people still maintain some form of faith whether that’s explicit or implicit and whether that involves going to church or not. Do you think in any way, shape or form, technology is the reason for a move towards a society that purports to be secular, even if, as you say, they may not be as secular as we think they are?
Singler: Yeah. I gave a presentation recently at an institute that’s filled with theologians and religious studies scholars. I mentioned secularisation theory and they were like, “Why are you bringing up this theory that’s like 50 years old and that no one’s really taking seriously anymore?” It’s because, in society, we do. We adopt that narrative that says religion is dying out, we’re becoming more intelligent and more rational, and these are the things we’ll leave behind as we, as a civilisation, mature.
I think it’s a story we’re telling ourselves that makes us feel more intelligent and more rational. If we can pooh-pooh people who do that strange religion stuff, then as I say, with the techno-orientalism we see with that East-West dichotomy, if we can see ourselves as superior, that’s a story that we’re telling ourselves about where we’re going. Technology doesn’t necessarily preclude religion. I think a lot of the religion and science debate at a larger level is about claiming spaces. When it comes to technology, it’s employed by religious groups. It has been since the year ‘dot’ with religions. It doesn’t necessarily lead to a diminishment of that.
In many cases, as I look at my work, the spaces that technology enables are around enabling more conversations around religion and spirituality, which lead to new religious movements which I’ve looked at, as well; the emergent religious groups that make a claim for having a new way of thinking about the world. They perhaps, as a community, don’t find each other without technology.
Mason: You mentioned there are some ‘new forms of religion’. Could you tell us a little bit about some of those forms of religion? Something that features heavily in this podcast is the idea of transhumanism. I know it causes a lot of kickback when you call transhumanism a science-based form of religion, but if you look at the thing, in many ways it kind of is.
Singler: Yeah, I mean that’s the problem with religion. It’s one of those. We could never come up with a sufficient definition but if it looks like a duck and quacks like a duck, people will say it’s a duck. With religion, if you see people behaving in particular ways…I tend to look more towards aesthetics, tropes and narratives, but if you look at people showing particular interest in something, that, for some people is the definition of a religion. If you have something that you hold up as most important in your life. It’s the sort of definition that Yuval Harari uses when he talks about dataism in his book ‘Homo Deus’. That’s not always my favourite definition, but that’s one way of understanding it.
By those criteria, if you say that people who are interested in transhumanist ideas are dedicating their lives to a pursuit of a future that they imagine in particular ways and they want to bring that about, it starts to sound very eschatological. The idea of religion looking to bring about the New Age - however they define that - whether that’s heaven on earth or some sort of rapture. It’s no surprise that people talk about the singularity, as a transhumanist idea, as the ‘rapture of the nerds’. There’s a famous book of that name by Charles Stross and Cory Doctorow, of course, which I recently wrote a book chapter on.
It’s not surprising that there are these parallels. It’s partly, I think, because the language we use is so dependent on our cultural context. It’s hard to have big ideas that end up paralleling previous ideas without using similar sorts of language. You can talk about hopes and fears as well. The ideal utopia is also counterbalanced in some of these transhumanist narratives with the dystopia of the sometimes vengeful AI God. “Things will go wrong, we need to be careful. There may be a simulated hell that we’re all plonked into because we didn’t work towards producing the singularity.” - if you know ‘Roko’s basilisk’ as a concept.
There are these elements that seem to quack like a duck. It’s difficult, as I say. If you tell some people that what they’re doing seems religious, they don’t like that. Conversely, some transhumanist groups are very upfront about wanting to create transhumanist religions. There are Christian transhumanists. There are Mormon transhumanists. There are specific groups who talk about creating an ‘unreligion’. They want to hack the elements of religion that are successful and employ them to get people into transhumanist ideas. This is the way of the future, in a sense, for some people. This is the direction we’re going, but we need to get as much of the population engaged in it as possible. If religion gets people together, then that’s one tool that we should use. It’s a technology, after all.
Mason: I mean, could we use religious tropes to actually educate people about technology? Should we adopt more of the ways in which religion is communicated to help us communicate better about both the possibilities and, I guess, the dangers of digital technology?
Singler: There’s a wonderful speculation on this. If you’ve heard it before, you can correct me about who came up with it. This idea of warning future generations about nuclear waste by creating generations of cats who glow in the presence of radiation and then also having symbology around that. It presents the space in which the nuclear waste is as sort of profane and pulls on things like Mary Douglas’ concept of purity and danger that applies in so many religious conceptions; having taboo areas.
Likewise, I suppose you could make an argument for the safety of humanity if you believe that AI is going in a particularly dangerous direction using those sorts of affective elements and tools. It’s something, I suppose, that corporations have been playing with since at least the 70s or 80s. They’ve been creating a workplace culture that pulls on some of the community building that religions employ.
If you’re deeply cynical, you can talk about how they employ the elements that new religious movements do when they’re particularly negative, where they draw in membership and induce fervour but also isolate people from their families and their lives with long, long days and versions of love-bombing. There are people who have written about the Silicon Valley culture as a microcosm of religions interacting with each other in that sense.
It depends on your agenda and whether you want to do this for good or for ill, I suppose, which is arguably what you could say about any religious group as well.
Mason: It certainly feels like the way in which we interact with, at least, digital technology, has some mythos embedded into it. Some form of faith, if anything, is embedded into it. You’ve spoken before about being ‘blessed by the algorithm’. What do you mean by that phrase? How is it being used?
Singler: I want to be very clear, I didn’t come up with this expression. This is not me trying to seed the world with this concept of being ‘blessed by the algorithm’. As a digital ethnographer, I spend a lot of my time looking for interesting patterns, expressions, phrases and imagery on social media. One of the things I did notice was this expression, ‘blessed by the algorithm’. Digging around and finding examples of Tweets, it’s not on the scale of some of the memes that you might come across on the internet but I found a corpus of 170-odd specific Tweets with retweets, people copying and so forth.
That expression was being employed by people who felt like the so-called ‘algorithm behind the scenes’ on either a gig economy platform, social media platform, somewhere that they had produced content or been recommended content - this sense that the algorithm behind the scenes on those places was making a decision that blessed them in some way. If they were a gig economy worker - maybe a Lyft driver or an Uber driver - having a good day, they felt like they had been ‘blessed by the algorithm’. This is an expression some people were throwing out. Some people didn’t give context at all. They’d just Tweet: ‘Blessed by the algorithm’.
Again, as I say, it’s these continuities of religious form in a space that’s presumed to be secular. It also speaks to the non-transparency of these systems. We know if we dig around in the algorithm of YouTube, it’s not making decisions because it likes you, you’re lucky, or you’ve been virtuous in some way. These are some of the reasons people gave for being ‘blessed by the algorithm’. It’s weighted on values that relate to the corporate interest. If it’s YouTube, it’s getting eyes on videos. Certain videos get more eyes, more likes and more subscribers. That’s all built into that system. It’s non-transparent but people want to make sense of it.
One way to do that is to use this metaphorical language. “I did well and I got blessed.” “My song was uplifted on Spotify”, or, “My video got lots of hits because it was prominent, because of the algorithm.” or, “My Instagram picture was put on the popular feed.” All these elements. It’s interesting to me because again, it shows that for all our claims of secularity, these elements keep filtering through, and they’re unconscious for some people.
I did a previous project on a concept in the New Age movement called the ‘Indigo Children’. At one point, I was looking at Tweets where people just Tweeted the words, ‘Indigo Children’ with no content and no context. I asked them at the time, “Why did you?” and they just said, “It’s on my mind.” There’s this speaking out of instant thoughts that happens on social media. In this instance, even this small sample set, I think, gives you an indication that people have this kind of framing when it comes to non-transparent algorithms. We reach for AI as God, as a way to understand that.
Mason: I’ve always wondered, with things like Twitter, whether that’s a form of prayer in a weird sort of way. Even if you see it in a secular sense, prayer is useful insofar as it’s a conversation with yourself. It centres the mind to think about certain desires and what you want, but there’s no guarantee of an audience. Maybe there’s a guy in the sky or something up there, or maybe there’s nothing. At least with Twitter, you know that there’s the potentiality of some likes. Is Tweeting the 280-character version of prayer?
Singler: I think there are certainly similarities. I think it’s worth expanding on prayer’s involvement with technology. You’re absolutely right. Some forms of prayer are completely internal and who knows if anyone is listening or not. I tend to be agnostic on that point as well. But there are also occasions where prayer is enabled by physical spaces and enabled by technology. People were using prayer wheels to pray in their place for centuries. People use rosary beads to mark prayers off, and those are public demonstrations of lived religion. That’s something that people can see and pay attention to.
The same thing applies to Twitter. Perhaps your audience is much larger. It’s not exponential so much as just the enhancement of the public nature of religion through technology. I use the example of if you used to have one person on a street corner talking about their specific form of faith, and if that faith was different to the general populace, they might get a few dismissive glances as people walked past. Who’s that person and what are they saying? If you go online and you’ve got that very specific form of faith which doesn’t match the general, you’ll end up finding specific spaces where there will be people who have the same ideas as you. That’s where community starts budding from a very small seed.
Likewise, the connections that social media can make when you do broadcast these prayers, thoughts or whatever you want to describe them as, does enable communities to form together. Unfortunately, I’ve skewed the field a bit. As an anthropologist, we tend to do this. As soon as you start talking about a thing, having been a part of looking at a thing, it’s hard to then separate yourself from that. Having talked about ‘blessed by the algorithm’, if it increases in volume it’s partly because I’ve talked about it as well. That sounds egotistical, but it’s true, so I’m going to hold onto that.
If more and more people talk about it, this snowballing of connections could lead to something. As I say, there are already people who would quite happily describe what they’re doing as a new religious movement focused on AI as a conception. I don’t know if they themselves use ‘blessed by the algorithm’ - I’ll have to ask a few of them - but it’s something that’s out there in the public discourse.
Mason: It certainly feels like God - capital ‘G’ - served some form of function in society, as a form of surveillance and a way to assign rules to a populace so that they would live in a certain way. When we think about all-seeing entities that have the ability to assign certain rules to human individuals, we no longer assume that’s a God. We think of a tech company. Is there something God-like about surveillance, data harvesting and the other sorts of emerging technologies of control?
Singler: Commonly in our monotheistic traditions, we rely on the omnis.
Mason: Yeah, the omnis.
Singler: So omniscience. ‘The omnis’, as I call them. It sounds like a band or something, ‘The Omnis’. Omniscience, omnibenevolence, omnipotence. We hope, when we describe AI a lot of the time, that it’ll be omnibenevolent. But we conceive of these abilities as equivalent to omniscience and omnipotence. It’s not there yet, which is where this language is filling in the gaps. The ‘algorithm’ - quotation marks, because it can be so many different forms - but the ‘algorithm’ behind YouTube is not omnipotent and it’s not omniscient. It is driving actions that look to users like agency. That’s where we also get omnibenevolence. “I’ve been blessed by the algorithm. The decision was made to help me.”
There were a few ‘cursed by the algorithm’ Tweets as well, but it really wasn’t as many and it’s such a small sample set anyway. Some people feel that, yeah, they had a bad day because the algorithm picked on them. You scale that up exponentially with ideas about the singularity, and then you get to the science-fiction form of the dreadful AI God that you get in things like ‘I Have No Mouth, and I Must Scream’. Even ‘The Terminator’ - the series of films and TV - pulls on those tropes to say that Skynet basically becomes a malevolent God that doesn’t like humans very much at all. We need some sort of Christ figure in John Connor to come and sort that all out. It’s a very complex, not very simplistic theology. It’s not really monotheism in the way we see it in Christianity, but there are a lot of religious tropes there that are being played with.
Mason: I’m almost sure, Beth, that there’s this cybersecurity company called ‘Omni’ out there somewhere, just doing exactly that. You mention science-fiction there, and someone who features quite heavily when we talk about these sorts of themes on the podcast is someone like Alex Garland who’s known for ‘Ex Machina’ and ‘Devs’. In both of those narratives, there were characters who were trying to achieve God-like status. In ‘Ex Machina’ it was Nathan; in ‘Devs’ it was Forest and his desire to create this quantum computer that could control time. In both cases, they were abusing their power.
Do you think that Silicon Valley, for example, has that form of God complex? Do you think there are ethical implications for people like Zuckerberg, Musk and Bezos having so much power over the sorts of tech that we use?
Singler: Yeah, it’s interesting. The kind of mad scientist trope and the playing God trope go all the way back. Obviously, we can think about ‘Frankenstein’ and the idea of putting yourself in God’s place as a form of warning. In ‘Frankenstein’, I think it’s the warning. Definitely in ‘Ex Machina’. ‘Devs’ is slightly complicated, I think. Maybe because I’m a parent as well, but I feel so much for the main character in ‘Devs’ that he just wants to bend reality to bring his daughter back. Sorry, spoilers! Should have said that. I guess enough time has gone by now.
It’s this narrative we return to of the generally white male person who has decided to be God. I think it’s partly because we can fit that perception of the individual into our existing monotheistic, Abrahamic, idea of God as the white guy with the big beard on the cloud. There’s this sense of continuity. Again, with the AI creation meme, I really didn’t see any examples where the human hand was definitely not a white hand. There were a few small examples of female hands, but the majority were white male hands. There’s a sense of pattern continuity there.
There’s the whole subject of hubris. We want to make sure that no one is actually trying to build the Tower of Babel. Again, our linguistic metaphors come so often from our cultural context of the Bible. The sense of humanity trying to supersede God comes into our stories so often. It’s an interesting facet of our fears when it comes to AI that again, we’re worried about this sense of history repeating itself, so we’re doing these things again and again in our stories as a warning, perhaps, for people to not do these things now.
Mason: Do you reckon that in actual fact, if we got over the discomfort of engaging with some of this thinking about technology in a religious framework, we could actually take things a lot further? I’m thinking about the transhumanists again. They are so averse to being called a ‘science-based religion’ and it’s because of what you mentioned previously, that they hate the idea of being assumed not to be rational. But then everything that they’re hoping will come to pass is very much based on faith; a faith in the fact that we can transfer minds to a brand new silicon substrate; a faith that we will be able to develop technology that can wake us up from our cryogenic freezing; a faith that we can potentially live forever. They constantly end up fighting against that faith and yet they’re quite happy to engage in these faith-based narratives. Should they just embrace the religious part of their ideology?
Singler: Quite often the response on my work from those sorts of sectors is around the fact that they don’t see it as faith because they have evidence that it’s mathematically provable that we’re going to get to these things. Specifically, I’m thinking of the old hat Moore’s Law. Accelerating the development of processing power proves that we’ll get up to human intelligence by a specific point, and beyond it into the singularity. All this is evidential and that’s their distinction - not mine, necessarily - that’s their distinction between what they’re doing versus religion. Religion is based on faith which doesn’t have evidence. I disagree because there are different forms of evidence for different things, but again, I remain agnostic so I don’t necessarily take on those evidential claims myself. But yeah, absolutely.
I think it’s interesting how extreme some of those reactions are. As I say, I’ve found examples where people are much more positive towards religion, but when it’s antithetical it’s quite strong. It does show an overlapping of that culture with some of the forms of new atheism and some of the rationalist movements. Obviously, it’s futurism, but entangled in there is a very strong idea of what it means to be ‘intelligent’. It’s embedded there from some of the earliest ideas of what AI would be, as well. There’s a great quote from, I think, Robert Wilensky talking about how the founding fathers of AI at Dartmouth College and just after defined intelligence as what they could do. They were good at theorems, maths, and playing chess. If an AI could do that, it must be intelligent. That’s how we define it. We need a broader understanding of what intelligence is.
There are so many people doing good work on this that it needs to be fed into that wider conversation. It does come with a stereotype, at the moment, that intelligence is putting away your childish toys. It’s humanity civilising itself to the extent that irrationality is coded out. It’s about seeing the millions of years of evolution as a messy, destructive process that really not much good came out of, and bad things about religion came out of - but we can supersede that with technology. This is the transhumanist view. If you have that kind of narrative, I think you’re always going to push back against religion but you’re also going to push back against other things like embodiment, embodied intelligence, and some elements of community and culture that aren’t specifically religious but do partake of affect, and the emotional relationship that people have.
I think there’s a line of thinking there towards…science-fiction gets there and some people like it and some people don’t - but the idea of brains in vats, or the idea of jumping into the metaverse. We can denude our physicality because pure intelligence is something that is better, in some ways. For some people, that’s very appealing.
Mason: That’s the tricky word, isn’t it? Intelligence. You look at a lot of this stuff in the context of artificial intelligence and there always seems to be the drive towards the ‘I’ word. In actual fact, in becoming Gods, it might actually be artificial life that’s the thing that would really frighten us. If we believe that there has been some form of emergence to allow life and intelligence to occur, then if we were able to create lab-grown life, surely that would be the God-like thing we would be doing. The intelligence piece is something that emerges from that afterwards.
From a transhumanist perspective, the way in which intelligence is looked at and the way in which they look at evolution is the fact that they need to take control over that evolution. They need to now insert their own intelligence into the evolutionary narrative. This suggests a form of intelligent design. Oddly enough, the idea of an intelligent designer is a stand-in for something that may look like a God. The way in which they justify that is through both a rejection of God but also an acceptance of God. You ask these people, “Oh, so hold on, why do you believe that you should take control over evolution?” They go, “Well, we evolved to this state, we became human beings and we evolved intelligence. Our intelligence is the license for us, now, to take us into the next place, using technology to fuse us with machines or whatever it may be.” You go, “Hold on. You’re telling me that evolution had a plan to allow intelligence to emerge so that you could use that intelligence to then continue evolution?” That sounds a lot, to me, like intelligent design. There’s some sort of grand planner somewhere. How can they use that justification and also choose to be secular? This stuff is so fuzzy isn’t it, Beth?
Singler: Especially if you throw in simulation theory where - if you’re familiar - there’s this idea that because we can create computer games that simulate reality to various degrees of similitude, that someone else could have done that to a greater extent over the billions of years of the existence of the universe, or even prior to the existence of the universe. Perhaps the entire universe is a simulated universe and you can roll that backward. It kind of turtles all the way down. Where do we get to the point where there’s a prime universe?
It’s so interesting that - I don’t particularly enjoy reading Dan Brown - but some of the elements of that exact argument come out in ‘Origin’. If you’ve ever read Dan Brown’s ‘Origin’ - I’ve read it about five times now, which is five more than I really wanted to, but he has the same sort of argument. He’s drawing on the work of a physicist, I think, in Oxford. I forget the name, but this idea is that life exists in the universe to create an order that allows entropy. Therefore, the entire grand plan of the universe, and the main character - the pseudo main character, it’s not the Tom Hanks character whose name I always forget - but the main character in ‘Origin’ has done that. He’s an Elon Musk, Peter Thiel type. He has discovered this great reality. The truth of the universe is that life is necessary to lead to the end of the universe. He’s shown it to some religious people and some of them then kill themselves because, in the narrative of ‘Origin’, this is such a destructive story. I read this going, this is just another intelligent design. This is the same thing you’re saying.
If you start postulating on intention in the lead-up to the creation of our intelligence to lead onto our mind children or whatever you want to call them - what comes next - then you’re creating, at the very least, a telos for the universe. We are heading in a particular direction with a particular purpose. Again, that sounds religious. It’s quacking like a duck to me. We keep returning to this difficulty of defining religion but if you see elements like this, it’s hard, again, not to see us falling into some of the familiar tropes.
I spent time a while back at a transhumanist conference where people were talking about the greater good in a way that gave it, almost, agency. I brought this up at the time, saying, “You don’t want to speculate religiously but the language you use in describing the greater good and a meta-level of ethics and values is falling in line with religion.” They don’t want to hear that. They don’t particularly.
There are some, like I say, that are more willing to take that line. They see religion as not just a pragmatic tool to be used, but also an underlying reality where always, we’re going to come back to these same philosophical questions. We’ll frame them in different ways. The aesthetics will be different and we’ll call that supreme intelligence - or whatever you want to describe it as - different things. But these are the same roots in the same place. There’s a variety of responses to transhumanism.
Mason: It certainly feels like no matter what, God is always in the room. That’s because of culture. He’s an omnipresent being and so from a religious standpoint, he’s always in the room. Nascently speaking or metaphorically speaking, he’s always going to be in the room when we talk about these things. Listening to you there reminded me of a conversation I had with Thomas Moynihan on the podcast. He identified the fundamental difference between the extinction of humanity and the apocalypse. In his mind, an apocalypse is a decision for humanity to end. There has been some benevolent entity or some form of God that has decided that humanity will no longer continue.
Whereas extinction is so much worse because humanity has created the circumstances through which the culture and intelligence and consciousness will no longer propagate because they’ve created circumstances under which we cannot live anymore. In his mind, that was the more tragic thing.
Singler: To add the religious angle to that, apocalypse in a predominantly monotheistic tradition is not a disaster in the way that it’s become in public discourse. Our popular understanding of the word ‘apocalyptic’ can mean the moment at which there’s a huge shift and change, and a new age starts. The dispensationalism and the millennialism of various different Christian Evangelical faiths, or the idea that Jesus returns in a particular way. There’s that element to the apocalypse that sometimes gets lost in the more popular discourse.
Yeah, I have read his book at least as far as that part. I remember it very well, thinking that it’s an important distinction to make, because the lack of intentionality is actually, in some ways, the more dangerous thing. We ascribe intentionality. We come up with our stories of robot uprisings when actually, the result may just be artificial stupidity rather than artificial intelligence. Much more of a Nick Bostrom paperclip maximiser end. I don’t know. I still remain agnostic about that one as well.
It’s interesting, the assumption of agency that’s going on when people talk about ‘blessed by the algorithm’, or see the AI creation meme as telling the story of AI creation in similarity to humans - there’s a positing of agency and choice there. A robo-apocalypse is often that moment, as with Skynet, of the turning on and the sudden decision by the AI that we have to go, where the truth is probably much more banal.
Mason: The reason I asked that question was that I wonder if the reason we’re so willing to see certain forms of technology as a threat is, again, because baked into Judeo-Christian religion are these ideas of raptures or apocalypses or some form of ends. It’s defined the way we think about time - past, present and future upon this linear progression. Again, I struggle to say it, but especially in the West where we have this progress narrative towards some form of either singularity or form of the complete and utter destruction of humanity. Do you think if we were to step away from those religious narratives, we’d find more appealing ways to engage with things like the future?
Singler: Yeah, I think a lot of the time, again, it gets forced into this dichotomy between the ‘Western’ arrow of time version versus various other faiths that have a cyclical version of time. Maybe we should learn a bit more from those. That’s often the assertion that I see, sometimes coming out of Silicon Valley which has had its long history of connection to the influence of Hinduism and Buddhism moving into the ‘West’ and a strong connection with the New Age movement. Some of these ideas that people were picking up from other cultures as we gained contact with them were being drawn into the discourse. The history of yoga in America has a focal point on the West Coast.
This idea that we have an either-or comes into it. We can either have this strongly deterministic telos direction of time or we can see something more cyclical. Something more cyclical might be more beneficial. Again, this partakes in some of that techno-orientalism, seeing the differences in how we think about things and them over there, how they’re thinking about things. This is not always entirely useful.
But there are so many cultures that we’re stepping over when we divide everything up into Western Judeo-Christian - which also, Judeo-Christian is such a problematic term…sorry, that’s a whole other discussion - strongly Abrahamic faiths versus the more assumed animistic that misses out on so many locations, cultures and aspects of our own history. Again, I want to keep pushing this. We had an animistic history. We have an animistic present in the so-called ‘West’.
It’s much more complicated than that, but I think we do tend to grab onto the useful narratives and the image of history and time progressing in an arrow going exponentially up is useful for some ends. Some of the people who push that narrative are doing so because they want you to buy into the determinism of this technology. We are definitely going in this direction. We can’t swerve away from it. There’s no way to jump off the highway at any of the exits. We’re absolutely going to that future. When it comes to talking about singularity, I get very concerned that that distracts from the present and gives people a sense of fatalism. If decisions are already, at a very small scale, being made about their lives and they’re told, “In the future, all decisions will be made about your lives.” they just see that as a natural progression.
I think it’s important to be critical. Not only because it’s the adoption of a cultural narrative that might not fit the times, but also because it leads people into particular views about the direction we’re going in.
Mason: It’s not just the progress narrative that’s problematic. It’s the inevitability of that progress narrative. Kevin Kelly had the book, ‘The Inevitable’, which was about these technologies that are going to come to pass whether we like it or not. The inevitability of some of this stuff is what makes me uncomfortable. It’s less the direction through which it’s going. It’s the fact that we’re told that this stuff will occur in a way in which we have to have faith that it will, in the way that we imagine it. Partly, that’s evangelised - to use another metaphor from religion - by these individuals who are often CEOs of companies who are beholden to their God, which is the stock market. They have to make certain proclamations - we can do this all day.
Singler: Yeah, we can.
Mason: They have to make certain proclamations which can…
Singler: But even some of them are self-proclaimed tech evangelists. It’s not a title that we’re putting onto them. That is on some people’s business cards. No one has business cards anymore, but you know what I mean.
Mason: There’s this form of tech-evangelism. They’re now saying it, and now listening to you saying it, there’s a certain way in which if you say it and you preach it, and you do it with enough fervour, then it will come to pass. It’s like coming down from the mountain and saying, “Hey, this is how the future of electric cars is going to roll out.” Then what happens is that proclamation about the future has a direct impact on the present. The direct impact on the present is the stock market valuation. Suddenly you see these companies’ prices increase. It feels like Silicon Valley has kind of hacked that part of religion in a really effective way.
Singler: Yeah, it goes back to what I was saying about the history of Silicon Valley, the development of corporation as a religious body in the sense of partaking in some of those elements of new religious movements - the more negative ones.
But also, I think it’s worth unpicking the word ‘prophecy’ a bit, as well. I guess because I have a religious studies background, I’ve spent a bit more time thinking about the different ways in which that word is used. It’s probably not one that comes up for most people. We have the expression ‘self-fulfilling prophecy’. But actually what a ‘prophecy’ was in the biblical sense was a commentary on the present by talking about what’s going to happen to you if you don’t change your ways. It’s a negative prophecy in that sense. We’re talking about, “If you behave in a particular way, the good things are coming.”
That’s different to a prediction. I think sometimes we conflate the two. We see these statements about what’s coming as predictions when we should see them as prophecies, in that biblical sense, of people making moral statements about what’s wrong with the world now and what they want the world to be. Sometimes, it’s Ray Kurzweil, who is, himself, also described as a prophet. I think Wired magazine, yonks ago, called him a ‘prophet of the singularity’. He’s played up to that. He’s held up the sign that the singularity is near. There’s that sense of what he’s saying about what the future will bring for humanity when he says things like, “We’ll be more intelligent.” He even says we’ll be more sexy, at one point. There’s commentary inherent in his description of what he thinks the deficits of the human are now.
The more we pay attention to that kind of underlying message when people say, “This technology is coming,” or, “That technology is coming,” the more it tells us about what they think is wrong with the world now. We should pay attention to that, specifically.
Mason: Two things, there. Firstly, everybody should go and Google Ray Kurzweil. Everything that he is saying is kind of coming to pass. You look at photos of him 20 years ago and he looks older then than he does now. He’s got dark black, long hair now. His skin looks great. Whatever he’s doing, popping those pills, seems to be working.
Singler: Yeah, he’s bribing the creators of the simulation so that he can get an avatar upgrade. He’s been paying with his microtransactions in the metaverse.
Mason: Quite possibly. I’m surprised there are not more conspiracy theories around him. They always thought that Paul McCartney died and was replaced by someone.
Singler: Yes, yeah.
Mason: I think maybe the same has happened with Ray Kurzweil. He’s been replaced by some form of a robot from Google. Joking aside, it’s important to understand the word ‘prophecy’ within that historical context. The original prophets would prophesy what would happen within the immediate framework of the age. They certainly weren’t imagining volcanoes being set off and the destruction of humanity. It was really about what would happen to certain political families. I think we need to often remember that when we talk about the way in which we engage with the future through those narratives.
Beth, you’re such an incredible fan of science fiction. I do wonder, are science-fiction stories our new fables, I guess?
Singler: That’s sort of the impact that happens. We imbibe and imbue ourselves with our stories. That’s that interaction that we were talking about; the impact of how we see the world as influenced by science fiction. Science fiction is influenced by how we see the world. These two things can’t really be extracted from each other. It’s an area that’s so complex. You don’t want to put everything at the door of science fiction and say, “Hey, guys. You wrote these things. Now some of these things are coming to pass. It’s your fault.” There’s no effort at all from science-fiction authors to predict the future.
Again, it’s closer in some ways to the element of prophecy. It’s a commentary on now. Science fiction is creating these potential spaces to actually engage with what we’re up to at the moment. We should never blame science fiction for when people take Skynet or Terminator and run with it. There’s so much more that’s actually going on in those stories, but we just have that shorthand as it’s disseminated amongst other people. Doom, gloom, disaster, and robo-apocalypse.
In that case, I want to be careful that we’re not making any kind of moral judgment on science fiction, but it is very influential on the shapes of our imaginaries of the future. I talk, sometimes, about Welles [inaudible: 1:00:31]. If you know Orson Welles and his ‘War of the Worlds’ broadcast - that was advertised in a limited way as a radio broadcast but people tuned in part-way through and got the sense that there was an actual invasion happening. Some of that’s overblown in the story. Actually, not so many people thought that, but there’s that element where science-fiction is denuded of context and people take it up as truth.
I’ve done some work around fake robots where people have created robots specifically for science-fiction purposes but they’re shared, disseminated and treated as though they’re real. We have the real Boston Dynamics robots, but then you have Corridor Crew, a CGI production company that makes their ‘Bosstown’ robots. They play with the tropes of the robo-apocalypse by using CGI to recreate the same sort of quadrupedal and bipedal robots, but they do run amok and they do attack humans. They run off with the pet dog. Some people react to these like they are genuine videos because although the content and the context are there initially, things can get shared and become viral and memetic in their own way.
There’s this element in which science fiction informs our imaginations, but also we are very willing to adopt some of these shapes and run with them in a way that science-fiction authors probably never expected. In some of my previous work on new religious movements, I looked at Jediism. There’s no way that George Lucas - the good Methodist boy that he was, I’m not sure he’s still worshipping but hey - he would never have intended to produce a specific religion in the real world, but people have taken those ideas and they’ve run with them. You can’t really control that.
Mason: Hearing about Jediism makes me wonder: is science fiction collapsing into real life? Does science fiction have a lot to answer for? I’m just thinking about William Shatner being sent to space on Blue Origin. It’s so utterly surreal in so many ways that a fictional figurehead - a science-fictional captain of a starship - actually got to go to space.
Singler: Yes.
Mason: The boundary, now, between what is fiction and what is real does feel like it’s getting a little fuzzier.
Singler: I think it was ever thus, though. Before we were telling ourselves stories about aliens and robots, we were telling ourselves stories about fairies. You don’t have to go too far back in history to a point at which people were producing pictures of fairies - it was the early 20th century. Sherlock Holmes is…I almost said Sherlock Holmes got really involved. Sherlock Holmes’ creator, Sir Arthur Conan Doyle, got really involved in trying to prove the existence of fairies because of his spiritualist beliefs. This author, who is upheld as the creator of the most rational detective there has ever been, was himself doing something that a lot of people would now class as irrational: trying to hunt down fairies and prove their existence. We’ve always been telling ourselves stories of the other.
AI makes a very good liminal other entity that we can tell stories about, in the same way that we talked about fairies before. In the way that we talked about dragons, or aliens, for a long time. Aliens don’t seem as popular as they have been, so perhaps AI is the next alien entity that we’re telling more and more stories about. It’s something so malleable to our different concerns and interests - an object that we can tell different types of stories with. I think of a child with a Rubik’s Cube: we can shift and change it for various different purposes, and that produces wonderful science fiction.
Just to go to your point about Shatner going to space: he wasn’t actually the first person connected with Star Trek to go to space. There was an astronaut who was then put in a Star Trek episode. Someone can write in the comments and answer who that was - it was a female astronaut. The sense of fiction becoming reality and reality becoming fiction - these boundaries have been permeable for a very, very long time.
The only concern I have is when those permeable boundaries lead to a distraction from actual potential societal harm caused by technology. If AI is perceived as making God-like choices about our future, we’ll ignore the points at which it makes actual choices about our future. We had the whole disturbing A-level algorithm episode last summer, where so many students in the UK didn’t get their predicted grades because an algorithm was implemented. Looking at the historical data from each school, it made a decision on what grade they should get - not based on their individual merit, because the historical data was what counted. When there was obvious pushback, because so many people were downgraded, the narrative that was presented was that this was a ‘mutant algorithm’ that had gone off the rails. Really, it just processed the data it was fed. It was given values, weights and measures, and it produced outcomes. It didn’t go mutant.
There’s the sense that the more we personify AI - and that is fed into by science fiction - the less we see the humans in the system. We see the AI more and more as the anthropomorphic entity, and we don’t see the robomorphism of the humans being placed in the system. That’s a concern for me.
Mason: As we’ve been talking, it feels as though we’re constantly talking about the move towards more rationality. Hearing you describe some of these authors, it feels like what makes them so interesting is their ability to play with the irrational. Philip K. Dick famously used to converse with aliens. Do you think that despite technology’s drive towards making human beings more rational entities, the thing that allows us to truly engage with the future is to embrace the aspects of what it means to be human that also make us irrational?
Singler: There are two parts to my response to that. Firstly, as an anthropologist, I want to observe and be involved with the messiness of humanity. The cleaning up of humanity doesn’t make any sense to me at all. We are messy creatures. But also - again - I’m not a big fan of dichotomies between the rational and the irrational. It’s part of the same problem. Seeing the acceleration of intelligence as a move towards rationality - the Enlightenment narrative of, “If we get smarter and better, we put away some of these things” - and seeing others as irrational, is all part of the same game. It would help if we could stop using that terminology and stop denigrating others as one or the other.
Some people, also, would have a problem with a rational focus. One of the characteristics of someone like Lieutenant Commander Data - and why we see him as distinct from his crew members - is the stereotype of robot rationality: the coldness that is implied. That’s our view of the summation of what a rational being is. Those two sides of the argument have to come together and just recognise, as I say, that messy middle ground: sometimes we’re rational and sometimes we’re irrational. Do those terms actually make continuous sense?
Mason: So I do feel like I have to ask. If there was one science-fiction story that you feel like everybody needs to engage with in their life, what would you put up there as the pinnacle for how to best understand the future through sci-fi?
Singler: No, see, I can’t do that, I’m afraid. I’m sorry. That is like picking amongst your children. But one thing I’ve done in the past - and I slightly regret it because maybe it’s a bit mean, but I’m a bit of a troll sometimes - is this: people would specifically ask me, “Do you really like ‘Ex Machina’? Isn’t it the best singular AI film there has ever been?” I’d go, “No. I think, actually, ‘Short Circuit’ from the 1980s is better.” Their faces would drop, because ‘Ex Machina’ - and I have problems with it - is a philosophical consideration of AI and what it means to be human. To what extent do we trust AI? Will it turn against us? All of the big questions that you would expect.
‘Short Circuit’, if you’ve ever seen it, is a cheesy, problematic retelling of basically the Frankenstein story: a military robot is hit by lightning and comes to life, and we get a representation of what it means for a robot to be alive. It’s not a good film, and it’s not the answer anyone wants to hear, because they want the deep, philosophical answer. But I think it’s worth pointing out because it also represents a different era in our thinking about AI and robots. It’s the 1980s and we’ve got a military application of robotics, even then. You’ve got the darker side involved, and that was the stage we were at in our public narrative.
The representation of what it means to be alive is a representation of our 1980s understanding of it: the robot enjoys television and dances to pop music. That’s important as well, but it’s not the answer anyone really wants to get from me. They want deep and serious AI fiction that’s going to enlighten them and open their minds. I want them to enjoy silly 1980s films.
Mason: Well, on that wonderful note, Beth, I just want to thank you for being a guest on the FUTURES Podcast.
Singler: Thank you so much for having me on. Thank you.
Mason: Thank you to Beth for sharing her thoughts on the relationship between theology and technology. You can find out more by visiting her website, BVL Singler dot com.
If you like what you've heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.
More episodes, live events, transcripts and show notes can be found at FUTURES Podcast dot net.
Thank you for listening to the FUTURES Podcast.
Credits
If you enjoyed listening to this episode of the FUTURES Podcast you can help support the show by doing the following:
Subscribe on Apple Podcasts | Spotify | Google Podcasts | YouTube | SoundCloud | Goodpods | CastBox | RSS Feed
Write us a review on Apple Podcasts or Spotify
Subscribe to our mailing list through Substack
Producer & Host: Luke Robert Mason
Assistant Audio Editor: Ramzan Bashir
Transcription: Beth Colquhoun
Join the conversation on Facebook, Instagram, and Twitter at @FUTURESPodcast
Follow Luke Robert Mason on Twitter at @LukeRobertMason
Subscribe & Support the Podcast at http://futurespodcast.net