Coevolution of Humans and Machines w/ Edward A. Lee

EPISODE #19


Apple Podcasts | Spotify | Stitcher

Computer scientist Edward A. Lee shares his thoughts on the symbiotic coevolution of humans and machines, why the ‘dataist’ belief in human cognition resembling computation is likely wrong, and how recent technological developments resemble the emergence of a new form of life.

Edward Ashford Lee is Distinguished Professor (Emeritus) in Electrical Engineering and Computer Sciences at the University of California, Berkeley, where he runs iCyPhy, a research center focused on industrial cyber-physical systems. He is the author of Plato and the Nerd (MIT Press) and other books.

Find out more: YouTube | SoundCloud


Transcript 

Luke Robert Mason: You're listening to the FUTURES Podcast with me, Luke Robert Mason.

On this episode I speak to distinguished professor in electrical engineering and computer sciences at the University of California, Berkeley, Edward A. Lee.

"What we need is a new philosophy of technology that is much more integrated with our understanding of culture and human processes."

- Edward A. Lee, excerpt from interview.

 Edward shared his thoughts on why a symbiotic coevolution of humans and machines is more likely than the eventual obsolescence of humanity due to artificial intelligence, why the dataist belief in human cognition resembling computation is likely wrong, and how recent technological developments resemble the emergence of a new life form.

Luke Robert Mason: Your new book suggests that humans have less control over the development of technology than we might actually think. In fact, it argues that the development of technology might actually be a coevolutionary process. So I guess my first question is: What does it actually mean for technology to evolve?

Edward A. Lee: In the book, I coin the term digital creationism, for the hypothesis that all of these pieces of software and digital technology that we've created are the result of top-down intelligent design. The reality is, I've been writing software for about 40 years. I started writing embedded software in the 1970s, and for my entire career I thought that the product of my work was the result of my deliberate decisions in my brain about how things should come out. I am a little embarrassed that it took me so long to realise that that's not really the case. I could draw an analogy: It's a little bit like if you come home from the grocery store with a bag of groceries. You feel like you've accomplished something. It's a personal accomplishment to have stocked your refrigerator - but is it really? There are so many factors that played into that accomplishment. The road system, the car that took you there, the economic system that enables paying for the grocery bagger. All of these things that are really much bigger than your personal accomplishment, and the personal part of it is actually relatively small. I've realised that the same is true of most of the software that I've developed over the course of my career. Most of that software is really mutations of previous software and my thought processes as I've developed the software are very strongly shaped by the programming languages and the development tools and so on, that really guide my thinking. This view of these technology products as purely the result of top-down intelligent design is really very misleading. 

Mason: You take issue with digital creationism, because in a funny sort of way, in culture we assume that the designer of the technology has full agency over how it's created. This idea of digital creationism encompasses all of these assumptions that we have, and you believe that that's no longer the case. I just wonder, how have you come to that understanding? You've used that example there, but I just wonder at what point did you realise that, in actual fact, even though I am the developer of the software or of the digital technology, I don't have full agency over it. That there is a co-creation process that's occurring here. 

Lee: I'm actually not the first person to postulate this thesis. In fact, the first time that it entered my head was when I was reading this wonderful book by George Dyson, called 'Turing's Cathedral'. Towards the end of this book, Dyson talks about a visit that he made to Google where he got a tour of the data centre with its thousands of servers. The thousands of servers in this data centre sort of made him think of it as kind of a living thing, that was actually nurturing the humans that were taking care of it and developing it. He talks in this book about this sort of feedback relationship between the humans taking care of the machines and the machines taking care of the humans. That really got me thinking, and I started to run with that.

I think one of the things that bothered me in some sense about Dyson's argument was that I actually had a misconception about how evolution works. I think it's a fairly common misconception, even today. My understanding of Darwinian evolution was that largely random mutations would occur in DNA due to, for example, alpha particles or chemicals in the environment, or something like that. Sometimes those mutations would lead to a beneficial variation in the organism. Most of the time they don't, and the organism doesn't actually survive the mutation. But actually, biologists have discovered that that sort of process of random mutation can't account for the evolution that we see in the wild. For example, the evolution of antibiotic-resistant bacteria occurs much too rapidly to be explainable by that kind of mechanism. Instead, there's a whole suite of other mechanisms involved in biological evolutionary processes, interventions that almost start to look like agency - where viruses intervene, splicing the DNA of one microbe into another, for example. Or, I learned in the course of researching this book about a thesis called endosymbiosis, which is the thesis for how eukaryotic cells evolved. Those are the cells that are in all plants and animals, and they're cells that have organelles in them. They have a nucleus and mitochondria and so on.

My previous naive view was that, well, at some point a random mutation occurred in the DNA that caused a cell division to result in an organelle getting created inside one of those cells. It turns out that's probably not the way it came about.

Lynn Margulis is one of the people who has really promoted and made respectable this thesis of endosymbiosis, which you can think of in a cartoon-like way. You can think of a bacterium swallowing another bacterium and instead of digesting it, making it part of its metabolism and turning this relationship into a symbiosis rather than just a food source. That kind of evolutionary mechanism is really very different from the kind of random mutation that I had thought of, that was happening before. 

So these kinds of biological mechanisms that have recently come to light actually have pretty good analogies in software development. If you think of what a software developer does: A software developer doesn't start with a blank page in a text editor and start writing code on that blank page. Nobody does that. Maybe students taking their very first introductory programming class might be asked to do that, but even then, they're always given some piece of code to start from. What really happens in software development is that programmers take pieces of code from here, pieces of code from there. They go to GitHub, they go to Stack Overflow to look for ways of doing things in the software. They use libraries that come with their programming languages, like the Standard Template Library in C++, and so on. They're really just stitching together pieces of code from elsewhere with their own little handwritten pieces of code, which end up being a rather small part of the result. 

This is really quite analogous to what a virus does, when a virus takes a piece of DNA from one bacterium and splices it into the DNA of another bacterium. I use the term codeome in my book, for these chunks of code that get realigned and recombined by a software developer to create a new piece of software.

Then, of course, that new piece of software - most software that gets written dies. Most of the software that I've written is archived in some back up storage somewhere, and it never sees the light of day again, and never gets executed again. The reality is that that's true of most software, but some software - when you release it into the wild - actually takes off and starts to thrive in this ecosystem. Then this ecosystem itself provides a feedback loop where some of the software that gets developed ends up being in libraries, and then those libraries get used further by other software developers. Some of the software that gets developed ends up becoming part of software development tools.

One of the interesting things that I realised when I started to understand this process a little bit better is a lot of people who are very worried about AI these days are arguing that there is an inevitable point where the machines will learn to programme themselves. That AI is going to lead to machines writing their own software, and developing their own software. One of the things that I realise is that actually, machines teach humans to programme. The software development tools that we use today are actually teaching the programmers how to write code.

Today, I write software a lot, and my software productivity is orders of magnitude better than it was just 10 years ago, because the tools are guiding my thought processes, helping me construct that code accurately, showing me how to construct that code. Then of course, there's this feedback where some of the code gets developed and then influences further software development and the thought processes of other software developers.

Anyway, this process is really not like a top-down intelligent design.

Mason: So this is really what you mean by the idea of coevolution. Humans have an impact on the development of technology and vice-versa. Software then impacts the way in which humans intervene into what becomes successful software and what becomes obsolete software. I just want to talk a little bit more, though, about that relationship that we have with our machines. In the book, you posit various different types of symbiosis that we can potentially have with our machines, whether it's a mutualistic symbiosis or an obligate symbiosis. I just wonder where we are in our current relationship with machines as we understand them today.

Lee: Yeah, that's a good question. I mean, one observation, I think, is that the relationship between humans and machines is highly asymmetric today. We tend to think that the technology has led to enormously complex systems, but if you compare all of the computer technology on the planet to a single human being, the complexity relationship is highly asymmetric. Viewed as machines, humans are far more complex than the whole suite of computers on the planet.

I think that our relationship with the machines might be more analogous to our relationship with our gut biome. It's that much asymmetry, or more. We depend very heavily on our gut biome, just as we depend very heavily on our machines. Imagine what would happen on the planet if we turned off all our computers today. It would be disastrous; the impact would be much bigger than the coronavirus in terms of loss of life, mass starvation and so forth. If you shut down all the computers today, it'd be really disastrous; we're really very dependent on these machines. We're also very dependent on our gut biome. If you kill the gut biome - and that does occasionally happen, if you overdo antibiotics, for example - you can end up killing way too much of your gut biome and you get very sick, extremely sick. It can kill you, in fact. I think that we're currently in a very asymmetric but symbiotic relationship.

Now, people fear that it's going to turn into a kind of parasitic relationship where the machines will no longer need the humans, and the humans will become the parasites on this associated lifeform. That's certainly a possibility. I mean, evolutionary processes are complex processes, so it's very hard to predict where they're going to go - but personally I think that's pretty unlikely. I think we're far more likely to see a refinement of the symbiotic relationship. That doesn't mean that things won't go wrong, right? Things will go wrong, but the things that will go wrong should be thought of as pathologies. They're illnesses in a symbiotic relationship. I think a lot of the doomsday books that are out there think of it as more like a war with an alien species, but it's more like an illness in a symbiotic relationship and we have illnesses like that in biology. 

If something goes wrong with your relationship with your gut biome, you can get quite sick. If a computer virus like the WannaCry ransomware virus that kind of took off in 2017 starts to run wild, it creates a huge disruption for human beings, and you can really see how that kind of disruption actually is an illness. You can also see the effect that social media has on our culture, and we all worry about the digital personas of our kids and things like that. When things aren't going the way we'd like to see them going, we can think of this as an illness, as opposed to a war of the worlds. It's not that machines are an alien species coming to take over; it's more like continual evolution that can result, of course, in illnesses that must be treated as illnesses.

Mason: You spend a lot of time in the book challenging this idea that what will eventually emerge is human level AI. Emergence is one of these things that seems to be tricky, because it's the way in which we explain away life. There's all these processes that happen, and matter comes together and suddenly consciousness emerges, and here consciousness is. But everything can't be explained away by this tricky thing called emergence, can it? It's really a lazy way to assume that we will get human level AI. The argument for the emergence of machine consciousness is that, well, our brains are a matter that came together and suddenly consciousness popped into existence. Surely once enough silicon comes together and enough networks come together, something like consciousness might just pop into existence, might just emerge into existence. You challenge that notion.

Lee: Yeah, I actually challenge it from several different angles, and I think maybe it's worth focusing on two of them. One is that there's a very strong background assumption that a lot of people make, particularly my colleagues in computer science, that human cognition is a computational process at its root. That's a background assumption and for many of my colleagues that I talk to, they say, "Of course, we all learned about the universal Turing machine." and they misinterpret the universal Turing machine as a universal machine. It's not a universal machine. A universal Turing machine is a machine that can implement any Turing machine. But Turing machines are a very particular kind of machine, and have very distinct properties. They're algorithmic - everything in them proceeds as a sequence of steps, a sequence of discrete steps. All of their information is digital. It's discrete and finite, and they're terminating processes. The sequence of discrete steps has to stop.

I think the only property of those three that humans have is the terminating one. We all terminate, but we're not algorithmic, actually. In fact, algorithms are pretty difficult for human cognition. People have been led - or perhaps misled - to say, "Well, ultimately the mechanisms underlying the processes in the human brain must be digital." So they point to things like the discrete firing of neurons. This is a thesis that started with McCulloch and Pitts in the 1940s, and it developed into a philosophy in the 1960s, led by Hilary Putnam, who talked about what he called multiple realisability. The idea was that neurons are ultimately just realising logic functions, and if you just replicate those logic functions in some other machine - and we know how to make silicon that replicates logic functions - then you will replicate all of the logic processes in the brain. But the problem is that this ignores a lot of what is actually going on in the brain. One of the key things it ignores is the timing of the neuron firing, and biologists and neuroscientists know that timing is an extremely important part of how neurons work. That's completely ignored by this basic logic-function thesis.

So, there are several other assumptions that underlie this basic premise that our cognition must be a computational process. I've borrowed the term from Yuval Noah Harari's book - his wonderful book called 'Homo Deus' - and he coined the term dataism, for a faith. I actually argue in my book that this is really, ultimately a faith, that these processes are at their root, computational. It's a faith with, actually, rather weak evidence for it. 

Mason: We fall so easily into the trap of believing that humans are similar to machines because of the way in which we use metaphor to describe the human brain as a computer and whatever. Consciousness is software. We're used to having metaphors comparing the brain to a machine. We talk about our thinking as cogs turning, and now when I'm thinking, I'm processing what you're saying. We've taken that metaphor and we've almost run with it, and that seems to be what's at the core of dataism. A misunderstanding that metaphor is what's actually there, what's actually occurring, what's actually happening in the human brain. It just fits the best technological description we have of the day. Is that what dataism is, or is there more nuance there?

Lee: There is a little more nuance. One of the important points is that I actually take the stand in my book: let's assume that the brain actually is a machine. That's not the core of the dataist thesis. The core of the dataist thesis is that it's a computational machine. That's a special kind of machine. It's a machine that operates in discrete steps, on digital data. That's what the universal Turing machine is all about. No one has invented a universal machine; they've only invented a universal Turing machine, which is a special kind of algorithmic machine.

Even if we accept the hypothesis that the human brain actually is a machine, that doesn't lead you to the conclusion that it can be replicated by a computer, because the computers are all Turing machines - that's what they are.

There's a second angle from which I attack this hypothesis - this dataist hypothesis - which borrows from a thesis that has become quite popular in psychology in the last 10 to 20 years, which is embodied cognition. This was, I think, very nicely advocated by Esther Thelen, the leading psychologist who developed this thesis. She argued that the cognitive mind isn't something that resides inside the brain, getting sensory data from the environment and then doing actions in the environment. She argued that the cognitive mind actually is the interaction between the brain and its environment. That it's that interaction that makes the cognition - not what's going on in your brain.

That is actually very interesting. If you look at that from a technological perspective, one of the things that we're seeing is that robotics is much more difficult than software. Software that just operates on data is progressing very rapidly; robotics has been progressing much more slowly. We make robots that can fold towels very slowly, and they cost hundreds of thousands of dollars. It's extremely difficult to get these digital algorithmic machines to meaningfully interact with their environment. There's been this huge optimism about self-driving cars, but they seem to have actually stalled right now. We're not seeing them getting deployed as rapidly as many people were predicting, and the technology has proved to be much more difficult, because these machines have quite a bit of difficulty interacting with their environment. That interaction with their environment - with the messy, analogue, physical world - makes those machines much less computational. They're much less about algorithms and much more about an interaction with physical dynamics. 

This idea of embodied cognition also suggests that in order to get human level AI, we're going to have to have these machines interacting with their physical environment much more than they currently are. That interaction is going to be much more difficult to design, and it's going to make the machines less digital and less algorithmic.

Mason: So it could turn out that we're not a computer, or our brain isn't a computer as we understand it now, but it could be a quantum device that we don't fully understand. It could be a number of things. I want to look a little closely at that idea of how we understand human beings as machines, because this misunderstanding is also why you've made such a compelling argument against the idea and the possibility of mind uploading. The ability to take the brain and import it to another, perhaps silicon substrate.  In your previous book, you argue that mind uploading is probably unlikely, simply because if the mind was information, then surely that information could be inherited - in the same way that you inherit from your mother and your father their genetics - and that is the basis under which your body is formed. But when it comes to your mind, you don't inherit your mother and your father's memories. Therefore, memory might not be something reducible to information.

I just wonder if you could tell us a little bit more about why mind uploading is such a problematic idea within this context.

Lee: When people talk about information, because we're surrounded by a relatively new technology - information technology - that's rooted in computers, many people assume that what we mean by information is digital information. That every piece of information can be represented by a sequence of binary digits. But if you look at the root of what the word 'information' really means, and you look at the information theories that have been developed about how to understand what information really is, it's not restricted to being digital. In fact, digital information is a tiny, tiny subset of the information that is potentially out there in the world.

The question of whether you can upload your brain: It turns out - there's a really wonderful mathematical result developed by Claude Shannon when he was at Bell Labs in the 1950s. He showed that if you have a communication channel that can convey information from one place to another, if the communication channel is imperfect in any way - which every communication channel is; imperfect - then the channel cannot carry more than a finite number of bits of information. In order to upload our brain, or our mind, we have to assume that our mind is representable by a finite number of bits of information. There's actually no valid reason to assume that. In fact, I argued in my previous book that that assumption can never actually be a scientific thesis, because it's untestable by experiment. You cannot construct an experiment that would ever falsify that hypothesis. To prove that statement would require some math.

In this new book, I avoid all that, and I just say: Consider the information contained in my mind - the information that represents my mind, that is my mind. Let's assume it is information. I'm willing to assume that; in fact, I believe it is information. Let's cast some doubt on the hypothesis that it's digital information. That hypothesis is actually untestable, which means that if someone came and offered you a product to upload your mind to a computer and you decided to try it, it would be impossible for anyone - outside of you, perhaps - to know whether it worked. It cannot be done. No one will ever know whether it worked.

Mason: What you're saying in many ways feels like again, this misunderstanding of metaphor. DNA can be reducible to data, but that doesn't mean that GATTACA, the representation of DNA in information will one day just bounce up and emerge as biology. It's that issue of where this representation becomes matter - is that right?

Lee: The human DNA molecule has about two gigabytes of data in it, which is about 1,000 times less than the laptop that I'm using to talk to you now. It's actually not a lot of information. I refer to it as the DNA fallacy, where people naively assume that DNA encodes humans, and therefore, I, as an entity, am reducible to two gigabytes of data. But there's a problem with that - I mean a lot of problems with it - but one of them is that I, as a biological entity, am part of a process that started roughly four billion years ago and has been completely uninterrupted for those four billion years. There's a whole sequence of chemical, biological processes that are four billion years old, that I'm part of - with no gaps in that process. If there were any gaps in that process, I wouldn't be here. How much information was conveyed along that process, compared to the information in the DNA? My argument is that that process is actually capable of carrying vastly more information than two gigabytes of data. 

Biologists are now starting to think that using CRISPR technology, for example, that they'll be able to create the woolly mammoth. But how are they going to do it? They're not going to take DNA of a woolly mammoth that they find, and feed that into a machine that creates a woolly mammoth. No, what they're going to do is splice that DNA into a germline cell of an elephant, and then they're going to implant that in the womb of a mother elephant. Then the mother elephant and the womb, and the cell into which they put the DNA - all those carry information. That information is potentially vastly more than the two gigabytes of information in the DNA itself. So there's this misconception that since DNA is digital, humans must be ultimately digital. It's just an incorrect conclusion. 

Mason: We've focused a lot on how humans might be like machines, but really at the core of the book is this idea that software artefacts could be considered living, and in many ways, machines might actually resemble living creatures. Could you tell me more about these things called living digital beings, or LDBs?

Lee: When I was working on drafts of this book, my working title was 'Living Digital Beings', and throughout the book I referred to them as LDBs, and the publisher didn't like that word at all. They thought it was a silly word, and that it would undermine a serious message, to use a silly word for this. But it's a metaphor that's trying to get us to think about our relationship with the machines in a different way. Instead of thinking of them as our tools over which we ultimately have complete control, think of them more as evolving beings in our ecosystem. They're things that we have relationships with, that affect us just as much as we affect them. We're not just using them, they're using us. They're not using us in the sense of having agency or deliberate decision making or anything like that - not yet. I do look in my book at what it might take for them to get there. But they're using us in ways that can be thought of as quite analogous to our gut biome.

It turns out - I learned in the course of researching for this book - that your gut biome will actually synthesise proteins that release hormones that will make you crave certain foods that the gut biome likes. They control your brain to make you crave certain things so that they can be healthier. Of course, they're not doing this in a deliberate way; they're doing this as a result of Darwinian evolution, because it's a beneficial thing for them and not too terribly harmful for you.

Digital machines that we work with also mess with our brains and create cravings. Look at Twitter addiction. Think about how Twitter is controlling the brain of Donald Trump, right now. It's very clear that there have been tremendous floods of hormones in his brain, making him extremely angry, and yet he's yelling at people in the White House to issue executive orders to constrain these companies. Twitter is resulting in the releasing of these hormones in his brain, and affecting his behaviour as a consequence. His behaviour - because he's powerful - is going to affect Twitter, and the whole system around social media.

So, there's this feedback loop, right? If you think about this kind of relationship between the humans and the machines in this more analogous way as if we're in an ecosystem - we're participants in an ecosystem, rather than them just being passive tools under our control - that's the metaphor that I'm after here.

Mason: So in many ways it's not the fault of technology platforms or the machines that we might be addicted to. It actually might be the fault of the humans, and it really is a feedback loop to change our priorities to then change the biosis of the software. We're so quick to blame the addiction caused by social media, but in actual fact, it's only addictive because we're giving it the feedback that it wants to see, and then optimises to be addictive. Is that the right understanding of what you're saying there? 

Lee: That's exactly right, Luke. The key thing is, I think, this misunderstanding that we have of the relationship with technology leads to ineffective regulation. We want to regulate on the basis of the assumption that if addiction is the result of technology, that that was the deliberate decision of some software engineers or of some Silicon Valley executives to make that addiction happen.

The argument that I make in my book is: It actually came about in a rather different way. The technology that we use is the result of a selection process. We think: Okay, well Facebook was the result of a brilliant mind who created this thing. But actually, there were thousands of other pieces of software that were really doing very similar things, most of which died and went extinct in this competitive ecosystem. One of them survived through a selection process. The ones that survive are the ones that propagate most effectively, and getting humans addicted to them is what results in that propagation. So it's a completely Darwinian process; it's natural selection. The creatures that thrive in an ecosystem are the ones that have the procreative prowess; that are able to spread themselves. Getting humans to be addicted to them is a fantastically powerful way to spread yourself. 

If we want to find ways to effectively steer the process towards favourable outcomes for humans, we need to understand that it's not just about getting the engineers to design things ethically. That's not going to result in the outcomes that we want, unless this digital creationism hypothesis is actually true. I believe it is not true, and therefore what we need to do is understand this as a dynamic ecosystem with a lot of feedback loops. In this ecosystem, humans get addicted to technology, which then causes that technology to propagate, which then makes it more addictive as it gets developed, because there's this feedback loop. Whenever you have a feedback loop, you can intervene at any point in that loop. You don't have to intervene only at the technology development stage; there are other places to intervene. For example, you could educate the users - have them understand more about how technology is playing a role in our society. If, in our schools, we had serious courses that looked at the cultural context of technology, we might have our kids growing up with a more sophisticated understanding of, and a more sophisticated way of relating with, the technology. That's an intervention point that I don't think we've even tried.

Mason: So in many ways, the idea that we're being controlled by these platforms is really just a byproduct of a belief in digital creationism - the idea that the thing that will fix this is top-down regulation, because the technology must have been designed top-down by a human being or an engineer. In actual fact, you're suggesting it is much more nuanced.

Lee: It is much more nuanced. It's a much more Darwinian process, where each piece of software that shows up in this ecosystem is a mutation of some previous piece of software, and that mutation was certainly affected by software engineers and by executives in the companies that pay for the software engineers. It was affected by them, but it wasn't really created from scratch by them. It's a mutation of a previous thing. Most of those mutations die out. The ones that don't die out are the ones that thrive in the ecosystem. That process is really what's driving the development of the technology much more than the deliberate decision making of individuals. 

Mason: Viewing digital technology as a new life form - it feels like a very controversial idea. It brings into question this idea of what you mean by life, and how something can be alive, or A-live; artificially alive - in the case of artificial life - or exhibit a form of liveness. We recognise the traits within a piece of software that triggers the part of our brain that makes us think that it has, or exhibits, some form of vitality. Could you help explain a little bit more by what you mean by life, when you talk about this idea of living digital beings?

Lee: Yeah, that's a wonderful question. I should point out that this idea of thinking of technology as living, again, is not an idea that I originated. I actually first heard it from Kevin Kelly, who was the founding executive editor of Wired magazine. He wrote a book - a wonderful book - called 'What Technology Wants'. He has a wonderful TED talk on this topic of thinking of technology as a living thing. He coined the term technium for what he called the Seventh Kingdom of Life, and described it as a new life form on our planet.

I started looking at his argument in some depth, and there are some problems with it, because he includes in technology even things that, to me, are very inanimate. He talks about a cornet, for example - which is a musical instrument - as if it were a living thing. To me a living thing is a process, not a thing. It can't be a static object; it's got to be a process. It's the process that's the living thing. Digital technology and software is a much better match to that metaphor, because an executing piece of software is a process. It's purely a process.

So, I started looking at: What aspects of living does that process have? If you pick a particular example, my favourite example is Wikipedia, which I used throughout my book. By the way, I should mention that I've made a public commitment to contribute all the royalties from this book to the Wikimedia foundation. They will get the profits - not me - because Wikipedia is my favourite living digital being, today. Wikipedia was born, I think, 19 years ago, 20 years ago - somewhere in that range. It was born on a single server and it started reacting to its environment, which is reacting to stimulus coming in over the internet, and it's been running as a continual process ever since then, for the last 19 or 20 years. The servers on which it originally ran no longer exist - it's running on a completely different set of servers today - just like you and I are running on a completely different set of cells that we had when we were born. The process, not the individual servers, is what is the living thing.

In my book, I look in some depth at what other aspects of living it has. Does it have the ability to reproduce? Wikipedia has arguably reproduced very prolifically. There are many, many wiki pages all around the world - thousands, millions probably - serving lots of different functions, that are arguably progeny of Wikipedia. They've inherited traits in the form of these pieces of codeome, so they have inheritance as well. They even have processes that we think of as very biological, like homeostasis. Homeostasis is the ability to maintain stable internal conditions. Our bodies maintain a stable temperature. Well, the computer-controlled air conditioning systems in the Wikipedia server centres are maintaining a stable internal temperature, so they even have properties like that.

You don't want to push this analogy too far, but the fact is that it's a useful way, I think, to think about how we relate to technology, and that's really the emphasis of my book. That's what I'm trying to get us to do: to look at our relationship with technology through new eyes that give a more sophisticated understanding of what the processes actually are and how we can nudge them. We're not going to be able to control them - that's one of my points. This isn't about controlling technology development. No one knows how to control an evolutionary process, but you can influence it if you have a better understanding of how it works. Then you're more likely to be able to influence it effectively.

Mason: The great thing about what you've just said there is it reorients how we think about technology. In other words, technology and the idea of AI doesn't become scary anymore, because if you're arguing that it coevolves with us, as human beings, using Darwinian forces, what it's doing is driving digital technology to be complementary, rather than competitive. It'll find its best option is not to kill us and make us obsolete, but in actual fact to keep us around and to work with us, so that it's reliant on us - in the same way that we have found out that we are so reliant on the internet and all of the processes that the internet enables, whether it's communication or banking, or a multitude of products and services that our lives now run on. That's challenging. That's a challenging way to think about technology. That really puts a spanner in the works for all of the individuals who are the AI doomsayers, who go, "No, no, no, no. This thing is going to realise it doesn't need us around." How do you think they're approaching this idea, that it's not going to evolve past us - but continuously and forever evolve with us?

Lee: The current pandemic that we're in, I think, can offer some lessons here. One of the things that made some of the previous outbreaks so much worse, in some ways, is that those viruses were much more lethal. They kill the host. They kill the host very quickly, with high confidence. The mortality rate of the coronavirus is not quite so high, not nearly as high as some of these others, and that has actually helped it spread. 

That's a natural part of a Darwinian evolutionary process. If you have a relationship between two living processes, and one of them is extremely destructive to the other, if it's also dependent on the other, it's likely that they're both going to die out - or at least one of them is going to die out. I think that right now, the machines are very dependent on humans. They're not going to progress very rapidly if the humans simply stop working on them. The humans are absolutely a big part of their procreative processes. 

The machines are currently very dependent on us, and as that kind of relationship evolves, there is a tendency for mutations that would lead to pathologies to get suppressed. We see this, for example, in computer viruses like the WannaCry ransomware virus. The way that humans reacted to that was to inoculate the machines with, essentially, antibodies that would suppress this mutation of this piece of software. That's a natural thing that's going to happen when a pathological phenomenon emerges from this evolutionary process. The pathological phenomena are going to appear, but we're going to fight them, and that feedback loop makes it likely that it's the symbiosis that gets strengthened rather than the competition.

That's largely what makes me relatively much more optimistic than many of these doomsayer books that say, "Well, we're just going to be completely sidelined because the technology is going to realise it no longer needs humans." In the partnership between humans and machines, it's actually the humans that are the scarier part - not the machines. Through our deliberate decisions in choosing to develop certain kinds of technology that are by intent destructive to humans - that's where the really scary outcomes from the technology will come. Not from the AIs just learning to programme themselves and then realising they don't need humans anymore. I don't think that's the kind of mechanism that we're going to see leading to the really destructive effects.

Mason: We've also designed technologies that enhance what it means to be human. You look at some of these in the book in the form of the intellectual prosthesis and the cognitive prosthesis that we've created. In what way has technology become an extension of our minds and changed the way that we remember, and that we communicate? Are these neural prostheses, these intellectual prostheses - are they making us smarter or are they making us dumber?

Lee: I actually think that we're at least collectively getting smarter, if not individually. I personally...I could not have written a book like this without Google and Wikipedia, and a number of other technological tools that I used to build this argument and understand the nuances. The reality is that a search engine is able to make links between pieces of information in a far more powerful way than any human brain can. It affects our thinking and it affects the meaning of the information. 

When two pieces of information come up early in a Google search, it can change what those pieces of information mean to the humans; it can develop in that way. I quote in my book a conversation that a historian of science had with Richard Feynman, the physicist. The historian had found handwritten notes that Feynman had used when he was developing his quantum electrodynamics theory. The historian described these notes as a record of Feynman's thinking, and Feynman said, "No, those aren't a record of my thinking. Those are my thinking." The historian said, "No, the thinking was in your brain, and this is just a recording on paper." and Feynman said, "No, that's not actually the way it works. The thinking was happening on the paper and in my brain, together. The paper and the pencil is an intellectual prosthesis that enables a way of thinking that cannot be done without the paper and pencil." That's what I mean by an intellectual prosthesis. The way that we use technology is way more powerful today, than just pencil and paper. It is affecting our way of thinking and affecting what we can accomplish with our thinking - very strongly.

Mason: The way we've coded the world has an effect on the development of the brain and the evolution of the brain itself. What you're referring to there - the idea that the brain can live outside the body - is what Merlin Donald used to call external symbolic storage. The idea that we can port memories into external symbolic sources that we can then revisit. That must be having a massive effect on the way in which our brain develops. Surely there's an impact on this technology, on our own biological evolution?

Lee: Yeah, there's a lot of wonderful work going on these days in understanding, for example, how our ability today to record and organise and sort vast numbers of digital photographs is affecting our memory. What it actually means to remember events has changed over time, because of the technology; it does affect our brains.

Mason: And it's affecting them from a biological standpoint, as well. You talk about, in the book, how our brains are getting smaller.

Lee: Yeah. The human brain is about 10% smaller than it was 10,000 years ago. How could that possibly be a favourable evolutionary outcome? One of the arguments is that: Well, it can be, because over that 10,000 years, we've become increasingly reliant on external prostheses to augment our brain capabilities; to deal with aspects of our lives that our brains are not very good at. So for example, working with numbers, or having reliable records in order to be able to make transactions.

I talk in my book about the discovery of the Sumerian tablets, which are from about...more than 4000 years ago. When these were first discovered, they had to be deciphered - because no one knew the writing system. It was profoundly disappointing when they found that most of what was written on these tablets was really quite boring. It was mostly bureaucratic record keeping. So the tablets were really functioning as cognitive prostheses that enabled a society to develop in a certain way, that would not have been possible without this kind of writing system. It's compensating for deficiencies - for our inability, in our heads, to do certain things. To work with numbers reliably, to work with records reliably - we're just not very good at that.

Mason: Now all of these ideas - they raise some challenges on how we understand and operate with machines. The first of those, I guess, is our ability to become cyborgs. Would you argue, Edward, that we're already cyborgs, because of the way we're already evolving with technology? Or, is there still yet a point at which we might find technology integrates in a more embodied way?

Lee: Well I think that the really remarkable thing that is happening right now is that ever since at least the invention of writing, technology has become a part of our cognitive processes - but this has really accelerated with digital technology and the mechanisms that we have today. I think it's accelerated very dramatically. The acceleration itself is evidence of the effect that this is having on our cognitive minds. The fact that we can actually put together unbelievably complex technologies that were completely unimaginable 20 years ago is, in large part, because our brains are getting better able to do these kinds of things. To deal with this complexity by using these cognitive prostheses in order to do them. It's having a very big effect on us.

Whenever you get these rapid bursts of evolution...Right now, okay, with the coronavirus pandemic, we're seeing a burst of evolution in our relationship with technology. I used to have a pretty embarrassingly naive understanding of evolution; I thought it was a slow, gradual process. Biologists actually know that no, it's more like a punctuated equilibrium. You get huge disruptions in an ecosystem and a lot of stuff changes, and mutations that survive that huge disruption tend to look quite different from what was before the disruption.

We're seeing exactly that right now, with this coronavirus pandemic. We're becoming digital humans. The fact that you and I are not sitting in that wonderful space in London in front of a live audience is a result of the pandemic, and the fact I've been learning to turn sloppy Zoom recording talks into something more polished by doing some editing - none of that I was doing two months ago. Everyone around us - we're interacting with all of our friends through digitally mediated technology. It's having a huge impact on our relationship with technology. This is a punctuation point in a punctuated equilibrium. We're going to see that our relationship with technology, when we emerge from this, is going to be quite different from what it was before. 

The technology is going to be different, as well. We're going to see a very rapid set of changes in what technology we use and how we use it.

Mason: What's so refreshing about reading your book is that it's very different from the sorts of writing about AI and robotics that we've seen. They always seem to end up in the conclusion that eventually, we will have human-like machines. Machines in the image and likeness of humans. Really, what you're arguing is: No. It is always going to be this coevolutionary process. When you start sharing ideas like, "Machines might be life, or they have similarity to life.", it does provoke this idea of: What could happen if machines eventually really did become living? How would we deal with the idea that machines were alive? How would we recognise those machines are being alive? What would they need to develop for us to be able to understand them as 'conscious' or 'living'?

In many ways, it feels like accountability and agency will be the two things that we will need to identify. How will we go about identifying the possibility of independent, autonomous life?

Lee: Let me first say that, emphatically, being conscious and being alive are not the same thing. Most of the living things around us, we would not ascribe any agency to, or we don't hold them responsible for their actions - and yet they're alive. The plants in our garden, or the microbes in our gut - we don't think of them as having any consciousness.

It turns out that consciousness is not a binary thing. It's not something you either have or don't have. Douglas Hofstadter writes very nicely about this in his book, 'I Am a Strange Loop'. It's more of a gradation. People have done studies on worms that have relatively simple nervous systems - just a couple of hundred neurons, for example. It turns out that these worms have an ability to distinguish self from non-self in a certain sort of way. If their senses detect motion under their body, they can tell the difference between motion that was caused by themselves moving and motion that was caused by some external event. Being able to tell that difference is important in the development of a sense of self. The sense of self that humans have is much more sophisticated than that, but it has that essential element. When my peripheral vision detects a hand waving in my face, my brain doesn't react in alarm, because it knows that it made my hand do that. That's distinguishing self from non-self, and it's an intrinsic part of our biology. 

It's something that, actually, a lot of the software out there already has. It's got at least those kinds of low-level mechanisms. So, I look in my book at what it would take for these low-level mechanisms to develop into things that ultimately do involve agency - what we would call agency, and what we would call consciousness. The conclusion I come to in the book is rather nuanced. It's not a simple story, and that's probably the part of the book that's the most difficult to read. The essential argument is that if machines do ever develop a first-person self - a sense of self that we can ascribe agency to - we will actually never be sure that we've accomplished that. We'll never be able to tell whether that's true of those machines. 

That's, in many ways, a disappointing conclusion for many people. Fundamentally, not being able to know something is never a very satisfying conclusion, but the argument I make for it is, I think, extremely compelling, and hard to refute.

Mason: Surely that's the case with life we recognise in nature - the idea that a plant has a sense of self. It could be argued that if you watch a plant over a certain period of time and speed that up, you see it moving in relation to the light and closing itself up in relation to the environment. Fundamentally, it has some sort of relationship with the environment. It has a feedback loop that it goes through, and all of these things together seem really important in understanding whether something has agency or not. Why is it so important for us to assign agency to non-human objects?

Lee: The reason why we would want to be able to assign agency to non-human objects is that we're starting to see technologies getting deployed that have quite a bit of autonomy. That operate largely independently of human operators, and in fact they can develop in such a way that they become quite disconnected from the humans who developed them. Consequently, they could have effects, where we're going to have an extremely hard time finding anyone to blame for those effects.

People talk about self-driving cars, and the damage that they can do when an accident occurs. I think that's one of many things that could happen with technologies that have a certain amount of autonomy. I think that we're very quickly going to reach a point where you're simply not going to be able to find a human being on whom you can pin the blame for something that went wrong. In that case, who do we hold responsible? That question, I think, becomes very, very nuanced. 

One of the things that I dive into in my book is that in order to hold an agent responsible for something, you have to assume that that agent is able to reason about causation. That agent is able to have said, "Well, if I do this, I'm going to cause this. If I do something different, I'm going to cause something different." Well, it turns out that reasoning about causation is something that actually can't occur in an objective way. It can only be subjective. You have to have a first person self, in order to be able to reason about causation. The fact that you can't ever know whether the machines that we build will have a first person self means we can't ever know whether they will be able to reason about causation, which means we can't ever know whether, for sure, we should be assigning responsibility for actions. In some ways, it's a very unsatisfying conclusion - but it means that as a culture, we're going to have to find a way to manage these more autonomous technologies, and figure out how they're going to operate within our cultural, societal, structural, legal systems, for example.

Mason: If those things are so hard to identify, how then do we deal with the issue of things like machine rights?

Lee: I think that those are things that are, ultimately, going to become cultural decisions; that will be part of the systems of justice that we create, and so forth. I think we're a long way off from ever wanting to give rights to machines, and we may never get there because we may always take a speciesist approach, which is that the only creatures that deserve rights are humans, and that's because it's the humans who are in control of those rights. But even humans are not like that, right? We do give certain rights to animals, for example. I think those are things that are really part of a cultural evolution over the very long run.

Mason: I mean, you try and deal with some of these questions, oddly enough through the example of AI generated art. We've had Arthur Miller on the podcast who has spoken about machine created creativity. The question always comes up of agency. When it comes to AI generated art, who is the artist? Is it the non-human agent who created the artwork, or was it the human who set up the parameters of the software to allow it to generate this final form, or picture, or painting in some cases? You go one step further and say that in actual fact, it's not just a challenge of whether it's the human artist or the non-human artist. In actual fact, it might be a multitude of non-human and human entities that could be the originator of that art. I just wonder if you could explain that example a little bit further.

Lee: Yeah, so I talk about this famous portrait that was created by three French artists who call themselves 'Obvious'. They have an AI-generated portrait that they sold at Christie's for some 430,000 dollars or so. They put it forth as the first AI-generated painting. There are a couple of questions. One is who to assign as the artist. It's also an important question: what is the artwork? To me, the artwork there was actually much more a piece of conceptual art. The idea of a first AI-created painting was the artwork.

I think it was a brilliant artwork, because it created this enormous fury and discussion and controversy: well, these guys just downloaded some software written by a teenager, largely used it unchanged, and created the painting. Shouldn't the teenager have been the real creator? Well, the teenager was using a technique created by Ian Goodfellow called a GAN - a generative adversarial network - so shouldn't Ian Goodfellow get some of the credit for this?

We have a tendency as humans to really want to oversimplify any creative work and say, "Well, it had one creator." I think this is what we do, for example, with software - with this digital creationism hypothesis. We want to single out the one creator of this artefact. "It was Zuckerberg who created Facebook." We have a very strong tendency to want to do that as humans, and it's a mistake. No piece of creative work was created by an individual. Every piece of creative work evolves in a context where the context hugely influences the outcome.

If you think of the portrait that sold for 430,000 dollars as a piece of conceptual art, the concept is that these three guys were the first to put such a portrait up for auction at Christie's. If that was their creative work, it was a really small delta on everything that was already around, but it was a very clever delta. Perhaps they did deserve to get some 400,000 dollars - well, they got less than that, because there were big commissions and stuff. 

This feeds into the overall theme in my book about a co-evolutionary process. We've got to stop trying to pin every development on a single creator, because the story is much more complicated than that.

Mason: And because that story is much more complicated, we have to reapproach how we look at technology. One of the ways we can do that is through something called digital humanism - taking a more human-centric, or perhaps life-centric, approach to technology. Given our new understanding of how we relate with technology, how should that change the way in which we study and approach technology through something like digital humanism?

Lee: Yeah, I really like this term: digital humanism, which I credit to Hannes Werthner, who was - at the time when he coined this term - the dean of computer science at The Technical University of Vienna. Hannes organised a series of workshops on this topic. He's a computer scientist like me, but his goal was to get a much more sophisticated dialogue happening among computer scientists, and between computer scientists and sociologists and psychologists and scientists in other fields. I proposed to him that this was a little bit analogous to the Vienna Circle, and the effect it had on the development of the philosophy of science in the early 20th century.

What we need is a new philosophy of technology that is much more integrated with our understanding of culture and human processes, and human systems like economics and politics. Those are things that are well beyond the skillset of people in any one of the disciplines that they touch on. You do need people who have a sophisticated understanding of the technology involved, because otherwise you get very oversimplified versions of the technology. But you also need people with very sophisticated understandings of culture and how human culture develops, and of economics, of psychology, and of biology. All of these things need to be part of the story. In much the same way, the Vienna Circle in the early 20th century brought together philosophers and scientists and social scientists to get a more sophisticated approach to science. At that time, the crisis it was dealing with was the enormous power that science was acquiring - its ability to create atomic bombs, for example. They hadn't happened yet, at the time of the Vienna Circle, but they were coming, and people were understanding this enormous power that was requiring scientists to grow up, in a sense, and start engaging with the broader world around them.

Digital humanism is saying that technologists today need to grow up, and start engaging in a much more sophisticated way with the world around us. We need to be elevating the level of our dialogue and our discourse about how technology develops and how we can affect it, and how it's affecting us.

Mason: And let's make sure that AI also has a place at that table when it comes to discussing digital humanism. Edward A. Lee, thank you for your time.

Lee: My pleasure. Thank you, Luke. I always enjoy talking with you.

Mason: Thank you to Edward, for sharing his insights into the coevolution of humans and machines.

You can find out more by purchasing his new book, 'The Coevolution: The Intertwined Futures of Humans and Machines' - available from MIT Press, now.

If you like what you've heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.

More episodes, transcripts and show notes can be found at FUTURES Podcast dot net.

 Thank you for listening to the FUTURES Podcast.

