New Laws of Robotics w/ Frank Pasquale

EPISODE #36


Apple Podcasts | Spotify | Google Podcasts | Stitcher

Professor of Law Frank Pasquale shares his insights on how Artificial Intelligence (AI) can capitalise on human strengths and take advantage of human limits; what automation means for healthcare, education and warfare; and how the robotics regulations we implement today might have a dramatic effect on the future of work.

Frank Pasquale is Professor of Law at Brooklyn Law School and author of The Black Box Society: The Secret Algorithms That Control Money and Information. His work has appeared in the Atlantic, New York Times, Los Angeles Times, Guardian, and other outlets.


Transcript 

Luke Robert Mason: You're listening to the FUTURES Podcast with me, Luke Robert Mason. 

On this episode, I speak to Professor of Law, Frank Pasquale.

"The stories we tell about the economy, and society and politics are so critical. They really help decide and they really help channel us in certain directions, and away from others" - Frank Pasquale, excerpt from interview. 

Frank shared his insights into how artificial intelligence can capitalise on human strengths and take advantage of human limits, what automation means for healthcare, education and warfare, and how the robotics regulations we implement today might have a dramatic effect on the future of work.

Luke Robert Mason: In his 1942 short story 'Runaround', science fiction author Isaac Asimov outlined three laws for machines. These laws, preoccupied with ensuring robots don't cause death or injury to human beings, still feature heavily in the current discourse around artificial intelligence. Of course, the way in which intelligent agents have expressed themselves in modern life is vastly different from the predictions made by Asimov. As such, these principles are long overdue for an update.

Thankfully, in his recent book, 'New Laws of Robotics', Frank Pasquale provides four alternative recommendations for how we should govern the use of robotics within healthcare, warfare, media and society at large. Rather than attempting to mitigate any possible homicidal tendencies expressed by future machines, these new laws explore the working relationship between humans and robots - one that Frank hopes will see AI supplementing rather than replacing human expertise, and in doing so, helping realise important human values.

So Frank, why did you feel it was important to update Asimov's laws of robotics? What did you feel were the limitations of these laws?

Frank Pasquale: I think it was important for a few reasons. One is that I think that Asimov's laws were really focused on trying to ensure that robots don't hurt us. I think that if they could be implemented well, they could do a good job in taking many steps in that direction. But I wanted to go a bit further with this book and talk about sustainable, durable human control over technology. I think that takes a little bit more than a non-harm principle. It takes us into some principles that involve political economy, sociology, law - just thinking about it from a broader social scientific perspective on where tech is going.

Mason: Could you explain what some of your new laws are, and what specifically they focus on?

Pasquale: Sure. So I began in 'New Laws of Robotics' with looking at debates over automation and the future of work and the professions. Out of a lot of studies and work looking at the development of AI and robotics technology in different fields, my first new law says that robotics and AI should complement professionals rather than replacing them. This, I knew, would be controversial on two levels. One is that lots of people want to see the professions automated. They feel that we need to come closer to the Star Trek tricorder vision of doctors and diagnoses, or create robots and machines that could help proliferate expertise there. 

One of the arguments that I try to make is that in the profession of medicine and many other professions, you need to have an ongoing dialogue and cooperation between domain experts in the field that's being automated or informed by AI, and the technologists. Part of what I see as the future of work in so many fields is making sure that people who are closer to the patient in the case of doctors, and the client in the case of many other professions, can help intermediate between them and the technology. First, to help figure out whether the technology is working or not. That's been a big issue, for example, with respect to diagnostic apps. Are they accurate, and are they doing well? Also, to guide them through the incredible array of different devices and things being marketed as AI, so that they aren't taken in by snake oil. I think that's an ongoing problem with technology. We saw it with pharmaceuticals in the early 20th century and we risk something similar happening if we have truly rapid automation in many of these fields. 

Another law of robotics that I think is really complementary with the first law about complementarity between AI and professions or promoting intelligence augmentation in professions involves the principle that AI and robotics should not counterfeit humanity. What I mean by that is - I think it's a rather difficult principle to apply sometimes - but I think the core idea here is that we don't want to have massive investment in tools that are meant to deceive users into thinking that the tool itself has had an emotional response, a human response, or something along those lines, when it is in fact a simulation of those things. 

Particularly if we look at open text-generating programmes like GPT-3 and others - these are large language models that can generate large amounts of text - it's really important in the future to label that the text came from such models, rather than allowing the models to, say, use a face that was generated by a generative adversarial network and say, "This is just a person" - pretending that it's a person out there, speaking or putting out text.

A third new law of robotics for me is a principle that we need to stop unproductive arms races; AI and robotics should not be contributing to those. The classic example of that is the international campaign to stop killer robots. I think that's a great campaign and I describe it at length in the book, but I also try to complement it with a larger political-economic perspective. I also think that there are many areas where AI and robotics are being deployed where it's really a zero-sum arms race. We're not adding to the productivity of society - we're just helping one group cut the queue or get a positional advantage relative to another. Unfortunately, speaking as someone who is a lawyer, I see a lot of this in law. I think a lot of this is happening in law and finance, and other areas where machines are increasingly judging human beings.

Finally, a fourth new law of robotics is on attribution and requires that any robotic system be attributable to a person or group of persons. This has both a very pedestrian and a very ambitious facet. The pedestrian facet is just that if we have drones flying about like we have cars on the road - we may have drones everywhere in the next 10 or 20 years - then for any particular drone, you could just point your smartphone at it and know who owns it, or at least be able to tie it to a registry where, upon proving certain information, you can get that information about who owns it. 

It also has a much more ambitious view, which is to say that if you have this attribution requirement, essentially you're really cutting down on the possibility of the forms of autonomy of AI and robotics that are most concerning to the existential risk theorists.

Mason: These laws, they promote complementarity, authenticity, cooperation and attribution, as you've so wonderfully described there. More importantly, what they do together is that they capitalise on human strengths. As you say in the book, they take advantage of human limits. Could you explain some of the ways in which they do that?

Pasquale: Thanks so much. That is, I think, a really deep message of the book and I'm really glad you're surfacing it here in our conversation. I think that these advantages are really critical in terms of rigorous data practices and empathy. For example, with respect to trying to figure out whether robotics and AI systems are working, I think a naive point of view on it might be that we can develop outcomes that we want - key performance indicators - that would tell us whether these systems are working or not. In fact, a lot of times it's hard to figure out exactly how well, say, a surgical intervention has worked compared to a pharmaceutical or exercise intervention - say for someone with orthopaedic problems. 

In the educational context, there are ongoing controversies over how we measure the value of an education. Is it, for example, the degree premium - how much more someone makes after college than they would have made had they not gone to college? Is it their level of citizenship and awareness of civics? Is it having some basic understanding of science and math? These are all really difficult things to think about in terms of how well either an automated or non-automated process has gone - or an AI process or a human-centred one. The promise of human intervention here is people being able, in a more nuanced and qualitative manner, to figure out whether things have gone well or not. 

I've recently been teaching a course called 'Health Data Analysis and Advocacy' and I've been thinking about patient-reported outcome measures such as level of pain. It's so interesting, all the different ways in which you can ask a patient to report on pain - to describe it, to treat it, etcetera. Often, the quick technical fix is a bad one. Opioids can lead to addiction and other problems like that. You want to have, I think, an empathetic, sympathetic interlocutor who's actually experienced pain to try to gather that data and make sense of that data. That, I think, is a really good way of thinking about how there are human abilities there that are quite good to capitalise on. 

You also mentioned human weaknesses that are good to capitalise on. That reminds me of the discussion in the book about automated classrooms. I began the chapter on education talking about a programme in China by two companies that would take a picture of every student in the classroom, every second, and have a picture of their face. They'd analyse the face by asking: is the student attentive or not? One of the companies actually claimed to be able to deploy affective computing and to have more granular assessments of: is the student daydreaming; sad; engaged; not engaged; etcetera? I think the problem with that is that, thinking back to my time in grade school, no one can go to my third grade teacher and say, "Did he show signs of being a menace? Of being irresponsible? Of being unengaged?" If you unleash too much automation and surveillance in these settings, the risk is that you end up with this incredible supercharged permanent student record. In that way, I think that there is a real problem in terms of thinking about the ways in which this supercharged student record could lead to fear or unfairness - the same types of problems that Europe has been grappling with, with the right to be forgotten.

Mason: There's so many wonderful case studies in the book. Largely, what the book really focuses on is robotics in the context of the future of work. There's so many assumptions that we make when we think about the future of work. We're constantly told that robots will take our jobs, but at the same time we see massive amounts of evidence that robots don't do our jobs very well. I guess my question, Frank, is: how did the idea that robots will make us obsolete become such a dominant meme? Especially in managerial circles.

Pasquale: It's a great question. I actually had a section of the book in one of my first drafts of it that went into this in some detail, so I'm so glad you've asked about this. My personal sense is that I would trace this current panic about robotics and AI to the late 2000s or early 2010s. It was a time at which many journalists were watching their profession collapse from underneath them. I think that because of that, there were journalists who were seeing - at least in the US, and I think in many places - longstanding newspapers shutting down or going from having 400 reporters to 30 reporters. So much of the money was being directed toward large, largely automated, AI driven platforms of Google or Facebook that were much more efficient at matching audiences to advertisers. The idea there, I think, was that many journalists saw that and extrapolated from their own experience where one of their core competencies was apparently relatively easily automated - though there's a lot of controversy over that, even now - and projected that. That's how the story became so popular. 

It's particularly important to focus on these economic stories. As Becker, who is an economic sociologist, or Deirdre McCloskey, or Robert Shiller, who is one of the most recent people to dip into narrative economics, point out, the stories we tell about the economy, society and politics are so critical. They really help decide and they really help channel us in certain directions and away from others. That's where I would lay a lot of the blame. 

I can't totally blame journalists. There are a lot of very smart journalists writing in this area. There is just an ongoing pressure to cut labour costs in order to enhance the relative value of capital. That, of course, is pushing this trend as well. There are a lot of very sophisticated commentators like Leslie Willcocks who is an expert on both automation and outsourcing, who are predicting that the real problem won't be too little work for humans; it'll be infinite work. These technological systems produce so much information and data to sort through that needs some human judgment as we're sorting through it, that there could be infinite work thanks to them.

Mason: You do such a good job at breaking down some of those assumptions that we have about the future of work. The idea of robotic lawyers and robotic doctors, to some degree, is largely hype, isn't it? Ultimately, the way in which you're thinking about robots is that they make labour more valuable, not less valuable.

Pasquale: I think that's absolutely the case. The issue with making labour more valuable rather than less is that we do see that in many of the professions. For example - this is another example I don't get into in the book; I did in earlier drafts - but I think it's helpful for imagining this situation. If you think about robot anaesthesiologists, originally the people that were marketing this type of tool were saying, "Yeah, we're going to replace anaesthesiologists. There's a huge market here. People make a lot of money, at least in the US and in many places, and that's going to be the future of anaesthesiology." However, it just turned out that it was very difficult to get the relevant regulatory approvals - safety and efficacy. Instead, what you can have is anaesthesiologists monitoring several of these machines, which will increase the anaesthesiologist's productivity. Or you can add in the subroutines from these machines to try to make sure that we avoid errors. We have not just one failsafe for a bad anaesthesiology procedure, but two or three or four, thanks to AI and its ability to sense things potentially going wrong during the procedure. Then it really increases their productivity and increases their value; the value of that labour.

Part of the goal of the book is to try to figure out: how do we bring other walks of life - people that are in unions now, or areas that are not as rigorously professionalised as the folks in some of my case studies - into that situation where AI and robotics is increasing the value of their labour rather than being a menacing substitute for them?

Mason: Much of the fear around robots taking our jobs derives from the idea of the data double. The idea that machines can record and imitate what workers do, and then eventually the worker will be replaced by it. Why do you think the promise of a data double doesn't actually match the reality on the ground?

Pasquale: I think the digital double model was actually the inspiration for the whole book. What I found was I entered into some of these debates with respect to robot lawyers. I would point to an online programme that created wills and I would say to them, "I tried to use your programme but it turns out that..." I just came across this fortuitously, I'm not a trust expert or anything like that, but I said, "Fortuitously, it turns out that thanks to this one Supreme Court decision, there's a significant asset class that many people have. It seems as though your programme is suggesting that the will will decide who gets it. In fact, there's a separate form called Designation of Beneficiary that decides who gets it." When I would make that claim, people who were in legal tech would say, "Thank you. Now you've given us the better way to do version 2.1, or version 2.2." The idea that they were suggesting was that any critique that was offered would eventually be wrapped into the automation and the AI itself. 

Of course, there's two ways to respond to that. One would be that I could just surrender and say, "Let me join your company." The second is to say, "Wait, there's some things where it's not quite clear what the answer is." Then you have to guide someone through and say, "There's actually not a lot of clarity about this way of disposing of something in a will. There's a trust that could be quite complicated." That is where I think the challenge is. I think the key challenge is - and I tell all my students this - if you think your daily work routine could be totally routinised and predicted, watch out, because it will probably be routinised and predicted by somebody. But to the extent that it seems to require a lot of improvisation, judgment, consultation etcetera, then it seems to be something that's likely to endure. Those types of improvisations and consultations are becoming a lot more important to growing areas of work. 

Particularly in the US, the largest growing area of work now is in healthcare - particularly home health aides, physician assistants, physical therapy and other things like that - all of which are pretty high touch and pretty communicative. I think that's part of resisting the supposedly irresistible logic of the digital double path to robotic replacement for workers.

Mason: Some of that irresistible logic is because the inmates are running the asylum. It's the AI researchers who look at industries and go, "You know what? I think this would be more efficient if we did X, Y, and Z, and removed the human and datified this piece of work." The way in which you're looking at regulation is that, in fact, there needs to be some form of collaboration between the industry and the folks actually developing the AI. How do we develop that relationship?

Pasquale: Yes. That's a terrific way of framing the future of work debates. That's where we're heading, I think - trying to develop that cooperative relationship. Here, there's actually some work I'm doing now for a volume called the Oxford Handbook of Expertise, which is an interdisciplinary collection of people thinking about the concept of expertise. I think that one way in which things have gone wrong in some of the future of work literature has been through the idea of meta-expertise; that there are people, usually identified as experts in code, quantitative analysis and algorithmic approaches, who can themselves judge the value of what other experts do. I think the problem that comes up for the meta-experts, first of all, is an infinite regress problem. You ask: who are the meta-meta-experts that decide the value of the different meta-experts that are deciding the value of the experts? Secondly, there is the difficulty of evaluation that we've already discussed in some other contexts. 

The positive view, I think, comes out of the critique. The positive view is one of complementarity, working together with each other. To give some legal examples here, there are some studies recently that showed, or purported to show, that judges after lunch gave lighter sentences than judges before lunch. They were being more favourable to the people they heard after lunch than before lunch. That was seen as a bias. Similarly, judges in a certain state, after their football team lost, were harsher on juveniles than if their football team had won. Clearly, this type of thing is something that you don't want to see in a fair justice system. Those extraneous factors shouldn't be leading to disparate impacts on different groups. That type of statistical analysis is something that the traditional system is wise to bring in more of. To have people comment and say, "We're spotting some problems and some patterns that you might want to think about." I don't think the answer is to say, "What would be much better is to have natural language processing look at the overall trial court record and spit out a number for how long a person's sentence should be." Rather, it might be that, "We've learned this on a system-wide level and we'll give the judge a snack before lunch, or at least warn them that these patterns are happening." 

Of course, if the patterns continue then things become a little more complicated. That's one way of envisioning this sort of cooperation. There are other examples. In journalism, some people working with the National Institute of Computer Assisted Journalism are thinking very rigorously about that. Julia Angwin at The Markup has really modelled how you can bring data science into journalism in a way that adds a certain rigour. The old critique of a lot of journalism was that it was anecdotal: "You've interviewed five people" or whatever. You look at some of the pieces that come up on The Markup and it's like: oh wow - they've figured out a programme that has analysed 10,000 comments on an agency website. Those are some of the examples of this cooperation.

Mason: At the core of this, there is that concern of the role of expertise and how that functions in our society. Some may say, "Well, we don't need experts if we have AI." Especially those careerist futurists that you're rightly critical of in the book. They say, "Look, AI removes the need for professionals. With enough datasets and enough data, any human function can eventually be replaced by a robot." Why do you not share this view?

Pasquale: First, I should actually start with a concession. I can't predict in 50 to 70 years if classrooms will be mainly robots, or will mainly be an AI that Sal Khan initiated and it eventually becomes a larger and larger part of education. What I do know is that if we decide on that as a long term telos for education - that's where we want to be in the year 2100 or what have you - that's really going to involve a lot of surveillance and a lot of data analysis on individuals by some firms that are, at present, not very responsible and not very responsive to those that want a more nuanced approach. That's one way to respond to the long term futurist argument. 

One of the things I try to do in this book - I don't know how successful it was - but I wanted to both respond to the people in the short-term AI ethics and reformist community, and to the people in the much longer-term existential risk community. Right now, they don't talk to each other that much. The short-term pragmatists think that the long-termists are pie in the sky. The long-termists are like, "Yeah, yeah, yeah. We'll deal with that or have existing agencies deal with it, but we've got to think about our broader vision." I think that actually, the problems that are being identified by the short-termists could really get exacerbated massively if we go in the direction of the fully automated luxury communism school, or a more singularitarian approach, or what have you. It really is important to look at that now.

On the other hand, some of the short-termists are critiquing, for example, electronic monitoring instead of prisons. This is such a fascinating case study. Their critiques of electronic monitoring as opposed to prisons draw on a much larger philosophical and theoretical framework than they often let on. They draw on a vision of human flourishing and freedom that is the mainstay of people debating in the long-term camp, and that is less likely to get engaged with now, in the more pragmatic, reformist camp.

Mason: Some high tech advocates - often the cyber libertarians - often argue that AI should be given the freedom to think and access all the data it wants. What are some of the dangers in taking a position like that?

Pasquale: The cyber libertarian position is a really worrisome one, to me. I think that it has this logic of ever increasing data accumulation. To me, it becomes alienating. Alienating in the sense of meaninglessness and powerlessness. To give you a concrete example, there's been a lot of discussion about loans - either microlending or algorithmic lending. I just saw this great piece about a huge number of Chinese apps that are trying to push users into loans. Some people say the way in which you regulate that - or you shouldn't regulate it - is you allow people to make free contracts and gradually you'll just get more and more data about who's a good risk and who's a bad risk. The credit will be allocated optimally. 

My worry about that type of world is that - at least from the people that I've spoken to in this fintech microlending community - the way that they judge the value of the loan or the success of a particular lending event really comes down to repayment. Of course, in some of the worst American examples, it also comes down to repayment costing lots of fees and penalties, but we'll set that aside for a moment. They don't really get into how the repayment is made. If the repayment was made by getting another loan from a microlending app, that sounds like the person is increasingly desperate. If it was made by the person starving themselves or not taking a meal for five days or something, that seems pretty bad as well. 

I saw this really great study of microlending in Kenya and the popularisation of it was called something like 'Debt in the Silicon Savannah in Kenya'. They talked to lots of people that had a really difficult time. They were repaying the loans but were just having a really difficult time. On the one hand, finance is the area where I think AI and robotics has gone the furthest; probably finance and media or journalism in terms of really structuring or restructuring our worlds. Yet there are just so many ongoing problems there that really haven't been addressed by leading companies and leading firms, or regulators to be frank.

Mason: It really does feel like the small decisions that we make today are going to have a massive impact on how AI is going to be developed in the future. I wonder if AI is even the right word? Should we be talking about artificial intelligence, or should we really be talking about intelligence augmentation?

Pasquale: Absolutely. I think intelligence augmentation as a field description is a very good one. I think that it was the book 'Machines of Loving Grace' by John Markoff that looked into some of the early history of that term: intelligence augmentation. Doug Engelbart was really pushing that in the 60s, versus people that said, "No, the real goal is something much more ambitious. It's to create something in silico that's a replacement for, or assimilation of, or goes beyond what the human can do." 

I've just been working on a piece with Barbara Evans, who is a terrific engineering and law professor at Florida, on the role of the FDA in looking at software as a medical device: diagnostic software that makes a recommendation or a diagnosis to a doctor, or says there's a range of possibilities there. What's so interesting about this area is that the FDA, building on legislation, has developed a distinction between explainable and unexplainable software. In our work, we try to say that to the extent the software is unexplainable - it's just spitting out recommendations in a black box sort of style - it should be subject to more lawsuits than if it is a tool that is walking the doctor through and saying, "Here's what I think."

What's fascinating there is that the intelligence augmentation approach perhaps comes closer to what a real machine conversing with a human might look like. It's actually explaining why it thinks a given diagnosis is more likely than another. It's a great example of how legal regimes can play a role in helping push the development of technology in one direction or another, as well as stories and framing in the IA versus AI debate.

Mason: When we think about how to govern AI more generally, it's problematic because governance isn't necessarily exciting and isn't necessarily sexy, until we talk about warfare and guns, and autonomous weapons. Suddenly, we're quite happy to have the governance debate. There's some advantage to that, isn't there? The idea that the way in which we govern AI and autonomous weapons - those things that we do now in relation to those innovations - could have a serious impact on the way we organise social cooperation and deal with conflict more generally in society, couldn't they?

Pasquale: Absolutely. I found that the hardest part of the book to write. I felt that with the rest of the book, I had a sense of an overarching regulatory body that nation by nation could reflect national values or even states or provinces. We were working from within that framework. Whereas with the killer robot problem - the AI and cybersecurity debates that are emerging now, or are becoming more and more intense now - we really do need a global perspective. Trying to find some baseline of global values is really difficult. 

I was part of this group that included representatives from the Chinese government, Australian academics, some bureaucrats and some Americans. It was just fascinating how sometimes it felt like there was a real difficulty in trying to get to the core things we could agree on in something like facial recognition. Some of the people that were sceptical about facial recognition would talk about the negative things that could be done. Other groups - at least in that particular Chinese delegation - would say, "What about child kidnappings? There are so many child kidnappings that this could stop." It was hard to find that common framework of values. 

I think there is something very similar with respect to arms and enhanced soldiers, and all the different things that the war futurists talk about. It's hard. It becomes this situation where I hope that we can regain a level of global alarm and ability to unite against those that clearly violate norms there. Simultaneously, part of the battle is going to be like the nuclear non-proliferation battle, which is just to realise that there are going to be some states that have really advanced AI warfare capabilities, and to work out how you avoid proliferation of those and use of those.

Mason: What is interesting to me is that it's not always about warfare. It's sometimes about this new neologism, lawfare. What is lawfare and how does it operate when thinking about autonomous weapon systems?

Pasquale: The lawfare debate is a really good way into the problems here. Thinking about the use of, say, drones in territories that are occupied by the US military, these drones can potentially watch people 24/7. They know whenever someone has gone out of their house, and can drop bombs on them. The problem is that usually, the affordances of war - or the tolerance that the international community has had for the killing that goes on in war - depend on the idea that this is an existential life-or-death battle, that anyone on each side could be killed, and that therefore, by virtue of giving up their safety in entering into a wartime environment, they are potentially justified in killing others.

What's been noticed now with technological asymmetries or the extreme asymmetries of capability is that if you've got a drone fleet that is effectively operated by people in Nevada that is flying over parts of Pakistan or Afghanistan or Yemen, the people operating the fleet certainly have no skin in the game, so to speak. They can't be hurt by the people that they're patrolling. Then it starts looking less like a war, and more like a police action. Then there's just so many different rules that are supposed to limit what police can do. In the US, we've had massive protests over violations of exactly that, in terms of excessive force. There are at least a set of rules there. 

What's been happening in an international context is what the historian and lawyer, Sam Moyn says: that war has become longer lasting, more pervasive, and more humane. It's because of this lawfare where essentially you have international norms and international law restricting how far you can push your advantage. This merger of international law and norms with potential ethics of policing suggests that we're intuiting that mass technological asymmetry is something that needs to be regulated and recognised, and not pushed too far to one side's advantage. I think that's going to be a really interesting, ongoing, and difficult debate going forward in wartime situations.

Maybe there are really two laws of warfare developing here, in a way. One is in the sort of occupation context, and another is between well-matched enemies, where almost anything goes.

Mason: All of these concerns come from a fascination with how AI is going to interact with humans. One of the ways in which we help human interaction with technology is by humanising it. That kind of research can also lead to something known as 'counterfeiting human characteristics'. I hadn't quite realised until I read your book that those two things are completely different. Could you explain why it's so important to create a dividing line between AI and humanity?

Pasquale: Yes. This is one of the most controversial of the laws. It really involves a long term projection of what's fair for corporations to deploy and governments to deploy, as they increasingly use technology to mediate their relationships between themselves and us, or citizens; users; consumers; whatever. That's the first thing, but the second part is a more metaphysical commitment.

Let's start with the first: pragmatic limits on the use of affective computing to manipulate people. Just to introduce the idea of affective computing, lots of people now are trying to use computing systems to parse people's faces, and to analyse their faces to see, like in the classroom example I gave earlier: Is this person engaged? Are they really thinking about this or is their mind elsewhere? Are they happy or sad? There are all sorts of emotions that could be attributed to someone. I like to say 'emotion attribution' as opposed to 'emotion recognition' because I think often it's really hard to recognise the emotion. In fact, they're so ephemeral and ineffable that the mere suggestion goes a long way towards creating the thing or the memory of it. 

This is advancing, and you see very crude versions of it, say, where they can develop an ad that is matched to you, where the person in the ad looks like you. Then it's like: Oh, that person looks trustworthy, I know that type of person. They keep pushing that harder and harder in terms of vulnerability marketing or other ways of developing sympathy. It's a difficult thing to write about because the people that like affective computing say, "Well, this is a way to humanise the tech. We're dealing more and more with technology. Why not have technology that has smiley faces or animated avatars, or even created faces that make you like it, or that make you feel at ease or like you're dealing with a human?" To me, there's something that's really deceptive there, because there isn't a human there. 

That, I think, is the problem with so much of this. It becomes a deception of an attribution. For example, the AI bot that could be running something like, say, the OS in that movie 'Her' - to think of a very vivid illustration of this sort of affective computing and a very advanced imagination of it - doesn't have the emotions of the people that it's interacting with. Yet it's taking advantage of a world in which that was happening. The other problem that comes out of this, I think - and the reason I use the metaphor of 'counterfeiting', that AI should not 'counterfeit' humanity - is that the proliferation of this technical ability to simulate human emotions will, to me, inevitably lead to a devaluation of the authentic thing itself. 

That's the worry that I have: that in a world where - thinking of the 'Minority Report' vision of the person constantly having screens with faces making appeals etcetera - it becomes easier and easier to harden yourself to all of it, and shut it out. This is what many people in the US now do with phone calls. If they don't recognise the number, they shut off the call. Part of that is because we have unregulated bot speech. It's assumed to be attributed to the owner of the bot. There's a first amendment right for the robot to speak and call you, and then you're going to get 5,000 calls a year - okay, not that many - but you're going to get enough that it's such an inconvenience that you just stop answering the phone. This has led to many serious problems with respect to doctors not being reached when their patients are urgently ill. It's caused problems with contact tracing in the COVID-19 context, where people don't answer their phone because we have unregulated bot speech - and also human speech, it's not all a problem with robotics - but you can easily see how that problem becomes multiplied if it is just allowed to be automated. 

That's where I'm coming from. The more metaphysical side is that were we surrounded by entities mimicking us - humanoid robots - I'm worried about that world. I think that the claims on resources and attention by those entities become a bit overwhelming, in light of what may already be overwhelming claims on our resources and attention just by people. I know that's debatable, and there may well be much more positive visions of that type of future than what I'm able to imagine.

Mason: One way of dealing with that is using a fourth law: having AI identify itself, making its creator and its processes transparent to other humans. I guess my question, Frank, is: what tells does an AI need, and how do you imagine we'd solve that from a technological standpoint?

Pasquale: I really appreciate that idea of the tells and the indicators. One thing, just in terms of existing technology providing a model: I once heard Ed Felten, a very smart technology policy analyst from Princeton, talking about Google Glass and recording equipment always having a red light on, so you'd know when you were being recorded. I think Macs have green lights now, right? You can look at the green light and it's there, telling you you're being recorded. It's an interesting shift there, by the way, in terms of persuasive advocacy - as to whether this is a good thing, or whether green is like, "Oh goodness, it's actually recording." That sort of light on certain equipment, on robots, would be really helpful.

There have been case studies of these robots in urban areas that are just patrolling and recording everything around them. I don't think they have the light. I think they might have a sign on them that says 'this is recording everything you do' - but not a light. I would definitely require something more than the sign - something really easy to viscerally recognise. Ryan Calo, the big robot law person, came up with this idea of visceral notice for privacy-violating technology, or we could call it data-collecting technology if we want to be more neutral. 

Online speech is interesting as well. That's one where perhaps some version of the robot emoji could be repurposed to sit in the bio on Twitter or on social network sites, indicating that the entity is a bot. It is really interesting to think about how you do disclosures that are not intrusive. That's been an ongoing debate in disclosure law for a long time. For example, a lot of ads on Facebook, thanks to a really bad decision by the Federal Election Commission, did not have to have attributions on them - or the full attribution that's on TV and newspaper ads. That type of technological non-neutrality or bias has to be addressed. There are easy ways in which you could, say, have a link on those things.

One of my first articles in this area is called 'Asterisk Revisited' and it was on giving people a right to reply to certain Google results with a small asterisk that would lead to their side of the story. With those sorts of things, online, we definitely have the affordances necessary to identify, or at least give people a sense of, the origins of AI or robotic expression.

Mason: I guess a lot of this stuff feels like it's a long way away. When we talk about the idea of governing AI and creating new rules for AI, we go: well, what AI? It really isn't here yet. What you're talking about feels very, very far away. The reality is that we already have artificial entities that we treat in certain ways using the law. Those artificial entities are artificial persons, otherwise known as corporations. How can the framework under which we understand these artificial persons or corporations be used and then applied to our thinking about AI?

Pasquale: Yes, the corporate analogy is a very important one. It has both promise and pitfalls. The promise is that we do have forms of liability for corporations and ways of ensuring that if a corporation is formed, we're supposed to know who owns it. Of course, unfortunately, as I mentioned in 'The Black Box Society' book, there's lots of people who have gotten around that. Now there is increasing legislation to force them to fess up and tell which corporation they own. At the very least, we should know who owns major corporations, so that we can hold them responsible. 

The second layer of responsibility there is that we need to trace actions to particular people who made certain decisions. In that respect, if we were to think about corporate record keeping, there are many requirements with respect to financial regulation and other formalities that have to be observed by corporations when they take certain actions: record keeping that helps us understand who is the particular person that made a decision. For example, think of the Volkswagen scandal in Europe, involving the diesel engines and trying to get around limits on emissions. We were able to identify some of the key actors there and punish them accordingly. 

The same can go with respect to robotics technology: to keep track both of the initial algorithms and data, and of the people who interacted with them and might have interacted in a malicious way to set them on a bad course. For example, Microsoft's bot Tay ultimately learned online to emit racist, homophobic, Nazi speech because of what the bot had interacted with. I think that the record keeping and attribution side of corporate regulation provides some models here for where to go with robotics.

Mason: So ultimately, what can we do? What can we do right now to ensure a future of robotics that is inclusive and is democratic? In other words, how do we engage in the anticipatory social research that shapes and not just merely responds to technological advance? How do we develop robots that reflect the hopes of all of us?

Pasquale: I think that anticipatory social research is a critical goal here. I think so much of the dialogue now is stuck between a very unambitious form of economics and engineering that is just sort of like: how do we get over, or through, the next crisis? - and a very long term view that is not really translatable into current policy. 

One of the things that I would recommend to future policymakers here is to engage in some scenario analysis of where you want a given sector to be in 10 to 15 years, and what concrete steps you would need to take to get there. For example, if I were to think about robotic caregiving, or robots as assistance in augmenting human caregivers, I would like to see a world where, 10 to 15 years from now, the people who are doing this work - which I think is some of the most important and meaningful work available - are, first, treated well, and secondly, given the technology needed to lessen the burden of the aspects of the job that they find most burdensome and difficult, whilst simultaneously allowing them to keep developing their skills with respect to the parts of the job that they find most meaningful, appealing, and useful to those they help. 

Part of that will involve licensing or other requirements, both for the robotics and for the people doing those jobs. What type of education is required? How do we certify that the robotics and AI that they're using actually is valuable and good? How do we improve it over time? How do we involve them in the improvement of it over time, both with reports, post-marketing surveillance, and other things like that? That's going to be a lot of work. It's a lot of work in terms of structuring that sector, but on the other hand we already have a lot of investment in ongoing quality improvement in a learning healthcare system. That's one example.

I think there are so many other examples where we can develop institutions that start that scenario analysis. I guess that at the broadest level, the scenario analysis is key. With respect to any particular sector, it's going to involve some very specific consultations and projections about where we want to see the tech and the labour develop together.

Mason: Part of that scenario analysis is storytelling, isn't it? The stories we tell about AI and robotics sometimes are a symptom of our anxiety about AI, but also they can be a great source of wisdom, can't they?

Pasquale: Yes. That's a very important aspect of the future debates here: taking seriously the culture - be it art, novels, films - that resonates. Not just culture that resonates with huge numbers of people, but culture that might resonate with smaller numbers of individuals - thinking about the stories it's telling and the problems it's identifying. This ability to bring in the humanistic perspective is really critical. There was a good book published by Oxford University Press last year on narrative and AI, and I think that being able to talk about stories in a way that is democratic, inclusive, and rigorous is important. 

For example, with the movie 'Ex Machina', some people just dismiss it and say, "Oh well, another techno-thriller that is trying to scare people about robots." I actually think that it has so many layers, having read the script more carefully and watched it a few times. Thinking about the layers of meaning to it, it really gives us a very concrete sense of - and this is true in so many aspects of culture engaging with AI and robotics; there's a whole Wikipedia page on movies about AI and robotics, which is great - really good, concrete examples of when the rubber hits the road and people are actually interacting with this technology: what the ethical limits are that occur, and how we can anticipate them and try to defuse them in the future. That's where I think there are lots of roles for experts and narrative, in nurturing culture to inform these discussions.

Mason: On that important note, Frank Pasquale, thank you for being on the FUTURES Podcast.

Pasquale: Oh, you're welcome Luke. Thank you for a terrific set of questions and conversation. I really appreciate it.

Mason: Thank you to Frank for offering us an inspiring vision for how human expertise will play a core role in technological progress. You can find out more by purchasing his new book, 'New Laws of Robotics: Defending Human Expertise in the Age of AI', available now.

If you like what you've heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.

More episodes, transcripts and show notes can be found at FUTURES Podcast dot net.

Thank you for listening to the FUTURES Podcast.

