Artificial Intelligence Will Transform Everything w/ Martin Ford

EPISODE #59

Summary

Futurist Martin Ford shares his thoughts on why we should treat artificial intelligence like a utility, the impact a robot revolution will have on the economy, and how machines may enhance our creativity by encouraging new forms of innovation.

Guest Bio

Martin Ford is a futurist and the author of four books, including Rule of the Robots: How Artificial Intelligence Will Transform Everything (2021), the New York Times Bestselling Rise of the Robots: Technology and the Threat of a Jobless Future (winner of the 2015 Financial Times/McKinsey Business Book of the Year Award and translated into more than 20 languages), Architects of Intelligence: The truth about AI from the people building it (2018), and The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (2009). He is also the founder of a Silicon Valley-based software development firm.

Show Notes

Martin Ford’s Website

Martin Ford on Twitter


Transcript 

Luke Robert Mason: You're listening to the FUTURES Podcast with me, Luke Robert Mason. 

On this episode, I speak to futurist, Martin Ford.

"Artificial intelligence is going to become an important tool in science and in technology to really drive innovation; to really allow us to create entirely new products and services; things that simply would not have been possible without the application of artificial intelligence." - Martin Ford, excerpt from the interview.

Martin shared his thoughts on why we should treat artificial intelligence like a utility, the impact a robotic revolution will have on the economy, and how machines may enhance our creativity by encouraging new forms of innovation.

Martin, your new book argues that AI is an indispensable technology. You go as far as suggesting that AI might be the new electricity. What do you actually mean by that?

Martin Ford: What I mean is that it's becoming a genuinely systemic technology. It's going to touch everything. It's going to impact every industry, every employment sector, and it's really going to touch every aspect of our lives. We're eventually going to come, I think, to rely on artificial intelligence much in the way that we rely on electricity. You can imagine that if the electric power goes out, that has a dramatic impact on everyone. It's a very serious event. I think that someday, artificial intelligence will have that same reach and impact on just about every aspect of everything.

Mason: You say in the book we should really treat artificial intelligence as if it was a utility rather than just another tool. Why is it so important to see AI in a similar way we see water or electricity, or even the internet, for example?

Ford: The point is that it's not just a specific technology that does a specific thing. It's a technology that encapsulates intelligence, and that machine intelligence will be able to be applied virtually anywhere, to any kind of problem that we might face, to any aspect of operations in a business or just about anywhere that you can imagine.

It's not a straightforward, simple technology. Rather, it's something that's genuinely systemic and that is going to impact virtually everything. I think we have to view it in that way in order to understand just how disruptive and influential this technology is going to be to our future.

Mason: Now often when we have discussions about artificial intelligence on the podcast, there's a lot of confusion, especially over what AI actually is. In your mind, what encompasses this idea of artificial intelligence as opposed to something like machine learning, for example?

Ford: I would include machine learning as one kind of artificial intelligence. Certainly, right now, it is the most important approach taken to artificial intelligence. Generally, I would say that AI is any time you have a machine or an algorithm solving the kinds of problems that human intelligence can solve. Making predictions, optimising things and doing things that previously only the human mind could do.

It's very important to distinguish between what we call artificial general intelligence or human-like intelligence and much more specific applications of AI. The reality is that in the world today, the only things we have are specific forms of artificial intelligence or what's called 'narrow artificial intelligence'. AI applications - especially in the machine learning realm - can do very specific things. We don't have anything like more general intelligence. That's something that obviously remains an objective for people that are working in the field but it's certainly fairly far in the future. At a minimum, decades away.

Mason: When artificial intelligence is spoken about, often it's surrounded by a lot of mythology; a lot of hype. When it comes to the nuts and bolts of how we actually create AI, what are some of the ways in which we can produce these tools? I know there's a difference between approaches when it comes to the creation of AI. For example, some believe in taking a connectionist approach versus a symbolic approach to the development of intelligence. Could you explain the difference between those two?

Ford: The symbolic approach - which is also called classic AI - was the kind of thinking that really dominated the field up until roughly 10 years ago, for the most part. Basically, what it says is that what you want to do is give knowledge to a system, somehow input knowledge to a system, and then you want that system to be able to make decisions or to basically conduct reasoning based on the knowledge that you input. That is what, for example, led to expert systems that have been used in many fields in the past. It turned out that that approach was very, very limited.

The other approach - what you call 'connectionism' - is really about building machines to learn. Rather than putting knowledge into the system and then trying to develop algorithms that can reason based on that knowledge, you simply turn algorithms loose on data and let those algorithms learn for themselves. Then the hope is that some form of reasoning or the ability to do reasonable things kind of emerges from that.

That was the approach that took off about 10 years ago. The inflexion point was in 2012 when a technology known as deep learning really came to the forefront. That's what's really powering this disruptive leap in artificial intelligence that we've seen over roughly the last decade. It's all about machine learning now, and specifically deep learning, which is the application of artificial neural networks - systems that are based, in a very loose, rudimentary fashion, on the way that the brain works.

It's the application of that technology that has seen us really leap ahead. That's what's powering, for example, Tesla's autopilot system. It's what's powering Siri on your iPhone. It's what's powering Amazon's Alexa. All of these applications of artificial intelligence that you see all around you are really the result of this leap forward in machine learning and specifically deep learning.

Mason: What's fundamentally different about the deep learning approach? Is this system really brain-like or is that just used as a metaphor?

Ford: It is, at a very rudimentary level, brain-like. These systems are constructed of artificial neurons which are connected in a rudimentary way, similar to the way that the neurons in your brain are connected. There is nowhere near the level of complexity that happens in the brain. It's kind of a cartoonish scheme that is based on the way that the brain works.

This is not a new idea. In fact, the first neural networks emerged in the late 1940s. It's an old technology but what has happened is that we've finally reached a point where computers have become fast enough to make this into a practical technology that can do useful things. That's the result, basically, of Moore's Law, over the course of decades, that has finally produced machines that are fast enough to really do this on a significant scale.

The second thing that's happened is that we now have available to us enormous, incomprehensible amounts of data. It is all of this data which is then used as the feedstock or the training medium for these smart algorithms running on incredibly fast computers that have finally brought us to the point where we're seeing the breakthroughs and realisation of technologies that really just 10 years ago would have been considered science fiction. We're really seeing this unfold. It's because of fast computers and enormous amounts of data, combined with these algorithms that have actually been around for a long time. Of course, they've been further refined, but it's the confluence of these things coming together that has really made this technology leap forward.

Mason: When we talk about artificial intelligence, often the tricky word is less 'artificial' and more 'intelligence'. How do we recognise intelligence? What's the difference between seeing a computational system versus something that truly has AI?

Ford: The fundamental hallmark of artificial intelligence in terms of what we see today is the ability to learn. One thing that you may have heard people say, at least in the past a lot, is that computers only do what they're programmed to do. This, up until the recent explosion in machine learning and artificial intelligence, was true. Most of what was done with computers was the result of a programmer who would sit down and tell that machine what to do step by step.

This is one of the things that made computers, in a sense, very inflexible. They could do exact things as they were told to do, in a very precise, fast fashion. They couldn't go beyond that. Now we've reached a point where instead of taking that approach, we've got these learning algorithms. No one sits down and tells these systems exactly what to do. Instead, we've got algorithms that are provided data and basically, they can learn from that. They learn on the basis of that data.

As a result of that, we now have systems that can do remarkable things and are remarkably proficient. In reality, we don't understand the exact details in many cases of what they're doing or how they're reasoning to get to the conclusions they're coming to and to make the predictions they're making, for example. In some ways, these systems are becoming a little bit brain-like. In the same way that you can't understand what's happening in a person's brain, we don't quite understand what's happening in some of these systems.

Mason: Often, when we hear about AI, we often hear about these overhyped promises that are made about what the technology can do. What sort of overly inflated expectations do you think we should pay close attention to? Should we be careful about the hype associated with something like AI?

Ford: AI has always been a field that has been a combination of fact-based reality and extraordinary amounts of hype and overexuberance. That was true right in the beginning. You had people at the very beginnings of the field of AI when computers were dramatically slower than they are now. It really wasn't reasonable at all to expect dramatic progress simply for that reason. You had people making predictions that within just a few years, you would have machines that were, in effect, smarter than people and able to do anything that a person could do. Of course, that was just total and complete fantasy.

That coupling of hype with reality has continued throughout the development of the field. In recent years you've seen it to some extent, especially in the field of self-driving cars. Way back in 2009 when the first Google experiments with self-driving cars were publicised, people were saying that within just a few years we would have completely autonomous cars on the road. There were major figures in the industry saying, "When my child turns 16 in just a few years, the kid is not going to get a driver's license because no one is going to be required to drive cars anymore," and so forth. This, of course, has proven simply not to be the case.

The problem has turned out to be a lot harder than most people expected. We still don't have true self-driving cars. You see Tesla and Waymo doing certainly remarkable things, but we're still not at the point where we've got anything like a self-driving Uber that can pick you up and take you anywhere without a driver. That still, probably, is fairly far in the future.

One of the things that is happening is that some of those visible areas where expectations have really been inflated - areas like self-driving cars - are likely to disappoint in terms of the product. The progress that we see is going to take longer than we might expect. In other areas, we're going to see faster progress that might take us by surprise. The fundamental thing there that I think is the defining line between areas where we're realistically going to see faster progress and areas that have been overhyped and might be slower is just how unpredictable the environment that AI is being applied to really is. When you're talking about self-driving cars out on public roads, it's extraordinarily unpredictable. It's not at all a controlled environment.

On the other hand, if you're talking about the application of AI inside an Amazon warehouse where you've got already thousands of robots being deployed, in that kind of environment it's much more controlled. It's possible to limit the unpredictable aspects of the environment to make things more predictable. There, I think you're really going to see AI have a big impact in the relatively near future. That's the key issue to focus on - how controlled and predictable the environment is.

Mason: Should that boom and bust history that you've just described there - the boom and bust history of AI development - should that act as a warning for how we get excited about AI today?

Ford: Yes, I mean absolutely. This is something that people in the field are very much aware of, that there has been this kind of boom and bust cycle. There have been these periods known as AI winters where all of the enthusiasm around a project evaporated. A lot of the investment and funding for the field - especially that came from the government - disappeared. Careers were dead-ended and so forth. People in the field are very much aware of this. We don't want to get out ahead of our skis again and get so overexuberant that everything falls apart. People do worry about that because of the hype that has come to the forefront in the field again and again, and certainly to some extent now.

At the same time, we have seen a fundamental change in the field of artificial intelligence in the sense that it has really become - over the past 10 years - a very practical tool. It has been integrated into the business models of these incredibly influential, powerful, rich corporations like Google, Amazon and Facebook. It's become absolutely central to what they do and to the way that they compete in the marketplace. I don't think that there's any chance that AI really enters into a true winter where it really becomes a marginalised technology again. It's absolutely central to too much that's going on in the tech industry and beyond that. There may continue to be a cycle, but it's not going to be as extreme as it was in the past.

Mason: One of those key promises that seem to drive a lot of the excitement about AI is the idea that it's going to add 15.7 trillion dollars to the global economy by 2030. Martin, how do you think AI is going to achieve that?

Ford: There are a number of ways. It's going to make businesses much more efficient. Part of that is going to come through the reduction of labour and the automation of jobs. Beyond that, it's generally about making predictions within business operations and so forth to make operations across the board vastly more efficient. That will add a lot of value to the economy. Maybe more important than that, artificial intelligence is going to become an important tool in science and in technology to really drive innovation; to really allow us to create entirely new products and services; things that simply would not have been possible without the application of artificial intelligence.

One of the best examples we've seen of that recently was actually the development of a tool by DeepMind, the AI company based in London which is part of Google as a parent company. They recently released a tool called AlphaFold which takes on the challenge of protein folding. Protein molecules fold into a geometric configuration within microseconds of when the molecule is fabricated in cells. It's this geometric shape of the molecule that actually determines its function. One of the most important endeavours over the last 50 years has been for scientists to figure out, based on the genetic recipe of protein molecules, how they actually fold into these complex shapes. Scientists have been working on this for 50 years. It's an extraordinarily challenging problem, but DeepMind was able to create an AI system that solved this problem, essentially. It's one of the biggest breakthroughs in biomedicine and in biotechnology. It's going to have enormous implications for the future of medicine, biotech, engineering new molecules, drug discovery and so forth. This is going to have a huge impact going forward on science and technology.

One of the most important overall applications of AI is in the field of drug discovery; discovering new molecules that can be used to cure diseases and so forth. We're already seeing breakthroughs that are going to have a very material impact.

I think that's going to be true across the board. We're going to see AI applied to solve the problems that we need to solve in order to address climate change, for example. New breakthrough sources of clean energy. New ways to adapt to climate change. New ways to create materials that help us solve these problems and so forth. That's going to have enormous benefits. It's going to add enormous value to the economy and to our whole way of life.

Mason: Well when you look at all of the things that AI makes possible, where do you think, today, AI has made the most disruption?

Ford: The thing that I have the most hope about and where it will be most important is again the ability of this technology to accelerate innovation. We need enormous amounts of innovation if we're going to solve the problems that we face, especially in areas like climate change but also in terms of confronting the next inevitable pandemic, doing a better job next time around than we've done with this particular pandemic. Just in terms of being more prosperous, and solving the problem of global poverty. These are all issues where AI is already showing great promise. I think there will be even more promise in the future. That's really the fundamental hope for this technology - that it can be the tool that amplifies our intelligence, our creativity, and our ability to innovate and solve these massive problems that we're clearly facing.

Mason: It certainly feels like today, there are some fundamental differences from the way in which AI is being developed when you compare it to the last AI winter. One of those things is the sorts of technologies that we have available to us. What role do you think cloud computing or perhaps even quantum computing will play in the future of AI?

Ford: Cloud computing is already absolutely critical to all the advances that we're seeing. Again, most applications of artificial intelligence require enormous amounts of computing power. You need access to lots of very powerful computers, and the way that that is delivered is through cloud computing. Again, I've been saying that artificial intelligence is evolving into a utility, something that ultimately will be something like electricity. Cloud computing is sort of the conduit for that. Think of it as analogous to the electrical power lines. It's the way that this technology is delivered across industries and into every possible application of the technology. That's incredibly important and it will continue to be incredibly important. It's the primary way that this technology will be realised.

Quantum computing is a much more futuristic technology. It's something that right now is in its very infancy. We still have not built a practical quantum computer. In other words, what's called achieving practical quantum supremacy; building a quantum system that can do things that a traditional computer can't do, and that can do practical things. That still lies in the future. When we eventually get to that point, it's quite possible that it may have important applications in artificial intelligence. That lies far in the future.

Mason: I mean ultimately, do you think we'll actually find there might be some limitations to AI development? Is Moore's Law still going to continue or is Moore's Law's promise of exponential growth just fiction?

Ford: We're still continuing to see the kind of acceleration that was defined by Moore's Law. Computers are definitely getting faster. Apple has recently released some new computer chips that really amaze people in terms of the speed upgrade that they've been able to achieve, even in light of the fact that it is true that in terms of the dimensions of computer chips, we're really in theory approaching the endgame there. Dimensions on these chips are getting so small that it's beginning to approach the scale of atoms and molecules. It's still pretty clear that we have at least another decade or so where these technologies will continue to accelerate.

Moore's Law has kind of become a catch-all phrase for acceleration in general. It's not just about building faster chips. It's now about parallel computing. It's about linking more processors together in parallel to achieve acceleration in that way. It's about designing entirely new architectures for computer chips, including new architectures that are specifically designed for artificial intelligence and for deep learning. Yes, I think that absolutely, we're going to continue to see progress going forward in the future. It may not look exactly the same as it has looked traditionally, but we're going to continue to see an acceleration in speed for sure.

Mason: It certainly feels like AI is progressing. What about mechanical robots? Is this matching the speed of AI development or are there material limits there?

Ford: There is no Moore's Law for mechanical applications. When we're thinking about things like robotic arms and the sensors used in robots, there is not the same kind of acceleration that we see in processing power. There are other important trends that are coming to the forefront. There are certainly economies of scale. As we see more robots being built throughout the economy, then it definitely will become cheaper to build a lot of these components that make up robots. As technology takes off and scales across our economy, it'll become more affordable to build a lot of these technologies. There definitely are advantages and we can expect that in the future, robots will become more prevalent, more affordable and far more capable.

Mason: When it comes to AI development, that doesn't just happen in a vacuum, does it? There are all sorts of battles that are waging behind the scenes. Those are for talent, for funding and for basic access to some of this technology. How do those things impact the way in which AI is developed?

Ford: It's definitely true that there is an enormous competitive dynamic that plays out at several scales. Definitely between the companies like Google, Facebook and Amazon, there is intense competition for talent and in terms of investing in these technologies to develop breakthroughs that can be incorporated into products, especially the cloud computing offerings of these companies upon which they compete. That's really become a driving force that is resulting in a lot of the fast progress that we're seeing. These companies are actually competing with each other and that's pushing the field forward.

There is also absolutely competition between the West, generally - the US in particular, and China, for example - on another scale. That's also becoming incredibly important in driving the field forward. The competition is actually integral to the progress that we're seeing. We need to take that competition seriously. Certainly in the West, we can't afford to fall behind China in terms of the applications of these technologies. They are incredibly consequential. They have applications not just in the commercial sphere but obviously in the military sphere and in national security and so forth. It's an incredibly important undertaking and we need to really up our game.

Mason: When it comes to thinking about something like AI talent and the sorts of individuals who are creating this technology, we don't always need just technologists, do we? You argue and advocate for a cross-disciplinary approach to the development of AI. Why is that so integral and why is that so important?

Ford: We need talent drawn from many fields because artificial intelligence is, as I said, going to impact everything. It's not just about the details of the technology itself - although that certainly is incredibly important, we need to develop talents in that area - but it's also about how artificial intelligence is applied across our economy.

That means we need people with all kinds of different expertise to have enough knowledge of AI to understand how it can be applied and to be at the forefront of using this technology to solve problems in many different spheres. I think that's going to be one of the most important aspects of this going forward. Not just developing the technology, but finding ways to deploy it in productive ways. For that, we need people from all kinds of different backgrounds and different kinds of skills and expertise.

Mason: What do you think some of those skills that we're missing from the AI development community should be?

Ford: Certainly one of the areas that is really coming to the forefront now is ethical considerations, for example. The fact that AI has been shown in many cases to be biased in some applications. For example, you've seen AI used in resume screening systems that have been biased by race or by gender. You've seen AI used in the United States even in very high-stakes situations, like in the criminal justice system, and you've seen racial bias there. With areas like that, it's very important to have people looking at this technology holistically. We need to develop systems to ensure that we can audit artificial intelligence and make sure that it's being applied in fair ways.

There's also, absolutely, a need for regulation. We do need more regulation of certain applications of artificial intelligence to make sure that it's being applied fairly and effectively. This is something that we're going to need - people working in government to become more familiar with the technology. Perhaps we need to develop more regulatory bodies to handle this.

Mason: From the work of yours that I've read, I know you're a large advocate for the regulation of AI. In fact, you argue that we need a body similar to the FDA - the Food and Drug Administration - or the FAA, which looks after aeroplanes in the US. Should this be a nation-by-nation endeavour to regulate or should there be attempts to regulate AI globally? If so, how do you imagine something like that could even work?

Ford: Yeah, I mean definitely. In the ideal world, you definitely would have some sort of global standards. The best example of that right now is that there is an initiative in the United Nations to ban autonomous weapons. This is one of the greatest fears that people working in the field of AI have, that their innovations and the technology that they're creating could be weaponised. It's easy to imagine the development of, for example, drones that would be fully autonomous and that could attack people or maybe kill people, without a human being in the loop to specifically authorise that. You can imagine, for example, swarms of hundreds or thousands of autonomous drones attacking civilian populations. This is a really terrifying scenario that a number of people are very concerned about.

There is an initiative in the United Nations to ban these kinds of technologies. It hasn't gotten a lot of traction for the simple reason that the most important countries involved with this - which are the United States, Russia and China, in particular - have all refused to go along and actually ban these technologies. The reason that they won't ban it is that they're afraid that an adversary will cheat. If they went along with it and banned the development of autonomous weapons, then another country might secretly develop this technology and get an advantage. That's why it's very hard to do that. Coordination between countries is very hard.

Yes, I think we should strive for that but it will probably have to start at the national level. I think we will need to see the development of agencies, as you said, similar to the FCC, or the FAA, or the European Medicines Agency - the agencies that regulate things in existing spheres. We need another agency like that for artificial intelligence. But it is important to say that what I'm advocating is the regulation of applications of AI in certain spheres where clearly we have a need for that. I'm not advocating for regulating or limiting research into artificial intelligence. I think we should embrace the technology itself, let it move forward and take advantage of it. I think it will be indispensable to solving the problems we will face in the future. Clearly, in certain realms where this technology is applied and there are real issues, we're going to need regulation.

Mason: So ultimately you believe that we should take a proactionary approach to AI development. Instead of being precautionary about what the development may give rise to, we should in actual fact push the boundaries of what this technology might possibly be able to do.

Ford: Yes, I think we need to invest more in the development of artificial intelligence. We need to embrace it as a tool that is fundamentally going to amplify our intelligence and our creativity, allowing us to solve the problems that we're going to face in the future. I think that it will be very difficult to solve those problems without utilising artificial intelligence. We really can't afford to turn away from technology or try to limit progress in the field. Instead, we need to embrace it and then have appropriate regulations and adapt to the problems that are going to come along with the development of artificial intelligence.

Mason: Listening to you speak there, talking about the competition between nations, it certainly feels like we're approaching a state whereby there might be a new AI arms race. That might be with either Russia or China versus the US. Those AI arms races aren't necessarily about autonomous weapons systems such as drones, but they're about more subtle things like attacks on infrastructure or cybersecurity concerns. In what way, by putting more and more artificial intelligence into the infrastructure of the technology we use every day, does that expose us to these sorts of attacks?

Ford: Right, I mean one of the primary applications of AI is going to be in cybersecurity. It's also going to be in making things more efficient across the economy. We're moving into a world that's going to be defined by what we call the Internet of Things, where literally everything is connected. Every device is connected and you're going to have smart algorithms deployed across this Internet of Things, controlling and diagnosing problems and so forth. That's going to make everything more efficient, but at the same time, it's going to make everything more vulnerable to cyberattacks and hacking.

The more autonomy you have and the more critical systems you have operating autonomously without a human being overseeing them, the more vulnerable they are to a potential cyber-attack. That's just the basic reality. That's one of the main problems that we face going forward. Artificial intelligence is going to be absolutely critical to defending these systems and keeping them safe from cyber-attacks. Given the scale of what we're talking about in terms of the amount of interconnection we're going to have in the future, only AI is going to be able to defend those systems against attack.

At the same time, the people that are interested in perpetrating attacks across these networks and across the infrastructure are also going to utilise artificial intelligence. It's going to be a two-sided battle, much like you see with computer viruses where you have a constant, ongoing arms race between the people creating these viruses to attack systems and the people in the companies that build tools to defend against them. You're going to see that same kind of dynamic unfold. I'm afraid that's inevitable.

Mason: Beyond regulation, are there other ways to mitigate the risks associated with things like AI?

Ford: One of the other concerns that I'm really worried about is the impact on the economy and on the job market. This is probably the issue that I've written about the most. My previous book, 'Rise of the Robots', was published back in 2015 and was focused almost entirely on that issue. I do think that there is the potential for a great many jobs - especially jobs that are more predictable and routine - to be automated or deskilled.

In the past, you might have had a good, middle-class job that required a fair amount of skill and experience, and that job is now done by a minimum-wage worker or a gig worker together with technology. You see wages driven down, and jobs that were once good, solidly middle-class jobs have become marginal, low-wage service sector jobs. In many cases, the jobs can disappear entirely, as many of the tasks undertaken by workers are completely automated.

I think that that, going forward, is going to be a driver of more inequality. A lot of people are at risk of getting left behind. As a result of that, we're going to need policies to address that. I'm generally an advocate of some kind of universal basic income or something similar to that going forward at some point in the future, as this becomes a bigger and bigger challenge.

That's one of the biggest challenges that we're going to need to address, but then there is a whole range of other things to worry about. As I mentioned: the potential for the weaponisation of these technologies; the potential for these systems to be biased against certain groups; the potential for cyber-attacks; the potential for AI to be used in ways that can deceive us.

One of the things we're seeing emerging is deepfakes - the ability to use artificial intelligence to create media that appears to be real. It might be audio or it might be video, but in fact it's completely fabricated. We're approaching a future where, for example, someone who wanted to disrupt or steal an election - this is an example I give in the book - could make a politician literally say anything they wanted. They could make them appear to say things that would be very damaging to that politician or their campaign. They might do this shortly before an election just to upend things and try to damage a particular politician.

This is something that we really need to worry about - this kind of application of these technologies going forward. We need tools to defend against this. We need regulation where it's appropriate. We need policies to address the increasing inequality that's going to come about as a result of artificial intelligence.

Mason: You certainly mentioned the big word there - 'jobs'. I do hold you largely responsible, Martin, for popularising the idea that AI will steal your jobs. Where do you really stand on that debate? Do you think that new jobs will replace those taken by AI or do you think we need these new systems such as universal basic income to deal with the rise in unemployment brought about by AI?

Ford: Right, there definitely will be new jobs created. There's no question about that. The question is, will there be enough of those new jobs to absorb the potentially tens of millions of workers that are now engaged in the more routine predictable types of work that I think are certainly going to evaporate in the coming decades?

The second question is, even if there are jobs available, will the workers that are now doing jobs that are fundamentally routine and predictable be successful at transitioning into those new jobs? Is it going to be the case that many of those new jobs require capabilities, talents or personality traits that some people simply don't have? Those people are really going to struggle to transition into these new areas.

One of the most important questions that people ask is, which jobs are safe and which are likely to evaporate? As I've said, if you're doing something that's predictable and fundamentally routine - it doesn't matter if that's a blue-collar task, working in an Amazon warehouse, for example, or a white-collar task, sitting in front of a computer, doing the same thing again and again - that is definitely going to be susceptible to automation going forward.

The areas that, at least for the foreseeable future are going to be less susceptible to automation, are things that are genuinely creative, where you're really thinking outside the box, creating something new. This might be a scientist doing creative things, an engineer, or a lawyer coming up with new legal strategies. Many artistic endeavours are going to be relatively safe from automation going forward. A second area would be areas of working with people and developing complex relationships - really doing things that require deep interactive, interpersonal skills. A third area is what we might call skilled trade-type professions. Things like electricians and plumbers where you're really problem-solving in unpredictable environments; areas that require a lot of mobility and dexterity. If you want to build a robot that can do what an electrician does, that really requires science-fiction technology. It's going to be a long time before we can do that.

These are the kinds of areas where the jobs that are being done by people are going to be safer. The question is, can people really transition from more routine work to more creative work, or more people-oriented work? Some people will of course successfully make that transition. I think a lot of people will struggle because they won't have the necessary talent or the personality traits to do that kind of work. I do think that there's a risk that a lot of people are potentially going to be left behind by this. That's where I think there is a need for policies like, potentially, universal basic income or some other way of supplementing incomes so that those individuals who are not able to find a traditional job that provides an adequate income are not going to be completely left out as we move forward.

Mason: What I love about your writing, Martin, is often you're very much a techno-utopian. You mentioned it earlier that you truly believe AI will be this catalyst for innovation. For that to occur, certain other things have to occur. Those are about how AI is rolled out. In your mind, AI needs to become both ubiquitous and affordable. How do you envision those two things will occur?

Ford: I think we're seeing that happen already. As I said, there is this enormously powerful, competitive dynamic between companies like Google and Amazon, for example, which are both important providers of these technologies to other businesses. They do that through their cloud computing platforms. Amazon, Google, and Microsoft - they are competing. That is driving down the cost of access to these technologies which is making them more generally accessible.

They're also building tools that make it a lot easier to work with these technologies so you don't necessarily have to have a PhD in computer science in order to begin to deploy artificial intelligence in your business. That's what is called the 'democratisation' of AI. That's happening already. I think that's kind of inevitable. I think that's a very positive force. I think it's becoming generally accessible, but again we need to make sure that as this unfolds, we adapt to it in ways that minimise the downside risks that are going to come with it. That will require regulation and policies to address the inequality that I think is inevitably going to be unfolding as a result of this.

Mason: What makes you so confident that the benefits of AI will ultimately outweigh the risks associated with it?

Ford: I think that's my hope and my belief. I'm not saying that it's inevitable. I don't think it's necessarily the default. If we simply sit back and do nothing and just hope for the best, then I think that we could well find ourselves in a situation where at least for a great many people - maybe most people - the negatives actually could outweigh the positives. They could find themselves in a worse-off situation. That's what I'm striving to avoid here. That's really the purpose of my talking about it and writing books about it. I think we need to be proactive. We need to put policies in place to make sure that the benefits of this technology are distributed widely, and that it's inclusive - it includes everyone at every level and group within our society. If we don't do that, we're definitely running the risk that we don't have a net positive outcome. That's why it's really important to address these issues head-on.

Mason: I mean you mentioned a bunch of big tech companies who are looking into AI as a new opportunity for the growth of their business. There's one company in particular that you're extremely complimentary of. That company is Tesla. What are your thoughts on their aspirations? What competitive advantage do you believe Tesla has beyond all the others?

Ford: Well, as I say in the book, Tesla is this strange combination of real capability and genuine advantage, together with enormous hype. Certainly, Elon Musk is one of the people most guilty of hyping artificial intelligence. Tesla has announced what they're calling their 'Full Self-Driving' system. They're now allowing people to download that system and begin to deploy it in their cars. In reality, it's nothing like true self-driving technology. They really are overhyping it, and I think that's quite dangerous: to the extent that people believe these cars are truly capable of driving themselves in all situations, drivers might rely on the system when they really shouldn't.

I have a concern there, but at the same time, it is true that Tesla has got an enormous advantage because they've got hundreds of thousands of cars on the road, equipped with cameras that are continually generating data as they drive. That's an enormous resource for this company. Again, as I said at the beginning, the breakthroughs in artificial intelligence that we've seen over the last decade have occurred because of this confluence of powerful computers, powerful algorithms and enormous amounts of data. Whoever owns lots of data will have an advantage in terms of applying these technologies. Tesla is really the leader in terms of having access to just vast amounts of real-world data coming from cars driving on roads. This is data that they're going to be able to leverage in order to improve their self-driving systems going forward.

I think they do have a real advantage, but at the same time, they really are hyping it and promising things that they don't have yet. I do worry about that. This is what you see in AI. There's always a coupling of real technology, real advantage and real data combined with this hype. Tesla is maybe the single best example of that.

Mason: You're right there. I'm fascinated to learn what your thoughts are on Tesla's AI Day. The way in which that was announced to the world was so playful and so surprising in so many ways. It seems like Elon wants to create humanoid robots, yet at the same time he has spent the last decade warning the world against the development of certain forms of AI. He's certainly bringing about the sorts of developments that create the dystopian future he's warned against. I just want to know your initial gut-instinct thoughts on having seen that announcement recently, Martin.

Ford: Again, this is another great example of reality being coupled with hype. First of all, there is absolutely no doubt that Tesla is one of the absolute leaders in the field of artificial intelligence. They have real technology and they have extremely capable people, some of the top people in the field. No doubt, they are at the forefront of developing AI technology.

At that event you're referring to, Elon Musk brought out a person dressed in a robot suit who danced around. He said that Tesla is going to develop a humanoid robot. He said that it will, for example, go to the grocery store and buy a list of things for you, then bring them back. That's what he said it'll be able to do. That is just fantasy. We are not anywhere remotely close to a technology that would be capable of that. Yet he said Tesla would have a prototype of this humanoid system within a year.

Again, they may have something in a year that can do something but the idea that they would have a truly capable humanoid robot that would be able to autonomously go to the supermarket and do your shopping for you before bringing those items home to you is just pure fantasy.

You've seen the work that Boston Dynamics has been doing. Everyone has seen the videos of the Spot robots, which are kind of scary looking. That company has been working literally for years just to get robots that can walk. It's taken years and years of work to do that, and they do not have robots that can go and do useful things autonomously. Most of the videos you've seen from that company show robots that are remote controlled by a person. The videos do demonstrate these machines' agility - they can walk, dance and so forth - but we're nowhere close to having a humanoid robot that can go out autonomously in the world and do useful things. That's just not going to happen. That is, I think, just one example of absolute extreme hype.

Elon Musk has been guilty of this again and again. Part of it is just an intentional publicity stunt in order to draw attention to his company. Yes, it's a combination of reality and hype. Everyone that is interested in this technology needs to understand that.

Mason: It does make me wonder sometimes whether, when Elon warns of the dangers of AI, he's trying to get into our minds the idea that this stuff is far more imminent than it actually is. If you create panic around how advanced AI is going to be, the public assumes this stuff is going to be that advanced. In reality, on the ground, it's not there at all, is it, in many ways?

Ford: Again, this is another area where Elon has really been over the top. He's talking about the potential for existential threats from artificial intelligence. This is not a concern that I would dismiss and say you don't have to worry about, but it's a concern that will arise only when we have true artificial intelligence - machines that can think at least at the level of human beings and then go beyond us. This is a worry about super-intelligence.

Someday, we might have a machine that is vastly smarter than any human being. Then it's true, we might lose control of that system. It might do things that we don't understand. Even if it's not intentionally malevolent in trying to harm us, it might just do things that we can't anticipate and that harm us in unintended ways. Yeah, that's a legitimate concern. There are a lot of people that are working on that. Nick Bostrom, for example, is one of the most famous people who wrote a book about this. He's at Oxford University and there's a whole institute there that's focused on these kinds of problems.

I think that's a good thing, but clearly, those kinds of concerns lie, at a minimum, decades into the future. Maybe 50 years into the future. Maybe 100 years into the future. It's not something that I think we want to have an overwhelming focus on right now. It's a good thing that certain people are engaged in that, worrying about that and working on that. For the public at large, it's much more important to focus on the near-term concerns around AI, which are the things that I've been talking about. The potential for it to automate a lot of jobs and create inequality. The potential for it to be weaponised. The potential for there to be security threats. The potential for there to be bias and unfairness in the application of artificial intelligence. These are all things that are already happening right now or certainly will become important within the next couple of years. I think it's wrong to focus too much of our concern on really futuristic things that aren't going to be a concern for decades. The risk there is that it distracts us from what we need to be focused on right now.

Mason: So we shouldn't expect to see Ex Machina-style robots any time soon then, Martin?

Ford: I would not. Not for decades, at a minimum.

Mason: In that case, do you think there's any science fiction that captures the nuances of the sorts of AI development that we're seeing today?

Ford: Well, one movie that I worry a lot about is The Matrix. In that movie, you've got this super-intelligent AI that enslaves humanity. That's not what I'm referring to. What I am referring to is this scenario where people essentially become divorced from the real world and they live instead in this matrix; in essence in this kind of virtual reality. One of the fears I have is that as AI and technologies like virtual reality become better and better, there is going to be this kind of alternate universe out there that is going to be very distracting and attractive to us.

Coupled with this, if we don't take the kinds of initiatives that I've been talking about, the real world is going to become much more unequal. Many people are going to be struggling to get ahead, struggling to see a path to succeeding in the real world, if we don't address these issues. That alternate universe - perhaps driven by virtual reality and much more sophisticated video games than we have now, because increasingly this universe is also going to utilise artificial intelligence - is going to become very attractive. I think there's a real risk of people checking out from the real world and going to live in this alternate reality. As a result, they'll become not-very-productive citizens.

I really worry about that. That you're going to see people entering this alternate reality or perhaps getting involved in other things like drugs and so forth. We're going to see a very dystopian outcome if we don't address these issues to make sure that people are thriving in the real world and that they do see a path to a better, improved life. That's why I think it's really important that we have these policies in place.

Mason: In that case, do Mark Zuckerberg's metaverse proclamations fill you with a certain form of dread?

Ford: Yeah, to some extent. You can always rely on Facebook to create dystopia, and I do worry about that. I have to say, on the other side of that, Facebook is also doing very important work in artificial intelligence. They have an organisation called FAIR - Facebook AI Research. Those are people that are really helping drive this technology forward. I hope these innovations will have applications in many positive ways. As I said, this is an important technology to help us solve the problems of the future. But there is a real risk that Facebook is going to be instrumental in creating this alternate reality universe that is going to continue to distract people from doing productive things in the real world. That's the downside of it.

Mason: Well it doesn't necessarily have to be dystopian, does it? In your new book, 'Rule of the Robots', you conclude that there are two possible AI futures. One is that Matrix vision of being enslaved by AI or living in these metaverse environments, but on the other hand it could be more like Star Trek, for example. We could live in a post-scarcity world with material abundance, an end to poverty, and our environmental concerns all addressed by AI. Just for a second, Martin, could you give us a little bit of what that utopian vision might look like?

Ford: Right. That's kind of how I end the book, with these two potential scenarios. I believe we're headed on a path to somewhere in between those two outcomes. Clearly, my worry is that if we just do nothing, we're headed towards something that looks more like that Matrix scenario.

Clearly, I think everyone would agree that the more utopian Star Trek outcome is preferable: one where these technologies are utilised in a way that solves our problems and allows us to live in a world where we are more productive and have more freedom; where no one has to work eight or ten hours a day doing drudgery; where people don't have to do things they dislike. People would have the freedom to engage in things that reward them, whether that's spending more time with their families or in human relationships, becoming an artist, working in their community to help other people, doing other things that are genuinely meaningful to them, or participating in innovation and creating a business that contributes something. We want to create a world where we utilise these technologies to allow people to grow into more productive and more effective human beings and genuinely have a better life.

That's what I think of as the Star Trek scenario. I think we can get on that path, but we need policies and, in some areas, regulation of this technology in order to make sure that that's the path we're on.

Mason: I hate to ask, but right now, which of those two AI futures do you think is most likely?

Ford: I'm an optimist, so I think we're going to shift towards that more optimistic path. It's going to take some work to do that. We can't just do nothing, or we're definitely going to end up in a world that incorporates much of that dystopia. We really are going to need to put policies in place. I do think I see an emerging conversation about this. I see ideas like a universal basic income getting a lot more traction. For example in the United States, Andrew Yang ran for president and brought a lot of attention to this idea of a basic income. I see that as very positive. I think that there are indications that we're going to begin to confront these problems and get on a better path.

Mason: In that case, I hope I get to live in the AI future that will enable me to sit and do podcasts all day. That sounds very appealing to me. In terms of recommendations for our audience, what would you say folks should do now to futureproof themselves against this coming AI future?

Ford: I think definitely educate yourself about the technology. Of course, my book, 'Rule of the Robots' is one good source there. I also have a Twitter feed, @MFordFuture, where I share many articles about artificial intelligence and how it's progressing. It's important to become informed.

In terms of your career or your job, the best advice I can give is to make sure you're not spending most of your time doing things that are fundamentally routine, repetitive and predictable. If you are, your job is at risk of disappearing sometime in the coming years and decades. You want to shift your focus away from things that are fundamentally routine and towards things that are more creative - things that are more people- and relationship-oriented, and so forth. That's certainly one of the most important things you can do to adapt to this technology going forward.

Mason: Well I do have to ask, what if people don't have the choice over what sort of jobs they can do? What if they don't have the ability to transfer from a job that may seem rather repetitive to one that is highly creative? How do they deal with this burgeoning AI future?

Ford: If you genuinely can't make that transition, the best thing is to become an advocate for policies that will address these issues. That's why, again, it's important for people to be familiar with artificial intelligence and its implications. Eventually, these are all issues that are going to have to be brought into the political arena. I think we're going to have to ensure our politicians are working towards a future that works for everyone, including people that as you say, are not able to make that transition. We have to make sure that these individuals are not left behind, and that's going to require policies.

Mason: With that incredibly sound advice, Martin Ford, I just want to thank you for being a guest on the FUTURES Podcast.

Ford: Yes, thank you. It's been a great conversation. Thank you very much for having me.

Mason: Thank you to Martin for showing us the dramatic impact artificial intelligence will have on every aspect of society. You can find out more by purchasing his new book, 'Rule of the Robots: How Artificial Intelligence Will Transform Everything', available now.

If you like what you've heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.

More episodes, live events, transcripts and show notes can be found at FUTURES Podcast dot net. 

Thank you for listening to the FUTURES Podcast.


Credits

If you enjoyed listening to this episode of the FUTURES Podcast you can help support the show by doing the following:

Subscribe on Apple Podcasts | Spotify | Google Podcasts | YouTube | SoundCloud | Goodpods | CastBox | RSS Feed

Write us a review on Apple Podcasts or Spotify

Subscribe to our mailing list through Substack

Producer & Host: Luke Robert Mason

Assistant Audio Editor: Ramzan Bashir

Transcription: Beth Colquhoun

Join the conversation on Facebook, Instagram, and Twitter at @FUTURESPodcast

Follow Luke Robert Mason on Twitter at @LukeRobertMason

Subscribe & Support the Podcast at http://futurespodcast.net
