Think Like a Futurist w/ Amy Webb

EPISODE #11


Quantitative futurist Amy Webb discusses the importance of trend forecasting, the global challenges faced by modern businesses, and the tools you need for thinking like a futurist.

Amy Webb is a quantitative futurist. She is a professor of strategic foresight at the NYU Stern School of Business and the Founder of the Future Today Institute, a leading foresight and strategy firm that helps leaders and their organizations prepare for complex futures. Founded in 2006, the Institute advises Fortune 500 and Global 1000 companies, government agencies, large nonprofits, universities and startups around the world. Amy was named to the Thinkers50 Radar list of the 30 management thinkers most likely to shape the future of how organizations are managed and led, and won the prestigious 2017 Thinkers50 RADAR Award. Amy’s special area of research is artificial intelligence, and she has advised three-star generals and admirals, White House leadership and CEOs of some of the world’s largest companies.

Find out more: amywebb.io




Transcript 

Luke Robert Mason: You're listening to the FUTURES Podcast with me, Luke Robert Mason. 

On this episode, I speak to quantitative futurist, Amy Webb.

"I would like to see a future in which we all still have agency, and my concern is that we are getting further and further away from a future in which each one of us has the ability to make decisions." Amy Webb, excerpt from interview.

Amy shared her insights into the importance of trend forecasting, the global challenges faced by modern business, and the tools you need for thinking like a futurist. This episode was recorded on location in London, England, where Amy was due to give a keynote presentation.

 Luke Robert Mason: So Amy Webb, you are a futurist. What does that term - futurist - mean to you?

Amy Webb: So in my case, as a futurist I consider myself to be a quantitative futurist, which is to say that I use data - quantitative evidence and qualitative evidence - and use that to model out plausible, probable and possible scenarios in the long term, and then develop strategies around that. So it's a data-driven process.

Mason: So how did you get interested in this thing - the future?

Webb: The future. So the short end of the story is that this is my second career. My first career was as a foreign correspondent. I was living in Tokyo and China in the mid-90s when a lot of the consumer technology that we take for granted today was first being prototyped. So I got to see very early versions of phones that were connected to the internet, phones that had cameras, and I remember thinking how dramatically that technology was going to change everyday life. I continually had challenges convincing the journalists that I was working with that someday in the very near future, we were all going to have the internet in our pocket, and have access to news 24/7, and we were gonna probably have new distribution channels to enable anybody to share news. And by the way, I could take a photo, which would probably mean that I could someday be able to take a video and post it from wherever I happened to be. I got constant feedback from editors saying, "Who would ever publish a grainy photo taken from somebody's phone? Nobody would ever do that - a grainy photo will never run in a newspaper." And I remember saying, "I'm not talking about the physical newspaper, I'm talking about the internet." So I got tired of having those arguments and quite frankly the newsroom was tired of me bringing up those arguments, and we parted ways. I started an R&D lab that was prototyping news features, mostly in the distribution realm, but basically we were working all the time on interesting and different ways to collect and share news. That was all about the future. At the same time I had discovered Alvin Toffler, which then led to all of the futurists from the late 1800s through the 70s or so. I read everything and decided, "Wow, there are people who do this all the time. They think about and model out future scenarios and they do that for all different types of purposes, and that's what I should be doing next."

Mason: So your new book The Signals Are Talking is really the methodology for how we analyse and look at the future. Could you share some of those methodologies that you share in the book?

Webb: Sure. So my methodology is six parts, and it was definitely influenced by other futurists who are in the sort of academic space. My model alternates between what Stanford's D School would have called 'flared and focused thinking.' It's been my observation that when people are thinking about the future - especially when it comes to technology - they tend to focus on just one thing. If they're trying to figure out the future of cars - and I just had a long conversation with an auto company about this - what they're really trying to do is figure out the future of people moving around. They're not actually trying to figure out the future of cars, because that would assume that we will only ever have cars. That narrow thinking results from not going really broad in a methodical way, and going narrow when it makes sense.

So my method is six steps. It starts with hunting down weak signals at the fringe, so these are changes in technology, changes in society, and what I would call The 10 Modern Sources of Change which involve everything from ecology to economics and wealth changes. That allows me to create a map and I call that map a 'fringe sketch', but for people who have done any kind of statistics, it's just a bunch of notes and connections. That essential step - especially when you do it with a team of people - it helps you find all of the different pieces that you otherwise would have missed. It forces you to change the question from, "What's the future of cars?" to, "What's the future of people, pets and objects moving around?"  

From there, the second step is to focus and do pattern recognition, and look for patterns from those signals. At that point you should have different trend candidates. Trends are important because they are waypoints to the future. You know, a lot of times people think identifying trends - that's the whole goal, that's the end. Really that's the beginning, because once you've identified trends you have to do three things. So that's the next couple of steps of the process. One is you have to make sure you didn't screw up. A whole bunch of people get distracted by shiny objects. The example that I like to use is Foursquare and check-in badges. If you can remember way back, many, many years ago, to 2013, when everybody was checking in and earning their badges. Lots of companies invested, lots of companies made custom badges and everybody thought that the badges and the check-ins were the future. That wasn't the future. Location based services, which is really boring - that was the future. That was what was worth paying attention to. That was the trend.

The third step of the process is to focus, to ask yourself a bunch of questions and to go through data, to go through the models to make sure you didn't mess anything up.

Then the fourth step is to narrow once again and think through timing and trajectory, and then you have to take some kind of action. So the fifth and sixth steps have to do with developing a strategy.

So it's a long explanation, but I should say the reason I just explained it all is because as of last month, I have open-sourced all of my IP so all of my research, all of the work I've ever done is now freely available.

Mason: What's the reason? 

Webb: That seems nuts! Why on earth would you do that?

Mason: Well, it seems so many futurists protect their methodologies. Futurists do this magical thinking back in their offices, and yet the first thing they say when they get on the stage is, "No-one understands what I do." There's always this fake mythos that's created around what I like to call the 'mediatised futurist' - the futurist who has a keynote speaking career but doesn't really do the hard graft of dealing with the difficult questions associated with this thing called the future.

Webb: Right - excellent point, and good question. The reason is because - well. I've always thought it strange that people who run governments and businesses are expected to learn how to use a spreadsheet. They're expected to understand basic accounting, and they're not expected to understand how to think like a futurist. I've always thought that was really strange because ostensibly their jobs are the future.

I think we're on a new kind of time horizon with regards to technology. It's our generation that is living through a transition - it just doesn't feel like it. My daughter - who is pretty young - is probably going to be in the last group of human beings who have to learn how to drive. My father - who is in his 70s - is probably going to be in the last generation of people who still have to type.

We're looking at a whole bunch of fundamentally groundbreaking technologies that range from the various facets of artificial intelligence, to genomic editing, to all kinds of automation. All of these things together will fundamentally change what it's like to be a human. At the same time, we are also all living through a geo-politically unstable moment in time. I hope it's a moment. Part of that is the fault of the person running our country - my country - right now. If it was anybody else you could use game theory to sort of model out what might happen next. We're in a situation where we truly don't know what might get Tweeted next or what might happen next, and I'm concerned. My feeling is that, now more than ever, people are fetishising the future and feeling very anxious about it, and I want everybody to make smarter decisions and to get informed, and to use the tools that I use to make better decisions. I see no harm in open-sourcing everything. I see that only as a big benefit, because if we are all using futurist tools and models and we're doing it in a serious way, that will help everybody.

Mason: Do you think some of the interest - or at least the public interest - in notions and possibilities of the future comes from an attraction to shiny objects? How do you extract or remove people from purely that fascination and help them realise that there are things a lot more difficult to navigate? We're told, "The future is going to be awesome." You get these people who stand on stage and they sell these incredible futures. But it always feels like the reason people are so attracted to these futures is because there's something so fundamentally wrong with the present, and this feels like a potential to escape into something that will be better.

Webb: I think you're onto something. I would partially blame it on the pattern recognition parts of our brain that start firing off when we're looking to make sense of something. I think partially what attracts people to tech-utopianism - I think you're absolutely right. I think it's the same reason that we go watch movies in the theatre. It's because we want an escape. Maybe it's also why people go to church. They want the promise of a better tomorrow. But there's also the other side of that coin which is the dystopian visions of the future. There's plenty of people who also stand on stage and talk about the end of the world coming.

A couple of things are going on. As humans, we've always been surrounded by a lot of data. We're especially surrounded by and assaulted by enormous amounts of data today, and the way that our brains are wired is that we're constantly looking for patterns to help us make sense of what's around us, and the easiest way for us to do something with that information once we've recognised patterns is to fit it into a narrative. That's why storytelling is so fundamental to humanity. It's because that's how we pass information. The people who tell these crazy stories about the future - whether they're positive or negative or strange or whatever - you know, it's easy to connect to them and to what they're saying.

But the thing to keep in mind is that I am a professional futurist and I have absolutely no idea what the future is. My job isn't to tell you or to predict what the future is. My job is to figure out, given what we know to be true today - what are the likely paths and what does the probabilistic model show? Then we use that information to make better decisions. But that's not as easily understood as somebody standing on stage with a pretty picture in the background and spaceships flying overhead and Uber-taxiing it - or whatever they're calling it - these five minutes and saying, "Everything is going to be great. Just wait 15 or 20 years for AGI to kick in."

Mason: Why do you think the current state of discussing or framing the future sits within this binary of either, "You're an eternal optimist - AI is going to save us, it's going to make us more intelligent about ourselves" or, "AI is going to be the thing that kills us, if it isn't nano-tech or some sort of thing that falls from space or synthetic biology"? Why do you think it has to sit across these two dichotomies? I've always felt that when Elon and Co. say, "Oh, AI is going to be our last invention," it sometimes feels like it's just really good marketing. I don't think that the technology is quite there yet, but if they instill that fear in people, they believe that the future is closer than it actually is. I think the future itself is being used as a form of leverage in a weird sort of way.

Webb: Yeah that's a really, really good perspective. It's the third step of my methodology - once you've heard something, or you've decided that something - like a technology - is a thing, and even if it's binary at that point, step number three says, "Eviscerate everything that you know. Tear it all apart, and if at the end of poking holes into every single thing you still believe it - and you've got evidence to back it up - then fine." What I would say is that everybody usually has different reasons for offering polarising views, and usually those reasons have to do with some kind of gain, and so that's part of it. The other part of it is we live in a world where information is everywhere, and in the digital realm, attention is currency. It's harder and harder to get people's attention without saying something salacious.

So Marvin Minsky, one of the founders of modern AI and one of the people who coined the term in the 60s, called AI a 'suitcase word', and the reason is because you can pack a lot of stuff in a suitcase. But once you open the suitcase up you can have a thousand different things in it. AI - artificial intelligence - is a suitcase word, because inside that suitcase is anything from machine reading comprehension to deep nets, and machine learning and computational linguistics - I mean there's a lot that's in there. If you try to have a conversation with someone who is not a technologist or who doesn't follow what's happening, their eyes are going to glaze over. Therefore it's either, "AI is going to kill us all," or, "AI is going to save us all." That's what grabs the attention. In reality, AI - artificial narrow intelligence - is already here. We all already use it and interact with it every single day. As with anything in life, the subtleties are what always get missed. But those are usually the most important components to be paying attention to.

Mason: So what is the role, then, of the futurist? To better educate the general public around the language associated with potential new forms of technology? It always feels like there's a language issue, with the example of AI and the suitcase. People are very confused as to what this technology is capable of doing and what it does right now. There's a miscommunication between what constitutes artificial intelligence versus what is essentially intelligence augmentation. How can the futurist better help average Joe or Jane navigate these complex times?

Webb: I see my role as partly educational for that reason. To help the public make sense of technology in their lives and the decisions that we're making with regards to that technology. Part of it is educational, part of it is advisory. So I do advise the United States government, and the military, and different companies. I think that there are sort of dual purposes. There are certainly futurists who work in a consultative capacity and I don't think do the public education. I view myself as a public intellectual as much as anything and I feel like I have an obligation to not just tell people, "This is what I see," but to show them my work. Especially now when everything is potentially considered fake news, the last thing we need is fake futures news. That would be a real problem.

Mason: Do you think we are in a situation where we're being shown a certain degree of fake futures? It goes back to that issue around leverage and the future being used as leverage - either because there's personal gain or there's profitability or there's some sort of political gain there. That's been true since the 60s, when old Kennedy was going, "We're not doing it 'cause it's easy, we're doing it 'cause it's hard," and went off to space to prove that they owned both the future and the present, versus the Russians. I just wonder whether, when either personal agendas, profitability agendas or political agendas collide with the future, the inevitable outcome is fake futures in the same way that we have fake news.

Webb: That's right and that is what we called the AI winter in the 60s. So the answer to your question is, "Yeah" - and that's not good when that happens. So for people who aren't familiar with this already, leading up to the 1960s there was a lot of activity happening with new kinds of computers and computers moving from the first era of computing to the second era of computing. If the first era really just was tabulation, the second was more about computation and complex computation.

There was a lot of activity in the 40s, 50s and 60s around conceptualising a framework where humans could teach machines to think, and so that was the genesis of all this. All the theories were fascinating. Especially now, it's really interesting to go back and read some of those early academic papers about whether or not humans might someday teach machines to think and what the machines might do. Minsky actually had a paper...he obviously had several papers, but one of the papers he wrote talked about whether or not machines could maybe gain consciousness. So there was a lot of really interesting debate and discussion at the same time that computers were getting faster components and the price of components was dropping. We had additional computing power, we had more people who knew what to do with computers, we had the birth of modern computer science as an academic discipline - and then everybody started making a lot of promises. So one of the promises that got made in the United States was - and this is at the height of the Cold War - that artificially intelligent machines could be used to simultaneously translate Russian into English, which would have been a game changer. To sort of monitor conversations that were happening and to simultaneously translate those messages. The ultimate spying tool. But there was no way that that was going to work, so there was a lot of overpromising about the future, a lot of fake news about the future of AI in the 60s, and when a lot of that failed to materialise, all of that exuberance and excitement and most importantly funding dried up. The fake news about the future actually wound up dramatically impacting the future and we set ourselves back.

There's a lot of excitement again and everybody's talking about AI now, and there's a lot of the same exuberance, a lot of the same insane funding cycles, you know.

Mason: So do you think we're due another winter? Another AI winter?

Webb: I mean I would hope not. There's always going to be a pocket of people that push the technology forward. I think at this point it's too big to fail. There's so much funding tied up. China has promised a chunk of its sovereign wealth fund. I don't see AI, the field, going through the same thing it did during that first AI winter. However, I see a lot of people getting distracted by the shiny. The shiny object in this case, I think, is a lot of what you see celebrities talking about, and celebrity technologists talking about. But also, we're heavily influenced by entertainment media and a lot of these images are indelible. So Her - the movie Her -

Mason: - And Black Mirror, and Humans in the UK, Westworld in the US?

Webb: Absolutely. Now some of those, I don't think I've... Westworld, by the way, is my favourite show ever. Most of the Black Mirror episodes are my second favourite. I haven't come across anybody who believes that Westworld is likely in our future. I have, however, heard a bunch of people reference the movie Her, and Samantha - the character in it, on a pretty regular basis, which means that when people think about the prospect of talking to machines, that movie is so stuck in their heads that that's what they've envisioned for the future. That's probably not what the future is going to look like in the near-term but it's a good reminder that we influence the outcomes of the future through effective storytelling.

Mason: Do you think there is a certain memetic power in science fiction that actually underlines certain trajectories towards the future? We have fewer people reading science fiction - more people know Charlie Brooker's Black Mirror than they do William Gibson's Neuromancer.

Webb: You think so?

Mason: Oh yeah, yeah.

Webb: As you were saying that I wasn't thinking of Gibson. I was thinking more of Asimov, or of Philip K. Dick.

Mason: It feels like that guides most individual's thinking, and Black Mirror is an interesting example 'cause it feels so close.

Webb: Yeah. I think the control and nostalgia - they're incredibly powerful feelings and so there's a sense of not having any control when it comes to the future because you don't know exactly what's going to happen next unless you think that we are living in Elon Musk's robot world, right. So that sense of not having control is incredibly powerful and disorientating. It engages our limbic systems and our limbic systems start firing off, and the squishy computers inside of our heads - our brains - enter fight or flight mode and we feel anxious and then we start making...you know. The stories we tell ourselves in our heads are always worse than real life. They always are.

So I think that's one component and if you think about just everyday technology, I would posit that a very small sliver of the general population feels 100% comfortable any time they get a new television or get a new telephone or something. They realise they're not going to break it and they're okay making mistakes and tinkering and fiddling around, and it's not a big deal. I would argue that probably 90% of the population feels some sense of anxiety every time they have to replace their mobile device, their mobile phone, or they have to get a new computer or they have to do something different with email. It fires off that limbic system, and there is this sense that you don't have control. And to be fair - we don't. We don't really control any of the devices in our lives, somebody else does. Amazon does, Google does, Twitter does, Facebook does...pick a company. So I think that's a big piece of it. 

But as you were talking, I'm wondering if we're always yearning for simpler times when we were kids. That's a theme, right? Simpler times when we were kids. I don't know that my life when I was a kid was necessarily any simpler than it is now, but I think we all think that it was. I wonder if part of that storytelling that goes on inside of our heads sort of feeds into that: "Life is going to be much more complicated." Technology is part of every single thing that we do. There is no way to extract it. My hunch is that there is this underlying sense of anxiety that everybody feels because of technology. All the time.

Mason: Anxiety and also depression. Do you think we're in this weird liminal space at the moment where we haven't quite gone through what we were promised and we're not entirely sure as to what may emerge? Do you think it takes us a lot longer to deal with the impact of these devices or these tools in our lives?

Webb: Yes - to what you just said. The answer is yes, but let me explain why. You made me think of a couple of interesting things. One thing that you made me think of was that I was at an event a couple of months ago with Ev Williams, one of the founders of Twitter, and there were about 2000 journalists in the room. One of the things...he didn't address what has become of Twitter. He didn't talk about it, and he didn't address Twitter's impact on geo-politics, he didn't talk about any of it. Okay I understand that. He's on the Board, he's got a fiduciary responsibility to make sure that Twitter doesn't tank because of something he said. But a journalist, finally during the Q and A, did ask him, "Did you ever stop and think that Twitter may be hijacked by bots or by people who would want to spread misinformation?" and his answer to that, I thought, was really telling. His answer was, "It never occurred to us because we weren't thinking about it, we were just trying to build a cool product," right? If I had a nickel for every time I heard someone say, "We're just working on the product right now," that's bullshit. The problem is I either think that's untrue, or wildly irresponsible. I cannot fathom that. Especially because before Twitter, Ev had another project. Do you know what else he founded before that? Have you ever heard of Blogger? It's not as though he had never seen somebody use a free platform to spread ill right around the world.  

My point is we're past a time when you can just work on the product and not think about anything else, because any technology that comes into the media space is subject to misuse and use for good and all of these other things, but you have to start thinking through the second, third, fourth, fifth order implications of whatever it is that you're building. If you've done that and you acknowledge it, but then you choose not to worry about it - fine. But just own up to it, you know.

I kind of wish Alvin Toffler was alive today, you know. Unfortunately he just passed, recently passed. I wish that he was alive today and that he hadn't yet written Future Shock. I wonder what the 2017 version of his book Future Shock would sound like. My gut tells me it would sound a lot like it did in the 60s, right - but probably with more urgency. Humans go through cycles, so it may feel right now like life is moving very fast and we don't have a lot of control, and we feel very anxious and people are making bad decisions. But if you look at a lot of the literature and movies, and shows and stuff that was being written in the 60s, people felt the exact same way. If you go back to the 40s - the same, and the 20s - the same. There's a history of this.

Mason: I wonder if we've always been in this feeling of increasing acceleration. Whether we've always felt like technology, the movie camera and all of these other things have always been in this constant state of flux. I wonder if this is a normal situation. All that changes is the medium of transmission.

Webb: Well but that's an important piece of it. I actually think - and plenty of people would argue with me - I agree that we have always been in this state of flux. However, we have never in human history created this much data. Nor have we in human history had the ability to ingest as much data as we do every single day. So if you think back to the 60s, there was television, there was radio, there were newspapers and there were magazines, and that was it...and books, right...and movies. That was still relatively slow. So you could have breaking newscasts but for the most part if you wanted to find out, you could go to the Washington Post and the New York Times in the United States. Both have archives that are open and easily searchable. If you look at the volume of news being reported about AI when it was new and terrifying and interesting, and the reactions to that were all over the place in science fiction - if you go back, there wasn't a tonne of insanity. There wasn't a tonne of writing. Today, it's inescapable. You cannot get through the day unless you completely unplug from everything, right? Which most people don't do. You cannot get through the day without hearing some kind of news about change, right? Whether that's technological change, or economic change, or disenfranchisement, or something nutty happening with politics somewhere in the world. I think that's the key difference, but that's an important difference because if our sense of change and anxiety is that much more heightened then the stories we tell ourselves about the future get that much crazier, and I think that it has this cascading effect where we wind up having these polarising, binary responses to anything happening to do with technology. Then - at least in my country - all of it gets politicised, and so you wind up with people saying, "Climate change," or, "There is no climate change," or, "AI is coming," or our cabinet officials being AI deniers. You know, you wind up with all kinds of crazy information and thought.

Mason: Is that because we're trying to aim to work at the same speed as capital? So to go back to Ev Williams: they're turning Twitter - which a lot of individuals are saying should be handed over as a public service - into a business that now has to deliver a hundred-X return. But the numbers don't make sense.

Webb: The business model doesn't make sense.

Mason: The business model doesn't make sense. Part of the speed within news and media is because they have to create click-through to actually sell and service the ads. Are we losing something very human to capital? If suddenly Twitter started to slow down in how much return on ad investment it was making, it would slowly but surely die as an organisation. I always feel like the best thing Jack could do is hand it over to the general public and go, "Look -"

Webb: Oh no, no, no. Don't give it to the general public. That would be worse. I think it should become a um -

Mason: Platform cooperative. I'm fully -

Webb: Well, okay so a couple of things -

Mason: It's a public service, it's a public good. It should be used in that way.

Webb: I think, so...there was a worldwide consortium of journalists who have been doing phenomenal investigative work that resulted in something called the Panama Papers and now the Paradise Papers. One of the things that I recently said was that Twitter is the wire service of the 21st century. I did not...and the context around that was that news goes over the transom as quickly as it did at the beginning of the early days of wire services.

However, unlike the AP or Reuters, or the AFP, which only allowed quality journalism that had been vetted and reported and sourced and edited, anybody can put their stuff out through Twitter. That's actually not a good thing. It could be used as a 21st century wire service if there's a global consortium of news organisations that get it, somehow. I don't think it's purchasable by anybody. And they allow the public to continue using it.

However, there are plenty of ways to make sure that networks aren't taken over by botnets and that misinformation doesn't spread. I could literally talk to you for about an hour in very, very deeply technical terms and explain to you exactly how that would work.  

To your question about capitalism versus the future, which I think is actually an interesting debate and sort of right on. Twitter is not a good use case for that because they're not making money outside of a handful of licensing deals, and I'm not sure how sustainable their model is in the longer term. However Google and Amazon, and Tencent in China, and Baidu and Alibaba - there are plenty of companies that are very, very large, that are in the personal and public information business. Now in that case, those are all publicly...well, not the Chinese companies, but in the United States...those are all publicly traded companies, and the economic interests don't always align with what's best for society in the longer term. But you could also argue that in a capitalistic society, a business which has a fiduciary responsibility to shareholders has to put its business interests first. So you could argue that these companies are doing exactly what they are set up to do, and they are doing it well. The challenge is that we now see some of the effects of Silicon Valley essentially operating independently of what the rest of our country, or the rest of the world, is doing.

Mason: You as an individual - beyond the work that you do in terms of predicting other people's or other businesses' future - what's the sort of future that you would like to see?

Webb: That's an easy question to answer. I would like to see a future in which we all still have agency, and my concern is that we are getting further and further away from a future in which each one of us has the ability to make decisions, and that's because we control less and less of the data - our own personal data.

We are further and further removed from the algorithms that mine, refine and process that data, and we have very little insight into how decisions are being made on our behalf - when that's even happening. There's no transparency around how decisions are being made, and that may not sound like a technical issue; however, with all of the technology that you use in your life - whether that is your telephone, your smartphone, your email or the game that you're playing - you have almost no say or control in how you use that device and how that device uses you.

The challenge is that the more sophisticated our technology gets in its approach, the closer we move to a zero UI reality where things happen more seamlessly; the more that we are allowing people to programme machines to make decisions for us. That sci-fi future terrifies me more than anything I have seen on Black Mirror, because that's everyday life. So the best that I can hope for is that everybody starts thinking through the implications of all the technology that we have access to and comes to a unified decision that we are about to enable an enormous tragedy of the commons. We are the commons, right - and that we collectively decide that we want something better for ourselves.

Mason: Well on that note, Amy Webb, thank you for your time.

Webb: Thank you, this was a lot of fun.

Mason: Thank you to Amy for sharing her thoughts on how we can think more critically and deeply about the future. You can find out more by purchasing Amy's books, or downloading her open source forecasting tools at Amy Webb dot io.

If you like what you've heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.

More episodes, transcripts and show notes can be found at FUTURES Podcast dot net.

Thank you for listening to the FUTURES Podcast.

