Intelligence Augmentation w/ Prof. Pattie Maes

BONUS | Dubai Future Forum #02

Listen on Apple Podcasts | Spotify | Google Podcasts | YouTube | SoundCloud | Goodpods | CastBox | RSS Feed

Bonus episode recorded live from the Dubai Future Forum at the Museum of the Future in partnership with the Dubai Future Foundation on 20 November 2024.

Summary

MIT Media Lab’s Prof. Pattie Maes shares her insights on using technology to enhance human potential and agency, developing wearable systems to support cognition and learning, and designing ethical human-centred artificial intelligence.

Guest Bio

Prof. Pattie Maes is the Germeshausen Professor of Media Arts and Sciences at the MIT Media Lab. With a background in Artificial Intelligence and Human-Computer Interaction, her research focuses on human augmentation and how wearable, immersive, and brain-computer systems may assist people with memory, decision-making, and other functions. Netguru selected her for "Hidden Heroes: The People Who Shaped Technology"; Time Magazine included several of her designs in its annual list of inventions of the year; Fast Company named her one of the 50 most influential designers; and the World Economic Forum named her a "Global Leader for Tomorrow."


Show Notes

01:48 Definition of Intelligence Augmentation

04:16 Summary of Research Projects at MIT Media Lab

06:35 How to Use Conversational AI to Enhance Learning

08:41 Using AI to Speak to Our Future Selves

17:19 Ethical Implications of Social AI

24:44 Risks of De-Skilling if AI Undermines Human Agency


Transcript (AI-Generated) 

NOTE: This transcript is AI-generated and unedited. It may contain errors. A human transcription is coming soon.

Pattie Maes: We should think more about how we want to use this technology to help people, and about what human future we want in an era where AI is ubiquitous.

Luke Robert Mason: You're listening to the FUTURES Podcast, live from the Dubai Future Forum at the Museum of the Future, where minds are enhanced through conversation. On this show we meet the scientists, technologists, artists, and philosophers working to imagine the sorts of developments that might dramatically alter what it means to be human.

Some of their predictions will be preferable, others might seem impossible, but none of them are inevitable. My name is Luke Robert Mason, and I'm your host for this session. Now, when we talk about the concept of the cyborg, it's often in relation to how the human mind can adapt to the technological environment of increasing speed and complexity.

In response, how can we enhance memory, learning, creativity, and communication through the new tools and devices at our disposal? Exploring this question is Pattie Maes, a pioneer in the field of artificial intelligence and human-computer interaction. By combining insights from neuroscience and design, she's developing systems that allow us to amplify the essential abilities that we hold most valuable as humans.

So, Pattie Maes, welcome to the Podcast Lounge.

Pattie Maes: Thank you for having me.

Luke Robert Mason: I guess my first question is: how do you define the core of your interest, this concept of not AI, but IA, intelligence augmentation?

Pattie Maes: Yeah. Well, I studied AI over 30 years ago, back when it wasn't all that popular yet.

But after doing my PhD and working on making robots smarter, able to learn and develop skills and so on, I suddenly got this insight that what I wanted to do with my life was not necessarily make robots and machines smarter, but think about how we could use some of those same techniques to help people, to make people smarter and help people reach their full potential.

So, that's what I mean by intelligence augmentation, or IA as opposed to AI.

Luke Robert Mason: I was at IBM Watson a couple of years ago; we ran a live event there, and the IBM Watson team were talking about this same thing, intelligence augmentation, although their approach was more about ensuring that there was a human in the loop when it came to decision-making.

So where do humans figure in the design of intelligence augmentation?

Pattie Maes: What bothers me a little bit right now, with all the recent developments, and really with the whole history of artificial intelligence, is that the field has set this goal for itself to ultimately reach human-level or beyond-human intelligence in machines.

And I think that is not the goal we should be striving for. We should think more about how we want to use this technology to help people, and about what human future we want in an era where AI is ubiquitous. So I try to think a little bit more about that, like what we want to optimize: the agency of people, for example.

We want to optimize human dignity, human well-being, and more, rather than just build smarter machines that can replace us and maybe make us obsolete and more stupid.

Luke Robert Mason: Obsolete and more stupid. That does seem to be the fear anytime we have a conversation about future technology: where do humans figure in this?

Some of the projects you're doing at the MIT Media Lab are focused on memory, cognition, and the ways in which we learn. Can you share some of the projects you're involved with and how you're thinking about this problem?

Pattie Maes: Yes. So we build all sorts of devices, often wearable types of systems, that use a lot of sensors as well as AI techniques, of course, to help people in the moment with things like learning.

For example, imagine that one day kids may have a sort of tutor that is with them all the time, helping them with their curiosity about their environment, with learning about the world, et cetera. And it doesn't necessarily work the way today's chatbots work, where they give you this elaborate answer that makes you think less for yourself.

Instead, maybe it operates like a Socratic tutor, asking clever questions that make the individual think for themselves and in this way discover things and become excited about their environment and about learning. So learning is one big area that we work on, but I'm also very interested in what intelligence augmentation can do for the elderly, for the aging population.

We live in a world where, increasingly, we live longer; we have more older adults and not enough young people to take care of them. And older people really want to stay in their own homes for as long as they can, without having to go to a nursing home or have a person visiting them every day and being in their space.

So we've been looking at how we can build systems that help the elderly with independent living: remembering that you already took your medication this morning, reminding you of a conversation with your daughter where she said she's going to come visit this afternoon, helping you find misplaced objects, and more.
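To make the tutoring idea concrete, here is a minimal sketch of the Socratic-tutor pattern Maes describes, where the model is constrained to ask guiding questions rather than hand over complete answers. It assumes the OpenAI Python SDK and a placeholder model name purely for illustration; the conversation doesn't say which models or tools the Media Lab prototypes actually use.

```python
# A sketch of a Socratic tutoring loop: the system prompt constrains the
# model to reply with guiding questions instead of complete answers.
# The SDK and model name are illustrative assumptions, not the lab's stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_PROMPT = (
    "You are a Socratic tutor. Never give the full answer outright. "
    "Reply with one short, probing question that helps the learner "
    "reason their way to the answer themselves."
)

def tutor_turn(history: list[dict], learner_message: str) -> str:
    """Record the learner's message and return the tutor's next question."""
    history.append({"role": "user", "content": learner_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "system", "content": SOCRATIC_PROMPT}] + history,
    )
    question = reply.choices[0].message.content
    history.append({"role": "assistant", "content": question})
    return question

history: list[dict] = []
print(tutor_turn(history, "Why is the sky blue?"))
# Expected style of reply: "What do you think happens to sunlight
# as it passes through the atmosphere?"
```

Note that the entire design difference between the "elaborate answer" chatbot and the tutor lives in the system prompt; the underlying model can be the same.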

Luke Robert Mason: It sounds like there's an important factor there, another I-word other than intelligence: interaction. It feels like interaction is the key to getting these systems to help us learn. Often our interactions with these systems, say ChatGPT, are very one-way. You know, we put in our question, we get back the answer, but we don't really have a conversation with it.

So how are you developing conversational systems, through voice and feedback, that really encourage that interaction?

Pattie Maes: Yeah, you're absolutely right. I think we've really not paid enough attention to the interaction between people and AI. AI is seen primarily as an engineering problem, and I think we have to think of it as a human design problem.

How do we design AI to fit into our human lives, so that ultimately we are the ones who benefit: people can reach their potential, kids can learn, older adults can live independently, and we're better off? And that is not a trivial design problem. Today's AI is incredibly simple, really, in its interaction, in that you give it a prompt, you ask it a question, and it gives you this long-winded answer that is very complete, blah, blah, blah.

But it discourages the person from thinking for themselves about the issue, because it just gives you the ten points that you should be considering for this particular topic. So I think we should design interactions where the AI engages you more in thinking about the topic at hand, so that people are really more involved, they learn, they become interested and excited.

And it's really more a two-way type of interaction, going back and forth, just like we are doing right now.

Luke Robert Mason: I mean, you're so well known for having written some of the early work on software agents, but what's fascinating about some of your latest work is how you talk about AI agents as characters: as having personality, and being able to embody the voices of those we know, sometimes even our own voices.

So could you tell us a little bit about how we can talk to our future selves?

Pattie Maes: Yeah. What is really interesting about today's AI, conversational AI systems specifically, is that you interact with AI like you interact with another person. And we are learning through our experiments that a lot of what psychologists have learned about people, and interaction between people, applies in this context and can be used in interesting ways, actually.

So in one experiment, for example, we built learning or tutoring agents and gave them faces. For one of these tutoring agents, we chose the face of Elon Musk, who at that time was still a little bit more liked, maybe, on average. Maybe you should cut that out. But we took someone controversial: someone that some people admired and other people maybe didn't like as much.

And we noticed that there was a significant effect in how people rated this tutor, even though they knew that this was not the real Elon Musk. It was a deepfake Elon Musk, and we told them so. In fact, the deepfake Elon Musk introduced himself as "I am a virtual character," and so on.

And what we learned is that the people who loved Elon Musk wanted to learn more about the topic he was teaching them about. They said he was a great tutor and it was a really interesting lesson. They were much more motivated in the topic because they related to this virtual tutor. So these are effects that we can use, or abuse.

They can really cut both ways to influence people. AI is seen as a social entity, and we respond to it the way we respond to other people, for better and for worse. You asked also about Future You, which is another interesting experiment that we did. We realized, well, if you can talk to virtual agents that have faces and so on, could we have you talk to your future self?

This was so as to encourage people to think a little bit more about their long-term future, and to take actions and behave in the interest of that long-term future. We actually have a website that anyone can go to, futureu.media.mit.edu, where you upload a picture of yourself and tell the system a little bit about your interests and your goals for life. It then creates this 60-year-old version of yourself, of your picture, in the form of a chatbot that you can talk to, so you can talk to your older self and learn what they have experienced, what path they took from where you are now to maybe a successful older life, et cetera.

Together with a psychologist from UCLA, we evaluated how this affects young people. And it turns out that it makes people really think more long-term and act more in their own long-term interests when they have had this opportunity to talk to their older self, or a possible older self.

So they are more interested in things like investing for old age, being serious about their education and learning, and more.
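As a rough illustration of the setup Maes outlines (upload a picture, describe your goals, converse with a 60-year-old version of yourself), here is a hypothetical sketch of how such a future-self persona prompt might be assembled. The field names, prompt wording, and age-60 framing are illustrative assumptions, not the Future You project's actual implementation.

```python
# A hypothetical sketch of assembling a "future self" chatbot persona
# from a user's stated goals, in the spirit of the Future You project.
# All names and wording here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    age: int
    interests: list[str]
    life_goals: list[str]

def future_self_system_prompt(user: UserProfile, future_age: int = 60) -> str:
    """Build a system prompt that casts a chat model as the user's older self."""
    return (
        f"You are {user.name} at age {future_age}, looking back on a life that "
        f"began at age {user.age} with these interests: {', '.join(user.interests)}. "
        f"You pursued these goals: {', '.join(user.life_goals)}. Speak in the "
        "first person about the path you took, the setbacks you met, and the "
        "choices that mattered, so your younger self can reason about their "
        "long-term future."
    )

profile = UserProfile(
    name="Luke", age=30,
    interests=["podcasting", "AI research"],
    life_goals=["finish a PhD", "grow the podcast"],
)
print(future_self_system_prompt(profile))  # feed this to any chat model
```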

Luke Robert Mason: That's fascinating. But what's guiding the design of that future self? Is it the aged face that they're empathizing with? Is it the content of the conversation?

Is there empirical work to unpack what it is about the future self that triggers that empathetic moment?

Pattie Maes: Yeah, we actually looked at a variety of different conditions, like just a chatbot, a chatbot with a face, et cetera. And we realized that the more realistic the older future self, the stronger the effect, basically, or the impact.

Luke Robert Mason: Did anyone dislike their future selves?

Pattie Maes: We didn't really ask that question, whether they disliked it. Luckily, you can try the experience multiple times, so if you don't like one version, you can try another.

Luke Robert Mason: You can prototype multiple versions of yourself and let them live different lives.

Pattie Maes: Well, that's part of the idea, that people can really explore possible futures. They can see: what if I became a podcast host?

What if I became an AI developer? What could my life be like? Would I, at the end of my life, think that this was a life worth living, a life that I enjoyed, and so on?

Luke Robert Mason: Well, that's fascinating. I'm currently pursuing a PhD, so I should run one simulation of myself pursuing just the PhD and one pursuing the podcast, and see who ends up happier at the end of 60 years.

It's interesting, because I use a voice clone of myself. Of course, I've got all of this data from the FUTURES Podcast that I can use to clone my voice, and I use it to read academic papers back to myself. A question I have, and I'm not sure if it's something we can answer empirically right now, is: does hearing my own voice read the words on the page aid my memory?

Am I internalizing it differently? I mean, if I were at the MIT Media Lab, what are some of the ways in which you would think about designing experiments to study these sorts of things from an empirical standpoint?

Pattie Maes: Maybe you should come and be a student in my research group.

Luke Robert Mason: That's what I was hoping you would say. I love that suggestion.

Pattie Maes: And we do some related work, actually. We are cloning people's voices with the ElevenLabs software, for example, and we are doing empirical studies to detect whether, when people hear their own voice talk about themselves, that influences and impacts them more than when it is a random stranger's voice.

So, for example, say you want to change your own behavior: you want to exercise more, eat more healthily, et cetera. You have a system talking to you in your own voice, saying, "I am a person who exercises every day, I am fit, and that is part of my identity," and so on. If you hear that in your own voice, does that make more of a difference than if you hear it in a third person's voice?

So these are the kinds of questions that we are looking at.
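The study Maes describes is a classic between-subjects comparison: the same first-person affirmations, delivered either in the participant's own cloned voice or a stranger's. Below is a sketch of how the two conditions might be assigned; `synthesize` is a placeholder for whatever voice-cloning call is used (she mentions ElevenLabs), not the real SDK.

```python
# A sketch of the between-subjects design described above: each participant
# hears the same affirmations in their own cloned voice or a stranger's.
# `synthesize` is a stand-in; a real voice-cloning API call would go there.
import random

AFFIRMATIONS = [
    "I am a person who exercises every day.",
    "Eating healthily is part of my identity.",
]

def synthesize(text: str, voice_id: str) -> bytes:
    """Placeholder TTS: swap in the real voice-cloning API here."""
    return f"[audio {voice_id}: {text}]".encode()  # stub output for the sketch

def assign_voice(participant_id: str, own_voice: str, stranger_voice: str,
                 seed: int = 0) -> str:
    """Reproducibly randomise each participant into one of two conditions."""
    rng = random.Random(f"{seed}:{participant_id}")
    return own_voice if rng.random() < 0.5 else stranger_voice

voice = assign_voice("p001", own_voice="clone-self", stranger_voice="clone-stranger")
stimuli = [synthesize(text, voice) for text in AFFIRMATIONS]
print(voice, len(stimuli))
```

The measured outcome would then be a behavioural or self-report difference between the two groups, which is what such empirical studies compare.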

Luke Robert Mason: I'm interested: when you design an experiment, do you start with those insights from neuroscience and cognitive science first? Or do you start with the technologies that are available? I mean, how does that process work?

Pattie Maes: It's a little bit of a mix of things. But yes, we do read a lot of neuroscience and psychology literature and collaborate with those people, and we look at certain phenomena that have been talked about in that literature and then experiment with those in real prototypes that we test. But I believe that our work also results in new insights in psychology and neuroscience, because we are really building new prototypes, new platforms, that enable the type of studies that previously were not possible.

Luke Robert Mason: When a lot of people hear about this sort of work, working with the human brain and building an interface between technology and the human, they get quite scared of some of the ethical implications: the potential misuses of technology that has a massive effect on the brain. How do you approach ethics when doing these sorts of experiments?

Pattie Maes: That's a very important question. We always think hard about the ethical implications. For one, privacy is always a huge concern, privacy of the data. We very much try to run experiments locally, on edge devices, and keep all the data local, so that ultimately the person themselves, and the agent that runs on their local device, are the only ones that have access to the data, rather than someone else having access to these types of data.

So it's very important that we think about how these types of services are rolled out: who makes them available, where the data lives, who has access to the data, and so on. And I hope that this time around, in comparison with, say, social media and the introduction of the web and personalization on the web, we can be more careful about the choices we make.
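Here is a minimal sketch of the "keep the data local" pattern Maes describes: the assistant queries a model served on the device itself, so prompts and personal memories never cross the network. It assumes an Ollama server running on its default local port as one example; the lab's actual edge stack isn't specified in the conversation.

```python
# A minimal sketch of local-only inference: the assistant talks to a model
# hosted on the device itself, so nothing leaves the loopback interface.
# Ollama and the llama3 model name are illustrative assumptions.
import json
import urllib.request

def local_generate(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted model over the loopback interface."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# E.g. the elder-care memory scenario from earlier in the conversation:
print(local_generate(
    "Today's log: 08:02 medication taken. "
    "Question: did I already take my medication this morning?"
))
```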

Luke Robert Mason: Well, just like the Museum of the Future, the MIT Media Lab feels like one of those special places that gives birth, I guess, to science-fictional ideas. In what way do science fiction, science writing, and artwork inform the sort of work that you do and its design?

Pattie Maes: Obviously my students and I are huge fans of science fiction movies and books and so on, my students maybe more so than me. But we think it's important to not just look at these dystopian or utopian stories of possible futures, but to really prototype these systems and test them empirically with actual people to see what happens, because you can only do so much in terms of predicting what the outcomes will be.

There are often surprises in the work, in seeing what happens and how people respond.

Luke Robert Mason: Well, what the Dubai Future Forum is really interested in, more than anything else, is how we think about the future. So, if there are abilities you believe we should enhance to allow us to think better about the problems we face in the 21st century, what sort of abilities would you look to design tools for?

Pattie Maes: So yeah, I think we should really create a future with AI where we maximize people's critical thinking and agency. I want to avoid us outsourcing thinking to AI. No matter how appealing it may be, it is important that we maintain agency, that we are in charge of our own future and decide on our own future, rather than having others optimize it for us.

Luke Robert Mason: I mean, do you think that's possible? When these technologies become available within the commercial market, there are often agendas driving them. You know, Neuralink, in the way it's promising a device that allows us to connect directly to the internet; it's scary because it's invasive.

But the challenge is that what's driving those motivations is a fear of the sort of world we're living in, one that requires us to be always on, and the idea that attaching technology is what will make us competitive in a working world where AI exists. So how do we contend with the tensions that may arise when cognitive-enhancing devices become available?

Pattie Maes: I think that often, in the world of AI today, what drives AI is very short-sighted, ultimately. The goal is very much an economic goal: making people richer, more efficient, performing at a higher level, and able to deal with more work, basically. So it's really that economic factor that seems to drive all of these developments.

But I think we have to think a little bit harder about the long-term implications of having ubiquitous AI that assists us with all of our thinking. For one, I am worried about de-skilling: people may lose certain skills if they always rely on AI to do all the thinking and the hard work for them.

We see that in experiments we do where we have people write with AI, for example, and two weeks later they don't even remember what they wrote about, because they weren't truly engaged; they didn't learn anything in the process. Another possible danger we should be aware of, of course, is that once you rely too much on AI, you become a potential victim of misinformation, manipulation, et cetera, depending on whoever builds these AI systems.

They may put certain biases into the answers of the AI that slowly steer people in a particular direction of opinion. And there are risks of dehumanization. Even right now, or at least before AI was used widely, we would always go to another person when we needed help with something.

You want a present for your spouse, and you go to a friend or a family member to get some advice; you have some other issue, and you go to a therapist or a friend to get help; if you need help at work, of course, you go to a colleague, and so on. But what we do now is go to AI; we talk to an AI about these issues.

So I am worried about how this will weaken our social networks, when we are increasingly by ourselves, talking to our AI systems, as opposed to connecting with other people. And of course, these AI systems that we connect with are always very accommodating. They never challenge our points of view, et cetera.

So again, we're not learning anything from an AI that just mirrors us and doesn't encourage us to really broaden our points of view and think differently.

Luke Robert Mason: Well, in that case, do you think there are certain human skills, ones that don't require technology, that we should foster and develop in response to AI?

Pattie Maes: Critical thinking is ultimately the most important skill that I think we should try to preserve. We have to be able to think critically and be very mindful, very aware of how we are possibly being manipulated, and very aware of all the information that we come across, and more. But I think, again, that if we design personal AI the right way, it can actually help us with critical thinking, rather than dumbing us down in the process.

Luke Robert Mason: Well, on that wonderful note, I'm excited to see what the MIT Media Lab creates in the future to help us enhance our critical thinking skills. And I want to thank you for joining us on the FUTURES Podcast, live from the Dubai Future Forum. If you like what you've heard, you can find out more by visiting futurespodcast.net. Thank you, Pattie, for joining me at the Podcast Lounge at the Museum of the Future.

Pattie Maes: My pleasure.


Credits

If you enjoyed listening to this episode of the FUTURES Podcast, you can help support the show by doing the following:

Subscribe on Apple Podcasts | Spotify | Google Podcasts | YouTube | SoundCloud | Goodpods | CastBox | RSS Feed

Write us a review on Apple Podcasts or Spotify

Subscribe to our mailing list through Substack

Producer & Host: Luke Robert Mason

Assistant Audio Editor: Ramzan Bashir

Transcription: Beth Colquhoun

Join the conversation on Facebook, Instagram, and Twitter at @FUTURESPodcast

Follow Luke Robert Mason on Twitter at @LukeRobertMason

Subscribe & Support the Podcast at http://futurespodcast.net

Next

Dreaming the Future w/ Pierre-Christophe Gam