Podcast: Mission AI – Giving a Global Voice to the AI Discussion with Charlie Oliver and Randi Williams
How are emerging technologies like artificial intelligence shaping our world and how we interact with one another? What do different demographics think about AI risk and a robot-filled future? And how can the average citizen contribute not only to the AI discussion, but AI’s development?
On this month’s podcast, Ariel spoke with Charlie Oliver and Randi Williams about how technology is reshaping our world, and how their new project, Mission AI, aims to broaden the conversation and include everyone’s voice.
Charlie is the founder and CEO of the digital media strategy company Served Fresh Media, and she’s also the founder of Tech 2025, which is a platform and community for people to learn about emerging technologies and discuss the implications of emerging tech on society. Randi is a doctoral student in the Personal Robotics Group at the MIT Media Lab. She wants to understand children’s interactions with AI, and she wants to develop educational platforms that empower non-experts to develop their own AI systems.
Topics discussed in this episode include:
- How to inject diversity into the AI discussion
- The launch of Mission AI and bringing technologists and the general public together
- How children relate to AI systems, like Alexa
- Why the Internet and AI can seem like “great equalizers,” but might not be
- How we can bridge gaps between the generations and between people with varying technical skills
Papers discussed in this episode include:
- Druga, S., Williams, R., Resnick, M., & Breazeal, C. (2017). "Hey Google, is it OK if I eat you?": Initial explorations in child-agent interaction. Proceedings of the 16th ACM SIGCHI Interaction Design and Children (IDC) Conference. ACM. [PDF]
- Druga, S., Williams, R., Park, H. W., & Breazeal, C. (2018). How smart are the smart toys?: Children and parents' agent interaction and intelligence attribution. Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC '18), 231–240. ACM. https://doi.org/10.1145/3202185.3202741 [PDF]
- Williams, R., Vazquez, C., Druga, S., Maes, P., & Breazeal, C. (2018). "My doll says it's OK": Voice-enabled toy influences children's moral decisions. Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC '18). ACM.
You can listen to this episode above or read the transcript below. And don’t forget to check out previous episodes of FLI’s monthly podcast on SoundCloud, iTunes, Google Play and Stitcher.
Ariel: Hi, I am Ariel Conn with The Future of Life Institute. As a reminder, if you’ve been enjoying our podcasts, please remember to take a minute to like them, and share them, and follow us on whatever platform you listen on.
And now we’ll get on with our podcast. So, FLI is concerned with broadening the conversation about AI, how it’s developed, and its future impact on society. We want to see more voices in this conversation, and not just AI researchers. In fact, this was one of the goals that Max Tegmark had when he wrote his book, Life 3.0, and when we set up our online survey about what you want the future to look like.
And that goal of broadening the conversation is behind many of our initiatives. But this is a monumental task that we need a lot more people working on. And there is definitely still a huge communications gap when it comes to AI.
I am really excited to have Charlie Oliver, and Randi Williams with me today, to talk about a new initiative they’re working on, called Mission AI, which is a program specifically designed to broaden this conversation.
Charlie Oliver is a New York based entrepreneur. She is the founder and CEO of Served Fresh Media, which is a digital media strategy company. And, she’s also the founder of Tech 2025, which is a platform and community for people to learn about emerging technologies, and to discuss the implications of emerging tech on our society. The mission of Tech 2025 is to help humanity prepare for, and define what that next technological era will be. And so it was a perfect starting point for her to launch Mission AI.
Randi Williams is a doctoral student in the personal robotics group at the MIT Media Lab. Her research bridges psychology, education, engineering, and robotics, to accomplish two major goals. She wants to understand children’s interactions with AI, and she wants to develop educational platforms that empower non-experts to develop their own AI systems. And she’s also on the board of Mission AI.
Randi and Charlie, thank you both so much for being here today.
Charlie: Thank you. Thank you for having us.
Randi: Yeah, thanks.
Ariel: Randi, we’ll be getting into your work here a little bit later, because I think the work that you’re doing on the impact of AI on childhood development is absolutely fascinating. And I think you’re looking into some of the ethical issues that we’re concerned about at FLI.
But first, naturally we wanna start with some questions about Mission AI. And so for example, my very first question is, Charlie can you tell us what Mission AI is?
Charlie: Well, I hope I can, right? Mission AI is a program that we launched at Tech 2025. And Tech 2025 was launched back in January of 2017. So we’ve been around for a year and a half now, engaging with the general public about emerging technologies, like AI, blockchain, machine learning, VR/AR. And, we’ve been bringing in experts to engage with them — researchers, technologists, anyone who has a stake in this. Which pretty much tends to be everyone, right?
So we’ve spent the last year listening to both the public and our guest speakers, and we’ve learned so much. We’ve been so shocked by the feedback that we’ve been getting. And to your initial point, we learned, as I suspected early on, that there is a big, huge gap between how the general public is interpreting this and what they expect, how researchers are interpreting this, and how corporate America, the big companies, are interpreting this and hope to implement these technologies.
Equally, those three separate entities also have their fears, their concerns, and their expectations. We have seen the collision of all three of those things at all of our events. So, I decided to launch Mission AI to be part of the answer to that. I mean, because as you mentioned, it is a very complicated, huge problem, monumental. And what we will do with Mission AI is address the fact that the general public really doesn’t know anything about the AI and machine learning research that’s happening. And there’s, as you know, a lot of money, globally, being tossed — I don’t wanna say tossed — at AI research; it is heavily funded. And with good reason.
So, we want to do three things with this program. Number one, we want to educate the general public on the AI and machine learning research ecosystem. We happen to believe that it’s crucial for the general public to participate — and I should say what I mean by the general public: that includes technologists. Like 30 to 35 percent of our audience are engineers, and software developers, and people in tech companies, or in companies working in tech. They also include business people, entrepreneurs, students, we have baby boomers, we have a very diverse audience. And we designed it so that we can have a diverse conversation.
So we want to give people an understanding of what AI research is, and that they can actually participate in it. So we define the ecosystem for them, to keep them up to date on what research is happening, and we give them a platform to share their ideas about it, and to have conversations in a way that’s not intimidating. I think research is intimidating for a lot of people, especially academic research. We, however, will be focusing more on applied research, obviously.
The second thing that we want to do is, we want to produce original research on public sentiment, which is a huge thing to take on. But the more that we have grown this community — and we have several thousand people in our community now, we’ve done events here, and in Toronto; we’ve done over 40 events across different topics — we are learning that people are expressing ideas, and concerns, and just things that, I have been told by researchers who come in to speak at our events, surprise them. So, it’s all the more important that we get the public sentiment and their ideas out. So our goal here is to do research on what the public thinks about these technologies, about how they should be implemented, and on the research that is being presented. So a lot of our research will be derivative of already existing research that’s out there.
And then number three, we want to connect the research community, the AI research community, with our community, or with the broader public, which I think is something that’s really, very much missing. And we have done this at several events, and the results are not only absolutely inspiring; everyone involved learns so much. So, it’s important, I think, for the research community to share their work with the general public, and I think it’s important for the general public to know who these people are. There’s a lot of work being done, and we respect the work that’s being done, and we respect the researchers, and we want to begin to show the face of AI and machine learning, which I think is crucial for people to connect with it. And then also, that extends to corporate America. So the research will also be available to companies, and we’ll be presenting what we learn with them as well. So that’s a start.
Ariel: Nice. So to follow up on that a little bit, what impact do you hope this will have? And Randi, I’d like to get your input on some of this as well in terms of, as an AI researcher, why do you personally find value in trying to communicate more with the general public? So it’s sort of, two questions for both of you.
Randi: Sure, I can hop in. So, a lot of what Charlie is saying from the researcher’s side, is a big question. It’s a big unknown. So actually a piece of my research with children is about, well when you teach a child what AI is, and how it works, how does that change their interaction with it?
So, if you were to extend that to something that’s maybe more applicable to the audience — if you were to teach your great, great grandma about how all of the algorithms in Facebook work, how does that change the way that she posts things? And how does that change the way that she feels about the system? Because we very much want to build things that are meaningful for people, and that help people reach their goals and live a better life. But it’s often very difficult to collect that data. Because we’re not huge corporations, we can’t do thousand-person user studies.
So, as we’re developing the technology and thinking about what directions to go in, it’s incredibly important that we’re hearing from the baby boomers, and from very young people, from the scientists and engineers who are maybe in similar spaces, but not thinking about the same things, as well as from parents, teachers, all of the people who are part of the conversation.
And so, I think what’s great about Mission AI is that it’s about access, on both ends.
Charlie: So true. And you know, to Randi’s point, the very first event that we did was January the 11th, 2017, and it was on chatbots. And I don’t know if you guys remember, but that doesn’t seem like a long time ago, but people really didn’t know anything about chatbots back then.
When we had the event, which was at NYU, it sold out in record time, like in two days. And when we got everybody in the room, it was a very diverse audience. I mean we’re talking baby boomers, college students, and the first question I asked was, “How many people in here are involved in some way with building, or developing chatbots, in whatever way you might be?” And literally I would say about 20 to 25 percent of the hands went up.
For everyone else, I said, “Well, what do you know about chatbots? What do you know about them?” And most said, “Absolutely nothing.” They said, “I don’t know anything about chatbots, I just came because it looked like a cool event, and I wanna learn more about it.”
But, by the end of the event, we helped people to have these group discussions and solve problems about the technologies, together. So that’s why it’s called a think tank. At the end of the event there were these two guys who were like 25; they had a startup that works with agencies that develop chatbots for brands. So they were very much immersed in the space. After the event, I would say a week later, one of them emailed me and said, “Charlie, oh my God, that event that you did totally blew our minds. Because we sat in a group with five other people, and one of those people was John. He’s 75 years old. And he talked to us.” Part of the exercise that they had to do was to create a Valentine’s Day chatbot, and to write the conversational flow of that chatbot. And he said that after talking to John, who’s 75 years old, about what the conversation would be, and what it should be, and how it could resonate with real people, and different types of people, they realized they had been building chatbots incorrectly all along. He realized that they were narrowing their conversational flows in a way that restricted their technology from being appealing to someone like John. And they said that they went back, and re-did a lot of their work to accommodate that.
So I thought that was great. I think that’s a big thing in terms of expectations. We want to build these technologies so that they connect with everyone. Right?
Ariel: I’d like to follow up with that. So there’s basically two sides of the conversation. We have one side, which is about educating the public about the current state, and future of artificial intelligence. And then, I think the other side is helping researchers better understand the impact of their work by talking to these people who are outside of their bubbles.
It sounds to me like you’re trying to do both. I’m curious whether you think both are equally challenging or equally easy to address, or whether one side is harder? How do you address both sides, and effect change?
Charlie: That is a great, great question. And I have to tell you that on both sides, we have learned so much, about both researchers, and the general public. One of the things that we learned is that we are all taking for granted what we think we know about people. All of us. We think we’ve got it down. “I know what that student is thinking. I know what that black woman is thinking. I know how researchers think.” The fact of the matter is, we are all changing so much, just in the past two to three years, think about who you were three years ago. We have changed how we think about ourselves and the world so much in the past two years, that it’s pretty shocking, actually. And even within the year and a half that we have been up and going, my staff and I, we sit around and talk about it, because it kind of blows our minds. Even our community has changed how they think about technologies, from January of last year, to today. So, it’s actually extremely, extremely difficult. I thought it would get easier.
But here’s the problem. Number one, again, we all make assumptions about what the public is thinking. And I’m gonna go out on a limb here and say that we’re all wrong. Because they are changing the way that they think, just as quickly as the technologies are changing. And if we don’t address that, and meet that head on, we are always going to be behind, or out of sync, with what the general public is thinking about these technologies. And I don’t think that we can survive. I don’t think that we can actually move into the next era of innovation unless we fix that.
I will give you a perfect example of that. Dr. James Phan co-created the IBM Watson Q&A system. And he’s one of our speakers. He’s come to our events maybe two or three times to speak.
And he actually said to me, as I hear a lot from our researchers who come in, he says, “My God, Charlie, every time I come to speak at your event, I’m blown away by what I hear from people.” He said, “It seems like they are thinking about this very differently.” He says, “If you ask me, I think that they’re thinking far more in advance than we think that they are.”
And I said, “Well, that shocks me.” And so, to give you a perfect example of that, we did an event with Ohio State regarding their Opioid Technology Challenge. And we had people in New York join the challenge, to figure out AI technologies that could help them in their battle against opioid addiction in their state. And I had him come in, as well as several other people, to talk about the technologies that could be used in this type of initiative. And James is very excited. This is what I love about researchers, right? He’s very excited about what he does. And when he talks about AI, he lights up. I mean you’ve just never seen a man so happy to talk about it. So he’s talking to a room full of people who are on the front lines of working with people who are addicted to opioids, or have some sort of personal connection to it. Because we invited people like emergency responders, we invited people who are in drug treatment facilities, we invited doctors. So these are people who are living this.
And the more he talked about algorithms, and machine learning, and how they could help us to understand things, and make decisions, and make decisions for us, the angrier people got. They became so visibly angry that they actually started standing up. This was in December. They started standing up and shouting out to him, “No way, no way can algorithms make decisions for us. This is about addiction. This is emotional.” And it really shocked us.
I had to pull him off the stage. I mean, I didn’t expect that. And he didn’t see it, because he just kept talking, and I think he felt like the more he talked about it, the more excited they would become, like him, but it was quite the contrary: they became angrier. That is the priceless, perfect example of how the conversations that we initiate between researchers and the public are going to continue to surprise us. And they’re going to continue to be shocking, and in some cases, very uncomfortable. But we need to have them.
So, no it is not easy. But yes we need to have them. And in the end, I think we’re all better for it. And we can really build technologies that people will embrace, and not protest.
Ariel: So Randi, I’d like to have you jump in now, because you’ve actually done, from the researcher side, you’ve done an event with Tech 2025, or maybe more than one, I’m not sure. So I was hoping you could talk about your experience with that, and what you gained out of it.
Randi: Yeah, so at that event I was talking about a piece of research I had done, where I had children talk about their perceptions of smart toys. And so this is also, like Charlie was saying, a huge, inflammatory topic because, I don’t know, parents are extremely freaked out. And I think, no offense to the media, but there’s a bit of fear-mongering going on around AI and that conversation. And so, as far as what’s easier, I think the first step, what makes it really difficult for researchers to talk to the public right now, is that we have been so far out of the conversation that the education has gotten skewed. And so it’s difficult for us to come in and talk about algorithms, and machines making decisions, without first dealing with, you know, “this is okay, and it’s not a Terminator kind of thing.” At the end of the day, humans are still in control of the machines.
So what was really interesting about my experience talking with Tech 2025 is that I had all of these different people in the room, a huge variety of perspectives. And the biggest thing to hear was what people already knew. As I was talking and explaining my research, I was hearing their questions, understanding what they understood already, what they knew, and what wasn’t so clear. So one of the biggest things is, when you see an AI system teach itself to play chess, you’re like, “Oh my God, now it’s gonna teach itself to, like, take over a system, and hack into the government, and this and that.” And it’s like, no, no, it’s just chess. And it’s a huge step to get any further than that.
And so it was really great practice for me to try and take people who are in that place, and say, “Well no, actually this is how the technology works, and these are the limitations.” And try to explain, you know, when could this happen, in what particular universe could this happen? Well maybe, like, in 20 years if we find a general AI, then yeah, it could teach itself to solve any problem. But right now, every single problem requires years of work.
And then seeing what metaphors work. What metaphors make sense for an AI scientist who wants to relate to the public? What things click, which things don’t click? And I think another thing that happened, that I really loved, was just thinking about the application space. I’m asking research questions that I think are intellectually interesting for my work. But there was a person from a company who was talking about implementing a skill in Alexa, and how they didn’t know if using one of their characters on Alexa would be weird for a child. Because I was talking about how children look at an Alexa, and they think Alexa’s like a person. So Alexa is an Alexa, and if you talk to another Alexa, that’s a new Alexa. Yeah, they have the same name, but completely different people, right?
So what happens when Alexa has multiple personality disorder? Like, how does a child deal with that? And that was a question that never would have come up, because I’m not writing skills with different characters for children. So, that’s just an example of learning, as an AI scientist, how to listen to what people are trying to understand, and how to give them the education they need. But then also taking, okay, so when you’re at home and your child is doing xyz with Alexa, where are the questions there that you have, that researchers should be trying to answer? So, I don’t know which one is harder.
Charlie: I specifically went after Randi for this event. And I invited her because I had been thinking in my mind for a while that we are not talking about children in AI, not nearly enough. Considering that they’re gonna be the ones in 10 to 15 years who are gonna be developing these things, and this technology and everything. So I said, “You know, I am willing to bet that children are thinking very differently about this. Why aren’t we talking about it?” So, I get online, I’m doing all my, as anyone would, I do all my little research to try to figure it out, and when I came across Randi’s research, I was blown away.
And also, I had her in mind with regards to this because I felt like this would be the perfect test of seeing how the general public would receive research, from a research assistant who is not someone who necessarily has — obviously she’s not someone who has like 20 years of experience behind her, she’s new, she’s a fresh voice. How would she be received? How would the research be received?
And on top of that, to be honest with you, she’s a young black woman. Okay? And in terms of diversity of voices within the research community, and within the AI discussion as a whole, this is something I want to address, aggressively.
So we reached out to the toy companies, we reached out to child psychologists, teachers, students, children’s museums, toy stores, I can’t tell you how many people we reached out to in the greater New York City area.
Randi was received so well that I had people coming up to me, and high-fiving me, saying, “Where did you get her? Where did you find her?” And I’m like, “Well you know, she didn’t drop out of the sky. She’s from MIT.”
But Randi’s feedback was crucial for me too, because I don’t know what she’s getting from it. And we cannot be effective at this if we are not, all of us, learning from each other. So if my researchers who come in and speak aren’t learning, I’m not doing my job. Same with the audience.
Ariel: So, Randi, I’m gonna want to start talking about your research here in a minute, ’cause we’ve just gotten a really great preview of the work you’re doing. But before we get to that, one last question about Mission AI, at least for now, and that is this idea of diversity.
AI is not a field that’s known for being diverse. And I read the press release about this, and the very first thing, in the very first bullet point, about what Mission AI is going to do, was about injecting diversity. And so my question to both of you is, how can we do that better? How can the AI community do that better? And in terms of the dialogue for who you’re reaching out to, as well, how can we get more voices?
Randi: You know, in some ways, it’s like, there’s nothing you can do to not do better. I think what Mission AI is really about is thinking very critically about who’s coming to the table to hear these things. And I’m on the board and, as Charlie said, a black woman; the people who I talk to in AI are people of color, and women, right? So, I hope that with me being a main part of this, and having Charlie also be a main part of it, we have a network that’s both powerful, in terms of having the main players in AI come to the table, but, you know, main players that are also not, I guess, the stereotypical AI scientist that you would think of.
So, what makes this different is who’s leading it, and the fact that we’re thinking about this from the very beginning. Like, “Okay, we’re gonna reach out. We want to recruit research scientists,” so I’m thinking of my peers who are in schools all across the country, and what they’re doing, and how this can be meaningful for them, and how they can, I guess, get an experience in communicating their research with the public.
Charlie: Yeah, I totally agree.
In addition to that, bringing in people who are from different backgrounds, and bringing diversity to the speakers, is very important. But it’s equally as important to have a diverse room. The first thing that I decided when I launched Tech 2025, and the reason that I’ve decided to do it this way, is because I did not want to have a room full of the hoodie crowd, which is, you know, white guys in their 20s with hoodies on. Right? That’s the crowd that usually gets the attention with regards to AI and machine learning. And no offense to them, or to what they’re doing, everyone’s contributing in their own way.
But I go to tech events, as I know you guys do too. I go to tech events here, and in San Francisco, and across the country, and different parts of the world. And, I see that for the most part a lot of these rooms are filled, especially if you talk about blockchain, and cryptocurrency, which we do as well, they’re filled with primarily white guys.
So, I intentionally, and aggressively, made it a point to include as many people from various backgrounds as possible. And it is a very deliberate thing that you have to do, starting with the content. I don’t think a lot of people realize that, because people say to me, “How do you get such diverse people in the room?”
Well number one, I don’t exclude anyone, but also, the content itself asks people from various backgrounds to come in. So, a lot of times, especially in our earlier events, I would make a point of saying, it doesn’t matter who you are, where you’re from, we don’t care if you’re a technologist, or if you are a baby boomer who’s just curious about this stuff, come on in. And I have actually had people in their 60s come to me. I had a woman come to me last year, and she said, “My God, Charlie, I feel like I really can participate in these discussions at your event. I don’t feel like I’m the odd woman out, because I’m older.”
So I think that’s a very important thing, is that, when researchers look at the audience that they’re talking to, they need to see diversity in that audience too. Otherwise, you can reinforce the biases that we have. So if you’re a white guy and you’re talking to an audience full of nothing but white guys, you’re reinforcing that bias that you have about what you are, and the importance of your voice in this conversation.
But when my guests come in to speak, I tell them first and foremost, “You are amazing. I love the work that you do, but you’re not the … The star of the show is the audience. So when you look at them, just know that it’s very important that we get all of their feedback. Right? That we allow them to have a voice.” And it turns out that that’s what happens, and I’m really happy that we’re creating a dialogue between the two. It’s not easy, but I think it’s definitely what needs to happen. And going back to what Randi says, it does need to be deliberate.
Ariel: I’m going to want to come back to this, because I want to talk more about how Mission AI will actually work. But I wanna take a brief pause, because we’ve sort of brought up some of Randi’s work, and I think her work is really interesting. So I wanted to talk, just a little bit about that, since the whole idea of Mission AI is to give a researcher a platform to talk about their work too.
So, one of my favorite quotes ever, is the Douglas Adams quote about age and technology, and he says, “I’ve come up with a set of rules that describe our reactions to technologies. One, anything that is in the world when you’re born, is normal and ordinary and is just a natural part of the way the world works. Two, anything that’s been invented when you’re 15 to 35 is new, and exciting, and revolutionary, and you can probably get a career in it. Three, anything invented after you’re 35 is against the natural order of things.”
Now, personally, I’m a little bit worried that I’m finding that to be the case. And so, one of the things that I’ve found really interesting is, we watch these debates about what the impact of AI will be on future generations. There are technologies that can be harmful, period. And when it comes to understanding whether you’re looking at a technology that can be harmful, versus a technology where you just don’t really know what the future will be like with it, I’m really curious what your take is on how AI will impact children as they develop. You have publications with at least a couple of great titles. One is “Hey Google, is it OK if I eat you?” and another is “My Doll Says It’s OK: Voice-Enabled Toy Influences Children’s Moral Decisions.”
So, my very first question for you is, what are you discovering so far with the way kids interact with technology? Is there a reason for us to be worried? Is there also reason for us to be hopeful?
Randi: So, now that I’m hearing you say that, I’m like, “Man I should edit the titles of my things.”
First, let me label myself as a huge optimist of AI. Obviously I work as an AI scientist. I don’t just study ethics, but I also build systems that use AI to help people reach their goals. So, yeah, take this with a grain of salt, because obviously I love this, I’m all in it, I’m doing a PhD on it, and that makes my opinion slightly biased.
But here’s what I think, here’s the metaphor that I like to use when I talk about AI, it’s kind of like the internet. When the internet was first starting, people were like, “Oh, the Internet’s amazing. It’s gonna be the great equalizer, ’cause everyone will be able to have the same education, ’cause we’ll all have access to the same information. And we’re gonna fix poverty. We’re gonna fix, everything’s gonna go away, because the internet.” And in 2018, the Internet’s kind of like, yeah, it’s the internet, everyone has it.
But it wasn’t a great equalizer. It was the opposite. It’s actually creating larger gaps in some ways, in terms of people who have access to the internet and can do things, and people who don’t have access. As well as, what you know about on the internet makes a huge difference in your experience on it. It also, in some ways, promotes very negative things, if you think about, like, the dark web, modern-day slavery, all of these things, right? So it’s like, it’s supposed to be great, it’s supposed to be amazing. It went horribly wrong. AI is kind of like that. But maybe a little bit different in that people are already afraid of it before it’s even had a chance.
In my opinion, AI is the next technology that has the potential to be a great equalizer. The reason for that is, because it’s able to extend the reach that each person has in terms of their intellectual ability, in terms of their physical ability. Even, in terms of how they deal with things emotionally and spiritually. There’s so many places that it can touch, if the right people are doing it, and if it’s being used right.
So what’s happening right now, is this conversation with children in AI. The toy makers, and the toy companies are like, “We can create a future where every child grows up, and someone is reading to them, and we’re solving all the problems. It’s gonna be great.” And then they say to the parents, “I’m gonna put this thing in your home, and it’s gonna record everything your child says, and then it’s gonna come back to our company, and we’re gonna use it to make your life better. And you’re gonna pay us for it.” And parents are like, “I have many problems with this. I have many, many problems with everything that you’re saying.”
And so, there’s this disconnect between the potential that AI has, and the way that it’s being seen by the public, because people are recognizing the dangers of it. They’re recognizing that the amount of access that it has is, like, astronomical and crazy. So for a second, I’ll talk about the Personal Robots Group. In the MIT Media Lab, in the Personal Robots Group, we specifically build AI systems that are humanistic. Meaning that we’re looking at the way that people interact with their computers, and with cellphones, and it’s very cagey. It’s very transactional, and in many ways it doesn’t help people live their lives better, even though it gives them more access. It doesn’t help them achieve all of their goals. Because, you know, in some ways it’s time consuming. You see a group of teenagers, they’re all together, but they’re all texting on phones. It’s like, “Who are you talking to? Talk to your friends, they’re right there.” But that’s not happening. So we build systems specifically that try to help people achieve their goals. One great example of that: we found educational research that says that your vocabulary at the age of five is a direct predictor of your PSAT score in the 11th grade. And as we all know, your PSAT score is a predictor of your SAT score. Your SAT score is a predictor of your future income, and potential in life, and all these great things.
So we’re like, “Okay, we wanna build a robot that helps children, who may not have access for any number of reasons, be able to increase their vocabulary size.” And we were gonna use AI that can personalize to each child, because every child’s different. Some children want the competitive robot that’s gonna push them; some children want the friendly robot that’s gonna work with them, and ask them questions, and put them in the perspective of being a teacher. And AI is the only thing that can help there: in a world where classroom sizes are getting bigger, and where parents can’t necessarily spend as much time at home, those are the spaces where we’re like, AI can help. And so we build systems that do that.
We don’t just think about teaching this child vocabulary words. We think about how the personality of the robot is shaping the child as a learner: how the robot is teaching the child to have a growth mindset, and teaching them to persevere, to continue learning better. So those are the kinds of things that we want to instill, and AI can do that.
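For readers who want to see what “AI that can personalize to each child” might look like in its simplest form, here is a minimal illustrative sketch: an epsilon-greedy bandit that picks a robot teaching style based on each child’s observed engagement. The style names, the 0-to-1 engagement score, and the StylePersonalizer class are all invented for illustration; this is not the Personal Robots Group’s actual system.

```python
# Illustrative sketch only: a toy epsilon-greedy bandit that adapts a
# tutoring robot's persona to one child. All names and the engagement
# scale are hypothetical, not the Media Lab's real implementation.
import random

STYLES = ["competitive", "friendly", "teacher-learner"]  # candidate robot personas

class StylePersonalizer:
    def __init__(self, epsilon=0.2):
        self.epsilon = epsilon                   # how often to try a random style
        self.totals = {s: 0.0 for s in STYLES}   # summed engagement per style
        self.counts = {s: 0 for s in STYLES}     # sessions run per style

    def choose(self):
        # Occasionally explore a random persona; otherwise exploit the
        # persona with the best average engagement so far.
        if random.random() < self.epsilon or not any(self.counts.values()):
            return random.choice(STYLES)
        return max(STYLES, key=lambda s: self.totals[s] / max(self.counts[s], 1))

    def update(self, style, engagement):
        # engagement: 0.0 (checked out) to 1.0 (fully engaged), e.g. derived
        # from words attempted or time on task during the session.
        self.totals[style] += engagement
        self.counts[style] += 1

# Simulate one child who happens to respond best to the friendly persona.
child = StylePersonalizer()
for _ in range(30):
    style = child.choose()
    engagement = 0.9 if style == "friendly" else 0.4  # stand-in for real observations
    child.update(style, engagement)
print(child.counts)  # "friendly" should dominate after a few sessions
```

Even this toy version captures the point Randi makes next: what the system optimizes for (engagement here, but it could just as well be perseverance or growth-mindset behaviors) is a design choice that shapes the child, so it deserves as much scrutiny as the algorithm itself.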
So, when people say, “AI is bad, it’s evil.” We’re like, “Well, we’re using a robot that teaches children that working hard is more important than just being magically smart.” ‘Cause having a non-growth mindset, like, “I’m a genius,” can actually be very limiting ’cause when you mess up, then you’re like, “I’m not a genius. I’m stupid.” It’s like, no, work hard, you can figure things out.
So, personally, I think that kind of AI is extremely impactful, but the conversation that we need to have now is how do we get that into the public space, in an appropriate way. So maybe huge toy companies shouldn’t be the ones to build it, because they obviously have a bottom line that they’re trying to fill. Maybe researchers are the ones who wanna build it. My personal research is about helping the public build their own AI systems to reach these goals. I want a parent to be able to build a robot for their child, that helps the child better reach their goals. And not to replace the parent, but, you know, there are just places where a parent can’t be there all the time. Play time, for example: how can the parent, in some ways, engineer their child’s play time so that they’re helping the child reinforce having a growth mindset, and persevering, and working hard, and maybe cleaning up after themselves? There are all these things.
So if children are gonna be interacting with it anyways, how can we make sure that they’re getting the right things out of that?
Ariel: I’d like to interject with a question real quick. You’d mentioned earlier that parents aren’t psyched about having all of their kids’ information going back to toy companies.
Randi: Yeah.
Ariel: And so, I was gonna ask if you see ways in which AI can interact with children that don’t have to become basically massive data dumps for the AI companies? Is what you’re describing a way in which parents can keep their children’s data private? Or would all that data still end up going someplace?
Randi: The way that the AI works depends heavily on the algorithm. And what’s really popular right now are deep learning algorithms. And with deep learning algorithms, basically, instead of figuring out every single rule, instead of hard-coding every single possible rule and situation that someone could run into, we just throw a lot of data at it, and the computer figures out what we want at the end. So you tell it what you have at the beginning, you tell it what you want at the end, and then the computer figures out everything in between.
That means you have to have like massive amounts of data, like, Google amounts of data, to be able to do that really well. So, right now, that’s the approach that companies are taking. Like, collect all the data, you can do AI with it, and we’re off to the races.
The systems that we’re building are different because they rely on different algorithms than ones that require huge amounts of data. So we’re thinking about how we can empower people so that … You know, it’s a little bit harder, you have to spend some time, you can’t just throw data at it, but it allows people to have control over their own system.
I think that’s hugely important. Like, what if Alexa wasn’t just Alexa; Alexa was your Alexa? You could rename her, and train her, and things like that.
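To ground the contrast Randi draws between hard-coding rules and throwing data at a model, here is a tiny editorial sketch. The scikit-learn pipeline and the toy “is this a music request?” task are our own illustrative assumptions, not code from her lab; the point is only that the model recovers the rule from examples instead of being told it.

```python
# Illustrative sketch: a hand-coded rule vs. the same rule learned from examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Approach 1: hard-code every rule yourself.
def is_music_request_rules(text):
    return any(word in text.lower() for word in ("play", "song", "music"))

# Approach 2: "tell it what you have at the beginning, tell it what you
# want at the end," and let the model figure out the middle.
examples = ["play some jazz", "play my favorite song", "play music please",
            "what's the weather", "set a timer", "turn off the lights"]
labels = [1, 1, 1, 0, 0, 0]  # 1 = music request, 0 = not

model = make_pipeline(CountVectorizer(), DecisionTreeClassifier())
model.fit(examples, labels)  # the tree rediscovers "play" as the telltale word

print(is_music_request_rules("play my workout mix"))  # True, by our hand-written rule
print(model.predict(["play my workout mix"])[0])      # 1, learned purely from the data
```

Deep learning scales this same move up to millions of examples, which is why the big companies collect so much data; the data-lean alternatives Randi mentions trade some of that automation for user effort and control.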
Charlie: So, to Randi’s point, I mean I really totally agree with everything that she’s saying. And it’s why I think it’s so important to bring researchers, and the general public, together. Literally everything that she just said, it’s what I’m hearing from people at these events. And the first thing that we’re hearing is that people, obviously they’re very curious, but they are also very much afraid. And I’m sometimes surprised at the level of fear that comes into the room. But then again, I’m not, because the reason, I think anyway, that people feel so much fear about AI, is that they aren’t talking about it enough, in a substantive way.
So they may talk about it in passing, they may hear about it, or read about it online. But when they come into our events, we force them to have these conversations with each other, looking each other in the eye, and to problem solve about this stuff. And at the end of the evening, what we always hear from so many people is, number one, that they realize it’s not as bad as they thought it was.
So there’s this realization that once they begin to have the conversations, and begin to feel as if they can participate in the discussion, then they’re like, “Wow, this is actually pretty cool.” Because part of our goal is to help them to understand, to Randi’s point, that they can participate in developing these technologies. You don’t have to have an advanced degree in engineering, and everything. They’re shocked when I tell them that, or when they learn it for themselves.
And the second thing, to Randi’s point, is that people are genuinely excited about the technologies, after they talk about it enough to allow their fears to dissipate. So there’s this immediate emotional reaction to AI, and to the fear of data, and it’s a substantive fear, because they’re being told by the media that they should be afraid. And to some degree, obviously, there is a big concern about this. But once they are able to talk about this stuff, and to do the exercises, and to think through these things, and to ask questions of the guest speakers and researchers, they then start asking us, and emailing us, saying, “What more can I do? I wanna do more. Where can I go to learn more about this?”
I mean we’ve had people literally up-skill, just go take courses in algorithms and everything. And so one of the things that we’ve done, which is a part of Mission AI, is we now have an online learning series called Ask the Experts, where we will have AI researchers answer questions about things that people are hearing and seeing in the news. So we’ll pick a hot topic that everyone is talking about, or that’s getting a lot of play, and we will talk about that from the perspective of the researcher. And we’ll present the research that either supports the topic, or the particular angle that the reporter is taking, or refutes it.
So we actually have one coming up on algorithms, and on YouTube’s algorithm. It’s called Reverse Engineering YouTube’s Algorithms, and it talks about how the algorithms are causing the YouTube creators a lot of anxiety, because they feel like the algorithm is being unfair to them, as they put it. And that’s a great entry point for people, for the general public, to have these discussions. So researchers will be answering questions that I think we all have.
Ariel: So, I’m hesitant to ask this next question, because I do like the idea of remaining hopeful about technology, and about AI. But I am curious as to whether or not you have found ethical issues regarding children’s interactions with artificial intelligence, or with Alexa, or any of the other AIs that they might be playing with?
Randi: Of course there are ethical issues. So, I guess to talk specifically about the research: I think there are ethical issues, but they raise more questions than answers. So, in the first study that we did, the “Hey Google, is it OK if I eat you?” study, we would see things like, some of the older children thought that Alexa was smarter than them, because it could answer all of their questions. But then conversely, the younger children would say, “Well it’s not smarter than me, because it doesn’t know what my favorite song is,” or it doesn’t know about some TV show that they watch. And so, that led us to ask the question, well, what does it mean when a child says that something is more intelligent than them?
And so we followed up with a study that was also recently published. So we had children compare the intelligence of a mouse, to the intelligence of a robot, to their own intelligence. And the way that we did this was, all three of them solved a maze. And then we listened to the way that children talked about each of the different things as they were solving the maze. So first of all, the children would say immediately, “The robot solved it the best. It’s the smartest.” But what we came to realize, was that, they just thought robots were smart in general. Like that was just the perception that they had, and it wasn’t actually based on the robot’s performance, because we had the mouse and the robot do the exact same performance. So they would say, “Well the mouse just smells the cheese, so that’s not smart. But the robot, was figuring it out, it had programming, so it’s very smart.”
And then when they looked at their own intelligence, they would be able to think about, and analyze their strategy. So they’re like, “Well I would just run over all the walls until I found the cheese,” or, “I would just, try not to look at places that I had been to before.” But they couldn’t talk about the robot in the same way. Like, they didn’t intellectually understand the programming, or the algorithm that was behind it, so they just sort of saw it as some mystical intelligence, and it just knew where the cheese was, and that’s why it was so fast. And they would be forgiving of the robot when it made mistakes.
And so, what I’m trying to say, is that, when children even say, “Oh that thing is so smart,” or when they say, “Oh I love my talking doll,” or, “Oh I love Alexa, she’s my best friend.” Even when they are mean to Alexa, and do rude things, a lot of parents look at that and they say, “My child is being brainwashed by the robots, and they’re gonna grow up and not be able to socialize, ’cause they’re so emotionally dependent on Alexa.”
But in our research, that study and the one that we just did on children’s conformity, what we’re finding is that children behave very differently when they interact with humans than when they interact with these toys. And even though they are so young, ’cause we work with children from four to ten years old, even if they’re four years old and can’t verbalize how the robot is different, their behavior is different. So, at some subconscious level, they’re acknowledging that this thing is not a human, and therefore there are different rules. The same way that they would if they were interacting with their doll, or if they were interacting with a puppy, or a piece of food.
So, people are very freaked out, because they’re like, “Oh these things are so lifelike, and children don’t know the difference, and they’re gonna turn into robots themselves.” But mostly what I’ve seen in my research is that we need to give children more credit, because they do know the differences between these things, and they’re very curious and explorative with them. Like, we asked a six-year-old girl, “What do you want to build a robot for, if you were to build one?” And she was like, “Well I want one to go to countries where there are poor people, and teach them all how to read and be their friend, because some people don’t have friends.” And I was just like, “That’s so beautiful. Why don’t you grow up and start working in our lab now?”
And it’s very different from the kind of conversation that we would have with an adult. The adult would be like, “I want a robot that can do all my work for me, or that can fetch me coffee or beer, or drive my car.” Children are on a very different level, and that’s because they’re like native to this technology. They’re growing up with it. They see it for what it is.
So, I would say, yes there are ethical issues around privacy, and yes we should keep monitoring the situation, but, it’s not what it looks like. That’s why it’s so important that we’re observing behavior, and asking questions, and studying it, and doing research that concretely can sort of say, “Yeah, you should probably be worried,” or, “No, there’s something more that’s going on here.”
Ariel: Awesome, thank you. I like the six-year-old’s response. I think everyone always thinks of children as being selfish too, and that’s a very non-selfish answer.
Randi: Yeah. Well some of them also wanted robots to go to school for them. So you know, they aren’t all angels, they’re very practical sometimes.
Ariel: I want to get back to one question about Mission AI that I didn’t get a chance to ask. And that’s sort of the idea of what audiences you’re going to reach with it, how you’re choosing the locations, and what your goals specifically are for these initial projects?
Charlie: That’s a question, by the way, that I have struggled with for quite some time. How do we go about doing this? It is herculean; I can’t reach everyone. You have to have some sort of focus, right? It actually took several months to come to the conclusion that we came to. And that only happened after, ironically, research was published last month on how AI automation is going to impact specific jobs, or specific sectors, in three states that are aggressively trying to address this now, and trying to educate their public now about what this stuff is.
And from what I’ve read, I think these three states, in their legislation, feel like they’re not getting the support, maybe, that they need or want from their federal government. And so they figured, “Let’s figure this out now, before things get worse, for all we know. Before people’s concerns reach a boiling point, and we can’t then address it calmly, the way we should.” So those states are Arizona, Indiana, and northeast Ohio. And all three, this past month, released these reports. And I thought to myself, “Well, where’s the need the most?” Because there are so many topics here that we can cover with regards to research in AI, and everything. And this is a constant dialogue that I’m having also with my advisors, and our advisors, and people in the industries. So the idea of AI and jobs, and the possibility of AI sort of decimating millions of jobs: we’ve heard numbers all over the place; realistically, yes, jobs will go away, and then new jobs will be created. Right? It’s what happens in between that is of concern to everyone. And so one of the things in making this decision that I’ve had to look at is: what am I hearing from the community? What are we hearing that is of the greatest concern from both the general public and the executives, and just in general, even in the press? What is the press covering exhaustively? What’s contributing to people’s fears?
And so we’ve found that it is without a doubt, the impact of AI on jobs. But to go into these communities, where number one, they don’t get these events the way we get them in New York and San Francisco. We were never meant to be a New York organization. It was always meant to launch here, and then go where the conversation is needed. I mean, we can say it’s needed everywhere, but there are communities across this country where they really need to have this information, and this community, and in their own way. I’m in no way thinking that we can take what we do here in New York, and retrofit for every other community, and every other state. So this will be very much a learning process for us.
As we go into these different states, and we take the research that they have done on what they think the impact of AI and automation will be on specific jobs, we will be doing events in their communities, and gathering our own research, and trying to figure out the questions that we should be asking of people at these events that will offer insight for them, for the researchers, and for the legislators.
The other thing that I would say is that we want to begin to give people actionable feedback on what they can do. Because people right now are very, very much feeling like, “There’s gotta be something else that I can do.” And understand that there’s a lot of pressure.
As you know, we’re at an all-time low with regards to unemployment. And the concern of the executive today isn’t, “Oh my God, we’re going to lose jobs.” It’s, “Oh my God, how do I fill these jobs?” And so, they have a completely different mindset about this. And their goal is, “How do we upskill people? How do we prepare them for the jobs that are there now, and the ones that are to come?”
So, the research will hopefully touch on that as well, because that is huge. And I don’t think that people are seeing the opportunities that are available to them in these spaces, and in adjacent spaces, to develop the technologies. Or to help define what they might be, or to contribute to the legislative discussion. That’s another huge thing that we are seeing as a need.
Again, we want this to fill a need. I don’t want to in any way dictate something that’s not going to be of use to people. And to that end, I welcome feedback. This is an open dialogue that we’re having with the community, and with businesses, and with, of course, our awesome advisors, and the researchers. This is all the more reason, too, why it’s important to hear from the young researchers. I am adamant about bringing in young researchers. I think they are chomping at the bit to sort of share their ideas, and to get out there some of the things that they may not otherwise be able to share.
That’s pretty much the crux of it, is to meet the demand, and to help people to see how they can participate in this, and why the research is important. We want to emphasize that.
Ariel: A quick follow up for Randi, and that is, as an AI researcher what do you hope to get out of these outreach efforts?
Randi: As AI researchers, we often do things that are public facing. So whether it be blog posts, or videos, or actually recruiting the public to do studies. Like recently we had a big study that happened in the lab, not in my group, but it was around the ethics of self-driving cars. So, for me, it’s just going out and making sure that more people are a part of the conversation than typically would be. Because, at the end of the day, I am based at MIT. So the people who I am studying are a select group of people. And I very much want to use this as a way to get out of that bubble, and to reach more people, hear their comments, hear their feedback, and design for them.
One of the big things I’ve been doing is trying to go, literally out of this country, to places where everyone doesn’t have a computer in their home, and think about, you know “Okay, so where does AI education, how does it make sense in this context?” And that’s what I think a lot of researchers want. ‘Cause this is a huge problem, and we can only see little bits of it as research assistants. So we want to be able to see more and more.
Charlie: I know you guys at The Future of Life Institute have your annual conference on AI, and you produced the document a year ago, with 100 researchers or scientists, on the Asilomar Principles.
Ariel: Yup.
Charlie: We took that document, that was one of the documents that I looked at, and I thought, “Wow this is fascinating.” So these are 23 principles, that some of the most brilliant minds in AI are saying that we should consider, when developing these technologies. Now, I know it wasn’t perfect, but I was also taken aback by the fact that the media was not covering it. And they did cover it, of course they announced it, it’s big. But there wasn’t any real critical discussion about it, and I was alarmed at that. ‘Cause I said, “This should be discussed exhaustively, or at least it should be sort of the impetus for a discussion, and there was none.”
So I decided to bring that discussion into the Tech 2025 community, and we had Dr. Seth Baum, who is the executive director at the Global Catastrophic Risk Institute, come in and present what these 23 principles are, and his feedback on them, and he did a quick presentation. It was great. And then we turned two problems over to the audience, and one was, what is the one thing in this document that you think is so problematic that it should not be there? And number two, what should be there in its place?
It turned out to be a very contentious, really emotional discussion. And then when they came up with their answers, we were shocked at the ideas that they came up with, and where they felt the document was the most problematic. One group came up with the solution that won the evening; sometimes we give out prizes depending on what it is, or we’ll ask the guest speaker to pick the solution that resonated the most with him. The one that resonated the most with Seth was a solution that he had never even considered, and he does this for a living, right?
So we hear that a lot from researchers, to Randi’s point. We actually hear from researchers who say, “My God, there are people coming up with ideas that I haven’t even considered.” And then on top of that, we asked people, well, what do you think about this document? Now, this is no offense to the people who came up with this document, but they were not happy about it. And they all expressed that they were really concerned about the idea that anyone would be dictating what the morals or ethics of AI, or algorithms, should be. Because the logical question is, whose morals, whose ethics, who dictates it, who polices it? That’s a problem.
And we don’t look at that as bad. I think that’s great, because that is where the dialogue between researchers, and the community, and the general public, that’s where, to me, it becomes a beautiful thing.
Ariel: It does seem a little bit unfortunate, since the goal of the document was, in part, to acknowledge that you can’t just have one group of people saying, “This is what morals should be.” I’m concerned that people didn’t like it because, it sounds like, it was misinterpreted, I guess. But that happens. So I’m gonna ask one last round-up question to both of you. As you look towards a future with artificial intelligence, what are you most worried about, and what are you most excited about?
Randi: So, I’m most worried that a lot of people won’t have access to the benefits of AI until, like 30 years from now. And I think, we’re getting to the point, especially in business where AI can make a huge difference, like a huge difference, in terms of what you’re able to accomplish. And I’m afraid for that inequality to propagate in the wrong ways.
I’m most excited about the fact that, you know, at the same time as progress towards technologies that may broaden inequalities, there’s this huge push right now, for AI education. So literally, I’m in conversations with people in China, because China just made a mandate that everyone has AI education. Which is amazing. And in the United States, I think all 50 states just passed a CS requirement, and as a result, IEEE decided to start an AI K-12 initiative.
So, you know, as one of the first people in this space of AI education, I’m excited that it’s gaining traction, and I’m excited to see, you know, what we’re gonna do in the next five, ten years, that could really change what the landscape looks like right now.
Charlie: My concerns are pretty much the same with regards to who will be leveraging the technologies the most, and who will have control over them, and whether the algorithms will actually be biased or not. But I mean, right now, it’s unfortunate, but given the course we’re on, especially when we look at what’s happening now, and people realizing what’s happening with their data, my concern is that if we don’t reverse course, meaning become far more conscientious of what we’re doing with our own data, and how to engage companies, and how to help consumers to engage companies in discussions on what they’re doing and how they’re doing it, we may not be able to avoid hitting that brick wall. And I see it as a brick wall. Because if we get to the point where only a few companies control all the algorithms of the world, or whatever you wanna say, I just think there’s no coming back from that. And that’s a very real fear that I have.
In terms of the hope, I think the thing that gives me hope, what keeps me going, and keeps me investing in this, and growing the community, is that, I talk to people and I see that they actually are hopeful. That they actually see that there is a possibility, a very real possibility, even though they are afraid… When people take time out of busy schedules to come and sit in a room, and listen to each other, and talk to each other about this stuff, that is the best indication that those people are hopeful about the future, and about their ability to participate in it. And so based on what I’m hearing from them, I am extremely hopeful, and I believe that there is a very huge opportunity here to do some incredible things, including helping people to see how they can reinvent the world.
We are being asked to redefine our reality, and I think some people will get that, some people won’t. But the fact that that’s being presented to us through these technologies, among other things, is to me, just exciting. It keeps me going.
Ariel: All right. Well, thank you both so much for joining us today.
Charlie: Thank you.
Randi: Thank you for having us.
Ariel: As I mentioned at the beginning, if you’ve been enjoying the podcasts, please take a moment to like them, share them, follow us on whatever platform you’re listening to us on. And, I will be back again next month, with a new pair of experts.
[end of recorded material]