Podcast

We ask Dr. Sidney D'Mello, a cognitive and computer scientist, about his AI-assisted research on the connection between our emotions and how we learn, in this Pulsar podcast brought to you by #MOSatHome. We ask questions submitted by listeners, so if you have a question you'd like us to ask an expert, send it to us at sciencequestions@mos.org.

Don’t miss an episode – subscribe to Pulsar on Apple Podcasts or Spotify today!


Transcript

ERIC: From the Museum of Science in Boston, this is Pulsar, a podcast where we look for answers to the most intriguing questions we get from our visitors. When we talk about artificial intelligence, we're often asked about its capabilities. What can it do for me? Well, it turns out that the answer can be as basic as helping you comprehend this podcast, or the book you'll read tonight, or the classroom lecture you'll go to tomorrow morning. My guest today is Dr. Sidney D'Mello, a cognitive and computer scientist at the University of Colorado Boulder. Dr. D'Mello, thank you so much for joining me.

SIDNEY: Thanks. Great to be here.

ERIC: So your research focuses on how people learn. So to start with, why is it so important to understand that?

SIDNEY: My research actually focuses on understanding the mental processes that occur while people are learning, which are essentially non-obvious. They're kind of hidden, right? So the question is: what are the ways you can elucidate what's going on in somebody's mind? Because learning essentially involves thinking and cognition, understanding the cognitive processes will help us develop more effective learning approaches.

ERIC: So are these kind of like everyday emotions, with words we can attach to them, when you're talking about the different approaches and the different effects while you're learning?

SIDNEY: That's an excellent question. Actually, no, it's surprisingly not the case. When we think about everyday emotions, we think about the so-called 'big six' basic emotions: anger, fear, sadness, happiness, surprise, disgust. But realistically, when you're in a learning session, why would you really be feeling sad, or disgusted? In fact, when we think about learning, we've realized that everything we knew about emotions has to be rethought. So we actually have things like boredom, unfortunately, frustration, curiosity, interest, confusion, pride, hope, relief, anxiety. You almost need a whole different science of emotions to understand emotions during learning.

ERIC: And I imagine you need a whole different way to approach measuring these, because you're not really asking people to reflect on them as they're learning. You're trying to use parameters. What are those parameters? How can you tell what emotion someone is experiencing, if they're this subtle?

SIDNEY: So typically, emotions occur in response to events, right? Somebody cuts you off on the road, and you're angry, or something like that. But in the case of learning, sometimes you're learning while reading a textbook, or watching a video lecture, right? In those contexts, it's extremely difficult to measure emotions, because there's not a lot going on in the face, for example. So we look at things that are more hidden: we look at eye tracking. We actually track people's eye movements. That can tell you a lot about how somebody is processing a text, about their level of interest, their level of boredom, whether the text is sufficiently engaging. In other learning environments that are much more interactive, like educational games, or some of the exhibits you work on in your museum, which are very carefully designed, there's a lot of interactivity, it's exciting, people are responding to the world around them, and then you actually get an easier, more direct way to access people's emotions.

ERIC: What kind of technology do you use for things like eye tracking and gauging that kind of emotion? Is it anything groundbreaking? Or is it something that's been around for a while?

SIDNEY: It's a bit of both. You kind of map the technology to what you're interested in measuring. So for example, if you're interested in looking at facial expressions, you use a camera. If you're interested in looking at speech, you use a microphone. And if you're interested in looking at brain signals, you'll use something like EEG. Eye tracking is really interesting because it kind of gets at attention. And sometimes you're interested in bodily responses, so you'll actually measure physiological signals. All of these technologies have existed for a while in the lab, but they could only be used in lab settings, and they were extremely expensive. What's really changed the game over the last 10 or 15 years is that now you can do a lot of this sensing using commodity, consumer-grade equipment. Eye trackers previously cost about $30,000 or $40,000; now you can do decent eye tracking for about $100. You don't need sophisticated cameras; you can work with webcams. So it's really transformed how we can take these technologies outside of the lab, where everything is controlled, into the real world: informal learning spaces such as museums, learning on the subway, learning in classrooms. It's really expanded when and where we can study learning, in more authentic, naturalistic contexts.
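
To make the commodity-sensing point concrete, here is a minimal sketch of face and eye detection from an ordinary webcam, using OpenCV's bundled Haar cascade models. It's an illustrative example of consumer-grade sensing, not necessarily the tooling Dr. D'Mello's lab uses:

```python
# Minimal webcam face/eye detection with OpenCV's bundled Haar cascades.
# Requires: pip install opencv-python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # Search for eyes only inside the detected face region.
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("webcam sensing", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Frame-by-frame eye positions like these are the raw material a gaze or attention model would consume; research-grade eye trackers do the same job with far better precision.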

ERIC: Yeah, I was just going to use that word, authentic, because I imagine that you don't have a really authentic experience reading a book when you know someone is across the lab watching you and tracking your eyes. When it's just your computer webcam on, and you're asked to read a book sitting in front of it, it's much more natural. You're in your own environment. So that has to be a big help, and it can tell us things we wouldn't have been able to learn before. You do use artificial intelligence as part of your research, and that term seems to apply to so many different things these days. So how, specifically, does AI help you get from some of these inputs to an answer?

SIDNEY: Think about something like an emotion, right? To understand what an emotion is, you simply can't read it out from somebody's face. It's a misconception that we can read out an emotion. We humans think we're actually very good at it, but we're not; if you actually put people to the test, we lose a lot. But humans are very good at finding patterns and applying what are called heuristics: rules, understanding the situation, understanding context. This is precisely what computers are not good at. Computers are very good at measurement, right? They can measure facial expressions at a fine grain, measure eye movements, measure all kinds of things. What AI does is help us connect the high-level human inferences and human intelligence with what computers are good at: sorting and finding patterns in data. So one way we use artificial intelligence is to connect these two levels. A second way we use it is actually to help learning. If you know something about somebody's emotion, or somebody's knowledge state, how can you use that to, first, give them feedback on their performance, and second, try out different interventions? To make it concrete: we worked on a computer interface that tracked eye movements while people were reading. As we all know, it's very easy to zone out when you're reading, right? You suddenly realize: I have no idea what I just read. We had an eye tracking algorithm that could identify when that occurred, and then use that in an intelligent way to give people an opportunity to regain their attention, and also to correct any comprehension deficiencies. And that actually helped, in that they had better learning outcomes with that little algorithm.
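
As an illustration of the closed loop Dr. D'Mello describes, here is a toy sketch in Python. The gaze features, thresholds, and comprehension-check intervention are hypothetical stand-ins, not the actual published algorithm:

```python
# Toy mind-wandering detector: flag "zoned out" reading from simple gaze
# features and respond with a comprehension check. All feature names and
# thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class GazeWindow:
    fixation_ms_mean: float     # average fixation duration over the window
    off_text_fraction: float    # fraction of gaze samples not on the text
    regressions_per_sec: float  # rate of backward (re-reading) saccades

def looks_zoned_out(w: GazeWindow) -> bool:
    # Hypothetical heuristic: long, vacant fixations plus gaze drifting
    # off the text, or almost no re-reading, suggest mind-wandering.
    return (w.fixation_ms_mean > 450 and w.off_text_fraction > 0.3) \
        or w.regressions_per_sec < 0.05

def on_gaze_window(w: GazeWindow, page: int) -> None:
    if looks_zoned_out(w):
        # Interrupt gently: a chance to regain attention and repair
        # any comprehension gap before reading continues.
        print(f"Quick check on page {page}: "
              "what was the main point of the last paragraph?")

on_gaze_window(GazeWindow(520.0, 0.41, 0.02), page=12)
```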

ERIC: So it's already in action. So far, you've been able to not just study and understand this, but actually come up with ways to improve learning with this kind of rapid response.

SIDNEY: Exactly. Yeah.

ERIC: That's really cool. One of the things we're seeing, as AI becomes used for more of these kinds of things, is the ethical concerns that go along with it: making sure that we understand the biases that get programmed into it, which might come about at any point in the process. So how have you addressed that in your work?

SIDNEY: We're addressing it in many ways. Actually, we have another project where we are investigating fairness and bias in AI algorithms, specifically looking at gender biases in automatic scoring algorithms. It's very common now to do job interviews virtually: you respond to questions assessing your personality, and an AI algorithm will actually sort through your answers and give you a score. It turns out that those algorithms can propagate biases in many ways. For example, there is bias in the data. There's bias in the ratings that humans give to judge personality. So we're trying, first, to discover what the sources of bias are in this one application, really thinking deeply about where bias comes from. Secondly, how do we quantify bias? How do we measure it? It's not obvious. And thirdly, how can we actually de-bias these algorithms? So that's one way we're approaching this. People typically think bias is just a matter of data, and it does start there, but it goes much deeper than just collecting representative data. Even if you have representative data, that does not necessarily mean your AI algorithms will be fair and unbiased.
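
As one concrete answer to the "how do we quantify bias" question, a simple starting point is to compare an algorithm's average scores across groups. The scores, groups, and tolerance below are invented for illustration; real audits use richer metrics:

```python
# Quantifying one simple form of bias: the gap between group mean scores.
from statistics import mean

scores = [0.71, 0.64, 0.80, 0.55, 0.62, 0.77]  # algorithm's interview scores
groups = ["A", "B", "A", "B", "B", "A"]        # e.g., self-reported gender

def mean_score_gap(scores, groups):
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    means = {g: mean(v) for g, v in by_group.items()}
    return max(means.values()) - min(means.values()), means

gap, means = mean_score_gap(scores, groups)
print(f"group means: {means}, gap: {gap:.3f}")
if gap > 0.05:  # illustrative tolerance, not a standard
    print("warning: scores differ systematically across groups")
```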

ERIC: So yeah, speaking about that last point, of being able to not just recognize it, but actively try to remove it: what's the best way to do that moving forward? Is there one set of rules that we've started to put together for that? Or does it really depend on the project, because, again, they're so widespread and diverse?

SIDNEY: We've been focusing on one type of artificial intelligence, specifically machine learning, and using machine learning to make an assessment of some psychological construct, like personality or emotion. In that narrow area, you get what you measure. So I think the first thing we should start doing as a community, which we don't do systematically right now, is actually quantify it. Are my algorithms biased? Is my data representative? What is the checklist? Am I collecting data out of convenience, or am I actually collecting data with representativeness in mind? So it starts with a long list of questions: starting with the data, thinking about the tools I'm using to process the data, thinking about the algorithms themselves, and looking for bias. You know, it's hard to believe, but we've just started looking for bias. It also depends on what inferences you can make, right? If you're looking at somebody's eye movements with respect to whether they're zoning out, for example, and you're collecting data from a context where the population is well rested, well slept, and not fatigued, the patterns you document there may not actually apply to a different population, where maybe they're food insecure, or experiencing homelessness, and so on. So even these little subtle things can have big effects.
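
That population point can be made concrete with a generalization check: fit a model on data from one context and evaluate it on another. The data below is synthetic and the tooling (scikit-learn) is an assumption, not the actual study; the point is only that in-context accuracy can mask a drop elsewhere:

```python
# Cross-context evaluation: train on context A, test on contexts A and B.
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_context(n, shift):
    # Two gaze-like features; `shift` mimics a population difference
    # (e.g., fatigue changing baseline fixation behavior).
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    # The feature-label relationship drifts with the context.
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

X_a, y_a = make_context(500, shift=0.0)  # context A: the lab population
X_b, y_b = make_context(500, shift=1.5)  # context B: a different population

model = LogisticRegression().fit(X_a, y_a)
print("accuracy in context A:", accuracy_score(y_a, model.predict(X_a)))
print("accuracy in context B:", accuracy_score(y_b, model.predict(X_b)))
```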

ERIC: And I think that gets into another point: sometimes it's obvious, and sometimes it's really subtle, and you don't think about it from the beginning. Training a speech recognition algorithm: if you don't do it with a range of voices, from deep to high, then it's not going to perform equally. But, you know, that seems like it should be obvious from the beginning. And then there are things like this that are just subtle, so it must be so hard to pick them out.

SIDNEY: COVID is another one, right? We had some work on measuring teacher discourse in classrooms, and now suddenly you have teachers wearing masks. All of a sudden the sound quality changes, and the whole interaction changes because of the social distancing, and so that's a big problem. It's not strictly an issue of bias, but a big problem in artificial intelligence is that a lot of the work we do is in certain contexts, and the models don't really generalize; you have to think about how to get them to generalize to different contexts.

ERIC: So really just thinking about it every step of the way. Thinking about it before you start, trying to eliminate it at that point, but then also being very careful at the end to make sure that nothing crept in that you weren't expecting.

SIDNEY: Right. So I would sum it up in three words: accuracy, how accurate are things; generalizability, how well do they generalize; and fairness, how fair are they?

ERIC: So to wrap up, what drew you to study this kind of thing? Using artificial intelligence, studying emotions? How did you get into it? How does one become a cognitive and computer scientist at the same time?

SIDNEY: For me, it's always been being at the right place at the right time. When I was in graduate school, we were studying learning through interactive tutoring, understanding how to tutor students, but by developing computer tutors, because human tutoring is very expensive. How can we give kids the benefit of tutoring without the cost? And after studying tens of hours of tutoring in detail, we realized that in addition to their pedagogical strategies, a lot of what tutors do is respond to students' emotions. If a student is frustrated, they'll approach a topic from a different angle. If a student is bored, they may just try telling a joke. And so we realized that a big missing piece was that our tutors were cognitive machines; they had no model of emotions. So that project, at the time, was trying to understand emotions during learning, recognize them automatically, and then use them in interventions. This was from 2004 to about 2010. We actually built the first of what we call affect-aware intelligent tutoring systems, which would sense confusion, frustration, and boredom through facial expressions, body movements, and an understanding of the context of the interaction, and would respond with motivational scaffolds and things like that to help students stay on track. And we showed that it actually had some benefits in terms of learning compared to a version that did not. So I just kind of fell into it. And it was really fascinating. I never thought much about emotions at the time, but it's one of the most exciting topics, I feel, because it's emotion, right? It's exciting.

ERIC: Thanks so much for coming on the podcast, Dr. D'Mello.

SIDNEY: Thank you.

ERIC: Be sure to follow the Museum of Science on social media for more thought-provoking content on the role of artificial intelligence in the future of our society. Until next time, keep asking questions.

Theme song by Destin Heilman