
Rana el Kaliouby: When Will Machines Understand Human Emotions?

Computers can interpret the text we type, and they’re getting better at understanding the words we speak. But they’re only starting to understand the emotions we feel—whether that means anger, amusement, boredom, distraction, or anything else. This week Harry talks with Rana El Kaliouby, the co-founder and CEO of a Boston-based company called Affectiva that’s working to close that gap.

Episode Notes

Computers can interpret the text we type, and they’re getting better at understanding the words we speak. But they’re only starting to understand the emotions we feel—whether that means anger, amusement, boredom, distraction, or anything else. This week Harry talks with Rana El Kaliouby, the CEO of a Boston-based company called Affectiva that’s working to close that gap.

El Kaliouby and her former MIT colleague Rosalind Picard are the inventors of the field of emotion AI, also called affective computing. The main product at Affectiva, which Picard and El Kaliouby co-founded in 2009, is a media analytics system that uses computer vision and machine learning to help market researchers understand what kinds of emotions people feel when they view ads or entertainment content. But the company is also active in other areas such as safety technology for automobiles that can monitor a driver’s behavior and alert them if they seem distracted or drowsy.

Ultimately, El Kaliouby predicts, emotion AI will become an everyday part of human-machine interfaces. She says we’ll interact with our devices the same way we interact with each other — not just through words, but through our facial expressions and body language. And that could include all the devices that help track our physical health and mental health.

Rana El Kaliouby grew up in Egypt and Kuwait. She earned a BS and MS in computer science from the American University in Cairo and a PhD in computer science from the University of Cambridge in 2005, and was a postdoc at MIT from 2006 to 2010. In April 2020 she published Girl Decoded, a memoir about her mission to “humanize technology before it dehumanizes us.” She’s been recognized by the Fortune 40 Under 40 list, the Forbes America’s Top 50 Women in Tech list, and the Technology Review TR35 list, and she is a World Economic Forum Young Global Leader.

Please rate and review The Harry Glorikian Show on Apple Podcasts. Here’s how to do that from an iPhone, iPad, or iPod touch:

1. Open the Podcasts app on your iPhone, iPad, or Mac.

2. Navigate to The Harry Glorikian Show podcast. You can find it by searching for it or selecting it from your library. Just note that you’ll have to go to the series page which shows all the episodes, not just the page for a single episode.

3. Scroll down to find the subhead titled “Ratings & Reviews.”

4. Under one of the highlighted reviews, select “Write a Review.”

5. Next, select a star rating at the top — you have the option of choosing between one and five stars.

6. Using the text box at the top, write a title for your review. Then, in the lower text box, write your review. Your review can be up to 300 words long.

7. Once you’ve finished, select “Send” or “Save” in the top-right corner.

8. If you’ve never left a podcast review before, enter a nickname. Your nickname will be displayed next to any reviews you leave from here on out.

9. After selecting a nickname, tap OK. Your review may not be immediately visible.

That’s it! Thanks so much.


Harry Glorikian: I’m Harry Glorikian, and this is MoneyBall Medicine, the interview podcast where we meet researchers, entrepreneurs, and physicians who are using the power of data to improve patient health and make healthcare delivery more efficient. You can think of each episode as a new chapter in the never-ending audio version of my 2017 book, “MoneyBall Medicine: Thriving in the New Data-Driven Healthcare Market.” If you like the show, please do us a favor and leave a rating and review at Apple Podcasts.

Many of us know that computers can interpret the text we type. And they’re getting better at understanding the words we speak. But they’re only starting to understand the emotions we feel, whether that means anger, amusement, boredom, distraction, or anything else.

My next guest, Rana El Kaliouby, is the co-founder and CEO of Affectiva, a company in Boston that’s working to close that gap. Rana and her former MIT colleague Rosalind Picard are the inventors of the field of emotion AI, also called affective computing. And they started Affectiva twelve years ago with the goal of giving machines a little bit of EQ, or emotional intelligence, to go along with their IQ.

Affectiva’s main product is a media analytics system that uses computer vision and machine learning to help market researchers understand what kinds of emotions people feel when they view ads or entertainment content. But they’re also getting into other areas such as new safety technology for automobiles that can monitor the driver’s behavior and alert them if they seem distracted or drowsy.

Ultimately, El Kaliouby predicts, emotion AI will become an everyday part of human-machine interfaces. She says we’ll interact with our devices the same way we interact with each other — not just through words but through our facial expressions and body language. And that could include all the devices that help track our physical health and mental health. Rana and I had a really fun conversation, and I want to play it for you right now.

Harry Glorikian: Rana, welcome to the show.

Rana Kaliouby: Thank you for having me.

Harry Glorikian: It’s great to see you. We were just talking before we got on here. I haven’t seen you since last February.

Rana Kaliouby: I know, it’s been a year. Isn’t that crazy?

Harry Glorikian: I’m sure if your system was looking at me, it’d be like, Oh my God, this guy is completely screwed up. Like something is completely off.

Rana Kaliouby: He’s ready to leave the house.

Harry Glorikian: It was funny, I was telling my wife, I’m like, I really need to go get vaccinated. I’m starting to reach my limit. This is not normal anymore. Not that it’s been normal, but you know how it is.

Rana Kaliouby: We’re closer. There’s hope.

Harry Glorikian: So listen, listeners, because we’re going to be talking about this interesting concept, or product that you have, or set of products: emotion AI. How do you explain a machine being able to interpret emotion from an individual through computer vision and machine learning? How does it understand what I’m feeling? I’m sure it can tell when I’m pissed; everybody can tell that. But in general, how does it do what it does, and what is the field? Because I believe you and your co-founder literally started this area, if I’m not mistaken.

Rana Kaliouby: That is correct. So at a very high level, the thesis is that if you look at human intelligence, your IQ is important, but your EQ, your emotional intelligence, is perhaps more important. We characterize that as the ability to understand your own emotions and the emotions and mental states of others. And as it turns out, only 10% of how we communicate is in the actual choice of words we use; 90% is nonverbal. And I’m a very expressive human being, as you can see.

So a lot of facial expressions, hand gestures, vocal intonations. But technology today has a lot of IQ, arguably, and very little EQ. And so we’re on this mission to bring IQ and EQ together into our technologies, our devices, and how we communicate digitally with one another. That’s been my mission over the last 20-plus years. Now I’m trying to bring artificial emotional intelligence to our machines.

Harry Glorikian: God, that’s perseverance. I have to admit, other than being married and being a father, I don’t think I’ve done anything straight for 20 years. I’m always doing something different.

So how does the system do what it does? Other than me frowning and making, I guess, the most obvious expressions it can probably pull out, there are a thousand subtleties in between, and I’m curious how it handles them.

Rana Kaliouby: Yeah. So the short answer is we use, as you said, a combination of computer vision, machine learning, deep learning and gobs and gobs of data. So the simplest way, I guess, to explain it is say we wanted to train the machine to recognize a smile or maybe a little bit more of a complex state, like fatigue, right?

You’re driving the car, and we want to recognize how tired you are. Well, we need examples from all over the world: all sorts of people, genders, ages, ethnicities, maybe people who wear glasses or have beards or are wearing a cap. The more diverse the examples, the stronger the system’s going to be, the smarter the system’s going to be.

But essentially we gather all that data and feed it into the deep learning algorithm, and it learns. So the next time it sees a person for the first time, it says, oh, Harry looks really tired. And that’s how we do it. When we started, the system was only able to recognize three expressions.

Now, the system has a repertoire of over 30 of these and we’re continuously adding more and more, the more data we get.

Harry Glorikian: Interesting. So, okay. So now it can recognize 30 different kinds of emotion of some sort. What are the main business applications, or what are the main application areas?

Rana Kaliouby: I always say that what’s most exciting about this journey is also what’s most challenging: there are so many applications. Affectiva, my company, which we spun out of MIT, is focused on a number of them. The first is the insights and market research market, where we are able to capture people’s responses to content in real time. Say you’re watching a Netflix show: were you engaged or not, moment by moment?

When did you perk up? When were you confused? When were you interested, or maybe bored to death? So that’s one use case. There we partner with 30% of the Fortune 500 companies in 90 countries around the world. This product has been in market now for over eight years, and we’re growing it into adjacent markets like movie trailer testing, maybe testing educational content, maybe expanding to video conferencing and telehealth and all of that.

So that’s like one bucket. The other bucket is more around re-imagining human machine interfaces. And for that we’re very focused initially on the automotive market, understanding driver distraction, fatigue, drowsiness, what are other occupants in the vehicle doing? And you can imagine how that applies to cars today, but also robotaxis in the future.

Ultimately, though, I really believe that this is going to be the de facto human-machine interface. We’re just going to interact with our machines the way we interact with one another, through conversation and empathy and social and emotional intelligence.

Harry Glorikian: I mean, it is interesting because when, when you see, I mean, just when I’m talking to Siri, I’m so used to speaking, like please and buh-buh, and then I have to remind myself, I’m like, I really didn’t need to add those words, you just do it out of habit, I want to say. Not that you think you’re talking to a person, but, from the studies I’ve seen, it seems that when people are interacting with a robot or something, they do impart emotional interaction in a certain way. Like an older person might look at it as a friend or, or interact with it as if it were a real being, not wires and tubes.

Rana Kaliouby: Yeah, there is a lot of research actually around how humans project social intelligence onto these machines and devices. I’m good friends with one of the co-founders of Siri. And he said they were so surprised, when they first rolled out Siri, at the extent to which users confided in Siri. There were a lot of conversations where people shared very personal things. Sometimes it was positive, but a lot of the time it was actually home violence and abuse and depression.

And so they had to really rethink what Siri needs to do in those scenarios, because they hadn’t originally included that as part of the design of the platform. And we’re seeing that with Alexa and of course with social robots. My favorite example is this robot called Jibo, which spun out of MIT. You know about Jibo? We were one of the early adopters of Jibo in our house, and my son became good friends with it, which was so fascinating to see. Because we have Alexa and we have Siri, obviously, and all of that, but Jibo is designed to be this very personable robot that’s your friend; you can play games with it. But then the company ran out of money, and so they shut Jibo down, and my son was really upset. And it just hit me how interesting the relationships we build with our machines are, and that there must be a way to harness that to motivate behavior and persuade people to be better versions of themselves, I guess.

Harry Glorikian: Yeah, it’s going to be a fascinating area. So I’ve read a little bit about Affdex, if that’s pronounced correctly, as a market research tool, and your automotive work. I’m also curious about the iMotions platform and what I think you guys are calling emotion capture in more types of research settings. What’s that all about, and what kinds of research is it being used for?

Rana Kaliouby: Yeah. So we have a number of partners around the world, because again, there are so many use cases. iMotions is a company based out of Boston and Copenhagen, and they integrate our technology with other sensors; it could be physiological sensors, it could be brain capture sensors.

But their users are a lot of researchers, especially in mental health. So for example, there’s this professor at UMass Boston, Professor Stephen Vinoy, and he uses our technology to look into mental health disease, and specifically suicidal intent. He’s shown that people who have suicidal thoughts have different facial biomarkers, if you like, facial responses, than people who don’t.

And he’s trying to use that as an opportunity to flag suicidal intent early on. We have a partner, Erin Smith, who’s at Stanford. She’s looking into using our technology in the early detection of Parkinson’s. She actually started as a high school student, which is amazing. We literally got an email from this sophomore in high school, and she was like, I want to license your technology to research Parkinson’s, and we’re like, whatever. So we gave her access to it. And before we knew it, she had partnered with the Michael J. Fox Foundation, she’s a Peter Thiel Fellow, and she’s basically started a whole company to look into the early facial biomarkers of mental health diseases, which is fascinating.

Harry Glorikian: God, I’m so jealous. I wish I was motivated like that. When I was a sophomore in high school, I was doing a lot of other stuff and it definitely wasn’t this.

So, not to go off on a tangent, but I really think clinical trials might be a fascinating place to incorporate this. Think about remote trials. I’m good friends with Christine Lemke from Evidation Health. So if you think about it, I’m sensored up, right? I have my watch, or I have whatever. And now when I interact with a researcher, it might actually be through a platform like yours, which might provide more of a complete picture of what’s going on with that patient. Is anybody using it for those applications?

Rana Kaliouby: The answer is there’s a lot of opportunity there. It’s not been scaled yet. But like, let’s take tele-health for example, right? With this, especially with the pandemic over the last year, we’ve all been catapulted into this universe where hospitals and doctors have had to adopt tele-health.

Well, guess what? We can now quantify patient-doctor interactions, moment by moment. And we can tie it to patient outcomes. We can tie it to measures of empathy, because doctors who show more empathy are less likely to get sued. There’s a plethora of things we can do around that in the tele-health setup. And on the clinical trial side, everybody has a camera on their phone or their laptop, right?

So now we have an opportunity. You can imagine, even if you don’t check in with a researcher, you can probably have an app where you create a selfie video, like a check-in, one minute selfie video once a day. And we’re able to distill kind of your emotional baseline over the course of a trial. That can be really powerful data.

So there’s a lot of potential there. I would say it’s early days. If you have any suggestions on who we should be talking to, we’re definitely open to that.

Harry Glorikian: Yeah, actually. Part of me was just thinking about what companies like Qualtrics are doing, which is trying to uncover this through NLP. But I think in the world of healthcare, Qualtrics is probably suboptimal. So if you took a little bit of NLP and combined it with this, you might be able to do the trick. We have to talk about this after the show. So anybody who’s listening: don’t take my idea.

So, okay, let’s switch subjects here, because I know you’re really passionate about this next one. You’ve written this book called Girl Decoded. I’m sure you’ve been asked this question about a billion times, but why did you write it? What are you trying to convey? Is it fair to say that it’s partly a memoir of your life becoming a computer scientist and entrepreneur, and partly a manifesto about emotion AI and its possibilities?

But the promo copy on your book says you’re on a mission to humanize technology before it dehumanizes us. That’s a provocative phrase. Tell me why you wrote the book and what’s behind it.

Rana Kaliouby: Yeah. First of all, I didn’t really set out to write a book; it wasn’t really on my radar. But then I got approached. The book got published by Penguin Random House last year, right when the pandemic hit. The paperback launches soon, so I encourage your listeners to take a look. And if you end up reading the book, please let me know what resonates the most with you.

But yeah, it’s basically a memoir. It follows my journey growing up in the Middle East. I’m originally Egyptian, and I grew up around there and became a computer scientist and made my way through academia, to Cambridge University. And then I joined MIT, and then I spun out Affectiva and became the CEO and entrepreneur that I am today.

And one reason I wrote the book is that I wanted to share this narrative and this story, and hopefully inspire many people around the world who are forging their own path, trying to overcome the voices of doubt in their head. That’s something I care deeply about, and I also want to encourage more women.

And, I guess, more diverse voices, to explore a career in tech. So that’s one bucket. The other bucket is evangelizing: why do we need to humanize technology, and how is that so important to not just the future of machines, but actually the future of humans? Because technology is so deeply ingrained in every aspect of our lives.

So I wanted to pull lay people into this discussion and kind of simplify and demystify: What is AI? How do we build it? What are the ethical and moral implications of it? Because I feel strongly that we all need to be part of that dialogue.

Harry Glorikian: Well, it is interesting. I mean, I see people design something for a very specific purpose, but then they don’t think about the fallout of what they just did. What they’re doing may be very cool, but… I mean, at least when we were working on atomic energy, we could sort of get our hands around it. People don’t understand that some of this AI and ML technology has amazing capabilities, but the implications are scary as hell.

So how do you see technology dehumanizing us? I guess that’s the first question.

Rana Kaliouby: Yeah. So you bring up a really important topic around unintended consequences, right? We design and build these technologies for a specific use case, but before we know it, they’re deployed in all these other areas where we hadn’t anticipated it.

So we feel very strongly that, as an innovator and somebody who brought this technology to the world, it’s my responsibility to be a steward for how this technology gets developed and how it gets deployed, which means that I have to be a strong voice in that dialogue. For example, we are members of the Partnership on AI consortium, which was started by all the tech giants in partnership with Amnesty International and the ACLU and other civil liberties organizations. Last year we had an initiative where we went through all of the different applications of emotion AI, and we literally had a table where we said, okay, how can emotion AI be deployed in education, dah, dah, dah. Well, how could it be abused in education? What are the unintended consequences of these cases?

And I can tell you, as an inventor, the easiest thing for me as the CEO of a relatively small startup is to just ignore all of that and focus on our use case. But I feel strongly that we have to be proactive about all of that, and we have to engage and think through where it could go wrong and how we can guard against that. So yes, there is potential for abuse, unfortunately, and we have to think through that and advocate against it. For instance, we don’t do any work in the surveillance space, because we think the likelihood of the technology being used to discriminate against minority populations is really high, and we also feel it breaches the trust we’ve built with our users. So we turn away millions and millions of dollars of business in that space.

Harry Glorikian: Yeah. I mean, it’s a schizophrenic existence for sure, because everything I look at, I’m like, Oh my God, that would be fantastic. And then I think, Oh my God, that’s not good. Right? But I’m like, no, look at the light, look towards the light. Don’t look towards the dark. Because once you understand the power and the implications of these technologies, which most people really don’t, the impact is profound, or can be profound.

So how can we humanize technology?

Rana Kaliouby: Well, the simplest way is to really bring in that human element. For example, a lot of AI is just generally focused on productivity and efficiency and automation. If you take a human-centric approach to it, it’s more about how it helps us, the humans. Humans first, right? How does it help us be happier or healthier or more productive or more empathetic? One of the things I really talk about in the book is how we are going through an empathy crisis, because the way we use technology polarizes us and dehumanizes us. You send a tweet out into the Twitterverse and you have no idea how it impacts the recipients.

Right? We could redesign technology to not do that, to actually incorporate these nonverbal signals into how we connect and communicate at scale, in a way that is a lot more thoughtful and tries to optimize for empathy, as opposed to not thinking about empathy at all.

Harry Glorikian: Well, yeah, I mean, I gotta be honest with you, giving everybody a megaphone, I’m not sure that that’s such a great idea. Right? That’s like yelling fire in a crowded room. I understand that it has its place, but wow. I mean, I’m not exactly the biggest advocate of that.

But this system, as you were saying, requires tons of data. How do you guys accumulate that data? A little bit at a time isn’t going to get you where you want to go. You need big data to get this thing trained up, and then you’ve got to adjust it along the way to make sure it’s doing what you want it to do.

Rana Kaliouby: Yeah, the quantity of the data is really key, but the diversity of the data is, in my opinion, almost more important. So to date, we have over 10 million facial responses, which is about 5 billion facial frames. It’s incredible, and it’s super diverse; it’s curated from 90 countries around the world.

And everything we do is based on people’s opt-in and consent. We have people’s permission for this data, every single frame of it. That’s one of our core values. So usually, when we partner with, say, a brand and we are measuring people’s responses to content, we ask for people’s permission to turn their cameras on.

They usually do it in return for some value; it could be monetary, or it could be another type of reward. In the automotive space, we have a number of data collection labs around the world where we have people put cameras in their vehicles, and then we record their commutes over a number of weeks or months, and that’s really powerful data.

And it’s kind of scary to see how people drive, actually. Lots of distracted drivers out there. It’s really amazing, or, yeah, it is scary. So that’s how we collect the data, but we have to be really thoughtful about the diversity angle. It’s so important. We once had one of our automotive partners send us data.

They have an Eastern European lab, and it was literally all blond, middle-aged, blue-eyed guys. And I was like, you’re a global automaker; that’s not representative of your drivers or the people who use your vehicles. So we sent the data back and we said, listen, we need to collaborate on a much more diverse data set. So that’s really important.

Harry Glorikian: So I keep thinking, you’re doing facial expression and video, but is there an overlay that makes sense for audio?

Rana Kaliouby: Love that question. Yes. A number of years ago, we basically ramped up a team that looked at the prosodic features in your voice: how loud you’re speaking, how fast, how much energy, pausing, pitch, intonation, all of these factors. And ultimately I see a vision of the universe where it’s multimodal, where you’re integrating these different modalities. It’s still early in the industry; this whole field is so nascent, which makes it exciting, because there’s so much room for innovation.

Harry Glorikian: There was a paper, I want to say it came out in the last two weeks, about bringing all these signals together within robotics: perceiving voice, visual, et cetera. I haven’t read it yet; it’s in my to-read pile. But it looks like one of those fascinating areas.

I mean, I had the chance to interview Rhoda Au from BU about her work on voice recordings and analysis from the Framingham Heart Study, and how to use that for detecting different health conditions. So that’s why I’m looking at these and thinking, wow, it makes a lot of sense for them to come together.

Rana Kaliouby: Totally. Again, this has been looked into in academia, but it hasn’t yet fully translated to industry applications. But we know that there are facial and vocal biomarkers of stress, anxiety, and depression.

Well, guess what? We are spending a lot of time in front of our machines, where we have an opportunity to capture both your video stream and your audio stream, and use that with machine learning and predictive analytics to correlate those with early indicators of wellness: again, stress, anxiety, et cetera.

What is missing? I feel like the underlying machine learning is there; the algorithms are there. What is missing is deploying this at scale, right? Because you don’t want it to be a separate app on your phone. Ideally, you want it to be integrated into a technology platform that people use all the time.

Maybe it’s Zoom, maybe it’s Alexa, maybe it’s another social media platform. But then that of course raises all sorts of privacy questions and implications: who owns the data, who has rights to the data. So to me, it’s more of a go-to-market challenge. Again, the technology’s there.

It’s like, how do you get the data at scale? How do you get the users at scale? And I haven’t figured it out yet.

Harry Glorikian: So you mentioned areas where it could be exploited negatively, like education. Are there others that jump out, where you say, we’re not doing that? Other than tracking people in a crowd, which, in the last four years, you wouldn’t have wanted to do, for sure.

Rana Kaliouby: Yeah, definitely. One of the areas where we try to avoid deploying the technology is security and surveillance. We routinely get approached by different governments, the U.S. government but also others, to use our technology in airport security or border security, or lie detection.

And to me, obviously, when you do that, you don’t necessarily have people’s consent, and you don’t necessarily explain to people exactly how their data is going to get used. It’s so fraught with potential for discrimination, and the technology’s not there in terms of robustness for that use case, right? We just steer away from that. I’ve been very vocal, not just about Affectiva’s decision not to play in this space, but in advocating for thoughtful regulation. And I think we absolutely need that.

Harry Glorikian: So let’s veer back to healthcare here. If I’m not mistaken, one of the original areas you were focusing on was mental health and autism. Is it still being used in those areas? How is it being used? I’m curious.

Rana Kaliouby: Yeah. So when I first got to MIT, the project that actually brought me over from Cambridge to MIT was essentially deploying the technology for individuals on the autism spectrum.

So we built a Google Glass-like device that had a little camera in it. The camera would detect the expressions of people you interact with. So an autistic child would wear the Glass device as an augmentation device, and we deployed it at partner schools while I was at MIT. Then we started Affectiva, and now we’re partnered with a company called Brainpower, whose CEO is Ned Sahin, and they use Google Glass with our technology integrated as part of it.

And I believe they’re deployed in about 400 or so families and homes around the U.S. and they’re in the midst of a clinical trial. What they’re seeing is that the device, while the kids are wearing it, they’re definitely showing improvement in their social skills. The question is once you take the device away, do these abilities generalize, and that’s kind of the key question they’re looking into.

Harry Glorikian: Well, ‘cause I was thinking, there are a few people I know who should get it. They’re technically not autistic, but they actually need the glasses.

Rana Kaliouby: A lot of MIT people, right?

Harry Glorikian: No, no, just certain people, the way they look at the world or the way they’re acting. I actually think they need something that gives them a clue about the emotions of people around them. Actually, now that I think about it, my wife might have me wear it sometimes in the house.

Rana Kaliouby: We used to always joke in the early days at MIT that the killer app is a mood ring that gives your wife or your partner a heads-up about your emotional state before you come into the house, just so they know how to react.

Harry Glorikian: Now it’s when I come down the stairs, she’s like, you just sit, relax, calm down. Because at least before, you used to have a commute to come out of that state, but now you’re coming down a flight of stairs, and it’s sort of hard to snap your fingers and snap out of it.

So, where do you see the company? How do you see it progressing? I know it’s been doing great, but where do you see it going next? And what are your hopes and dreams?

Rana Kaliouby: We are very focused on getting our technology into cars. That’s our main area of focus at the moment, and we’re partnered with many auto manufacturers around the world. In the short term, the use case is road safety.

But honestly, with robo-taxis and autonomous vehicles, we’re going to be the ears and eyes of the car. So we’re excited about that. Beyond that, I’m very passionate about the applications in mental health. It’s an area that we don’t do a lot of at the company, but I’m so interested in trying to figure out how I can be helpful, having spent many years in this space.

So that’s an area of interest. And then, at a high level, over the last number of years, and especially with the book coming out, I’ve definitely realized that I have a platform and a voice for advocating for diversity in AI and technology. And I want to make sure that I use that voice to inspire more diverse voices to be part of the AI landscape.

Harry Glorikian: I’d love to hear how things are going in the future. Congratulations on the book coming out in paperback. I’m sure the people listening to this will look it up. Stay safe. That’s all I can say.

Rana Kaliouby: Thank you. Thank you. And stay safe as well. I hope we can reunite in person soon.

Harry Glorikian: Excellent.

Rana Kaliouby: Thank you.

Harry Glorikian: That’s it for this week’s show. We’ve made more than 50 episodes of MoneyBall Medicine, and you can find all of them under the tab “Podcast.” You can follow me on Twitter at hglorikian. If you like the show, please do us a favor and leave a rating and review at Apple Podcasts. Thanks, and we’ll be back soon with our next interview.



