Rayid Ghani: How AI/ML Can Both Predict and Shape Patient Behavior
Episode Notes
In this week’s show, Harry interviews Rayid Ghani, a computer scientist at Carnegie Mellon University who studies how to use AI and data science to model and influence people’s behavior in realms like politics, healthcare, education, and criminal justice.
Ghani tells Harry he grew up hating coding since the very need for it showed that “computers are really stupid and dumb.” But Ghani says he eventually realized that machine learning can change that by allowing programmers to teach computers the rules of the game, at which point they can improve on their own and learn to solve real problems.
Ghani went on to become chief data scientist for the 2012 Obama campaign, and he has since used what he learned about data analytics to study applications of AI to large-scale social problems in many areas, including healthcare. He’s currently a Distinguished Career Professor in the Machine Learning Department at Carnegie Mellon University’s School of Computer Science.
In political campaigns, Ghani says, machine learning and other forms of AI are used not just to predict voter behavior but, in combination with behavioral psychology insights, to change it. “Why not do the same thing for issues with effects that are much, much broader?” he asks. “In health, we do fairly macro policies around ‘everybody should get this vaccine.’ But often you don’t have enough resources to make sure that happens.” AI and machine learning may be able to help by predicting who needs help the most and then persuading them to make the necessary changes—for example, changing their diet and lifestyle to avoid Type 2 diabetes. But it’s all a tricky area to study, he says. “Those are the two things we need to couple together—prediction combined with behavior change—and that requires both the data about these individuals and, more importantly, creates ethical issues about how we test these ideas.”
Please rate and review The Harry Glorikian Show on Apple Podcasts! Here’s how to do that from an iPhone, iPad, or iPod touch:
1. Open the Podcasts app on your iPhone, iPad, or iPod touch.
2. Navigate to The Harry Glorikian Show podcast. You can find it by searching for it or selecting it from your library. Just note that you’ll have to go to the series page which shows all the episodes, not just the page for a single episode.
3. Scroll down to find the subhead titled “Ratings & Reviews.”
4. Under one of the highlighted reviews, select “Write a Review.”
5. Next, select a star rating at the top — you have the option of choosing between one and five stars.
6. Using the text box at the top, write a title for your review. Then, in the lower text box, write your review. Your review can be up to 300 words long.
7. Once you’ve finished, select “Send” or “Save” in the top-right corner.
8. If you’ve never left a podcast review before, enter a nickname. Your nickname will be displayed next to any reviews you leave from here on out.
9. After selecting a nickname, tap OK. Your review may not be immediately visible.
That’s it! Thanks so much.
Transcript
Harry Glorikian: Welcome to the show.
Rayid Ghani: Thank you.
Harry Glorikian: Such a pleasure to speak with you, and I’m glad we were able to connect. The way I first read about some of the things you’ve done, you were, or seemed to be, the focal point of the Obama campaign and how they understood how to utilize AI, ML, and different analytical approaches to understand voter sentiment, how people were thinking about things, and how they might vote in the future. It’d be interesting to hear, first of all, a little bit of your background for everybody that’s listening, and then a little bit about that project, which I’m sure you’ve talked about many, many times over by now.
Rayid Ghani: Yeah. Well, before we go further, “focal point” is extremely exaggerated. My background is traditionally computer science and machine learning, before it was trendy, and before you could tell people you do it without getting blank stares. But that’s what I did in grad school.
And what got me really excited about this area was, I wasn’t one of those kids who loved programming since I was seven. I hated coding. I thought it was pointless. I thought computers were supposed to be smart, and then I realized computers are dumb and you have to tell them exactly what to do and how to do it, and by that time you might as well just do it yourself.
So when I encountered machine learning and AI, it opened up an area that says, well, yes, computers are really stupid and dumb, but we don’t have to tell them exactly what to do. We can tell them the rules of the game, and they can play the game and adapt and get better as the world goes on.
I think for me, that was the piece that got me thinking about how to go from there to using these technologies to solve real problems. The Obama campaign was one of those massive challenges, massive organizations. People talk about it as a startup, and it was, but it was a startup with all the problems of a legacy organization: old databases, siloed data, lots of people doing things in person too, door knocks, phone calls, millions of volunteers. And the challenge was, how do we integrate all of this, combine it, analyze it, and then inform these actions: who should be prioritized for voter turnout, who should be persuaded, who should be registered to vote, who should be asked for money.
A lot of my role was to take the things that were happening in the private sector around these efforts, or the newer techniques and methods of machine learning coming from academia, and see how we could apply them to the campaigning world, which hadn’t caught up to those types of things.
One of the things the marketing world talks about a lot is micro-targeting, one-to-one and individual. That’s something political campaigns figured out a long time ago, because all the outreach was individual: the door knocks and the phone calls were not mass-targeted or segmented.
It was all individual. What was lacking was the analytics behind it that would inform those individual interactions; the interactions themselves existed. Which is the opposite of a lot of the marketing and retail world, where you had the data to figure this out, but not the individual outreach in the store, at the point of purchase, and things like that.
So a lot of the idea was, how do we use machine learning and social networks, and combine that with behavioral psychology and persuasion-type methods? It’s not just about predicting who’s going to vote or not vote or donate or volunteer, but coupling that with persuasion: how do we persuade them?
How do we change the outcome that we care about? Because it wasn’t an exercise in trying to just predict and then watch things happen. It was an exercise in changing the outcomes.
Harry Glorikian: Well, that sounds almost like, especially at the time you’re talking about, being a kid in a candy store, playing and seeing almost in real time what the effects were. That must have been quite interesting.
Rayid Ghani: It was, and it would have been a kid in a candy store if you had unlimited time to explore. But one of the things about a campaign is that you have a very real deadline, which I don’t think I’ve ever had in any other job before. You can’t say, “We’ll do better next time.” And that’s amazing. You have a binary outcome: it’s not “Oh, we did well,” it’s you win or you lose. And there’s a very real deadline, which means your effort has to be targeted towards winning. Sometimes you have to sacrifice those exploratory things, the “Oh, it’ll be interesting to see what we can do there” ideas, because you just don’t have the opportunity to explore them.
What’s also sort of unique about a campaign is that you’ve got pretty fast cycles, right? You’re getting data regularly, you’re doing analysis and making recommendations about what to do, and then you’re collecting data again to figure out what changed. For the rest of the world, polling is kind of telling them who’s winning and who’s losing. For a campaign, polling is basically a resource allocation tool, where they’re trying to figure out: we did this last week, what was the impact of that, how do we change the allocation next week, and how do we iterate? Which is very similar to a lot of other areas we work on. I just think the urgency makes it all much more intense and helps you focus on a specific goal.
Whereas, you know, that should be the strategy in regular government and medicine, where you’re doing exactly this, but I think we get distracted, because sometimes we’ve got time.
Harry Glorikian: Well, to start slanting towards healthcare, everybody always tells me, “You don’t understand, it’s complicated,” and getting people to do things is hard, and so on and so forth.
And I’m like, I know it’s complicated. I try to manage my own health, and I realize all the little voices in my head and how complicated it is. But I know I can be influenced. I know other people can be influenced. You know, Marlboro did a great job for years influencing people with the Marlboro Man.
So there’s a way to get people to move in a particular direction. I don’t think you’ll get a hundred percent, but you can get a lot of people. Now, the Obama campaign gave you a lot of real on-the-ground experience. And now you’re at Carnegie Mellon, and you’re experimenting with a lot of the same tools and techniques, obviously more advanced at this point, but utilizing them for community outreach and healthcare, and making a difference to, I’m going to call it, the average person, or the masses out there.
Tell me a little bit about how you think these technologies can make a big difference in the world of healthcare, or just general health.
Rayid Ghani: Right. So I think for me, the aha moment was really during the campaign. One of the lessons learned was: if you can have this coordinated organization where you have data about people, you’re making these recommendations and people are acting on them, and then you’re testing what works and what doesn’t work, and if you can mobilize people to achieve these goals of winning elections and winning campaigns, why not do the same thing for issues whose effects are much, much broader, that in fact affect everyone, where some of them are probably even less controversial than a lot of political candidates?
And so over the last several years, about seven or eight years or so, what I’ve been focused on, first at the University of Chicago and then at Carnegie Mellon, is to see whether we can apply these technologies, machine learning, AI, all the buzzwords that are all too common, towards improving health, criminal justice, education, workforce development, economic development.
And I think the idea is really simple, right? In general, a lot of the things we do in health tend to be fairly macro. We make very macro policies around “everybody should get this X,” or “everybody should get this vaccine.” Well, that’s great. That’s the outcome we want.
But often you don’t have enough resources to make sure that happens. Ideally we would have the resources to go convince everyone, to figure out the right person to convince each person, using the right message. But that doesn’t happen. So then we have two options.
Either we focus on the people who need the help the most, who are most at risk of a specific disease or a specific virus, or we figure out what is the right thing that would work for each individual, each person. Right? So for example, one of the things we started doing a few years back in Chicago was working with the health department on lead poisoning.
Lead poisoning is horrible. It affects way too many children. It causes pretty much permanent damage that cannot be reversed. And yet, look at what our governments do. We know this has irreversible damage; there’s nothing you can do once somebody has been exposed and has had high levels of lead in their blood.
The most common tactic used by governments is waiting until somebody gets lead poisoning and then fixing things after that. So we know nothing can be done once it happens, yet we still go and do reactive things. And why do we do that? Because of pure efficiency reasons: we can’t fix every home, that’s going to be too expensive.
So the way we prioritize is, we find kids who have been poisoned and fix their homes so that future kids in that house don’t get poisoned, which does have some impact. But the impact on the kid who just tested positive for high levels of lead, that’s almost zero. Right? And that’s pretty common: we have a lot of reactive policies. We wait until something goes wrong.
Harry Glorikian: And it ends up costing you more, because that kid is not going to be a productive member of society at the same level they would have been without the exposure.
Rayid Ghani: That kid is going to have pretty bad outcomes. Their family is going to be affected, the community is going to be affected. Generally a lot of these things happen in clusters, right? So entire communities get affected.
Harry Glorikian: And this is like everything in healthcare, right? Healthcare generally, historically, has been reactive. You get sick, you go to the doctor, and they manage you. But I think technology now is giving us the ability to get ahead of some of these things.
Rayid Ghani: Right. And I think that’s the goal. There are two things we need to figure out. One is, we need to figure out who we prioritize, because we don’t have the resources to prioritize everyone as much as we want to.
And that question is a difficult one, depending on your societal values. Do we prioritize people who are most at risk of lead poisoning? Do we prioritize people who most need the help and will not make it otherwise? Do we prioritize people who are easiest and cheapest to help?
Do we prioritize people who have traditionally had no other issues, and for whom this would make things much worse? So that’s one: can we figure out who is at risk, can we predict proactively who needs that help, who do we prioritize? The second thing, in many problems, is how do we actually convince them to change, to potentially change the outcome?
Right? In the lead poisoning case, it’s a little bit easier, because we can remove the source of lead and that takes care of the problem. But if we’re looking at diabetes, that’s another project we were doing with a hospital and clinic system in Chicago, identifying people who might be at risk of type 2 diabetes.
And there, yes, you can identify the people, with pretty high likelihood, pretty accurately. And the same going back to the voting case: predicting who’s not going to vote is not that hard, or predicting who’s not going to vote for us, for a candidate, is not that hard. The question is, how do we persuade them to vote for us, and how do we persuade them to change their lifestyle so they don’t get type 2 diabetes?
So those are the two things we need to couple together: not just prediction, but prediction combined with behavior change. And that requires both the data about these individuals and, more importantly, it creates ethical issues around how we test these ideas.
How do we test persuading somebody to see whether they get persuaded? But that’s interesting, right?
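[Editor’s note: to make the “predict, then prioritize” idea Ghani describes concrete, here is a minimal, hypothetical sketch in Python. It trains a risk model on historical records, scores the current population, and spends limited outreach capacity on the highest-risk people. The file names, columns, and model choice are illustrative assumptions, not anything from Ghani’s actual projects.]

```python
# Hypothetical sketch of "predict, then prioritize" under limited resources.
# File names, columns, and the model are illustrative assumptions only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

history = pd.read_csv("patient_history.csv")         # past patients with known outcomes
features = ["age", "bmi", "a1c", "visits_last_year"]
X, y = history[features], history["developed_t2d"]   # 1 = later developed type 2 diabetes

model = GradientBoostingClassifier().fit(X, y)

# Score the current population and spend limited outreach capacity on the
# highest-risk people (one of several prioritization choices Ghani mentions).
current = pd.read_csv("current_population.csv")
current["risk"] = model.predict_proba(current[features])[:, 1]
outreach_capacity = 200
priority_list = current.sort_values("risk", ascending=False).head(outreach_capacity)
print(priority_list[["person_id", "risk"]])
```

[As Ghani notes, ranking purely by risk is only one choice of values; the same score could instead be combined with need, expected benefit, or cost of helping.]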
Harry Glorikian: Right. So just from the advertising agencies, from historical marketing, from credit score data on how people react, there must be a ton of data out there on “when we did Y, we saw X, and when we did Z, we saw B,” and different testing, because I would assume all the marketers have been doing this since they gathered their first piece of data, going as far back as you can.
Rayid Ghani: You would think so, but this is one of those things that rarely gets collected as data. What is collected is what happened, what they bought. So what we typically know is who gets diabetes and who doesn’t; what we know is who bought a product and who didn’t. What we don’t often collect very well, and test, is what got them to buy it, or what the physician tried to get them to do to change their behavior, what was done when, and what the outcome was. We typically see results, we typically see outcomes. We see fatalities, we see diagnoses. Most hospital EMR data and claims data has diagnosis codes and procedure codes.
It doesn’t have “the doctor gave them this intervention, they gave them this advice, and here’s what they said in return: I’m not going to do this,” or “they asked them, have you been doing this? Oh no, I haven’t been working out, I haven’t been going, I haven’t changed my…” All that data doesn’t really get captured in most cases, and the same goes for marketing, for most marketing things that have been done.
There’s a whole set of tactics in behavioral psychology that exist to persuade people, things like social proof, and we generally know those things have an impact. What we don’t know is what types of impact, which of those tactics work for what types of people, for what types of behaviors. That’s the taxonomy we need in order to say: okay, we’re trying to help people change to get employment, or to change their health outcomes, and for this person the best thing to do is for their family members to tell them how they did something similar and things changed for them. If we knew that, we could first test it and then deploy it. But that also comes with, again, the ethical issues around these types of things: who decides what that is, who decides that we are able to test that idea, who gets to say that it is beneficial for them? And then how do we involve that community, how do we involve the people who are being affected, in this process?
Harry Glorikian: But okay, there have been papers, like the one Geisinger has done. They wrote a paper about what they were going to do and then what the outcome was. I’ve talked to Glen Steele about this as part of my book. They put in a monitoring system for their type 2 diabetes population, and the nurses actively interacted with that population to help manage those patients toward a better outcome.
And then the doctors got raised up to manage the harder patients. And they saw, I can’t remember exactly, something like a 23 to 25 percent decrease in comorbidities. So you have this captive data set of 35,000 people, which is not small. Somebody has done the experiment. So couldn’t you start to use that core data set to drive what you would do next, to do, say, broader influencing?
Rayid Ghani: Yeah, you could. I think there are a couple of things there. I’m not familiar with this study, so I don’t know what the population is.
Harry Glorikian: I’ll send you a copy of the paper.
Rayid Ghani: With all those caveats, right? Yes. If somebody has done this on, I’d say, a small number of people (35,000 is large for a study, but small compared to the number of people we’re trying to influence), then yes.
I think step one would be to see how well it generalizes to different populations. Because often in these studies, and diabetes is one where it disproportionately affects minorities, if the people underrepresented in the study were also the people who face disproportionate disparities, then we have to figure out how well it generalizes to people of different types. Were those persuasion tactics, those influence attempts, done through digital channels or in-person channels? If it’s digital, do people have access to those types of devices? If it’s in person, for the people we’re really trying to impact and influence, is it accessible to them? Can they come in person to these types of things? All of those questions. So step one is really, if we know the study worked, to figure out how to test how it generalizes to the larger population. Does it result in fair and equitable outcomes for everyone, rather than for, typically, white males, or whoever it was in this case; I’m not sure what the study population was.
And if that’s the case, if it does generalize, then you’re absolutely right. I think we can start defining community programs that involve the communities being affected and design them: okay, here’s what we’re trying to do, here’s our overall goal, we want to reach an equitable reduction in type 2 diabetes in this community.
Here’s a trial we’re running: we’re going to identify these people, we’re going to try to change their behavior, and we’ll use the community and the community health workers and the nurses and the physicians to put this program together. Now, one challenge that comes in, again, is that we often run into this lack-of-resources issue, right?
In which case we have to triage, and that’s where we need the data. We know we can’t try everything with everyone. So if you could only try one thing, which one thing should it be? That’s where the collected data is really helpful, and that’s where we have to collect the data in the right way.
Right? If you only tried one thing with people, you don’t know the counterfactual. You don’t know what would have happened if you had tried something else. So if it was not done as an experiment, then all we know is that this person responded or did not respond to this tactic. What we don’t know is what would have happened if we had tried a different tactic: would it have been better?
Harry Glorikian: Or worse? Wouldn’t you just love it if the system could play and eventually say, here are the top three ways that seem to get the outcome you’re looking for, and then implement those?
Rayid Ghani: Yes, except each turn in this play could cost somebody’s life, right? So let’s say you do a trial where we’re going to try to change their physical behavior, like walking, and we try a tactic I’m not sure is going to work, and it didn’t work. That person ended up getting type 2 diabetes. And if only they had been in group B, it would have worked out fine for them and they would have changed their behavior. So one question is, how do you do it ethically? But I think the other question is, how do you do this with lower-risk outcomes? Can you figure out a way to generalize from small things so that you don’t have to go and test in the high-risk situations?
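[Editor’s note: Ghani’s counterfactual and “group B” points are essentially the argument for randomized assignment: unless tactics are assigned experimentally, you cannot later compare them. Below is a minimal, hypothetical sketch of reproducible per-person randomization across a few made-up tactic names; it is an illustration of the general idea, not the campaign’s or any health system’s actual procedure.]

```python
# Hypothetical sketch: randomly (but reproducibly) assign each eligible person
# to one outreach tactic so the arms can be compared afterwards.
# The tactic names are invented for illustration.
import random

TACTICS = ["reminder_call", "peer_story", "home_visit", "no_contact"]

def assign_tactic(person_id: str, seed: int = 2024) -> str:
    # Seeding with the person id makes the assignment reproducible and auditable.
    rng = random.Random(f"{seed}:{person_id}")
    return rng.choice(TACTICS)

assignments = {pid: assign_tactic(pid) for pid in ["p001", "p002", "p003"]}
print(assignments)
# After the trial, outcome rates are compared between arms; the "no_contact"
# arm stands in for the counterfactual of doing nothing.
```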
Harry Glorikian: Yeah, and I was going to say, could you pick things that have lower impact, but where, given human nature, you could get an idea of what buttons to push, and then raise the bar when you move to the next level?
Rayid Ghani: Exactly. So, for example, one of the big things in a lot of these settings is adherence, right? Do people adhere? And you have a bunch of different tactics: you can do reminders, you can do social-proof-type things, you can use a bunch of other approaches. So you could try these out with lower-stakes outcomes, where the trials can run on a large number of people, and then move to the smaller numbers. Once you figure out some of those things, we still have to test whether it works for people on the outcomes we care about. And that’s where the more we can do coordinated trials and studies, and share data across them all, the better. That’s something we don’t do very often: we’ll share results, but we don’t share a lot of the original data that was collected as part of these types of trials.
Harry Glorikian: Actually, it’s interesting, because I was talking to somebody this morning from Takeda, and we were talking about that exact same thing: everybody holds onto their data, or if they give you their data, they don’t give you the metadata. So it’s less valuable than if it had the metadata with it. For some of these things, especially in the healthcare arena, we have to figure out a way around that.
Rayid Ghani: That’s right. And we have to figure out a way to share that we tried this thing with this person and it worked or didn’t work. Instead, what we share is “24 percent improvement in reduction of diabetes.” Well, that’s great, but that’s an average. You improved 24 percent overall, but maybe for 10 percent of the people it really hurt them, for 50 percent it improved things by 5 percent, and for the rest it improved quite a bit. If you had that data, you could say, well, for these 10 percent, don’t try this intervention, because it will make their outcomes worse. And again, that comes back to what we started with, with elections. In the 2012 campaign, that was one of the things we learned as we ran these experiments to persuade people to vote for Obama.
We measured their change in support for Obama after the persuasion, and we found that some people became more supportive after somebody went and talked to them, some people stayed about the same, but then we found people who were negatively persuaded. For those people, at that time, we said: no, no, don’t talk to them, nobody should talk to them, because if we talk to them, they’re going to start going against us. Now, that’s a political campaign, so we don’t need to convince everyone to vote for us, and not talking to them is fine. But in a health situation, we have to figure out what else we can do.
What are the other types of interventions we have that are positive for these people, and how do we identify them and focus on them? That’s where I think a lot of research and foundations can put money in: identifying gaps, like who are we leaving behind today? Right now the effectiveness of our interventions is measured at a macro scale: 10 percent effective, 20 percent, 30 percent. We need to go much more fine-grained and say, well, for these types of people it’s this much, for these types of people it’s this much, and for these types of people it didn’t do anything. So let’s focus on the people who are being left behind, because we can’t just live with averages forever.
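[Editor’s note: a minimal sketch of the fine-grained measurement Ghani is asking for: instead of one overall average, compare treated and control outcome rates within each subgroup to see who is helped, who is unaffected, and who may be harmed. The trial data file and column names here are hypothetical.]

```python
# Hypothetical sketch: break an average treatment effect down by subgroup.
import pandas as pd

trial = pd.read_csv("trial_results.csv")
# Assumed columns: subgroup, arm ("treated" or "control"), improved (0 or 1).

effects = (
    trial.pivot_table(index="subgroup", columns="arm",
                      values="improved", aggfunc="mean")
         .assign(lift=lambda t: t["treated"] - t["control"])
)
print(effects.sort_values("lift"))
# A negative lift flags the kind of subgroup Ghani describes: people for whom
# the intervention appears to make outcomes worse and who need a different approach.
```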
Harry Glorikian: So now fast forward, right? From the Obama campaign, we’re moving forward to today. Technology has moved a lot faster, the chipsets we have now are orders of magnitude more sophisticated, the software has gotten better, and cloud computing has gotten more abundant and cheaper. Where do you see this technology, these capabilities, going, and what impact will it have on health and health outcomes? Because if we’re moving to a value-based payment system, this direction has to become front and center for every healthcare system, otherwise they’re not going to be able to achieve their goals.
Rayid Ghani: Right, and I think that’s a good point. The technology has existed for a while. It’s not as if six months ago, two years ago, five years ago, all of a sudden the technology changed so that now we can do value-based healthcare. At some level the technology has existed. It’s better now: there’s more data, computers are faster. I think the issue is more organizational and political than technical.
So right now, what’s required to figure out value-based payments and value-based healthcare is really: what’s the cost of this person if I don’t do anything, and how much will it cost me to prevent it? The calculation I’m describing is fairly straightforward, one that actuaries have been doing for a long, long time, right?
What are the risks associated with this person if I don’t do anything in my community, because I’m responsible for this entire set of people; here is the cost; and then what can I do now, preventatively, that will reduce that cost; and then going and executing on that. So I think we should put these types of, I’d say, regulations in place.
Because what’s really needed is this: you have to achieve these goals, and these goals have to be not averages, not “on average, you have to spend this money.” Today, I think, a lot of these payment systems are not focused on equity; they’re focused on averages.
Like, “we’re going to give you $5,000 per person.” What you want to say is: no, I’m not going to give you $5,000 a person and let you do whatever you want with it. I’m going to give you this much money for this, but your goal should be that everybody ends up with an equitable, equal risk of type 2 diabetes, for example.
And that’s a very different framing, because with the $5,000, what you might do is take the 10 highest-risk people you can easily change, go after them, and say, look, I’m done. That doesn’t get to equity. So I think what’s missing from value-based healthcare right now is not the technology; it’s a bit more fine-grained thinking around how you turn that into equity-focused value-based payment and value-based healthcare. And that’s on the goals side, right? What are your goals, what are you designing for? The second thing is, do you have the right data?
Do you have the data? I think that requires data sharing across hospitals, health departments, pharmacies, pharmaceutical companies, because each of those has different data that’s useful. And we don’t have a good way of sharing this in a way that both protects people’s privacy and is still actionable.
If it’s anonymous, it’s not actionable. If it’s totally open, it’s too risky. So we need to figure out data sharing practices; they’re not difficult, it just needs to be done. And then we need the analytical infrastructure in place to be able to combine that data, figure out who is actually at risk, and figure out what’s going to change their behavior.
And then we need an experimental infrastructure to actually run these tests and see what works, with micro-tests. We’re not doing massive changes; we’re testing things continuously, because things are going to change over time. So with the right goals in mind, the right data infrastructure, the analytics, the experimentation, and very much a community-based design, that’s the system we need in order to truly do equitable value-based healthcare.
Harry Glorikian: So do you have any examples of things you’ve done? Because you speak as if you’re talking about a very specific experience, as opposed to something in general.
Rayid Ghani: Well, I’ve done pieces of it. That’s the challenge with doing projects with an organization. I work a lot with government agencies and I’ve worked with some hospitals, and you can only do one piece. The part we focus on a lot is how we come up with these goals, how we combine the goals of efficiency and equity and effectiveness, and how we design the system that’s able to get this data and run these types of models. So I’ll give you one example. It’s somewhat in the health space, but it’s really criminal justice and health. There’s a project we’ve been doing for a few years with a county in Kansas called Johnson County, on the border of Missouri and Kansas. Like many other places in the U.S., they have a recidivism problem in their jails, right?
So huge incarceration rates, and one of the root causes is unmet mental health needs. A lot of people have mental health needs, and the outcome is that they end up cycling through jails. In their case, they decided: look, we’ve got some resources to be proactive. They already had a team, a mental health outreach team. What that team would do is, when somebody got released from jail, they would try to contact that person in the first few days to see if they needed any support, any help. That was just a routine thing, and it wasn’t resulting in any reduction.
So they came to us and said: we have some extra capacity, we want to try doing outreach to a couple of hundred people a month to see if we can provide them with mental health services, and our goal is to reduce recidivism. So we worked with them to figure out the overall goals.
We want to make sure that we use these resources efficiently, but also use them to effectively reduce the recidivism rate, and make sure it’s being reduced in an equitable way. If we only focused on the people who were cheapest or easiest to help, we’d leave people behind, so we want to make sure it’s not doing that.
It’s not increasing disparities. So that was step one, figuring out these goals. Step two was: what data do we need for this? Well, we need data from the jail system, obviously, who’s coming in and going out, but that’s not enough. We need data from mental health services to figure out who’s had encounters with mental health services.
But that’s not enough, because they might not be going to mental health services; they might be going to the emergency department. So we need data from the ER and 911 calls. Or there might be other criminal justice things going on with the police, so we need data from the police. We worked with the county to get access to all of this data.
And they did an amazing job of pulling all these things together. So now we can see a person across these systems, and we can build the analytical infrastructure to predict the risk of this person coming back to jail. We can look at their mental health history and see that they have mental health needs and are at higher risk of recidivism.
And we found that it actually works: we can pretty accurately predict who’s going to be coming back to jail, which is fine and a good first step, but that’s not enough. Now we can figure out who we need to help; the question is, can we help them? So we actually started a trial about 12 months ago with this county where we give them a list and they go out and do this mental health outreach to that set of people.
And we’re getting this data back to measure two things. One: is it actually reducing the recidivism rates? Because that’s the whole point, right? If you can predict who will go to jail and then just watch them go to jail, that’s horrible and depressing. The second thing we’re testing is what I was mentioning about the healthcare world: for what types of people is this intervention working, and for what types of people is it not working? So that when we actually implement it after the trial, this intervention goes towards the people it’s more likely to help, and new interventions get designed for the people it’s leaving behind.
That trial is just finishing now, and over the next 12 months we’re going to be getting data back to measure how many people actually come back to jail, comparing the people we intervened on versus the people they didn’t have the capacity to intervene on.
So that’s an example of a much smaller-scale system, still an ambitious, big problem, but we’re tackling a small piece of it, asking: can we take the people who have unmet mental health needs, can we proactively identify which of them are at risk of recidivism, and can we intervene and see if this intervention is effective? And if it is, then we can implement that type of program and iterate on who it’s not working for. So we’ll find out over the next 12 months if it works.
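[Editor’s note: the data work Ghani describes for Johnson County, seeing one person across jail, mental health, and emergency systems, boils down to record linkage and person-level feature building. Below is a hypothetical sketch; the files, identifiers, and fields are invented, and real cross-agency linkage is far harder and governed by data-sharing agreements.]

```python
# Hypothetical sketch: join jail, mental health, and ER records on a shared
# person identifier to build one person-level feature table for a risk model.
import pandas as pd

jail = pd.read_csv("jail_bookings.csv")            # person_id, booking_date, ...
mental_health = pd.read_csv("mh_encounters.csv")   # person_id, encounter_date, ...
er = pd.read_csv("er_visits.csv")                  # person_id, visit_date, ...

features = (
    jail.groupby("person_id").size().rename("prior_bookings").to_frame()
        .join(mental_health.groupby("person_id").size().rename("mh_encounters"), how="outer")
        .join(er.groupby("person_id").size().rename("er_visits"), how="outer")
        .fillna(0)
)
# "features" now holds one row per person across systems, ready to feed a
# recidivism-risk model like the diabetes-risk sketch earlier in the transcript.
print(features.head())
```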
Harry Glorikian: And as a sneak peek, do you know if it worked?
Rayid Ghani: I don’t know, for two reasons. One is that they’ve been doing this for the last 12 months, so now we’re going to start measuring. The idea is: for the people they intervened on 12 months ago, how many of them have come back to jail in the last year? In any one month they only intervened on a couple of hundred people, 150 people, so we don’t have enough numbers to know until 12 months have passed. Right now it’s one of those things where you just don’t know at all. And then, especially given the COVID situation over the last few months, that’s going to be mixed up in there, so we’re going to have to figure out how to tease that apart and work out the effectiveness of this. That just comes along with doing these live trials.
Harry Glorikian: Yeah, it’s extremely complicated. But I actually believe the technology is getting better. I think we’re understanding how to do these things better. And depending on the disease, sensor systems are getting better too, so you can monitor things over a longer period of time.
So I just see that if this is implemented properly, and people actually pay attention to the data, you could move the ball forward faster. It’s just a matter of getting this distributed broadly, to more systems, once you prove that it works. And that’s the part I always have issues with: one group, one place did it, and everywhere else says, “Well, we haven’t done it, our population is different,” and they don’t pay attention to the study and don’t implement it themselves, which makes it difficult to have a broad, nationwide impact on healthcare.
Rayid Ghani: Absolutely. Everyone thinks they’re special, and it turns out most of us are not that special; similar tactics work on all of us. So I think, as you were saying, it’s about being able to coordinate these types of tests and trials to make sure they’re representative of different populations, so that when we get results we can say: look, it worked on these types of people and it didn’t work on these types of people, so the next one should focus on these, verify this, and investigate this.
Harry Glorikian: You’d constantly be able to make it better if we had some sort of process for doing that.
Rayid Ghani: Exactly. Because we don’t have to start from random. We can start from what we know works and adapt from there, as opposed to “anything could work, so let’s try everything.” Instead it’s, “this is the current best approach, let’s start from there and adapt.”
Harry Glorikian: Yeah. And I hate saying it, but sometimes you almost wish this was centralized.
Rayid Ghani: You think it should be centralized? I think that’s an interesting piece of it, because a lot of this is done by pharmaceutical companies independently. But imagine a world where we had a better centralized body, I’m not going to call it the FDA, but some other body, that could create incentives for some sort of centralization, right?
Because overall, that’s the problem: it’s globally better, but it may not be locally better for each pharmaceutical company based on their own business goals. And so the question is, how do we align those business goals with the global social goals?
Harry Glorikian: I agree. The more I look at where this is going, the more you can centralize different data sources, the more testing you can do, the more refinement you can do, and therefore the more impact it will have in the long run on a broader set of people. That’s the hope.
Harry Glorikian: So it was great to talk to you. I really appreciate your time. I hope all your work turns out positively, although that’s impossible when you’re doing experiments like this, but one can hope for the best.
Rayid Ghani: Hopefully we’ll learn something, and hopefully it’s going to result in something better in the future.
Harry Glorikian: Excellent. Great talking to you as well.
Rayid Ghani: Thanks.
Harry Glorikian: Thanks.