
Healthcare AI isn't magic, but it can save lives

The Harry Glorikian Show 

Nassib Chamoun, Health Data Analytics Institute 

For November 7, 2023  

Final Transcript  

Harry Glorikian: Hello. Welcome to The Harry Glorikian Show, where we dive into the tech-driven future of healthcare.  

There’s a lot of talk out there about how artificial intelligence will change the way doctors and nurses take care of patients.   

You hear some of it right here on this show.  

But all of that still feels like a forecast rather than a present reality.   

When you look really closely, it’s hard to find concrete examples where AI is already helping healthcare providers make better decisions that improve patient outcomes and take costs out of the system.  

That’s why I wanted to have Nassib Chamoun on the show.  

He’s the founder and CEO of Health Data Analytics Institute, or HDAI for short.  

Over the last year or so HDAI has been working with a major healthcare system, Houston Methodist, to test out a working platform called HealthVision.   

It’s a collection of AI-driven models that use huge amounts of data, both from Medicare and from Houston Methodist’s own electronic health record system, to make predictions that help doctors and administrators spend less time poring over records and data, and more time interacting with actual patients and making good clinical and management decisions.  

Nassib has a way of talking about HDAI and HealthVision that leaves out the hype and focuses on the real-world problems healthcare AI can solve for doctors and administrators.  

Like how to identify the patients discharged from hospitals to their homes or to skilled nursing facilities who are at the highest risk of complications—and which interventions could help keep them alive and out of the hospital. 

Something you’ll hear Nassib say a lot is that “AI is not magic.”   

He points out that even the most famous large language models, like ChatGPT, are just massive statistical representations of data created, collected, or curated by humans.  

And while these models are powerful, Nassib argues they’ll need guardrails around them to guarantee transparency and explainability and to prevent bias, before they can be useful in high-stakes fields like healthcare.  

To me it’s refreshing to hear a healthcare entrepreneur talk about the real and legitimate ways healthcare AI can help with care and management decisions right now—without veering into the more science-fiction-style scenarios that are probably five or ten years away.  

HDAI has raised tens of millions of dollars of capital and spent seven years developing HealthVision, and now the company is getting ready to grow beyond Houston Methodist and deploy the system at other big healthcare institutions like the Cleveland Clinic and the Dana-Farber Cancer Institute.  

So more providers will get a chance to test whether healthcare AI can keep patients healthier and make healthcare delivery more efficient. 

Here, without further ado, is my conversation with HDAI founder and CEO Nassib Chamoun.  

Harry Glorikian: Nassib, welcome to the show.  

Nassib Chamoun: Good to be here, Harry. Nice to see you.  

Harry Glorikian: Yeah. You too. You too. Actually, it’s interesting, because I’m not even in my, um, my usual spot. Most people that watch the show would see my office, and I’m actually in LA. I’m out here for work, and then I’ve got a wedding over the weekend. So the way things look might be a little different to people that are normally, um, you know, watching the show, but it’s great to have you on the show. I know we had a chance to, like, meet face to face and get to know each other a little bit better. But I wanted to sort of start by asking you to share a little bit of your life story. I mean, like so many founders that I’m used to in the healthcare and life sciences world, you’re an immigrant to the United States, and it would be great to understand where you grew up, where you studied, and how you got this, you know, bent to go into healthcare.  

Nassib Chamoun: Um, great, great question and a wonderful opportunity to be here with you. And to start with where, you know, I grew up: I’m a Lebanese immigrant. The civil war in Lebanon started when I was 13 years old. I spent about five years in a very, very difficult situation with my family and left in 1980 to come to Boston and go to school at Northeastern University. And through the co-op program there, I had an amazing opportunity to go to the Harvard School of Public Health and join the research team of Dr. Bernard Lown, one of the world’s leading cardiologists and humanitarians and a Nobel Peace Prize winner, um, and start working on health care research, primarily in cardiology. And during this period the lab acquired one of the first computers that Digital Equipment Corporation, a company here in Massachusetts, produced, to allow us to collect vital signs, heart rate, blood pressure, and even an EKG, and turn them into digital data that we could analyze. And that’s where my curiosity and interest in healthcare AI and analytics was born. And it’s been a continuous endeavor since. After Northeastern, I went on, on a full scholarship from the Lown Foundation, today the Lown Institute, to start my graduate studies in computer engineering and biomedical engineering at Boston University, and continued on to do my PhD research at the Harvard School of Public Health.  

Nassib Chamoun: And in 1987, at the age of 25, I got introduced to the concept of anesthesia by one of my collaborators at the Harvard School of Public Health, who was doing a research project with me in Dr. Lown’s lab. And I got very intrigued by the brain and the complexity of the brain wave signal, especially when people argued so much about the interpretation of these waves. And when you looked at the literature at the time, you could give a certain brain wave to ten clinicians and only about five would agree on its interpretation. I said, well, computers can help us read this and do a better job. Well, you know, my assumption was it was going to take two years and cost about $2 million. Well, that journey was roughly 23 years, and during that period I had to raise a quarter of a billion dollars. I was a little bit off. But it gave me the opportunity to really engage with data and analytics in a way that I don’t think would have been possible anywhere outside of the United States, and even outside the Boston area, with incredible collaborators, you know, across the Harvard and MIT system, as well as incredible sources of funding that gave me the opportunity to build my first company, Aspect Medical, which ultimately developed and introduced the first brain monitoring technology for monitoring consciousness during anesthesia and sedation. We built the product and we took it to the FDA. This was an idea in graduate school; we took it through the FDA, commercialized it, and built it into a public company, which I ran for about a decade. And that’s when we sold the company to Covidien, which is now part of Medtronic. The product today is thriving and doing very well in the portfolio of Medtronic; the technology is in the majority of operating rooms and intensive care units around the world, and has helped over 100 million patients. And that number continues to grow. So that’s kind of the beginning. And I’ve been passionate about bringing technology to help clinicians improve outcomes ever since. It’s been wonderful.  

Harry Glorikian: Well, that’s a good segue, because we want to now sort of switch a little to your company, but I just want to help the listeners sort of understand a few things before we get there. And I think, if I’m not mistaken, your biggest customers for your current company are what are called accountable care organizations, or what you and I would call ACOs, right? And for the listeners who may not be immersed in this, I think it would help maybe, you know, pave the way for the rest of the conversation if you could spend maybe just a few minutes explaining what an ACO is, what incentives they operate under, and what problems they have that healthcare AI may be able to help solve.  

Nassib Chamoun: Absolutely. I think one way to think of an ACO is as a collection of medical practices that come together as an entity that collaborates on managing patients to deliver better, more cost-effective care. Why is that important? Because Medicare today is spending over $1 trillion a year. That number is growing to about $2 trillion by 2031 or so, you know, potentially more than that, primarily because our population is aging and the underlying complexity of the patient population is growing. And already health care is close to 20% of our GDP. Total US health care spending, as forecasted by Medicare, is going to be roughly $8.2 trillion by 2032. To put that number in perspective, the economy of Japan has a GDP of about $6.2 trillion. So US health care spending will become the world’s third largest economy after the US and China; it will surpass Japan. Medicare and Medicaid are a big chunk of that. So as a society, we have to find a way to deliver better care to more people and to manage that cost accordingly, because, you know, we don’t have infinite resources. So Medicare’s view, and actually many payers’ view, is that the best way to achieve that is to push that risk to the individuals who are delivering the care, whether it’s a practice or a health system. Those are more generally called risk-bearing entities: an entity that is involved in the delivery of care that goes to the payer, whether it’s Medicare or an insurance company, and says the average patient is going to cost, let’s say, $10,000 or $12,000 a year for a Medicare patient.  

Nassib Chamoun: Write me a check for that amount, and I will take care of the patients. If I’m able to deliver better care that costs less, I get to keep that difference. And if for some reason I’m unable to manage that and it’s going to cost more, I’m going to have to write a check. And that’s why you have Medicare Advantage plans as well; it’s another form of risk delegation by Medicare to a third party. So ACOs are a flavor of a risk-bearing entity. That is, this collection of physicians enters into a contract with Medicare, where Medicare says you have a budget for these patients, you’re going to take care of them. And at the end of the year, if you’re above the budget, you’re going to have to write us a check for 50% or 75% of what you have lost. And if you do better and you save cost, we will write you a check. So there’s an economic incentive to do better by the patient.  
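
To make the contract arithmetic concrete, here is a minimal sketch of the shared-savings and shared-losses math described above. The benchmark, spending figures, and sharing rates are hypothetical illustrations, not numbers from HDAI or any specific Medicare program.

```python
def aco_settlement(benchmark_per_patient, actual_per_patient, n_patients,
                   savings_share=0.5, loss_share=0.5):
    """Toy model of the shared-savings / shared-losses contract described above.

    If the ACO spends less than its budget (the benchmark), it keeps a share of
    the savings; if it spends more, it owes the payer a share of the losses.
    Real Medicare contracts add quality gates, caps, and risk corridors omitted here.
    """
    budget = benchmark_per_patient * n_patients
    actual = actual_per_patient * n_patients
    delta = budget - actual          # positive = savings, negative = losses
    share = savings_share if delta >= 0 else loss_share
    return delta * share             # positive: payer pays ACO; negative: ACO pays payer

# Hypothetical example: a $12,000-per-patient benchmark across 10,000 patients.
print(aco_settlement(12_000, 11_200, 10_000))                  # saves $8M, keeps $4M
print(aco_settlement(12_000, 12_600, 10_000, loss_share=0.75)) # overspends $6M, owes $4.5M
```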

Nassib Chamoun: Now, the first reaction when you say that, the immediate question that comes to mind is, well, are these physicians saving money by restricting care? And the answer is actually the complete opposite, because under Medicare, restricting care is, you know, illegal. You can’t do that. And when you look at the cost of care, it’s largely due to adverse events: people that have complications from chronic diseases, people that end up in a skilled nursing facility after a hospitalization, people that end up in the emergency room for basic issues that could have been prevented through better clinical management. These are the big drivers of costs. So what ACOs tend to be focused on is identifying the patients who are at risk of developing certain conditions and putting in place preventive programs to allow them to stay healthier, and therefore mitigating the cost of more complex downstream care. And for people who have complex conditions and are already there, you work with them in a way that allows you to more actively manage those conditions, to avoid unnecessarily frequent emergency room visits or hospital visits that cost a lot of money. They’re not good for the patient. They create a huge burden on our health system. And ultimately, you know, in many cases, though not all, they are preventable. So that’s where the opportunity comes for accountable care organizations and other risk-bearing entities like them who try to optimize care.  

Harry Glorikian: So this is a good pivot now, because now it’s all about data, data, data, right? The more data that you can sort of make sense of, the better you can make the system you just described work for everybody. And, you know, you started HDAI back in 2016, I believe, and you must have had a founding vision when you sort of put this together. So, you know, what was your founding vision? What was the state of health care at the time? Because this was long before all this large language model and generative AI and everything else, right? And then what was the problem? What was deficient or disappointing about the way that health care analytics was done?

Nassib Chamoun: Um, absolutely. And let me start by going a little bit back in time, to the late 90s and early 2000s. We wanted to do a better job controlling and risk-adjusting the trials that we did in my first company, Aspect Medical Systems, because we wanted to understand outcomes, and we wanted to understand how much of the outcomes we were observing when our technology was being used were due to the underlying risk and the history of the patient, how much was due to what happened in the operating room, and how we could adjust that to deliver a better outcome. That’s been an obsession of mine from day one: technology empowering clinicians to deliver better outcomes. So we went and acquired a large Medicare data set, and we started to work with it. And at the time when we acquired the data, Medicare had a policy in place that any use of that data could only be applied to non-commercial purposes, and therefore you could use it to risk-adjust a study, you could use it to publish, but you couldn’t create commercial tools. And I felt that that was a big miss for us as a society, because it is the largest, most complete longitudinal data set that exists, not just in this country but potentially around the world. It covers trillions of dollars in health care spending and every encounter that is possible and that has been paid for in that population for a long, long time.  

Nassib Chamoun: And so we did the work, and we published on it quite a bit, and that gave me a deep appreciation and understanding of the power of quality data to develop and train predictive models, healthcare AI models. Because, you know, you can use fancy tools to predict things. You can use simple tools to predict things. And sometimes, if you have a lot of data, simple tools will get you to the same answer much more efficiently than very complicated tools. And for me, during those ten years between 2000 and 2010, when we sold the company, the big aha moment was that there was a lot of power and a lot of information in understanding the patient’s history. And interestingly enough, my mentor, Dr. Lown, always trained his fellows and his students to say that history is the first and most important job you have as a clinician, before you even start doing anything for your patient. And that experience made it clear to me that if we wanted to build an AI platform that’s broad and applicable, we needed this kind of data to train it, and also to help us understand what’s going on right now. Because if you can’t measure something, you can’t change it. And so understanding the current intersections of cost, outcomes, and utilization was going to be central to how you aim and deploy AI tools to try to make a difference. So that work continued. After we sold the company, I collaborated with academic researchers for the subsequent five years, and we published some papers, knowing that there was nothing that could be done with it commercially until, you know, the ACA hit and Medicare created a mechanism to give access to the data to innovators who could use it to develop solutions and tools that can have a positive impact on the health care system.  

Nassib Chamoun: And that’s when HDAI was born. We were one of the early organizations to apply to the program. We’ve been in it a little bit more than five years now and have leveraged that access to do two things. First is to build a complete suite of predictive models, machine learning, healthcare AI-driven predictive models, to help us understand outcomes, utilization, and cost, and how they intersect across every entity in the country. And second is to allow us to use those same models to deploy, at the bedside, in the clinician’s hand and the care manager’s hand, to take whatever we’ve learned from those outcomes and modify the areas where we believe there’s an opportunity to improve. So that’s kind of how we got here. I wish I could have started it 20 years ago, but that was not a possibility. And I don’t think the data and the quality of the data at a national level existed to enable the kind of work we are doing today. It’s all about the data. I think that’s the bottom line.  

Harry Glorikian: Yeah. So I want to go into a little bit more detail about the vision, the sort of thinking about, okay, what can it do for ACOs? Who are the typical users of the platform? I mean, who sits down and logs in and says, this is what I’m trying to do? And then what kind of data can the platform ingest, and what kind of insights and views and predictions does it produce? I know you were just at HLTH and you guys did a deal with Houston Methodist, I believe, if I remember the entity correctly, and you talked about some of these things. But, you know, at the most fundamental level, what can AI do that doctors, nurses, and administrators maybe, you know, either don’t have the time to do, or, you know, there’s a lot of data, so you’ve got to make sense of it to then do something actionable? So if you could sort of go through some of those levels of the product and, you know, who’s using it and what they’re trying to get out of it.  

Nassib Chamoun: Um, I think that’s a great place to start in talking about HealthVision: primarily the target user, the use cases, and ultimately the potential impact. Let’s start at a high level. What does healthcare AI do today, in my opinion? Most people want to think about it as magic or evil or good. And I take a simpler view. And that view is: we’re swimming in data. Everybody has data, but the amount of knowledge that people have is very low. And that’s the difference that we’re trying to make. We want to take advantage of all the data that’s there and reduce it to knowledge and insight that is actionable by every constituent in the care process. So it’s not just about doctors or nurses and, you know, care coordinators or even social workers. It’s about all of them coming together with a common understanding of a patient or a population, and then leveraging that understanding for a personalized action plan that’s going to deliver better outcomes. So what HealthVision does is it brings together our analytics using the national data, which is a very important starting point, regardless of whether you’re an ACO or a health system like Houston Methodist, because we also work with large health systems. You know, Houston Methodist was with us on stage at HLTH, and we shared an incredible story about how this deployment has come together and the impact it’s having across the system, not just for their ACO but for all their other settings. We’re about to go live at Cleveland Clinic in the next few weeks, and we’re working with Dana-Farber here in the Boston area to introduce those capabilities.  

Nassib Chamoun: And the message is very consistent. Our goal is to take screen time and turn it into patient time. I think anyone who’s been to their doctor’s office recently, or has been to the hospital recently, will tell you that potentially more of the clinician’s time is spent trying to figure out what’s in the medical record, or digging through it, before they have a conversation with you as a patient. We think there’s something wrong with that picture, because the clinical team, the care team, is our most valuable asset. Do we want them to spend time as a human search engine, digging through a pile of data and reconciling it? Or do we want to leverage healthcare AI to synthesize this data and reduce it to a snapshot, or a view, that gives them a starting point that is organized by risk and opportunities for action, as well as condensed so they can look at it at a glance? And today in health care, everybody ultimately does their own dive into the data and reaches sometimes similar, sometimes different sets of conclusions. The advantage here is you can normalize the knowledge, and everybody will get the same starting point. And now clinicians can focus more on engaging with the patient, focus more on engaging with some of the problems and the challenges this patient is facing, and personalizing their care. I think that’s the value of AI. I don’t think it’s there to replace doctors. I don’t think any of the healthcare AI out there is robust enough and accurate enough to allow the kind of fly-by-wire that people are concerned about.  

Nassib Chamoun: In fact, we don’t recommend it for our technology or anybody else’s technology, and I think it would be a mistake. Our job and healthcare AI’s job is to really bring those tools to bear, to simplify the story, to create a standardized view that allows the clinicians to drill deeper where the risks are, where the opportunities are to make a difference for the patient. And what HealthVision does is it enables that to happen across multiple settings for multiple users. So first we start with the data we generate on the Medicare servers, to help every organization understand not just its performance but also the performance of everybody around it. So if you’re a health system, you want to understand how your primary care physicians are doing, how the skilled nursing facilities that you’re sending patients to are doing, how your specialists are doing, how the home health agencies that you’re delegating the patient to after they leave your hospital are doing, because all of them are contributing to the costs and the outcomes for each patient. So there are multiple modules in HealthVision that help organizations understand their performance and their network’s performance. That leverages our digital twinning technology, where we take those predictors and twin every patient to patients exactly like them that have the same risk for all the endpoints that we’re looking at, and reduce that information to summaries that are actionable. And then you identify, well, there’s this heart failure population we need to target, or we have a readmission problem or a mortality problem in this sub-population that we need to go after. 

Nassib Chamoun: HealthVision deploys the same model at the point of care to allow clinicians to both filter those populations and identify them fairly quickly. And then for each patient within that population, if you’re a care provider for that patient, you can click on them and see in one page that summary that’s going to help you hit that starting point right out of the gate, instead of doing that dive into the EHR to try to figure out where you start. So that’s kind of HealthVision at a high level. Why is it relevant? It’s extremely important to realize that in our health system today, resources are not infinite, and that if you want to do better for patients, you have to identify those patients that may require your attention sooner rather than later. And identifying those patients means you don’t have to do everything for everybody. You can focus on those requiring that higher service level or attention level and develop a plan for them. And we think this is where the complications come from. This is where the cost comes from, and this is where the bad outcomes come from. So creating that funnel that allows you to go after these populations and target them is what is going to drive improvements for everybody and also create better satisfaction for the health care workers, who are overwhelmed right now. Absolutely. If you talk to anybody in health care, they will tell you they just can’t keep up.  

Harry Glorikian: All right. Well, it’s funny because as I was watching the Methodist talk or when you guys were talking at HLTH, I was like, especially the one where it’s the, uh, skilled nursing homes or home health and so forth. I was like, wow, if I was a patient, I would want access to this data to know where I should be going, you know, or not going, which is sort of interesting. I mean, I wonder, you know, one of these days, will that be available to patients?  

Nassib Chamoun: Um, it should be. And at some point it will be. It’s tough for us as a small company right now to put it out there and help people understand it, because there are a lot of nuances. But I’ll give you an example, and I’m glad you raised it. So when we went to Houston Methodist, they said, look, we’re one of the best hospitals in the country. But when we look at our mortality performance, we’re like rank 15 or, you know, 14th in the country. But when we look at our in-hospital mortality, we’re doing great. We’re like second or first in many of the areas. So we dug into the data, and what we ended up working with them to understand is what happens after the patient leaves the hospital. And it turned out that while they ranked very high on in-hospital mortality, when we looked at mortality post discharge, in the first 14 days after the patient left, their performance was like 32nd. And when we dug deeper into it, there were two primary drivers. One was the portion of their patients that went to a skilled nursing facility, which contributed a high component of the mortality. About two thirds of those patients went to skilled nursing facilities that had good performance, but one third went to a set of nursing facilities where, as matched by the digital twins, so these are patients that were matched exactly across both sets of SNFs, the good SNFs and the lower-quality SNFs, the mortality rate was double. You’re talking about a mortality rate of 6%, or six and a half percent, in the good SNFs versus 13% in those low-quality SNFs. And those numbers change in real time. So that’s why we work with our partners, so they can understand the outcomes. So the first order of business was, let’s give patients a clear view of their choices and explain it to them, because they can’t force anybody to go to a particular SNF, but they can educate them and inform them and allow them to make a more informed choice about where they want to go. The second piece, and it’s happening in every health system in the country: when they looked at how often they saw the patients post discharge, they effectively saw pretty much everybody within 30 days. But then you look at the people that got readmitted or died, and the question is, how many of them did you see before they had the event? And that number was roughly 35%. So only a third of the patients that were high risk and had an event were seen before that event. That doesn’t mean that if you saw everybody, you’re going to eliminate all events. But for a subset of those patients, your highest risk patients, you’re going to prevent an adverse event for sure. There’s no doubt in my mind. Well, why is that? It’s because when you book a follow-up appointment, you go for the patients who are willing to see you, or when they’re willing to see you, or who are easiest to communicate with. But sometimes the sickest and riskiest patients may be the more difficult ones to set up that appointment with. So their reaction was, oh my God, we do the right thing, but it’s being done randomly here. We need to target, we need to focus. So now they have a program in the system where, if you are in the highest-risk 20% of patients, they’re going to work very hard to make sure they see you within the first few days post discharge from the hospital, so they can do whatever they can to avoid those downstream adverse events. And that’s really how high-performance organizations work. 
And I’m very impressed, because they always want to raise the bar for themselves, and they want to do what’s best for their patients.  

[musical interlude]  

Harry Glorikian: Let’s pause the conversation for a minute to talk about one small but important thing you can do, to help keep the podcast going. And that’s leave a rating and a review for the show on Apple Podcasts.  

All you have to do is open the Apple Podcasts app on your smartphone, search for The Harry Glorikian Show, and scroll down to the Ratings & Reviews section. Tap the stars to rate the show, and then tap the link that says Write a Review to leave your comments.   

It’ll only take a minute, but you’ll be doing a lot to help other listeners discover the show.  

And one more thing. If you enjoy the interviews we do here on the show, I know you’ll enjoy my new book, The Future You: How Artificial Intelligence Can Help You Get Healthier, Stress Less, and Live Longer.   

It’s a friendly and accessible tour of all the ways today’s information technologies are helping us diagnose diseases faster, treat them more precisely, and create personalized diet and exercise programs to prevent them in the first place.  

The book is now available in print and ebook formats. Just go to Amazon or Barnes & Noble and search for The Future You by Harry Glorikian.  

And now, back to the show.  

[musical interlude] 

Harry Glorikian: So just switching gears a little bit: I was looking at the website, and it says the HealthVision platform uses generative AI, right? Which is, you know, specifically large language models. And, at the risk of being sort of indelicate, those are buzz phrases right now that a lot of health care companies are attaching to products and services, and often those claims leave you asking, okay, how is this being used? Where is it being used? So I’m curious: in what ways is the healthcare AI built into HealthVision generative? I mean, for example, we know that ChatGPT collates the whole internet to find statistical patterns that allow it to generate responses to a human question. So what bodies of data does HealthVision learn from, and what types of output does it generate?  

Nassib Chamoun: Great question, and I think this probably opens the conversation around, you know, what’s important for healthcare AI. There’s a big national debate about fair use of AI, responsible use of AI. A lot of the concepts you hear out there are explainability and bias, where you don’t want it to select against certain populations. And from the get-go, HDAI has taken the position that whatever technology we deploy in the hands of clinicians has to meet the transparency test, the explainability test, as well as the bias test. And we do it as follows. We use machine learning algorithms that are managed directly by us and trained in a way that is fully deterministic. And I’ll come back to what that means: you put in an input, you’re going to get the same output, no matter how many times you do it. We’ve published our methodology, and for each one of the models that we use, I can give you a spreadsheet that has a bunch of numbers; that’s the weight, or the risk, associated with each condition. And we literally, arithmetically add them up to generate the risk. So when we apply that model to any patient and we say somebody has an x percent risk of having the following outcome, mortality, you know, readmission, heart failure, COPD, whatever endpoint you’re looking at, we’re going to tell the clinician these are the three or four conditions that are driving this prediction. So that goes to the explainability piece. 
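
As a rough illustration of the deterministic, additive scoring Nassib describes, a published weight per condition, summed into a risk score, with the top contributing conditions surfaced for explainability, here is a minimal sketch. The weights, baseline, and condition list are invented for illustration and are not HDAI's coefficients.

```python
# Minimal sketch of a deterministic, additive risk model: each condition in the
# patient's history carries a published weight, the weights are summed, and the
# top contributors are reported so the prediction is explainable.
# Weights and codes below are illustrative, not HDAI's actual coefficients.

EXAMPLE_WEIGHTS = {            # condition -> additive contribution to the risk score
    "I50.9": 0.12,             # heart failure, unspecified
    "J44.9": 0.08,             # COPD, unspecified
    "N18.4": 0.10,             # chronic kidney disease, stage 4
    "E11.9": 0.05,             # type 2 diabetes
}
BASELINE = 0.02                # baseline risk with no flagged conditions

def score_patient(condition_codes, weights=EXAMPLE_WEIGHTS, baseline=BASELINE, top_n=3):
    contributions = {c: weights[c] for c in condition_codes if c in weights}
    risk = baseline + sum(contributions.values())
    drivers = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    return risk, drivers       # same input always yields the same output

risk, drivers = score_patient(["I50.9", "E11.9", "N18.4"])
print(f"predicted risk: {risk:.0%}, driven by: {drivers}")
# -> predicted risk: 29%, driven by: ['I50.9', 'N18.4', 'E11.9']
```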

Nassib Chamoun: We don’t include anything in the model other than your medical history. It’s basically what your clinician sees. We don’t try to include your race or your socioeconomic status or anything else that could effectively cause the model to bias either for you or against you. And it’s really tough to try to figure all that out. So the question becomes, how do you track whether there is bias in the behavior of the system? Well, by using digital twinning we can twin, for example, certain populations against the overall population. And we can show you where bias exists, so we can help uncover bias rather than get caught in the bias. Now, the methodologies we use are fully transparent, explainable, and deterministic. When you get into generative AI like ChatGPT, and I’m sure you and every person out there has tried to feed it a question, and you change like one word and you get a totally different answer, and you say, oh my God, what happened? Well, that’s generative AI. It’s non-deterministic. You’re not going to get the same answer if something changes ever so slightly; sometimes you can enter the same exact prompt and get a different answer. And I’m sure everybody has experienced that. So that makes people very concerned. So when you talk about generative AI and health care, the alarm bells go off. So how do we use it, and where do we think there’s value? Our approach is to take out of generative AI the value we can get today, within the constraints that we have set for ourselves as a company, which are transparency, explainability, and bias. 
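
Going back to the twin-based bias check Nassib mentions, twinning a sub-population against the overall population and comparing outcomes, here is a minimal sketch of the general idea. The matching-on-predicted-risk approach, the tolerance, and the data shapes are assumptions for illustration, not HDAI's actual twinning method.

```python
# Sketch: check for bias by "twinning" a sub-population against the overall
# population on predicted risk, then comparing observed outcome rates. If the
# model treats the subgroup fairly, patients and their twins with the same
# predicted risk should show similar outcomes; a large gap flags possible bias.

def twin_outcome_gap(subgroup, overall, tolerance=0.02):
    """subgroup / overall: lists of (predicted_risk, had_outcome) pairs."""
    observed, expected, n = 0.0, 0.0, 0
    for risk, outcome in subgroup:
        twins = [o for r, o in overall if abs(r - risk) <= tolerance]
        if not twins:
            continue                         # no comparable twins; skip this patient
        observed += outcome
        expected += sum(twins) / len(twins)  # average outcome among the twins
        n += 1
    return None if n == 0 else (observed - expected) / n  # positive = subgroup fares worse

# Hypothetical usage with (predicted_risk, observed_outcome) pairs:
subgroup = [(0.10, 1), (0.12, 0), (0.30, 1)]
overall = [(0.09, 0), (0.11, 0), (0.10, 1), (0.31, 1), (0.29, 0)]
print(twin_outcome_gap(subgroup, overall))
```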

Nassib Chamoun: So we use the ChatGPT engine by Microsoft. We have endpoints that we call that are HIPAA compliant. We use it to interpret clinical notes. Why is that important? Because clinical notes are the backbone of all the activity that happens in the health care system. Those notes eventually get translated into codes by coders. But if you’re trying to do something in real time, and HealthVision is a real-time platform, and somebody is going through the process of care and you need to track how their condition has changed, you have to read the notes, you have to interpret the notes. That’s where clinicians spend a lot of their time: trying to read and interpret notes, not theirs but somebody else’s, and then write notes for somebody that’s going to come after them. So our approach takes those notes and first curates them before we feed them into the ChatGPT engine, because we want traceability between statements in those notes and the codes that are generated at the other end. So our request is: we’re going to give you notes, and we expect codes at the other end, because codes are structured data that our algorithms, and most algorithms, can efficiently use to generate predictions of risk, of outcome, of cost, of utilization. 

Nassib Chamoun: So if we want to do that, first we want to guardrail it, and to guardrail it you have to curate the input in a way that retains a connection between what the input is and what the output is. And at the other end, when the output comes back from the generative component, I’ve now gone from free-form unstructured text to a bunch of ICD codes that represent somebody’s condition, or some action that the physician has taken to address it. We then feed that to another model, our own, that checks the relationship between the input and the output and asks, is that output possible, or is it a hallucination? So we further process it in a model, effectively combining the output of two models to decide whether a code is likely to be real or not. We validated it against millions of notes, over 6.6 million notes at Houston Methodist, where we generated close to 200 million codes. Of the 200 million codes, we’ve only kept about 10%, because most of it is noise. So I understand why people get very anxious about that. And our goal is to create a very narrow use case, a use case that ultimately can be transparent and explainable. So when we generate those codes, we show the clinician: by the way, in reading the notes, we identified the following conditions. And we then pop up the note. We show them exactly the statement in the note that the models have interpreted to be a new condition, and we give them the opportunity to reject it. So, full human visibility and control. 
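
To show the shape of the guardrailed note-to-code flow described above, here is a minimal sketch: curate the note into traceable statements, ask an LLM endpoint for candidate ICD codes, score each candidate with a second plausibility model, and hold the survivors for clinician review with the source statement attached. Every helper, threshold, and code here is a hypothetical stand-in, not HDAI's implementation or Microsoft's API.

```python
# Sketch of the guardrailed note-to-code pipeline described above. All helpers
# below are hypothetical stand-ins; the structure, not the names, is the point.

PLAUSIBILITY_THRESHOLD = 0.9   # illustrative cutoff: favor specificity over sensitivity

def curate_note(note_text):
    """Stand-in curation: split the note into traceable statements."""
    return [s.strip() for s in note_text.split(".") if s.strip()]

def llm_extract_codes(statement):
    """Stand-in for a HIPAA-compliant LLM endpoint returning candidate ICD codes."""
    return ["I50.9"] if "heart failure" in statement.lower() else []

def plausibility_score(code, statement, patient_history):
    """Stand-in for the second, in-house model that screens out likely hallucinations."""
    return 0.95 if code in patient_history.get("plausible_codes", []) else 0.1

def process_note(note_text, patient_history):
    accepted = []
    for statement in curate_note(note_text):
        for code in llm_extract_codes(statement):
            if plausibility_score(code, statement, patient_history) >= PLAUSIBILITY_THRESHOLD:
                accepted.append({"code": code,
                                 "evidence": statement,        # shown to the clinician
                                 "status": "pending_review"})  # clinician can still reject it
    return accepted

print(process_note("Patient reports worsening heart failure. Vitals stable.",
                   {"plausible_codes": ["I50.9"]}))
```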

Nassib Chamoun: And of course, we set the guardrails so tight that we’re very specific and less sensitive. The goal here is not to create an enormous amount of rejections for the clinicians, but still to capture the big events. And typically, if there are big events in the notes, they tend to repeat across multiple notes from multiple doctors. So if one doctor wrote it slightly differently and we didn’t pick it up, we’ll pick it up from the other doctor. And that allows us to enrich our predictions and the information as care is progressing. Once the notes are coded, then we don’t need to do that. You know, once you have truth, then you use truth. And when you have a blackout on truth, because the coders have not gotten to it and all you have is notes, then that’s where technology helps. And what HealthVision does is it takes all the structured data, all the codes from the record that are there, all the codes from the billing record, all the feeds from the payers if we have them, and then all the information coded by generative AI for that kind of last 100 yards, it’s not even the last mile, and brings it all together to give clinicians a complete and comprehensive view of that patient, regardless of where their data have come from, with full visibility and explainability for every component. So if they want to change it, they can click and modify it. Does that answer your question? 

Harry Glorikian: Yeah. Yes. Yes. Yeah. And you know, you almost want to play with it to see it in action. Which sort of brings me to the next thing: I think most innovators would agree that changing culture is a lot harder than changing technology. So how do you operationalize a tool like HealthVision and make sure it gets integrated into the actual workflow, clinical pathways, and care processes at an ACO? I mean, it seems like that would be almost a bigger challenge than building the models. Human beings are typically the speed bump when you’re trying to put something in place. 

Nassib Chamoun: Well, I’m fortunate to have been in health care, introducing new innovations, for the last 35 years, and I’ve lived that speed bump many times. And changing the paradigm of how everybody in the health care system acts, you know, again, across the continuum of care, tens of people interact with the patient; it’s not just the physicians, it’s a team effort, so changing the paradigm of how individuals act and how teams act is where the real innovation is going to come. And that’s exactly what we covered when we presented with Houston Methodist at HLTH. And that’s where I give this health system enormous credit. From day one, when we went in to start the deployment, they pulled every constituent in and brought them to the table, got them involved in the implementation, and got them involved in the analytics up front to understand the data and understand the opportunities where they wanted to act. So they had a goal, because using AI for the sake of using AI in health care is a fool’s errand, you know, and the effort shouldn’t be wasted. If you’re happy with what you’re doing and you don’t think healthcare AI is for you, I think it’s probably too early; let others do that discovery for you. And I call it, and I think you and I talked about it when we had coffee, I call it healthcare AI literacy. 

Nassib Chamoun: And what I mean by AI literacy is not whether you have read about AI; reading about AI versus using healthcare AI are two different things. And the majority, 99.9%, in fact almost just shy of 100%, of people who are in the health care environment have not experienced AI in a way that lets them create context for it and create the use cases. So at Houston they said, we’re going to identify some initial use cases that are driven by the data. We’re going to bring a team that’s going to start deploying and leveraging the technology for those very narrow use cases that we have identified. But we’re also going to encourage our care team to start using the healthcare AI information that’s being delivered through HealthVision, because we provide a very rich and compact view for every patient that allows you to trace every condition back to its source, literally with a click from where you see the risk or the opportunity, and encourage the clinicians to start thinking about how they would change what they were doing, or how they would redesign that workflow, to take advantage of this information that’s being delivered to them. And lo and behold, what was really amazing to us and to the administrators at Houston Methodist was the number of use cases that started to emerge. It was kind of mind-boggling, because, you know, we now have social workers that use this to really drill into some comment or some complaint that the patient had.  

Nassib Chamoun: And you look into the EHR and there’s nothing about it, but then you click into HealthVision, and HealthVision has flagged that as a risk, and it has identified, through some billing record from a third party, that in fact that patient has had a history of this condition. And now they’re trying to customize their follow-up program and what they need at home to make sure they have the best experience possible. And we see that across the health system and the ACO. Sometimes they get a list of patients that have just been discharged from the hospital, and they want to follow up with them. The first thing they do is go to HealthVision, because they want to see what conditions these patients have that represent the greatest risk for a complication. So when they reach out to them, their talk track is aligned with where they want the patient to focus their attention on their own well-being as things are progressing through the recovery process. So we have use cases around what I would call patient shared decision making. And a couple of our organizations have realized that some patients were operated on, or were hospitalized, at a time when they were approaching end of life, and that there probably were other alternatives in terms of palliative care that should have been considered. But they could never quantify the risk or the likelihood that this might be a consideration for the patient. We give them models to identify that possibility. So now these conversations are taking place, sometimes before a patient gets admitted, or, if the patient is in the hospital, they’re having a conversation with them about the support they could give them after they leave the hospital. So it’s enriching that shared decision making process in a way that might not have happened before, because now the individuals who can focus on that can see the patients for whom that conversation would be a possibility, and they’re engaging with them. So this is not healthcare AI saying push this drug into this person or, you know, do this intervention on that person. This is AI that is informing clinicians and empowering them to start thinking differently, acting differently, and collaborating differently. 

Nassib Chamoun: And I think the biggest lever we have in our health care system is to move away from a largely individual kind of effort, because health care in the United States still is largely an individual effort, where you meet with your doctor, they take care of you, they write a note, you go meet with the next doctor, they read the first doctor’s note, they ask you the same questions five million times, and then they do their thing. And just imagine what it’s like if everybody was on the same page, and we created that kind of shared understanding of who you are as a patient, what your risks are, and where the opportunities to optimize and personalize care for you are. It doesn’t mean they’re not going to ask you questions, but their questions may be more pointed at the current problem. And that’s the power that we’re going to get. That’s the innovation. And I think you said it right: it is easier to build the healthcare AI and the infrastructure and the algorithms, but creating AI-literate clinicians and users in the health care system is where the innovation is going to come from, because you’re going to move from rule-based care, where everybody has rules and everybody plays by their own rules, to kind of AI-informed decisions that are more uniform and more synchronized across the care team. And I think that’s the biggest lever we have: upping our, you know, team effort and team synergy to drive better care at a lower cost for a lot of the patients in this country.  

Harry Glorikian: Which brings me to: you guys just raised a $31 million Series C funding round led by Invus. So how will you put that money to work? What are the milestones you feel like you need to hit, or what do you want to prove, in order to get to your next stage of growth? 

Nassib Chamoun: I think right now HDAI is at a very exciting, you know, juncture. We have spent the last roughly five years somewhat behind the scenes. We’ve already invested almost $40 million in our platform to date, because we do have revenue from another segment on the insurance side. And so between the revenue that we have brought in and prior investment, we have spent the last five years building what I believe is a comprehensive, end-to-end analytic and continuum-of-care solution that brings all the pieces together. And we moved over the last couple of months from an MVP to an almost fully featured platform that can be deployed, fully integrated in an EHR, for a health system. It can be deployed in an ACO or a practice, and we can turn it on in no time. We can do one of the more complex integrations today in literally 60 to 90 days. For an ACO, we can activate all your patients using the Medicare data feeds within 24 hours. So we created a level of efficiency that’s going to make the deployment costs for the end user very low and the operating costs for the end user very low, because we work with healthcare AI that’s efficient in terms of compute. 

Nassib Chamoun: You know, you’ve heard that it costs Microsoft more to run ChatGPT than the $20 a month we’re paying them for it today, because there are billions of coefficients. We like to simplify the models so we can run hundreds of models and deliver them for each patient efficiently, so we can create the leverage. And it took time to build this platform. So now we’re moving into a phase where we’re deploying. As I mentioned, we just finished Houston Methodist. We’re about to turn on the Cleveland Clinic. You know, we’re moving forward with Dana-Farber. And we are in conversations with several other health systems and large ACOs where these will be big deployments for us in the coming year. And our goal there is to create that human component that drives use cases and innovation in terms of workflows and pathways for patients that are AI-enabled, AI-facilitated, and empowered, and to create a framework for what an efficient health system would look like with the integration of HealthVision-type capabilities. And we believe this is going to be an incredibly compelling story on multiple dimensions. The first is the time of the clinical health care worker. Again, for me, it’s screen time versus patient time, and our mission is to shift more of the screen time toward patient time. 

Nassib Chamoun: Number two, it’s about teamwork: empower the teams to work more closely together and be on the same page. And that means the beneficiary is going to be a patient who is more engaged with their clinician, more in sync with the care team that’s delivering and customizing care for them. And ultimately they’re going to get a better overall experience and a better outcome, and that is inevitably going to drive costs down for our society. So for us, these deployments next year are where we’re going to focus, and we’re going to begin to generate revenue. The good news is people are willing to pay for this, because we’re pricing it for value. We’re pricing it competitively. We’re trying to empower the system to take advantage of it. You know, we’re not selling data lakes. Everybody wants to collect data, but there’s so much data already; we can get up and running with the data people have. And this is about moving from data to knowledge to insights to actions, and creating a template that we can replicate and scale across the health care system. That’s our goal for spending the money in the coming year. 

Harry Glorikian: I know we’re running up against the clock here on time, but just to wrap this up: we talk all the time on the show about how AI is transforming health care, and I think one of the reasons HDAI is so interesting is that you’re actually doing it on the ground. I mean, as we’ve been talking about, you’re using big data to identify the patients at the highest risk of health complications and taking proactive measures to keep them out of the emergency department or out of a problem. Or, if I heard you correctly, you’re using this to help ACOs look at which physician groups or skilled nursing facilities have the best track record, so they can decide who they want to bring into those organizations. And those seem like very down-to-earth, almost no-brainer kinds of applications of AI. So the last question is: I wonder what you would say about the hype around healthcare AI today. What are some of the myths that are out there about what AI is good or bad at, and what are some of the realistic explanations of how AI can change health care? 

Nassib Chamoun: I think the hype and the myth are driven by a lot of companies who aspire to do this work, or, you know, want to do this work, but have realized that it’s a lot of effort to get there. And I’ve done this before. I’ve introduced new technology. It’s about the ground game. It’s about the blocking and tackling and bringing technology and people together and helping people go through a paradigm shift through experience. You’re not going to teach a paradigm shift. You’re going to empower, you’re going to engage people so they can do their own paradigm shift. And healthcare AI is not magic. AI, you know, it’s machine learning. And none of the AI we have today, even when you look at generative AI, is magic; it’s just a massive model, a massive statistical representation of big data. Nothing is running on its own. And there are ways to do it that are less efficient but can do more complicated things. But for what we need in health care, I would say there are very efficient ways to apply healthcare AI to synthesize data and make the clinician’s time more focused on where they can be more valuable, which is personalizing care and coordinating together. 

Nassib Chamoun: So for me, the hype is that everybody wants to kind of make healthcare AI look like magic. And when you create something that is magic, there’s black magic too, which is evil. And it’s neither. You know, for health care, we have to think about much more straightforward and practical applications, which is: we have massive data that needs to be simplified and turned into actionable knowledge and insight. AI is just an engine that does that, something a competent clinician can do on their own. If you spend an hour going through every record, reading every note, looking at every billing record, you don’t need AI; you’ll figure it out. You can do that. But the answer is, people have less time and more patients. So what AI can do is do that for you all day long. In fact, every time somebody touches the patient at Houston Methodist, the models get recomputed, and that view of that patient is refreshed in real time for everybody who’s caring for them, for everybody who may touch this patient as they’re moving through their care process. Or even if they’re at home and they go see a specialist: well, if the primary care physician doesn’t pick up the phone and call the specialist, they may not figure out what to do next. 

Nassib Chamoun: But the machine is going to see, oh, a specialist visit. Let’s pick up those new conditions. Let’s integrate them. Oh boy, Mr. Smith’s risk just went up. I’m going to ping Dr. Jones, who is his primary care physician. He probably should know that there’s a change there that may require him to see Mr. Smith, you know, a little bit more frequently. So that’s what healthcare AI is. It’s about simplifying data, and it’s about empowering clinicians to do more. And I think any AI that doesn’t involve, you know, clinician oversight and involvement right now is premature. We don’t know enough. We’re not AI-literate enough yet, and I think that literacy needs time and needs to work its way through the system. Maybe five or ten years from now, you can start administering drugs or making decisions, but I don’t even think that’s reasonable, and I’m somebody who’s been living with this for a long, long time. So that’s where I think the future is. And I think people should think about it as a statistical tool that’s synthesizing a bunch of data in the background and giving it to clinicians so they can do more for their patients and less for their EHR. 
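
A toy sketch of the "recompute on every touch" pattern Nassib describes: a new encounter folds new conditions into the patient's record, the risk model is re-run, and a meaningful jump in risk pings the primary care physician. The threshold, data shape, and notification call are invented for illustration, not HDAI's logic.

```python
# Toy sketch of event-driven re-scoring: each new encounter updates the patient's
# condition list, the risk model is re-run, and a meaningful jump in risk
# notifies the primary care physician. All names and numbers are illustrative.

RISK_JUMP_THRESHOLD = 0.05     # illustrative: alert on a 5-point rise in risk

def notify(pcp, patient_id, old_risk, new_risk):
    print(f"Ping {pcp}: patient {patient_id} risk rose {old_risk:.0%} -> {new_risk:.0%}")

def on_new_encounter(patient, new_codes, score_fn):
    old_risk = patient["risk"]
    patient["conditions"] |= set(new_codes)            # fold in conditions from the visit
    patient["risk"] = score_fn(patient["conditions"])  # recompute the risk model
    if patient["risk"] - old_risk >= RISK_JUMP_THRESHOLD:
        notify(patient["pcp"], patient["id"], old_risk, patient["risk"])
    return patient

# Hypothetical usage with a trivial scoring function:
patient = {"id": "smith-01", "pcp": "Dr. Jones", "conditions": {"E11.9"}, "risk": 0.07}
on_new_encounter(patient, {"I50.9"}, lambda conds: 0.02 + 0.05 * len(conds))
```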

Harry Glorikian: Well, Nassib, it’s been great having you on the show. You know, I’m very excited about the platform and what you’re doing with it. And, you know, you almost think to yourself, I would love to go to the hospital that has a system like this, as opposed to a system that doesn’t have it. So I wish you great, great success and good luck, and it’s been great talking to you. 

Nassib Chamoun: Thank you so much for your time and looking forward to catching up when you’re back in town. 

Harry Glorikian: Excellent. 

Harry Glorikian: That’s it for this week’s episode.  

You can find a full transcript of this episode as well as the full archive of episodes of The Harry Glorikian Show and MoneyBall Medicine at our website.   

Just go to glorikian.com and click on the tab Podcasts. 

I’d like to thank our listeners for boosting The Harry Glorikian Show into the top two and a half percent of global podcasts. 

To make sure you’ll never miss an episode, just open Apple Podcasts or your favorite podcast player and hit follow or subscribe.  

Don’t forget to leave us a rating and review on Apple Podcasts.  

And we always love to hear from listeners on Twitter, where you can find me at hglorikian. 

Thanks for listening, stay healthy, and be sure to tune in two weeks from now for our next interview.