
How to Make Generative AI in Healthcare Safe, with Huma AI's Lana Feng

The Harry Glorikian Show 

Lana Feng, CEO, Huma AI  

Final Transcript 

Harry Glorikian: Hello. Welcome to The Harry Glorikian Show, where we dive into the tech-driven future of healthcare. 

It’s been less than a year since OpenAI opened up ChatGPT to the general public, and less than six months since OpenAI introduced GPT-4, the large language model that currently powers ChatGPT. 

But in that brief time, the new crop of generative AI tools from OpenAI and competitors like Google and Anthropic has already started to transform the way we think about managing information. 

We’re entering an era when machines can generate, organize, and access information with a level of accuracy, speed, and originality that matches or exceeds the abilities of humans. 

That doesn’t mean machines are making humans obsolete. Not at all. 

But it does mean that organizations that deal in information need to figure out how to equip their people to use the new generative AI tools effectively.  

If they don’t, they’re going to get outperformed by competitors that do that better. 

And just from what we’ve seen in the last year, I’m firmly convinced that professionals in drug discovery, drug development, and healthcare don’t quite understand the scale of the change that’s coming.  

They need to get up to speed right now if they want to incorporate generative AI in healthcare into their work in a way that’s effective and safe. 

Fortunately there are plenty of people in the life sciences industry thinking about how to help with that. 

And one of them is my old friend Lana Feng. 

She’s the CEO and co-founder of Huma AI, and under her leadership the company has been working with OpenAI to find ways to adapt large language models for use inside biotech and pharmaceutical companies. 

GPT-4 and competing models are extremely powerful.  

But for a bunch of reasons that Lana talked me through, it wouldn’t be smart to apply them directly to the kinds of data gathering and data analysis that go on in the biopharma world. 

Huma AI is working on that problem. 

They’re building on top of GPT-4 to make the model more private, more secure, more reliable, and more transparent, so that companies in drug development can really trust it with their data and not get tripped up by issues like the hallucination problem. 

I think anybody who wants to understand how generative AI in healthcare could change practices in the drug industry needs to know what Huma AI is up to. 

But before we dive into my conversation with Lana, a quick request. 

We want more listeners to discover The Harry Glorikian Show, enjoy it, and maybe learn something along the way. 

And one of the best ways for a show to get noticed is to have lots of positive ratings and reviews in popular listening apps like Apple Podcasts.  

So if you like the show and you enjoy the fact that we don’t have any commercials or interruptions, please take a couple of minutes to share your opinion. It’ll really be a huge help. Just go to the show page in Apple Podcasts and tap the stars to leave a rating, or click the link that says “Write a Review.” 

Thanks! And now let’s go on to my chat with Lana Feng. 

Harry Glorikian: Lana, welcome to the show. 

Lana Feng: Thank you. Thank you, old friend. Harry, it’s so good to reconnect and get on your show. Very excited. 

Harry Glorikian: Yeah, it was funny. I mean, as we were just talking, it’s a good thing this is such a small world, right? You reconnect with those people that you’ve known for quite some time, and you can celebrate their success. But I want to talk about the company and what’s going on and some of the developments. For a lot of people that aren’t necessarily familiar with what’s going on in the space: what are the really big pain points or market opportunities that Huma AI is trying to address? I mean, it’s the fact that developing new drugs takes so long and costs so much money. We’re talking, what, and you can correct me, you probably know the numbers better than I do, but an average of ten years and $2 billion to $3 billion, with, what, a failure rate of 95%? And we only talk about the successes. We never really talk about the failures. But can you talk about, in what way does data play a role in that? What is the problem? And how can your approach, your style of generative AI in healthcare, help drug developers cut through their problems and speed up drug development? 

Lana Feng: Oh, there are so many great questions in there. It’s a great topic. Maybe let me dive into this piece by piece, right? You mentioned the stats. It is very, very long, and it takes billions of dollars to develop a new drug, with a high failure rate, right? And I think sometimes people don’t understand why our innovative drugs are so expensive; it’s because of all the R&D. Like you put in there, too, it’s the odds, right? I mean, healthcare is close to 20% of the GDP, and life science is a big part of this. So sometimes when we talk to techies, someone from a big tech company will say, you know, healthcare, life science is a niche market. I’m like, are you kidding? 20% of the GDP, one of every five dollars we spend is on healthcare, right? So it is not a niche market. That’s the first thing. And second, like we mentioned, anything that we can do to either lower the cost or reduce the timeline to bring a new medicine to market is going to impact not only early access, so more patients can get on it, particularly when you talk about cancer, but secondly, it could be a way to bring down the drug price, right? To make it more affordable, more equitable. So there’s all kinds of things in there. 

Lana Feng: So if you look at the development cycle, I’m kind of preaching to the choir, you’re an expert in this, right? From discovery, and then you do the animal studies, you file an IND, and then you go to clinical trials and start testing it in patients, in human bodies. And then you run a phase three registration study, or phase two for breakthrough drugs and what have you, phase two, three, and then you get approval, and then you do commercialization, with medical affairs kind of in between, and then post-market, right? So there’s this very, very long process. And there are lots of companies targeting different points along it. We see a lot of companies trying to use very sophisticated molecule models, for example, to speed up the discovery phase. That’s a great use case for generative AI, probably the original use case, if you will, of generative AI in healthcare before ChatGPT came on the scene. But the problem that we’re trying to address at Huma AI is the fact that healthcare and life sciences actually have so much data, right? We’re not short of data, but we can’t put that data to good use. And why is that? There are three problems directly related to this lack of data automation. One is that the data sets are very complex. Second, the useful data are dispersed in disparate systems that don’t talk to each other. Some of these global pharma companies have hundreds of different data systems; that’s why there are billions spent on data lakes and things like that, to put them in one place. 

Lana Feng: And then lastly, which is a really, really critical hurdle, is that 80% of the data is what we call unstructured. They’re documents, free text, even inside tabular data. So these three things make it really, really difficult to automate data analysis, if you will. The industry is very much relying on experts like you and I manually looking at the data. Publications are a great example, right? You put in your keywords and it returns I don’t know how many papers, and the fewer the better, because you have to manually curate them. So can we use AI to really automate this manual curation, not only from a single data source but also across data sources, to create that critical intelligence? That’s what we set out to do at Huma AI in 2018, when we founded the company. At the time, there were no large language models. We were very much using transformers like BERT models to say, can we do this? Not only make it a conversational front end, ask a question, get an answer, and we used to say “just like Google,” and now we say “just like how you use ChatGPT,” but at the back end be able to use NLP to analyze many, many documents, and across data sources. 

Lana Feng: So it’s a very natural progression to use the latest large language models. We started collaborating with OpenAI over a year ago now, because this is already August, really starting with access to those Davinci models, and we actually deployed a validated generative AI platform for medical affairs to multiple clients in Q4 of last year, just as OpenAI was launching ChatGPT. So that really gives you an idea of the timeline, how early we were. It was a lot of hard work, because GPT models by themselves are not enough, right? You can’t just throw your data into ChatGPT; there’s no security. You need, for example, a secure environment around the large language models. You’ve got to have more, because there are hallucination problems, and there’s the lack of citations. In fact, ChatGPT will make up citations for you, right? So, as we like to say, Huma AI starts where ChatGPT ends. Take the really amazing power of GPT models, or large language models in general, and take it to the next level in order to be usable for life sciences. That is, make it private, make it secure, and more importantly, make it accurate, and make it transparent, being able to provide citations. We’re actually one of the few companies on the market who can provide a citation for every single piece of generative output from the analysis. 

Harry Glorikian: Interesting. Yeah. I mean, basically you’re talking about something that is sort of compliance-ready, as in being able to show where the information came from.  

Lana Feng: And it’s also the believability, right? Can I trust it? It can’t be a black box, particularly since our end users are scientists at these life sciences companies. They are inquisitive. They’re also, you know, “show me the data is accurate.” So that’s kind of a no-brainer, right? 
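The citation requirement Lana and Harry are discussing, every generated statement traceable back to a source, can be approximated with a simple pattern: number the retrieved passages in the prompt, then map the bracketed markers in the model's answer back to real source IDs, rejecting any marker that doesn't correspond to a supplied passage. This is only an illustrative sketch, not Huma AI's actual implementation; the `Passage` structure and the `[n]` marker convention are assumptions made for the example.

```python
import re
from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str  # e.g. a PubMed ID -- hypothetical field name
    text: str

def build_grounded_prompt(question: str, passages: list) -> str:
    """Number each passage so the model can only cite what it was given."""
    numbered = "\n".join(f"[{i + 1}] {p.text}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below, citing them as [n].\n"
        f"{numbered}\n\nQuestion: {question}\nAnswer:"
    )

def extract_citations(answer: str, passages: list) -> list:
    """Map bracketed markers in the answer back to real source IDs.
    Markers pointing outside the supplied passages are dropped, which is
    one cheap guard against the invented citations Lana mentions."""
    cited = []
    for marker in re.findall(r"\[(\d+)\]", answer):
        idx = int(marker) - 1
        if 0 <= idx < len(passages) and passages[idx].source_id not in cited:
            cited.append(passages[idx].source_id)
    return cited
```

Because every citation must resolve to a passage that was actually in the prompt, a made-up reference simply cannot survive this check.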

Harry Glorikian: Right. So you said you launched it, you worked with OpenAI, and last year you launched something in Q4. But I remember an announcement in February of a new AI platform. Is that the same one we’re talking about, or is the one in February a more updated version? 

Lana Feng: It is more updated, but it is the same platform, right? We launched it privately with clients, and then we went public with it in February. 

Harry Glorikian: Okay. I could only see the public one, I couldn’t see the private one. So, at a high level, what are the main features and capabilities of the platform? What kind of data does it search? Is it web-based? What types of output does it give? Who’s it designed for? Can you give me sort of the soup-to-nuts overview of it, and why anybody would be super excited about it? 

Lana Feng: Okay. What is exciting is the ability to actually surface intelligence from massive amounts of unstructured data. Publications are a great example to showcase the power of generative AI. I mentioned the status quo: you go to PubMed, you surface relevant papers, and you read them one at a time. So our Huma AI 1.0, prior to generative AI, was able to use NLP to surface the relevant sections from each paper and put them in one place. If there are 400 papers in a hotly researched area, you don’t have to read 400 papers; you get one file that has 400 paragraphs. But that’s still 400 paragraphs. What generative AI does is take that enriched data, if you will, because large language models are very limited in how much data they can analyze. Even though the models are trained with hundreds of billions of parameters in the case of GPT-4, there’s the tokenization, right? I think it’s 8,000 tokens, you’ve heard this, so you can’t really put in more than three or four or five papers, ten papers max.  

Harry Glorikian: I think they’re up to 32,000 now, I want to say, on something like the larger versions. 

Lana Feng: Right, but then you still can’t, because there are like 34 million scientific publications in PubMed alone, right? So it’s still not sufficient. So this is very much our approach, the sum of all pieces: we use our existing platform to surface the papers that are relevant, and then take those and put them into the large language models. 
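The pipeline Lana sketches, rank a huge corpus cheaply first and hand only the passages that fit the context window to the LLM, is what's now commonly called retrieval-augmented generation. A minimal sketch, with a crude lexical scorer standing in for the real NLP/embedding ranking, and a pluggable `llm` callable standing in for the actual model call (both are assumptions for illustration, not Huma AI's implementation):

```python
def overlap_score(query: str, doc: str) -> float:
    """Crude lexical overlap -- a stand-in for the embedding or NLP
    ranking models a production pipeline would use."""
    query_words = set(query.lower().split())
    doc_words = doc.lower().split()
    return sum(1 for w in doc_words if w in query_words) / (len(doc_words) or 1)

def retrieve_then_generate(query: str, corpus: list, llm, k: int = 3) -> str:
    """Rank every document, keep only the top-k that fit inside the
    model's token limit, and let the LLM answer from that small subset."""
    top_k = sorted(corpus, key=lambda d: overlap_score(query, d), reverse=True)[:k]
    context = "\n---\n".join(top_k)
    return llm(f"Context:\n{context}\n\nQuestion: {query}")
```

The design point is exactly the one in the conversation: the 34 million papers never go near the LLM; only the handful that survive retrieval do.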

Harry Glorikian: Understood. So you’re doing some sort of sorting or ranking up front, and then feeding that smaller subset to the system to generate an answer. 

Lana Feng: That is actually… let me take a step back, right? That is the approach. And the other thing you mentioned is the use cases; that’s really your original question. So what are the use cases? For us they span from clinical trials on the clinical development side, to medical affairs, which is another really common use case for us, to post-market surveillance, which is particularly heavy on publications and literature review, to real-world evidence. Being able to traverse maybe 15 different massive datasets, look at the patient journey, and figure out: are my products better than others? Are they causing fewer adverse events? Are they shortening hospital stays? All these KPIs, if you will. So that’s the use case side. And then what is important for us is that we created a generative AI platform that is scalable, meaning on the large language model side we can swap large language models in and out at will, all through configuration. We currently use OpenAI’s models, but if Google, shout out to Google, gives us their models, we can swap them in and out just through configuration. And on the back end, the use cases are also configured, so we can create a new use case with a different data set in three weeks. 

Harry Glorikian: Interesting. So the LLM back end is not the key factor that’s driving this.  

Lana Feng: You’re absolutely right. Wow, you’re spot on. We call it the sum of all pieces. It’s a critical piece, but LLMs alone are not going to do the type of analysis that we need for our clients. 
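The swap-models-through-configuration design Lana describes a little earlier can be reduced to a small registry pattern: the pipeline looks up whatever backend the config names, so changing providers becomes a config edit rather than a code change. The backend names and stub functions below are invented for illustration; real ones would wrap each provider's SDK.

```python
# Stub backends standing in for real provider SDK calls.
def openai_backend(prompt: str) -> str:
    return f"[openai] {prompt}"

def google_backend(prompt: str) -> str:
    return f"[google] {prompt}"

# Registry mapping configuration names to interchangeable backends.
BACKENDS = {"openai": openai_backend, "google": google_backend}

def make_pipeline(config: dict):
    """Build an answering function whose LLM is chosen purely by config,
    so providers can be swapped without touching pipeline code."""
    llm = BACKENDS[config["llm"]]

    def answer(question: str) -> str:
        # A real pipeline would add retrieval, prompting, and citation
        # checks here; only the final call varies with the config.
        return llm(question)

    return answer
```

Swapping providers is then `make_pipeline({"llm": "google"})` instead of `{"llm": "openai"}`, with nothing else changed.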

Harry Glorikian: So is there a reason you wouldn’t train your own transformer from the ground up on this, and would instead use a pre-trained model? 

Lana Feng: It’s too expensive, right? Too expensive, and it takes a massive amount of time. I mean, we can’t profess that we can somehow create a model that matches billions of investment from OpenAI, and of course on the Google side. That’s not our core competency. 

Harry Glorikian: Okay. I was going to ask you something and…  

Lana Feng: That’s fine. 

Harry Glorikian: It went out of my mind. I’ve got so many questions going through my head at the same time. But okay, now let’s talk about starting this company. I saw Greg’s LinkedIn, I saw your LinkedIn. I think Greg had something like 2015; you have 2018. So walk me through the timeline and how you guys got here, because I know that it’s never a straight line. It’s always a bumpy road when you’re an entrepreneur. 

Lana Feng: Right? You know, Greg’s background, he came from Apple. He used to do demos for Steve Jobs, and he was also a team leader that built the core engine for Firefox. So very tech, right? Tech product development and what have you. He actually registered the Huma AI name, but he was doing something completely different. The NLP is the same, but he was using NLP to do prototyping, like a Canva or something. It was completely unrelated to life sciences. So that’s why we say we actually started Huma AI in 2018, because that’s when we said, let’s take the core technology and actually build a business out of it, starting with life sciences, because that’s where the pain was. 

Harry Glorikian: Did that poor guy understand what he was getting himself into? He’s like, my God, tech is so much simpler than what are you guys doing?  

Lana Feng: Yes, that’s the first complaint, right? But now he’s half an expert. And secondly, what he has, which is really valuable, is that he is a learner. He continues to learn, in a completely new domain. I think he knows more about life sciences than some of the life science data scientists we have. It’s the thirst to learn, the inquisitive nature. But he does, sorry, he does complain that life sciences is too slow.  

Harry Glorikian: I’m sure. I still have those conversations. You know, once somebody understands something, they’re like, you guys are crazy. It’s funny, because even with investors, when I talk to them, if they really understand what’s going on, it’s almost like they run in the opposite direction. Because let’s face it, we are not funding businesses, we’re funding science experiments. And science doesn’t always do what you want it to do. 

Lana Feng: Exactly. 

Harry Glorikian: So were there any, I don’t know, main technology advances that really got you to where you were going? Because for you guys to even know about what was available from OpenAI before everything really became public, I mean, some of that is just, you know, right place, right time, knowing the right people.  

Lana Feng: You know, you actually talked about this earlier: it’s timing, right? As entrepreneurs doing novel technology innovation, we tend to be early. Being a late follower is not really our MO; if anything, we’re too early. So we actually had that problem. We’d talk to clients and it’s like, why would you want to do this? But like I said, it’s the sum of all pieces. Before OpenAI, we’d already done lots of really hard work trying to understand the use cases, trying to use NLP to solve those use cases. I really want to say generative AI just gave us a superpower. That’s probably the single biggest value-add in us transforming healthcare, I want to say. 

Harry Glorikian: And so when you’re talking about the large language models, right, I want to say that the end of ’22, beginning of ’23 seemed to be the inflection point of what these models could do. And then, you know, basically Sam just jammed more data into this thing, and other capabilities started to appear in these models that you couldn’t get from the previous models. So did that have something to do with the superpower you’re talking about?  

Lana Feng: Um, this really goes to OpenAI’s unique approach, right? Because of our early collaboration, we were alpha testers. We saw GPT-3, and we were working with massive amounts of scientific data using 3, 3.5, and now 4, and we saw really dramatic improvement between these models. We haven’t had our hands on Google’s model, so we don’t really know its performance, but from what we hear, the GPT models are still by far the best, at least in people’s hands. And the reason is that they started what we call reinforcement learning from human feedback very early on. Now, of course, everyone is using this approach for training their language models; there are so many large language models these days. But they were the first ones to do this. That’s why their models were far more accurate than others to begin with. 

Harry Glorikian: So I have a question about that. I don’t know if you’ve seen some of the latest stuff coming out, but there’s basically a discussion about how OpenAI is getting, I don’t want to say dumber, but less accurate, right? And some of the stuff I’ve been talking to people about is, it could be the reinforcement learning, because humans are not as good as the machine is, so it’s basically getting worse because of some of the reinforcement learning. And I’m wondering if you guys have run into this issue. 

Lana Feng: From what I heard, right, because they’re like the bullseye in the news all the time, it’s because of the sheer volume, because they have, what, over a billion users now? I think it’s just cost. The GPU cost and everything is just really hard to take. What happens, from what I heard, is that they have these smaller models that are trained for each domain, so when they take a question, sometimes they route it down to a smaller model to decrease the volume. Even if you look, I pay for the subscription for ChatGPT, and in fact the default is 3.5; you actually have to click to get to 4. So that’s another way for them to divert traffic, if you will, from the most expensive model. 

Harry Glorikian: Right, right. Yeah. So, Greg, your CTO, said Huma AI starts where ChatGPT ends. And I know we’ve been sort of talking about that in a different way, but what does that really mean? In what ways does Huma AI go beyond ChatGPT? Are there limitations in ChatGPT that prevent it from doing the things that, say, Huma can do? Does it have to do with the training data, because I know that ends in 2021? Or what other issues are there that limit it and allow you to go that next step? 

Lana Feng: Okay, so maybe I can start with current ChatGPT, right? Even GPT-4 is trained on tons of public data, you mentioned the publications and what have you, but it is a general model. It’s not domain specific. So our clients would not want to just feed their data into ChatGPT; that’s kind of a no-brainer. That’s the first thing: the privacy and security. They cannot use ChatGPT to analyze their private data. The second thing is that hallucination is a big deal. We want close to 100% accuracy, the higher the better; you’ve got to be at least over 90%. And lastly, a real deficiency is the inability to provide citations, even making up fake citations that don’t exist. But it’s really smart at other things, right? So can we take this really, really powerful tool, the GPT models, and retrofit it, refine it, give it another layer of superpower? Can we use them in a private and secure environment to analyze our clients’ private data? That’s the first question. And how do we solve the hallucination problem? Because we have the data, we can make it more domain focused for life sciences. Take this well-rounded, powerful thing and refine it, give it superpowers for this particular domain. So this is where accuracy is really important, and we have multiple approaches to increase accuracy. 
When we say high accuracy, what we’ve actually done is that for every client we engage with on our generative AI platform, they always ask for validation. So we do a blinded validation, right? Human curation, manual curation, is our status quo and hence the gold standard, and we compare the generative AI output against it. That’s how we reach that number. And then lastly, the citations. That was a must-have for us, and it was actually a really, really hard problem to solve. Even when we were talking with OpenAI and showed them, they were like, this is really hard to do. 

Harry Glorikian: Yeah, yeah. I mean, one of the companies I’ve been working with in the finance space has been able to do the same thing, which is, yes, here’s the generated paragraph, but here are the paragraphs we used to come up with that master paragraph. It’s a must-have in these spaces. You can’t just say, hey, it just came up with it, right? I mean, well, I guess if the FDA doesn’t look up the citations, maybe you could slip it through. But otherwise, I think you have to have the backup data. Otherwise you’re never going to get these things through. 
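The blinded validation Lana describes, scoring generative output against expert manual curation as the gold standard, ultimately comes down to an agreement rate over a shared set of questions. A toy version of that scoring step (the data shapes here are assumptions; a real validation would also blind the reviewers to which system produced each answer):

```python
def validation_accuracy(gold: dict, model_answers: dict) -> float:
    """Agreement rate between generative output and expert curation.
    `gold` maps each question to the expert's (gold-standard) answer;
    `model_answers` maps the same questions to the model's answers."""
    agreements = sum(
        1 for question, expert_answer in gold.items()
        if model_answers.get(question) == expert_answer
    )
    return agreements / len(gold)
```

A claim like "over 90% accuracy" is then just this ratio computed on a held-out, expert-curated question set.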

[musical interlude] 

Harry Glorikian: Let’s pause the conversation for a minute to talk about one small but important thing you can do, to help keep the podcast going. And that’s leave a rating and a review for the show on Apple Podcasts. 

All you have to do is open the Apple Podcasts app on your smartphone, search for The Harry Glorikian Show, and scroll down to the Ratings & Reviews section. Tap the stars to rate the show, and then tap the link that says Write a Review to leave your comments.   

It’ll only take a minute, but you’ll be doing a lot to help other listeners discover the show. 

And one more thing. If you like the interviews we do here on the show, I know you’ll like my new book, The Future You: How Artificial Intelligence Can Help You Get Healthier, Stress Less, and Live Longer.  

It’s a friendly and accessible tour of all the ways today’s information technologies are helping us diagnose diseases faster, treat them more precisely, and create personalized diet and exercise programs to prevent them in the first place. 

The book is now available in print and ebook formats. Just go to Amazon or Barnes & Noble and search for The Future You by Harry Glorikian.  

And now, back to the show. 

[musical interlude]  

Harry Glorikian: So it seems like you guys have this really good relationship with OpenAI. I think you guys actually have a collaboration working with them on complex life science data. 

Lana Feng: So, we’re not actively building use cases with them, no. It’s more that we have alpha-tester access to all their new stuff, right? There’s a handful of companies who are allowed to do this. They’re so busy now; it’s OpenAI pre- and post-ChatGPT. But we still have a really great working relationship with them, and that relationship has allowed us to build new stuff. 

Harry Glorikian: Right, no, absolutely. And so now I’m thinking, you know, five or six episodes ago I talked to the Med-PaLM people, and we were talking about Med-PaLM 2. So it just popped into my head: would you guys be better off plugging into a Med-PaLM 2 model that already has the guardrails on it, that’s focused just on healthcare AI? 

Lana Feng: Absolutely, if we had access to it. That’s why I’ve been knocking on doors. If anyone from Google is listening! We’re having multiple conversations with Google right now, basically saying, can we have access? I don’t know what’s happening behind the doors there. I think it’s a closed beta, right? So, yeah. 

Harry Glorikian: They’ve let some people play with it, from what I know. Actually, what’s the date? August 22nd. So in mid-September I’m actually interviewing Scott Penberthy for an upcoming show. I’d be happy to ask him once I get him on the show. 

Lana Feng: That’s right. Put in a good word for us. Yeah. So we strongly believe this is only the first inning of the game; we’re really, really early. We also don’t see OpenAI as the only provider. There are going to be so many people working on really awesome models, not only the tech giants. You have open source like Hugging Face and what have you, you have Cohere and Anthropic really working on these models, and of course AWS with Bedrock and what have you. So there are going to be many really great LLMs. That’s why we built the system to be able to swap them in and out with ease. 

Harry Glorikian: Yeah. I mean, I try to tell people, my wife asked me today, why are you spending so much time playing with all of these things? I’m like, because it’s like electricity. Actually, I think it’s going to be more profound than electricity in how it’s going to change things in everybody’s life. Nobody can wrap their head around the degree of change that it’s going to cause. So I don’t even think we’re in the first inning. Totally agree. 99% of the people I talk to have no idea what I’m talking about unless they’ve fooled around with OpenAI. But let’s talk about something you’ve mentioned a number of times: confabulations or hallucinations, whatever word you want to put on it. How do you guys guard against that, making sure that the system doesn’t have a fit? I mean, if you’ve ever seen the jailbroken versions of these, it’s actually quite scary what you see going on in the background until the guardrails are put on and then it magically comes out with a better answer. How do you manage that? How do you know that the output is accurate, or, say, more accurate than what a human expert could collate or curate? And then how do you measure the model’s accuracy? 

Lana Feng: That is such a good question. Many-layered again, Harry. So, yes, we actually see the models throw a fit in real time. How do we handle this? Because we started out as an NLP platform for life sciences, accuracy has always been important. Our approach from day one has been an expert in the loop. You’ve probably heard about human in the loop; this is expert in the loop, right? Every output has to be reviewed by our internal life science experts. This is how we put a guardrail in place to make sure it is correct. We very much take an iterative approach. It’s not like you feed in your data and it spits out an answer. It’s very much iterative: the expert reviews it, and then we put that in for reinforcement learning. This is actually a classic case of reinforcement learning from human feedback. That’s the first thing. We don’t label the data; we’re never in the labeling business, because we’re a small team. Instead, we loop in expert knowledge. And the second thing is that we take the expert in the loop through reinforcement learning to another level on our front end, where we give the end users, these would be folks within life science companies, the ability to take those generative outputs and say, is this true or not? So there’s another level, outside our organization but at the customer level, of being able to solicit or loop in our customer end users’ expertise to further refine the models. Yeah. 

Harry Glorikian: So when they make a change, does it actually have an effect on the model, or does it just let you guys know and then you go back and make the change? Or is it real time?

Lana Feng: It’s not real-time training. The system automatically captures those corrections, and then we use them to retrain. Yeah.
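That batch feedback loop, where end users flag outputs, the system captures the flags automatically, and retraining happens later rather than in real time, might look something like this minimal sketch. All names here are illustrative assumptions, not Huma AI's real interface.

```python
# Hypothetical batch feedback capture: user flags are logged as they
# arrive, then drained periodically into a retraining batch.
from collections import deque

feedback_log = deque()  # the system captures each flag automatically

def record_feedback(output_id: str, answer: str, is_correct: bool) -> None:
    """Called whenever an end user marks a generated answer true or false."""
    feedback_log.append({"id": output_id, "answer": answer, "correct": is_correct})

def export_retraining_batch() -> list:
    """Drain the captured feedback into a batch for the next retraining run."""
    batch = list(feedback_log)
    feedback_log.clear()
    return batch

record_feedback("a1", "Drug X is contraindicated with Y", is_correct=True)
record_feedback("a2", "Trial enrolled 500 patients", is_correct=False)
batch = export_retraining_batch()
print(len(batch))  # 2
```

The design choice Lana describes is visible in the split: `record_feedback` is cheap and runs at interaction time, while the model itself is only updated when `export_retraining_batch` feeds an offline retraining job.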

Harry Glorikian: Excellent. Okay. So, if I’m not mistaken, you also recently contributed to a Forbes article where a number of experts were asked to write about current and potential ethical crises in technology. I think you asked: how do we ensure AI is used for good? How do we provide the necessary guardrails of privacy and security? Let’s see... how do we minimize deception and, more importantly, provide transparency about generative AI? All of these are big issues. So do you think better regulation and guardrails are necessary in life sciences? Specifically, I’m thinking about the paper I saw at MIT, where high school students were able to sort of design the next pandemic and figure out where to buy the raw materials. What are your thoughts on this, since you participated in that piece? Do you think there should be more regulation, or are we okay the way we are? Are we waiting for more clarity?

Lana Feng: As a provider in this equation, right, in this ecosystem, I don’t know how much faith I have in external regulation from government bodies. I mean, you heard Sam Altman saying, regulate me, regulate me, right? But they just can’t move fast enough. So I really think providers, people who are actually in the trenches building solutions like us, need to step up, right? Building those responsible components you mentioned into their solutions. AI for good, that’s got to be our slogan. It is our slogan, right? How do we accelerate bringing life-saving medicine to market faster? And there are many, many companies doing similar things, not just in life sciences but in fintech, like the company you mentioned. That’s another vertical where generative AI has been applied really thoughtfully because of the regulations. So I think that’s where we can contribute. We’ve been following responsible AI since day one, partly because we’re in this vertical that is highly regulated and we’re dealing with patient lives. In fact, here’s an external validation that our approach is on the right track. You’ve heard of Gartner, right? So, letting the cat out of the bag because this is a podcast: we’re actually on 24 Gartner Hype Cycles. They basically cited us as a vendor for generative AI. First, they really love our scientific approach. It’s not putting a wrapper on a model and taking it for a spin; it’s very much this layered, scientific approach, so they love it. But more importantly, it shows the permeation of generative AI in healthcare into every aspect of the enterprise, right? Going back to what you said, we’re not even in the first inning yet. The economic potential of generative AI in healthcare is huge.

Harry Glorikian: Well, it’s funny because you mentioned Gartner, and that was actually going to be my next question. When I was looking at the chart of how they map it out, generative AI in healthcare is sort of right at the top of that peak of inflated expectations. Right? Because I think what they’re saying, if I read it correctly, is that they believe this field overall is a little overhyped, and they’re predicting that expectations will decline through, what do they call it, the trough of disillusionment. So it’s not always good to be at the top of the curve before, you know, they show it coming down. I mean, it ultimately climbs back up to the plateau of productivity, if I remember the scale correctly, and that takes some time. Do you agree with that analysis? I have my own opinion, and I won’t share it until you tell me yours. But is it fair that Gartner is saying this is overinflated?

Lana Feng: Um, it’s twofold for us, right? As a generative AI in healthcare startup, being cited as a vendor along with the likes of Microsoft, Google, and Hugging Face is a big deal.

Harry Glorikian: Huge. 

Lana Feng: Particularly since we broke Gartner’s record. They’ve never seen a vendor mentioned in the Hype Cycle so many times. As far as where the dot is on the curve, I do agree that it’s kind of overheating. You know how many startups there are; I’ve actually heard stories from our investors like, oh gosh, I met with ten generative AI companies just today. Everyone’s trying to put a wrapper on top of a large language model to do something, right? We’re focusing on this vertical, this really, really tough vertical, trying to solve tangible problems. And if you look at Gartner’s numbers for generative AI, it actually says something like 2 to 5% market penetration. The power is transformative, but we’re still at an early stage. So I think we focus on what we do. I think a lot of the generative companies are going to flame out. But being able to take large language models, focus on a highly complex vertical, and build generative solutions that solve tangible problems is going to give us that longevity.

Harry Glorikian: Yeah, you know, it’s funny, because I’m not sure I fully appreciate the data that’s going into this, or how they’re defining it. Right? Because I’m talking to dozens of companies, and they are implementing generative AI in healthcare approaches, or other AI approaches for that matter, not just generative, but really changing how they’re doing customer service, inventory management, all sorts of things, changing how many employees they need, how fast they can get something done, their profitability, et cetera. I’m not sure all of that is being baked into where the dot is. But from the other standpoint, you’re right. I saw ten generative AI companies. First question: are they really generative AI healthcare companies?

Lana Feng: Are they really AI companies? Right. We used to get that question, yeah. Maybe less than 10% are true AI companies.

Harry Glorikian: Yeah. I mean, are they real AI companies, or are they a chatbot? We’ve been doing chatbots for a long time. Okay, it’s gotten easier and it’s gotten better, but if it’s just going to spit out what I could have found anyway, it’s not really generating from the ground up. I mean, the number of companies that are truly building the scaffolding to do this is astoundingly small.

Lana Feng: Exactly. And the potential impact is huge. I do want to make one comment: I don’t really know how they come up with the cycle. It’s all opaque to us, because it’s independent. We just did a vendor briefing and then they came up with it. So I’m agreeing with you that “hype cycle” is maybe the wrong name. If you look at it as a maturation cycle, it makes more sense, right?

Harry Glorikian: Yeah. I think the impact of this is going to be huge, because I participate in a chat group of, like, 1,000 global CEOs, right? And I’m watching what they’re doing, how they’re implementing it. There’s a saying, I’ll use the nice phrase, “fool around and find out,” as opposed to what we usually say. But people are really digging into: how do I take this, apply it, and really streamline something in my company? And I don’t think they’ve realized how impactful this is going to be. If you’re not using it, you’re toast, I think, in the competitive dynamics, from what I can see.

Lana Feng: Absolutely. Absolutely. 

Harry Glorikian: So I always tell people, it’s that age-old thing: the doctor using AI is going to outperform the doctor that’s not, and the lawyer using AI is going to outperform the lawyer that’s not. It’s the same thing here. Companies that are using it are going to outperform, or be more profitable than, the ones that are not. So what does success look like for Huma? Do you see a world where every drug discovery and development company subscribes to your platform? What do you guys envision? Okay, short of somebody paying billions of dollars for it, let’s assume that is the ultimate success. But how do you define success?

Lana Feng: So on the mission level, our mission has always been: can we accelerate the creation of new medicine? Any contribution we can make, even getting there just 30 days earlier, is huge in terms of treating more patients, and in terms of upside from the company perspective, on the revenue and financial bottom line. So what we see is being able to rapidly increase adoption for existing use cases on the platform while also creating new AI use cases in healthcare. Our hope is to create what we call a generative AI operating system, linking these together. We’ve also now started offering APIs to clients, particularly some of the globals that are building their own solutions. Now we can be part of their solution, right? And we can make IT and data science folks heroes, because they can build really amazing stuff on top of this. That’s our goal, because we hope that a rising tide lifts all boats. We honestly don’t mind our competitors working with us, right? It’s about being able to help bring new medicine to market faster.

Lana Feng: And the other thing I want to say is, can we use this copilot concept? AI alone is not going to help us achieve that goal. The copilot concept is really AI plus experts. I totally agree with you: AI is not going to replace you; it’s the people who use AI who will replace you. There is fear within these companies, of course: is generative AI going to replace me? But for those of us in the trenches, it’s just the start. It’s the way we use AI in healthcare that matters. Humans are going to be the starting point and the end point. Even with just ChatGPT, the better questions you ask, the better answers you get, right? That’s the human brain piece. So, you know, if I were the expert, I would not be too worried.

Harry Glorikian: Well, it was great having you on the show. I only wish you incredible success because I’m getting older, so the more good drugs that we have out there, the more that I can benefit from it. So it was great to have you on the show. 

Lana Feng: Thank you so, so much, Harry. That was a great, great conversation. Very interesting and enlightening.

Harry Glorikian: That’s it for this week’s episode.  

You can find a full transcript of this episode as well as the full archive of episodes of The Harry Glorikian Show and MoneyBall Medicine at our website.  

Just go to and click on the tab Podcasts 

I’d like to thank our listeners for boosting The Harry Glorikian Show into the top two and a half percent of global podcasts.  

To make sure you’ll never miss an episode, just open Apple Podcasts or your favorite podcast player and hit follow or subscribe.  

Don’t forget to leave us a rating and review on Apple Podcasts.  

And we always love to hear from listeners on Twitter, where you can find me at hglorikian.  

Thanks for listening, stay healthy, and be sure to tune in two weeks from now for our next interview. 

FAQs about Generative AI in Healthcare

What is generative AI in healthcare?

Generative AI in healthcare refers to the use of advanced artificial intelligence techniques to create new and innovative solutions within the medical field. It goes beyond traditional AI systems by leveraging vast amounts of medical data to generate personalized diagnoses, treatment plans, and insights. Think of it as a virtual medical assistant that combines knowledge from diverse sources to provide novel medical solutions.

How is generative AI transforming healthcare?

Generative AI is revolutionizing healthcare by enabling quicker and more accurate diagnoses and treatment options. Imagine it as a medical collaborator that analyzes complex data to identify subtle patterns and correlations that might elude human observation. This leads to faster interventions, improved prognoses, and enhanced patient outcomes, ultimately raising the bar for the quality of medical care.

What are some AI use cases in healthcare?

Generative AI has diverse applications in healthcare. It enhances medical imaging by providing more detailed interpretations of X-rays, MRIs, and CT scans. In drug discovery, it simulates molecular interactions to identify potential new medications. AI-powered chatbots offer instant medical information, while predictive algorithms help hospitals anticipate patient deterioration and intervene in a timely manner.

How are we currently using AI in healthcare?

Currently, AI is being employed across various healthcare sectors. AI algorithms predict patient deterioration, aiding in early interventions. Pathologists use AI to improve cancer diagnosis accuracy. Wearable devices with AI monitor vital signs, enabling individuals to actively manage their health. AI also assists in medical research, analyzing large datasets to identify trends and insights.

What is the role of generative AI in the future of healthcare?

Generative AI will play a pivotal role in shaping the future of healthcare. It’s not meant to replace medical professionals but to enhance their capabilities. Think of it as an artist’s tool that elevates their masterpiece. Generative AI will continue to refine diagnostics, personalize treatment plans, and advance medical research. Its ability to analyze complex data will unlock new dimensions of medical understanding, ultimately improving patient care.

How can patients benefit from generative AI?

Patients benefit from generative AI through quicker and more accurate diagnoses, personalized treatment options, and improved disease management. AI-powered wearable devices monitor health in real time, allowing individuals to take proactive measures. Moreover, AI-driven chatbots provide instant medical information, offering reassurance and guidance for health concerns anytime, anywhere.

Will generative AI replace doctors and clinicians?

No, generative AI will not replace doctors and clinicians. Instead, it will serve as a powerful tool that enhances their capabilities. Generative AI can analyze vast amounts of data and provide insights, but human expertise, empathy, and nuanced decision-making are irreplaceable. Medical professionals will continue to guide and oversee AI-generated recommendations to ensure the best possible patient care.

How can healthcare professionals integrate generative AI into their practice?

Healthcare professionals can integrate generative AI by staying updated on the latest advancements and collaborating with AI specialists. They can use AI systems for data analysis, research assistance, and personalized treatment planning. By embracing AI as a complementary tool, healthcare professionals can offer more accurate and effective care to their patients.

What does the future hold for generative AI in healthcare?

The future of generative AI in healthcare is promising. As AI technology evolves, it will become even more adept at processing complex medical data and generating valuable insights. Its role in medical research, diagnostics, and treatment will continue to expand, fostering a healthcare landscape shaped by data-driven precision and improved patient outcomes.