Charles Fisher is the founder and CEO of Unlearn, a San Francisco company using purpose-built machine learning algorithms that draw on historical clinical trial data to create “digital twins” of actual participants in randomized controlled drug trials, predicting how each participant would have fared if they’d been given a placebo. By comparing a patient’s actual record to their digital twin, Fisher says, the company can estimate the treatment effect at the patient level and conduct trials with fewer placebo patients. Fisher tells Harry that Unlearn’s software can help drug companies run clinical trials “twice as fast, using half as many people.”
Fisher’s own history is somewhat unconventional for someone in the pharmaceutical business. He holds a B.S. in biophysics from the University of Michigan and a Ph.D. in biophysics from Harvard University. He was a postdoctoral scientist in biophysics at Boston University and a Philippe Meyer Fellow in theoretical physics at École Normale Supérieure in Paris, France, then went on to work as a computational biologist at Pfizer and a machine learning engineer at Leap Motion, a startup building virtual reality interfaces.
Unlearn built a custom machine-learning software stack because it wasn’t convinced that existing ML packages from other companies could help in the simulation of clinical data. Fisher says the company focuses on the quality rather than the quantity of its training data, with a preference for the rich, detailed, longitudinal kind of data that comes from past clinical trials. The outcome is a simulated medical record for each treated patient in a trial, in the same data format used for the trial itself, that predicts how that patient would have responded if they had received a placebo instead of the treatment. These simulated records can be used to augment existing randomized controlled trials or provide an AI-based “control arm” for trials that don’t have a placebo group.
Please rate and review MoneyBall Medicine on Apple Podcasts! Here’s how to do that from an iPhone, iPad, or iPod touch:
• Launch the “Podcasts” app on your device. If you can’t find this app, swipe all the way to the left on your home screen until you’re on the Search page. Tap the search field at the top and type in “Podcasts.” Apple’s Podcasts app should show up in the search results.
• Tap the Podcasts app icon, and after it opens, tap the Search field at the top, or the little magnifying glass icon in the lower right corner.
• Type MoneyBall Medicine into the search field and press the Search button.
• In the search results, tap the MoneyBall Medicine logo.
• On the next page, scroll down until you see the Ratings & Reviews section. Below that, you’ll see five purple stars.
• Tap the stars to rate the show.
• Scroll down a little farther. You’ll see a purple link saying “Write a Review.”
• On the next screen, you’ll see the stars again. You can tap them to leave a rating if you haven’t already.
• In the Title field, type a summary for your review.
• In the Review field, type your review.
• When you’re finished, tap Send.
• That’s it, you’re done. Thanks!
MoneyBall Medicine Podcast Interview with Charles Fisher
Harry Glorikian: We’re all told regularly to think outside the box and look for the white space. That’s exactly what my next guest did after spending over a decade in the academic arena of biophysics and machine learning. Dr. Charles Fisher looked around and realized that while the big names in tech were busy building machine learning capabilities for financial services, consumer goods, retail, and other areas, none of these were actually relevant for biological work, in particular clinical trials.
So was born the concept behind Unlearn.ai. By innovating an entirely new approach, this three-and-a-half-year-old startup is committed to applying machine learning to increase the certainty of clinical trials, significantly shortening the time it takes to achieve outcomes and to validate them while relying on fewer participants. What’s their trick? The use of digital twins. Dr. Charles Fisher holds a B.S. in biophysics from the University of Michigan and a Ph.D. in biophysics from Harvard University.
He completed postdoctoral work at both Boston University and École Normale Supérieure in Paris, France. On the commercial side of things, Charles served as a machine learning engineer at Leap Motion and a computational biologist at Pfizer.
Charles welcome to the show.
Charles Fisher: Thank you for having me,
Harry Glorikian: You know, it was great to talk to you at the AI in biopharma conference and, you know, to learn more about the company. But I want to step back for all the people listening to this podcast who don’t know much about your company.
You know, how would you describe it in the simplest way possible, so as to give the broadest understanding of it?
Charles Fisher: Sure. I think the simplest thing to do is to start off by focusing on the value proposition, right? And so if you think about the way we develop new medicines today, we have to test them in clinical trials to make sure that they’re safe and effective.
Those clinical trials can take an extremely long time, and that costs a lot of money. And so we are developing technologies that will help biopharma companies run those clinical trials twice as fast, using half as many people. The way that we do that: every single clinical trial is a comparison. We want to compare how a person would respond if they received a new kind of treatment
to how they would respond if they did not get that treatment. And that difference, whether they would get better with the treatment but not without it, tells us if that treatment is effective. And there’s something in the business that we call the fundamental problem of causal inference.
And the problem is that you can never do both of those things. You can give the patient the drug, or you can not give the patient the drug; you can’t simultaneously do both. So you can only observe one of the outcomes, what happens if they get the drug or what happens if they don’t, but never both. So what we do as a company is that we collect lots of historical patient data.
And then we use artificial intelligence technologies to try to simulate how a patient would respond if they did not receive a new therapy, so if they were to just be put on the current standard therapy. And then we can use that information as a comparison to a real patient who gets the drug in a clinical trial, to tell whether or not that drug is really effective for that patient.
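The potential-outcomes logic Fisher describes here can be sketched in a few lines of Python. This is purely illustrative: the prediction rule, the score names, and the numbers are hypothetical stand-ins, not Unlearn’s actual model.

```python
# Illustrative sketch of the "fundamental problem of causal inference":
# for a treated patient we observe only the treated outcome, so the
# untreated (control) outcome must be predicted. The rule below is a
# toy stand-in for an ML model trained on historical control-arm data.

def predict_untreated_outcome(baseline_score: float) -> float:
    """Predict how a patient would progress WITHOUT the new treatment.
    (Toy assumption: the cognitive score drops ~2 points on average.)"""
    return baseline_score - 2.0

# Observed data for one treated patient (invented numbers)
baseline_score = 70.0
observed_treated_outcome = 69.0

# The counterfactual control outcome is simulated, never measured
predicted_control_outcome = predict_untreated_outcome(baseline_score)

# Patient-level treatment-effect estimate: observed minus predicted
effect_estimate = observed_treated_outcome - predicted_control_outcome
print(effect_estimate)  # 1.0 -> the drug slowed decline by ~1 point
```

In a real system the prediction would come from a model trained on many historical trials, but the comparison step, observed outcome minus simulated control outcome, is the core of the digital-twin idea as Fisher describes it.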
Harry Glorikian: How did this epiphany come about? How did you come to this? I mean, I know your background didn’t necessarily start in healthcare, if I read your bio correctly.
Charles Fisher: The path to starting Unlearn.ai, which was now about three and a half years ago, is relatively long and winding.
You know, I’m not a person who set out to become an entrepreneur; that was never my goal. I’m a scientist. I did my undergraduate degree at the University of Michigan, where I studied biophysics, and then I did my Ph.D. in biophysics at Harvard, and I had every intention of being an academic.
I did a postdoc at Boston University, where I worked again on biophysics. Then I did a second postdoc. That’s how you know I wanted to be an academic: the second postdoc. I moved to Paris and did a postdoc there, again in biophysics, right? And so all of that, really all the way back to my undergraduate at Michigan, my Ph.D., and my multiple postdocs, all of that work was on applications of machine learning to different problems in biology.
And that means many different problems. So my undergrad and Ph.D. work was about understanding how very, very small molecules move. Not small molecules in the drug sense, but things like proteins and nucleic acids that are way too small for us to actually take pictures of. We wanted to understand how they move.
Right. These are actually flexible, dynamic engines; they have to move around, and we don’t really understand how that works. So I was trying to use machine learning to understand that. And then my postdocs were looking at trying to use machine learning to understand the microbiome, so all the bacteria that live mostly in a person’s gut.
And then after that, I eventually decided... well, it wasn’t so much that I decided to go into industry. It was more like, oh, I’ve spent the last decade in academia, and I actually have no idea what happens in industry.
So I decided to go find out, and moved to Pfizer. So then I worked at Pfizer as a machine learning scientist doing a lot of work looking for biomarkers in clinical trials, a lot of work in phase two trials looking for biomarkers. And then I took a complete left turn, just a total left turn, because it had been biology, biology, biology, biology.
I moved out to San Francisco and started working at a virtual reality startup.
Harry Glorikian: Yeah, it was Magic Leap, right?
Charles Fisher: No, c’mon, it’s called Leap Motion. A lot of people mix them up because they’re both virtual reality startups and they both have “leap” in the name. But yeah. So what Leap Motion was trying to do was, instead of needing a controller to interact with virtual objects...
Harry Glorikian: It was your hand, right above a sensor system.
Charles Fisher: Right. The idea, which had actually shifted long before I worked there from things like keyboards for computers to virtual reality, was basically a head-mounted sensor system so that you could reach out and grab virtual objects. Anyway, I only worked there for a few months.
I didn’t end up enjoying it. It just wasn’t for me. I’m not interested in virtual reality, I don’t play video games or anything like that, and my whole career had been in biology. But I met my co-founders at that company. Myself and my co-founders, John Walsh and Aaron Smith, were all machine learning scientists working at Leap Motion.
We all left the company around the same time. And so then we got into this idea of deciding to start Unlearn. And the original thought around it was that pretty much all machine learning research in the entire world was being driven by like five or six companies, right? Google, Facebook, and, you know, the like. And they have a particular agenda, right?
They are working on problems that are relevant for their businesses. That makes sense, right? But the problems that are relevant for their businesses aren’t the same problems that are relevant for medicine. It’s a completely different kind of data, completely different kinds of problems. So medicine has been really underserved when it comes to a lot of machine learning research.
There just hasn’t really been much, actually. I mean, sometimes people feel like there is, but dollar for dollar, it’s not a comparison. You know, maybe 1% of the total research expenditure on machine learning has been spent on medical problems, if that.
Harry Glorikian: Right. Right.
Charles Fisher: So our idea was basically, well, let’s just start over.
Right? Let’s look at the kind of data and the kinds of problems we really have in medicine and then invent machine learning methods to solve them. And, you know, that’s where we started three and a half years ago.
Harry Glorikian: So, but in reality, aren’t we sort of utilizing tools that are produced by, you know, the FAANG companies, right? Facebook, Apple, and all these guys?
Charles Fisher: We use a completely custom software stack, and we use machine learning methods that are new, that we’ve invented, and that we have patent applications on. So yeah, most companies are not actually inventing anything.
They are just using things invented by others, software written by others, and then repackaging it. But that’s not us. The whole fundamental thought of our company is that that won’t work, that those things were designed for other problems, and that we really needed to focus, especially for clinical data, on creating new technologies designed for that problem.
So everything that we’ve built is completely custom. For the first year and a half of the existence of the company, we just, you know, turned on the fluorescent lights, went into a tiny little office, and wrote code. The first year and a half was just building technology more than anything else.
Harry Glorikian: And so to do what you want to do requires a significant level of data coming in, right, to build what you’re calling a digital twin of a person. How have you been testing out your system, getting real-world data to then optimize it, to build something that you believe is representative?
Charles Fisher: I think that people often, again because of the research coming out of Google and these other companies, overestimate how much data you need to solve these problems. We do need more data than you get out of, like, one clinical trial, right. But you see stuff coming out of, like, GPT-3, this language model that OpenAI put out, right? I forget how many millions of dollars it costs just to train the model.
Harry Glorikian: $12 million for three runs, right? And that was just the electricity cost, if I remember.
Charles Fisher: Yeah, right. So there are problems like that. But in healthcare, again, it’s just very different.
The kinds of data that we have, the problems we have, are so different from what you see in other machine learning areas. The problems we have are that individual data sets are small. That’s the truth: individual data sets are small. There’s an enormous amount of missing data, right?
And then there’s a lot of heterogeneity that in some cases isn’t real heterogeneity, right? So, because the data sets are small, we will aggregate many data sets. But when we see a difference between two data sets, what is that? Is that because of how the data were collected, or is that really reflecting underlying biology?
Harry Glorikian: You don’t know. Yes.
Charles Fisher: Yeah. So there are all these problems that are just different from Facebook downloading everybody’s face. Right, totally different. So, you know, that’s why I say we’ve really gone a very different direction in our methods, because we’re focused on the problems we encounter working with clinical data.
Right. So when we think about collecting data, we’re primarily focused on data quality; data quantity is the second thing that we think about. And to focus on data quality, we want to have rich data sets where you have quite a bit of information about one particular patient. We’d like that information to be longitudinal,
so that we can say how this person is progressing in terms of their disease. And then, you know, we’d like it to be relatively as standardized as we can get. And then we worry about data set size. So in building our data sets, we focused not actually on what people call real-world data, data coming out of routine care, data coming out of the EHR.
It’s not that it’s terrible, but it’s definitely not rich, right. And I like to give the example: our foundation is primarily data from clinical trials, because it’s very standardized and extremely rich. So in Alzheimer’s disease, we have data sets where a patient will go in and take a whole battery of like five or six different cognitive tests.
They’ll get an MRI, they’ll get a PET image, they’ll get all these blood tests, and then they do that every three months for two years. And that just doesn’t happen in clinical practice, right? No one does that when they just go to their doctor. So if you want to get rich, high-quality data sets, clinical trial data is the place that we kind of have to start.
And then the problem we encounter, as I alluded to earlier, is that an individual clinical trial tends to have like 200 or 300 people in it. And that is too small to do machine learning on. So we have to try to integrate data from lots and lots of different clinical trials, which is a lot of work,
but that’s one of the things that we’ve focused a lot of our time and effort on.
Harry Glorikian: And so you get this information from these clinical trials, you aggregate it to a certain number that makes sense for your system to ingest, right? And then what comes out? Right. I have these incredible, you know, fantasies about what a digital twin is,
to a certain degree. Right. I think I go way too Star Trek-ish, or Star Wars-ish, when I think about that. But how do you describe it to someone when you’re saying, yeah, we have a digital twin of someone?
Charles Fisher: I think that it’s almost impossible to really describe it well in words.
The best thing that I like to do, when I’m giving a presentation with slides, is to show an example. If I show an example, people can really get what it is. I think people tend to go one way or another. Some people think that a digital twin is going to be like a molecular simulation of a person, right,
in extreme detail, and that’s clearly just so far outside the realm of technological possibility today. And then other people look at what’s happening in the real-world data space, where people do these matching techniques: if you were in a clinical trial, I would just try to find a similar person for whom I have data, and if they’re similar enough, they count as a control. And what we do is just so different from both of those extremes. So what we create are medical records. You could imagine that a patient in a clinical trial will have a whole set of medical records that are collected for that patient.
And they’re in a particular format. So whenever you want to submit data to the FDA as part of a clinical trial, they require you to use, or, I don’t know if it’s required or strongly encouraged, but either way, there’s a particular data standard that people use. And that would have a patient and all of these different measurements that were made on that patient in the clinical trial. And so we simulate patient data, and our goal was to make it as seamless as possible for our customers. Okay. And the most seamless thing is if our simulated data look exactly like the real data that you would collect from patients in the clinical trial. Yep. So that’s our goal.
We use this data standard, the format called CDISC, and we create simulated medical records in that same exact data standard. So what you’ll get out of this is two matched sets of medical records. You’ll have some medical records that you’ve collected from a real patient when they received the drug.
And then you’ll have another set of medical records, which look basically exactly the same, same format, same everything else, but that predict how that patient would have responded if they had received the placebo instead of the treatment.
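A toy example may help make the matched-records idea concrete. The field names and values below are invented for illustration; they are not an actual CDISC/SDTM layout.

```python
# Hypothetical sketch: a real treated patient's record and their
# simulated placebo ("digital twin") record share the same schema,
# so the same downstream trial-analysis code can consume either one.
real_record = {
    "subject_id": "001",
    "arm": "treatment",          # what the patient actually received
    "visit_month": 6,
    "adas_cog": 28.0,            # observed score under the drug
}

twin_record = {
    "subject_id": "001",
    "arm": "simulated_placebo",  # the model's counterfactual record
    "visit_month": 6,
    "adas_cog": 31.5,            # predicted score under placebo
}

# Identical keys mean one analysis pipeline handles both record types
assert real_record.keys() == twin_record.keys()

# Higher ADAS-Cog is worse, so twin minus real = decline avoided
per_patient_effect = twin_record["adas_cog"] - real_record["adas_cog"]
print(per_patient_effect)  # 3.5 points of decline avoided on the drug
```

The point of the shared schema is exactly the seamlessness Fisher describes: the simulated record drops into the same analysis code as the real one.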
Harry Glorikian: Have you put that into general practice yet? Or where is the company at in its stage of development?
Charles Fisher: I don’t know about general practice. We are still, I would consider ourselves, an early-stage company, right, being around three and a half years old. You know, we have about 20 people, although we are growing; we’re hiring people all the time. So in terms of product development and deployment,
we kind of go through a general progression for all of our products. We start off doing research and publish a scientific paper. So we had a paper come out a little over a year ago describing some of this work in Alzheimer’s disease.
And then once we had that paper, we went to go talk to the FDA about the work that we were doing in Alzheimer’s. So that was step two: first step paper, second step FDA. And then the third step is to start working with customers who have previously completed clinical trials in the space, where we can then reanalyze those trials as part of validation,
and to start working with customers who are running prospective trials. So we are currently working on a single phase 3 clinical trial for a company developing a medical device aimed at the treatment of Alzheimer’s, and that’s ongoing now. And then we are working with a couple of different companies and a number of academic groups doing these retrospective analyses, to present validation studies looking at completed clinical trials.
And actually, in about a month, the first week of November, there’s a big conference in Alzheimer’s disease called Clinical Trials in Alzheimer’s Disease, so it’s a very straightforward name. And we’ll be presenting a number of results at that conference from some of these academic collaborations.
Harry Glorikian: So let me see if I’ve got the validation correct. The way that you validate your product is by going and getting a trial and seeing if you come out with the same answers?
Charles Fisher: Not necessarily the same answer, actually. This is a tricky one; it trips people up. This is a tricky concept, right?
A clinical trial is random, right? A single clinical trial produces a random result. And actually, the p-value, the thing that people talk about, that’s a random number, right?
Harry Glorikian: Yeah. I’ve had a whole podcast on p-values. Yeah.
Charles Fisher: Here’s one way that you could get type 1 error rate control, which is why people do this.
Whatever experiment you are doing, say you’re running a clinical trial: at the end of your clinical trial, you don’t even look at your patient data. You take two dice and you roll them. And if you roll a 12, then you approve the drug, right?
That would achieve the same level of type 1 error rate control that we have out of the way we currently do it. It would have zero power; the power would be terrible. But it would achieve that. So, I don’t remember how I got into a critique of p-values there for a second, a concept I love to critique. Could you remind me of the question?
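Fisher’s dice example is easy to check with a quick simulation. The sketch below, purely illustrative, approves a “drug” whenever two dice total 12, which caps the false-positive rate at 1/36, about 2.8%, while never looking at the data, and therefore has zero power.

```python
# Simulate Fisher's dice rule: approve the drug iff two dice roll a 12.
# This keeps the type 1 error rate at 1/36 (~2.8%) without ever looking
# at patient data -- but it approves effective and ineffective drugs at
# exactly the same rate, i.e. it has zero power.
import random

random.seed(42)

def dice_approval() -> bool:
    """One 'trial': roll two dice, approve only on a total of 12."""
    return random.randint(1, 6) + random.randint(1, 6) == 12

n_trials = 100_000
approval_rate = sum(dice_approval() for _ in range(n_trials)) / n_trials
print(round(approval_rate, 3))  # close to 1/36, whether or not the drug works
```

The conventional 2.5% one-sided error rate and the dice rule’s 2.8% are in the same ballpark, which is what makes the example biting: error-rate control alone says nothing about a procedure’s ability to detect a real effect.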
Harry Glorikian: It’s okay, because I’ve had two podcasts talking about how crappy the p-value is and how we need to move beyond it. But what I was saying is, you validate your process or product against a real trial and see how you come out against it.
Charles Fisher: Yeah. So we basically do three types of validation studies.
So the first one is actually patient-level, because we make simulations of individual patients, right? So the first thing that we’ll do is take some patient who received the placebo in a clinical trial and just compare them individually to their simulation, and say, you know, did we do well on these patients?
Right? And then you can take a step up from that and say, okay, now let’s look at the cohort level. Let’s look at a clinical trial, look at the control arm of that clinical trial, and ask whether or not we could have predicted the behavior of that entire control arm. So that’s a sort of higher level.
And then the third level is to reanalyze a previously completed trial, and to try to demonstrate that we can get much, much better results. So it’s not necessarily about saying you would get the same result for the trial. It’s actually about very clearly demonstrating that we can get much better results.
And what does a much better result mean? It doesn’t necessarily mean that you take a drug that wasn’t approved and get it approved, because maybe the drug doesn’t work, right? That’s not necessarily better. What we’re talking about there is really the statistical properties, the uncertainty that you have in whether or not the drug works. We want to be able to show, and we can show really quantitatively, that you can get much smaller uncertainties.
So you can be much more confident in the result of a clinical trial, using a much smaller patient population, if you leverage this approach. So that’s ultimately what we do when we do these reanalyses: demonstrate those characteristics, better statistical properties using a smaller patient population.
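The statistical intuition here, that a good prediction of each patient’s no-treatment outcome shrinks the uncertainty of the effect estimate, can be demonstrated with a toy covariate-adjustment simulation. This is a generic sketch of the idea, not Unlearn’s actual methodology; all numbers are invented.

```python
# Toy Monte Carlo: repeat a simple "trial" many times and compare the
# spread of two treatment-effect estimators -- one that ignores each
# patient's predicted no-treatment outcome (the "twin" prognosis), and
# one that subtracts it. Subtracting a good prognosis removes most of
# the between-patient variability, so the adjusted estimator is far
# less noisy: the same confidence from fewer patients.
import random
import statistics

random.seed(1)
true_effect = 2.0    # hypothetical benefit of the drug
n_patients = 500

raw_estimates, adjusted_estimates = [], []
for _ in range(200):  # 200 simulated trials to measure estimator spread
    prognosis = [random.gauss(0, 3) for _ in range(n_patients)]
    noise = [random.gauss(0, 1) for _ in range(n_patients)]
    outcomes = [true_effect + p + e for p, e in zip(prognosis, noise)]

    raw_estimates.append(statistics.mean(outcomes))
    adjusted_estimates.append(
        statistics.mean(o - p for o, p in zip(outcomes, prognosis))
    )

ratio = statistics.stdev(raw_estimates) / statistics.stdev(adjusted_estimates)
print(ratio > 2)  # True: the adjusted estimator is >2x more precise
```

Since the standard error of a mean scales as 1/√n, cutting the estimator’s spread by a factor of k lets a trial reach the same confidence with roughly 1/k² as many patients, which is the intuition behind running trials with far fewer control participants.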
Harry Glorikian: And so how are customers, let’s say, or people in the field, responding to this rather than the tried-and-true way?
Charles Fisher: I mean, the value proposition, I think, is enormous. Everyone appreciates the value proposition, right? If you’re in Alzheimer’s disease today, you’ve got to run a five-year clinical trial, and you go spend $200 million.
Right? So, you know, the idea that you could run that in half the time with significantly less expense, yeah, it’s huge. And it’s not just huge for the customers. If you think about that downstream for patients, it’s a really big deal, not only for patients who are participating in these studies, but also for all the patients who are waiting for new therapies that can help them get better.
Right. So everyone gets the value proposition. The thing that we have a challenge with, I would say, as a company, is just that we’ve invented a new technology that didn’t exist before, and it enables you to do something that people had never even really thought about, never even considered.
And so there’s really an educational campaign that we have to do, to present data and to demonstrate to people with data that the approach that we’re taking is sound and that it provides the value we say it does.
A lot of times I have a similar discussion with people about the FDA. They ask, how does the FDA feel about these things? And, you know, our experience dealing with the FDA has actually been very positive. They are quite supportive of innovation. They’re quite supportive of new technologies and new approaches.
They just want you to present data to demonstrate that what you’re doing is actually reasonable and that it will work well. And I think that’s how it should be, right? So we think we’re at an inflection point now, where at the end of this year we’re going to be
putting out five or six different validation studies. That’ll all be these reanalyses. And the trial that we’re working on, this phase 3 trial, is actually reading out at the end of this year, so we’ll be producing results for that trial. So we’re going to have sort of a huge dump of evidence at the
end of 2020, beginning of 2021, an enormous creation of evidence to validate that our approach works. Not because we say it does, but because here’s the data that demonstrates it.
Harry Glorikian: So you’re actually doing the two jobs you always wanted. Well, you’re learning the new one, but you’re still an academic, right? Because you have to be doing research at this level to a certain degree, but you’re putting it into more of a commercial product.
So you might be getting the best of both worlds.
Charles Fisher: I actually personally don’t do that much scientific research anymore. When we started the company three years ago, I wrote code and I did things like that. Now I do, you know, other things.
Harry Glorikian: Raising money, I’m sure, is like front and center.
Charles Fisher: Raising money, giving talks, management, right?
As your company grows, there are a lot of other things to do. I think that, you know, one of the things that is really a big difference from what we do in academia, and it’s really a big cultural difference, and I hope none of the academics who are going to listen to this get mad at me for it, you know, but there was a reason I left academia, right?
To an extent, it was that I didn’t feel like I could really have a significant impact. An enormous amount of what happens in academia is competition. People think about business as being competitive, but it’s actually way more collaborative than academia, right? In academia, you’re in business for yourself and everyone else is a competitor.
Everyone else in the world is a competitor. And the result is that, you know, you basically write a paper, and then you go give a talk, and everyone yells at you that your paper is bad, and you yell at them, and not a whole lot gets done. Then you move into industry, and not only do we have, within the company, a whole team of people who are dedicated to a problem,
but you actually have a whole ecosystem of other companies who want you to succeed. Every single pharmaceutical company that we talk to wants us to succeed, right? Because if we succeed, they succeed. So I end up having a really huge network of people collaborating, kind of all towards the same goal, in a way that I think you don’t see so much in academia.
And what that enables us to do is to really marshal a lot of resources towards things that no one in academia really works on. And one of the biggest things, honestly, is just software engineering. An enormous amount of what we do is software engineering: writing documentation for software, testing the software, and making sure...
Harry Glorikian: And auditing it and yeah.
Charles Fisher: And all of that, right. That’s something that in academia you typically just don’t see.
Harry Glorikian: no, no, no. You need to audit it. And then, you know, you’d love to get certified that your audit is done properly. I mean, I, I know, I know I, I, I have to deal with this with some of our companies all the time, so, correct me if I’m wrong, but Alzheimer’s is like the place you guys have stuck a stake in the ground.
At least it sounds like you’ve got the most there.
Charles Fisher: Yeah. That’s our first indication that we’re really going after.
Harry Glorikian: And so if you play this out into the future, where do you see this expanding? Is it just expanding into other disease states, or is it utilizing this for other application areas?
Charles Fisher: Depends on the timescale. In the short term, the next few years, we're talking about expanding this into new disease areas. There's no reason not to, given the way that we work. One of the advantages of using artificial intelligence for these problems is that the methods are data-driven, so that enables us to say, okay, well, let's look at immunology problems.
Let's look at oncology problems, cardiology, all these other areas. We can do that because it's data-driven. So that's the first thing. If we look farther into the future, then we get into what we call comparative effectiveness. When you run a clinical trial for regulatory purposes, you're basically just trying to demonstrate that in a really, really homogeneous patient population, there is some patient population where the drug works better than nothing, or better than what's currently available, or what's typically used, I should say. So that's what's used for regulatory approval: the drug can work in this population. But when you actually start thinking about, down the line, what prescription should be given to this patient, and how much should their insurance company pay for it, a lot of that starts to ask, well, okay, if there are ten therapies available, which one of them is really going to be best for this patient?
And how can you understand that comparative effectiveness, and how is that going to relate to enabling that patient to go back to functioning in their daily life? Maybe they're somewhat sick and they can't go to work; if they're able to get better, they go back to work. So there's this economic aspect that comes into it.
And what we'd like, well, I mean, as a society, what would be helpful, is to have clinical trials that do head-to-head comparisons of all of these different medicines. That's not done. There's a variety of reasons why it's not really done, but it's not done. And those are things where we can actually start to think about how you could run those head-to-head comparisons in a computer.
Right? Remember, these are drugs that are already marketed, so there is a lot of data from patients taking them. Can we just take all those data and figure out how to run a computer-simulated clinical trial?
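[The in-silico head-to-head comparison Fisher describes could be sketched roughly as below. The therapy names, the per-therapy outcome models, and all numbers are invented for illustration; this is not Unlearn's actual method, just the general shape of simulating every patient under every therapy and comparing predicted outcomes.]

```python
import random

random.seed(0)

# Hypothetical stand-ins for generative outcome models: one per marketed
# therapy, each mapping a patient's baseline score to a predicted outcome.
# Real models would be trained on post-market patient data.
def make_outcome_model(slope, offset):
    def predict(baseline):
        return slope * baseline + offset
    return predict

therapies = {
    "therapy_A": make_outcome_model(0.9, 2.0),
    "therapy_B": make_outcome_model(0.9, 3.5),
    "therapy_C": make_outcome_model(0.8, 1.0),
}

def head_to_head(baselines):
    """Simulate every patient under every therapy; return mean predicted outcome per therapy."""
    results = {}
    for name, model in therapies.items():
        outcomes = [model(b) for b in baselines]
        results[name] = sum(outcomes) / len(outcomes)
    return results

# A simulated cohort of baseline scores (made-up distribution).
baselines = [random.gauss(50, 10) for _ in range(1000)]
scores = head_to_head(baselines)
best = max(scores, key=scores.get)
```

In this toy setup the "trial" runs entirely on existing baseline data, which is the point of the argument: no new patients are enrolled to compare the three therapies.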
Harry Glorikian: Yes, yeah. It's something I've thought a lot about. I haven't necessarily always thought about it from a clinical trial perspective, but how do you have enough data on a patient, the patient's data and all the data behind it, to say how someone's going to react to a certain...
Charles Fisher: Therapeutic or not. Exactly. Yeah. And both of those first two applications I gave you, and I'll get back to what you were just saying in a second, but just to draw a contrast, clinical trials and this market-access, comparative-effectiveness stuff, even though we're making predictions at the individual patient level, the policy in both of those cases is population-level, right?
The policy for a drug approval is that the drug is approved; the policy for the insurance company is that they'll reimburse this much for the drug. So those are both interesting, but they're not really personalized. And that last thing, which I think really is the future, is exactly what you're alluding to: how do you use these types of technologies for personalized medicine, to really think about getting
the right drug to a particular patient for their circumstances, the one that's going to give them the best chance of getting better. That is something in the future that we think about.
Harry Glorikian: Okay. It's interesting, because there are a lot of different companies coming at that particular issue, and from different angles.
Right. But all of them involve data.
Charles Fisher: Yeah. For us, I think that that is the future. I think we can get there. I feel like we're building the technological foundation, the kinds of machine learning we'd use for that. But it's a less mature idea, and it's a less mature market, than these population-level concepts.
Who pays for it? There are all kinds of questions about business models and other things once you get there. So I feel like that's a problem fifteen years away.
But the clinical trials and the market-access things, those are problems that we can solve today, with really no new technologies necessarily. With the technology we have right now, we can solve those problems, and with the business models we have right now, we can solve those problems. So that's kind of our initial focus.
Harry Glorikian: You know, I guess, not today, but one of these days, if there's a demo, I'd love to see it in action.
Charles Fisher: Well, our demo is just the medical record, right? [00:33:00] That is our demo.
Harry Glorikian: Maybe a slide deck.
Charles Fisher: We create simulated medical records. I don't want to oversell it, it's like, okay, well, here's a medical record. But that is what we do, actually. When we're working with pharmaceutical companies, that's actually step one. A big part of step one is they'll say, can we see it?
And we'll say, sure, let's get a collaboration going. You'll send us baseline data from patients in one of your trials, and we'll create these simulated medical records, and then you can dig through them. That's one of the easiest ways for people to really understand what our product is.
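[The patient-level comparison behind these simulated records, observed outcome versus the digital twin's predicted placebo outcome, can be sketched in a few lines. All patient IDs and scores below are made up for illustration; the actual records Unlearn produces are full longitudinal medical records in the trial's own data format.]

```python
# Each treated patient: (patient_id, observed_outcome, twin's predicted
# outcome had they received placebo). Higher score = better, hypothetically.
treated = [
    ("p01", 62.0, 55.0),
    ("p02", 58.0, 54.0),
    ("p03", 66.0, 57.0),
]

def individual_effects(records):
    """Patient-level treatment effect: observed minus twin-predicted placebo outcome."""
    return {pid: observed - twin for pid, observed, twin in records}

def average_effect(records):
    """Trial-level estimate: the mean of the patient-level effects."""
    effects = individual_effects(records)
    return sum(effects.values()) / len(effects)

effects = individual_effects(treated)   # {"p01": 7.0, "p02": 4.0, "p03": 9.0}
avg = average_effect(treated)           # 20/3, about 6.67
```

Because every treated patient carries their own simulated control, an estimate like this needs fewer actual placebo patients, which is the "twice as fast, half as many people" claim in concrete form.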
Harry Glorikian: Well, it sounds exciting and fun. I'd love to learn more about the technology, but that'll happen over time. Especially how you're coming at it by writing your own software and creating your own tools. That, to me, is the most interesting part of it, because I do believe what you said in the beginning, which is that we take everybody else's tools and superimpose them on our world.
And that can only get you so far.
Charles Fisher: Absolutely. Yeah, for us, one of the reasons we wanted to go in this direction was to solve these important problems. That was one of the reasons. One of the other reasons, though, is that me and Jon and Aaron, at our core, we're machine learning researchers, right?
We are interested not only in solving problems in medicine, but in pushing machine learning forward and getting to that next generation of AI. And I think part of the way you do that is you look into new kinds of problems that haven't been attacked with machine learning before.
Right? Because if you actually look at what happened with convolutional neural networks, that was driven by a particular insight about images. It ended up being a huge innovation, and it was driven by the data they started with. It's actually an interesting point, because they started with images as data.
They invented this architecture that was really amazing for images. Then you can expand on that, and people have learned a lot more about how it can be used for molecules and other kinds of ideas. And I think the same thing is going to be true if people start looking at these new kinds of data that are underserved. If, instead of thinking about how you can make the problem fit a convolutional neural network, which doesn't necessarily make sense, you think about it from the start, you can discover new principles of machine learning.
Because you're looking in an area that's uncharted. And for us, that was another part of what was really exciting about getting into this area. The social applications are interesting, but that innate curiosity of exploring something really uncharted was also, I think, really important to all the founders.
Harry Glorikian: Well, we need that. I mean, it's definitely true that the existing tools aren't going to solve all our problems, and there are going to be things we'll want to do that haven't been invented yet, which is why I still have a job investing in these new areas. But, well, this was great.
You know, I look forward to staying in touch and learning more about how things go in the future. And I can only wish you the best of luck. As you said, the more successful companies like yours are, the healthier I can stay as I'm getting older.
Charles Fisher: Yeah, I know. I would say that's one of the main things we encounter when I tell people about what we do, especially our work in Alzheimer's. It's exactly that: they're like, go solve it quickly.
Harry Glorikian: Yeah, exactly, exactly. So, it was great to talk to you, and I look forward, as I said, to staying in touch.
Charles Fisher: Yeah. Thank you very much for having me. Appreciate it.
Harry Glorikian: Thank you.