Raffi Krikorian Says "We Don't Have Much Time Left" to Rein in AI

The Harry Glorikian Show 

Raffi Krikorian, CTO and Managing Director, The Emerson Collective 

For April 9, 2024 

Final Transcript 

Harry Glorikian: Hello. Welcome to The Harry Glorikian Show, where we dive into the tech-driven future of healthcare. 

Usually, when we talk on the show about artificial intelligence, we’re focused on specific questions like how AI is changing healthcare delivery or drug discovery. 

But today we’re going to zoom out and talk about how AI is changing everything. 

That’s because my guest is Raffi Krikorian 

And I don’t know anybody who has broader experience or a broader point of view about how AI is changing everything, from transportation to social media to politics. 

Raffi first rose to prominence in the technology industry as the vice president of engineering at Twitter, where he was responsible for getting rid of the Fail Whale and making the company’s backend infrastructure more reliable. 

Then he went to Uber, where he directed the company’s Advanced Technology Center in Pittsburgh and oversaw the launch of Uber’s first fleet of self-driving cars. 

But then he stepped into the job of chief technology officer for the Democratic National Committee, where he helped the party overhaul its technology infrastructure and analytics. His goal was not just to prevent more security breaches like the email hacking incident that disrupted the 2016 election, but to make sure the party had the tools it needed to engage voters in the next election. 

And finally, Raffi moved to the philanthropic and venture organization Emerson Collective, where he is now. 

Today he runs the technology group at Emerson, including for the venture team, for the policy team, for the political team, and for the philanthropic team, and also helps to upgrade the back office technology at the social change organizations that Emerson works with. 

On top of all that, Raffi recently launched a fantastic podcast called Technically Optimistic, where he’s taking a deep dive into the way AI is challenging us all to think differently about the future of work, education, policy, regulation, creativity, copyright, and a hundred other areas.  

I would say Technically Optimistic is a must-listen for anyone who cares about how we can build on AI to transform society for the better while minimizing the collateral damage. 

Raffi and I talked about why he moved to Emerson Collective, why and how he started the podcast, and what he really thinks about what government should be doing to prepare for the waves of social change AI will bring. 

It was an amazing conversation and it was great to have Raffi come on the show to share some of his ideas with us. 

So here’s our chat. 

Harry Glorikian: Raffi, welcome to the show. I mean… 

Raffi Krikorian: Hi. Thanks for having me, Harry. 

Harry Glorikian: It’s great to have you here. I mean, it’s not like we don’t know each other, but it’s great to actually have you on the show. Um, I mean, for those people that don’t know, you know, Raffi’s got this amazing background both inside and outside Silicon Valley, right? You led the reliability and efficiency efforts at Twitter. Um, when it was more reliable, I’ll say. But, um. 

Raffi Krikorian: You said Twitter, not X, so you’re accurate then. 

Harry Glorikian: Then you directed, uh, the launch of Uber’s fleet of self-driving cars, and then one day you just, poof, just said, I’m out of here. I’m going to go overhaul the Democratic National Committee’s technology infrastructure. And I remember hearing that and going, well, what happened? What, poof, you’re like, just. 

Raffi Krikorian: Did he just, like, go rogue? 

Harry Glorikian: And then I think since 2019, though, you’ve been at Emerson Collective, where you’re both the chief technology officer and the managing director, which is sort of interesting, how you play those two roles. And so I wanted to sort of jump in and talk about the current roles, right? So that, you know, maybe some of our listeners have heard of Emerson Collective. I mean, um, they may recognize that Laurene Powell Jobs helped set it up as sort of a hybrid venture capital firm and philanthropic organization. Um, but maybe, I don’t know, how would you describe Emerson Collective? How did you end up taking the role there, blah, blah blah. Yeah. 

Raffi Krikorian: Like here’s a quick 101. So like, Emerson Collective is a social change organization that works on some of the world’s hardest problems. And we largely think about, like, how to get people access, how to make sure they can achieve their best selves and things like that. So we work usually in four major areas: immigration, education, health and the environment. But of course there’s a lot of bleed-through between those. Like you can’t talk about immigration without talking about politics. You can’t talk about education without talking about journalism. Like, you can’t talk about any of these without talking about democracy. So there are a lot of other issue areas that you have to work on, because we believe that to push any of these forward, you just need to come at it from so many different angles. Like, a philanthropic organization alone is going to have a lot of trouble moving the immigration space. But can philanthropy combined with venture capital, combined with politics, combined with a press story, combined with all the stuff? Maybe. I mean, like, time will tell. That’s at least our theory around it. And so I mentioned the four main vertical areas. You can think about the way we organize ourselves as these horizontals. So we have a philanthropy team that has 20-something people on it that does most of our philanthropic work. 

Raffi Krikorian: That’s usually in the United States. You know, 95% of it is in the United States. We have a venture team that does a whole bunch of venture capital investing, because we believe that could be a lever for change. We have a policy team in DC that works really closely with the Hill, the administration, also state-based stuff, because, like, we believe that you need to make policy changes in order to get this work done. We have a political team that works on electing the next generation of leaders who can be a positive force for change. A media team. I mean, literally the list keeps going on and on. And then I’m fortunate enough that I run the technology team here. And technology is, for us, a pretty broad thing. But, like, I think about how you use everything from everyday IT, but maybe positioned slightly differently: like, most of the back offices of social change organizations, whether they be nonprofit or for-profit, look like they’re in the 1990s or early 2000s. So, like, how do we just upgrade them and, like, change their posture? On the theory that modern tools allow you to work better. You know, we’ve spent time with immigration grantees, helping them increase their security posture in order to just be a better steward of all the sensitive data that’s moving through their world. 

Raffi Krikorian: So we think about everything from IT all the way to, we have product managers, data scientists and engineers who actually, like, build real interventions in the space. For a quick example of one: we’ve been really fascinated with teacher vacancies across the United States. I mean, fascinated in sort of a morbid way. Teacher vacancies are a bad thing. Um, but, you know, there was this piece in the Atlantic magazine a few months ago that said it’s not a teacher vacancy problem, it’s a teacher location problem. Like, we have more teachers in some areas and not enough teachers in others. And so that sparked our curiosity. It turns out there’s no database, no nationwide database, that explains where teachers are staffed, where there’s demand for teachers, where the teacher gaps are. So we built one. We set off the engineering team to actually create, like, this huge scraping infrastructure: we literally scrape every job board, something like 12,000 public school districts in the United States. We scrape every job board. We normalize it, we analyze it. We now have a map of, like, where all the teacher vacancies are, where there’s demand, where there isn’t. 
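
To make the shape of that scrape-normalize-count pipeline concrete, here is a minimal sketch in Python. It is only an illustration: the field names, the subject buckets, and the scraped_postings.json file are hypothetical stand-ins, not Emerson Collective’s actual code.

    from collections import Counter
    import json

    # Hypothetical shape of one scraped posting; the real scrapers pull from
    # roughly 12,000 district job boards in many different formats:
    # {"district": "Pittsburgh SD", "state": "PA", "title": "HS Math Teacher"}

    SUBJECT_KEYWORDS = {            # invented buckets, for illustration only
        "math": "STEM", "science": "STEM",
        "english": "ELA", "reading": "ELA",
        "special education": "SPED",
    }

    def normalize_title(raw_title):
        """Map a free-text posting title onto a coarse subject bucket."""
        title = raw_title.lower()
        for keyword, bucket in SUBJECT_KEYWORDS.items():
            if keyword in title:
                return bucket
        return "OTHER"

    def vacancy_map(postings):
        """Count open teaching positions by (state, subject bucket)."""
        counts = Counter()
        for posting in postings:
            counts[(posting["state"], normalize_title(posting["title"]))] += 1
        return counts

    with open("scraped_postings.json") as f:   # hypothetical scraper output
        postings = json.load(f)

    for (state, subject), n in sorted(vacancy_map(postings).items()):
        print(f"{state}  {subject:6}  {n} open positions")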

Raffi Krikorian: And then the story that comes out of that is more than just, teacher vacancies are a thing. Like, you know, Texas. I have issues with Texas. But those aside, Texas wants to invest in STEM. You can see it in their job postings. They’re just having trouble staffing it. Or you can see that New York State actually might have an abundance of teachers in some ways, but their neighboring states have issues. So there now becomes a policy question of, like, how do we work on licensure transfer? Like, right now, if you’re licensed as a teacher in New York, almost every state in the union will take you as a teacher. But, and I’m making this up, so don’t hold me to these states, if you’re licensed as a teacher in, let’s say, Louisiana, will Alabama take you as a teacher? I don’t know, and I don’t think so. And so, like, how do we start to close those gaps at the state level so we can start solving these problems? How do we give teachers mobility, so that if they get licensed, they don’t feel that they’re stuck in the state where they got licensed? Like, maybe they want to go home again. Maybe they want to go to a state that they’ve never been to before, or they want to help their home community. So, like, we are building products that sort of address these types of issues. 

Raffi Krikorian: You know, we did a whole other set of products where we FOIA’d, Freedom of Information Act, all the credible fear interviews that occurred across the southern border. So when someone crosses the border illegally and, say, law enforcement detains them, they’re put in front of a judge that determines whether or not they should be given asylum, or whether they can start the asylum process. So all those court cases are publicly available through FOIA. So we FOIA’d them all. And then we’ve created a predictive model that can tell us, given any particular judge on the southern border, how they might rule on certain cases. Um, unsurprisingly, some judges are pretty bad. Some judges seem to be pretty good. Now, we’re still trying to figure out exactly what to do with this information. But, you know, these are the types of things that we’re both curious about and now have the technical capacity to actually just answer. And so those are the kinds of things that my team here, the technology team here inside a social change organization, thinks about. 
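
Again purely as illustration, a toy version of that kind of judge-outcome model might look like the sketch below. The CSV file, its column names, and the choice of logistic regression are all assumptions made for the example; the episode does not describe the actual Emerson system in that detail.

    # Toy sketch: predict how a given judge might rule, from FOIA'd case
    # records flattened into a CSV. All column names are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder

    cases = pd.read_csv("credible_fear_cases.csv")      # hypothetical extract
    X = cases[["judge_id", "nationality", "represented_by_counsel"]]
    y = cases["asylum_process_granted"]                 # 1 = allowed to proceed

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = make_pipeline(
        OneHotEncoder(handle_unknown="ignore"),         # categorical features
        LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)

    # If a judge's rulings are easy to predict from case features alone,
    # that is the kind of pattern Raffi describes noticing in the data.
    print("held-out accuracy:", model.score(X_test, y_test))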

Harry Glorikian: Small, small problems, Rafael. I don’t even know how you get at it all. I mean, you know, it’s such a varied number of things you just brought up. But, like, how did you end up taking the role there? I mean, what attracted you to the organization? Yeah. Um, you know, just from tech, to then, I guess, the Democratic Party, then, like, to this. I’m sort of wondering. There’s no straight line here, for sure. 

Raffi Krikorian: Yeah, yeah. I mean, my driving thing has always been, how can I learn the most number of things? So, like, you know, I was really lucky that I was a VP at Twitter building out, you know, I ran a 500-person team. We built out Twitter worldwide. We made Twitter, like you said, performant, available, um, for the entire world. Before I joined, Twitter used to go down. Like, during the 2010 World Cup, Twitter would literally go down every time there was a shot on goal, because so many people celebrating would take Twitter down. And so my job was to make sure that never happened again. Um, and so we did that. And then when I felt I’d learned all I could at Twitter, I, you know, quote unquote retired for a while, but then Uber came around and asked me whether I’d be open to helping lead their self-driving car efforts. I was like, I know nothing about robotics, but that sounds like fun. Um, so, like, I got a crash course in robotics and, like, we got the first self-driving car fleet onto the road. Um, which was kind of amazing, but then. You know? I mean, I’ll put my politics on my sleeve for a second. Donald Trump was elected. And so, like, I literally remember watching Inauguration Day, January 2017, sitting in a hotel room in San Francisco, and at that moment deciding, like, I just got to leave Uber, I got to work on this. Um, so I spent all this time calling everyone I knew who was in politics, which was not a long list at the time. Um, and then somehow they all funneled me to the point that I got to meet, um, Secretary Tom Perez, who was the chairman of the Democratic Party, the Democratic National Committee, at the time. 

Raffi Krikorian: Um, and, you know, the DNC had just gone through a pretty public hack, where the Russians hacked all the emails and had released the emails. And so Chairman Perez asked me, like, can you solve my cybersecurity problem? And I was like, well, yes, but solving your cybersecurity problem means you won’t lose the next time around. It doesn’t mean you will win. So how can I have a conversation about that? And so the chairman brought me on board and we rebuilt all of the Democratic Party’s infrastructure, um, on the data side and the campaigning side, so that we could become an efficient, lean, mean campaigning machine kind of thing. Like, we brought modern technologies in. You know, the thing about the electoral cycle is that you have a lot of money in presidential years and then almost no money in every other year. And so it’s really challenging to build technology in that cycle. So I spent a lot of time convincing donors and supporters that we needed to invest in technology now. And thankfully they believed me. So we were able to rebuild all the data warehouses, all the models, do all the stuff in preparation for 2020. And then, uh, around that time, the Emerson Collective was snooping around and asking questions around data and politics as well. So they got to know me there. 

Raffi Krikorian: And then when I decided it was my time to leave the DNC, mostly because, you know, at that point, this was pre-COVID, so, like, everyone had to be in D.C. to do the work. So, like, I would literally fly from the West Coast every Sunday night. I would fly to DC, stay there till Thursday afternoon and then fly home again. And I had two little kids. Like, I missed years one through three on my youngest because I was doing all this travel back and forth. So I felt like I got the team to the spot they needed to get to, and they could build on that in preparation for the 2020 elections, which thankfully they helped win. Um, and so the Emerson Collective was like, well, can you do all that rebuilding, but for every nonprofit in our portfolio? And I knew nothing about nonprofits then. Like, literally, I was just like, well, that seems hard. Um, so I came aboard and, like, I built the technology practice here. Like, I was the first technologist in the door at EC. And now we are the largest team in the building here at the Emerson Collective. And the same thing happened at the Democratic Party. I was, like, the first CTO of the Democratic Party, but now, even years later, the largest and most permanent team at the Democratic Party is the technology group. So I think part of my job has always been, like, to help people catch up to the fact that technology is an integral part of all our lives, and that we need to be, like, constantly investing in order to stay on the forefront. And those investments pay off. Like, they actually pay off in a lot of different ways. So that’s how I’ve ended up here. 

Harry Glorikian: Yeah, that’s a fascinating, uh, I mean, and daunting task. I mean, it’s not one thing, right? It’s like you’re taking on different problems and using technology to figure out how to solve each individual, separate problem, which is, you know, I love it, right? Because for an ADD guy like me, that’s perfect. Like, I love doing that, right? 

Raffi Krikorian: That’s exactly right. I mean, there are so many different aspects that, like, for an ADD guy like me, I can sort of, like, move between them. But, like, you know, the bummer is maybe there’s no playbook. Like, there’s no, like, manual for how you make an organization technologically curious. Like, you have to, like, figure it out when you go in. 

Harry Glorikian: Yeah. I always tell people I’m like, please don’t use my career track as any example, right. Because like, 

Raffi Krikorian: That’s right, that’s right, that’s right. 

Harry Glorikian: Because every once in a while I wake up and I’m like, how did I end up doing this? Like, what? And my wife will remind me: you were bored and you decided to jump into this completely new area that you knew nothing about. And, you know, I’m like, oh my God, maybe I’m getting old, but I almost want to start settling down in some way. But then all of a sudden, in June of 2023, you decided, hey, I’m going to start a podcast, and I’m going to call it Technically Optimistic, because, I don’t know, I’ve got nothing else to do while I’m doing all these other things. And so, I mean, believe me, I know how this feels, because one day I got up and I started this podcast, and people are like, why did you start the podcast? I’m like, ah, I don’t know. I was sort of curious. Yeah. I mean, but you know, I guess the question is, like, all right, why’d you start the podcast? What are the main goals of the show? Yeah. And what do you mean when you say Technically Optimistic? Because it sounds like a, you know, a double entendre, right? I mean. You know. Give me the back story. 

Raffi Krikorian: Yeah. I mean, so, like, it actually probably starts in June of ’22. Um, because then I was having this conversation with Laurene Powell Jobs, the president here at Emerson Collective, around emerging technologies broadly, you know, because, like, from my background, we spent a lot of time thinking about how technology and society collide with each other. And we see this in a lot of different things, right? Like, Emerson Collective does work in environmental justice. What does environmental justice mean? It basically means that someone is experiencing the downstream effects of someone else’s definition of progress. Um, and look, progress is a good thing. I’m not putting down progress. But, like, you know, environmental justice could mean the people who live near an airport. So, like, airports are really good. They allow us to have a global community. But the people who live near them don’t have good experiences, right? Like, it’s really loud. The air might be polluted. For a while, planes were literally dumping their sewage into the air before landing, kind of thing. So there are all these downstream effects of living near an airport. So that’s some form of environmental justice: figuring out how to help this community of folks, given this sense of progress. So, like, I’ve been seeing these types of issues inside technology for a while. 

Raffi Krikorian: I’ll give you an anecdote from my time at Uber. We trained the cars on the data we collected. So we had one fleet that was a data collection fleet. It just drove around all the time. Lots of cameras, lots of radar, lots of lidar, lots of sensors, just recording what the world looked like so we could build simulations from it. We could build training from it. We could run models against it. We could train the self-driving cars based on it. So we trained them. We recorded all this data in Tempe, Arizona. Tempe, Arizona, is really a nice place to drive because the weather is basically always the same. It’s basically always sunny. The roads are really wide. So, like, you get all this information that makes it really easy to digest. We bring all this training data to Pittsburgh, Pennsylvania, where I was based at the time. We build all these models, we put them on cars. We drive the cars in Pittsburgh. They kind of drive, sort of, but, you know, they’re starting to drive. They never recognize a Black pedestrian. Why? Because in Tempe, Arizona, there just aren’t that many. Like, especially around the university, they’re all, like, white kids with blonde hair, right? And so these technologies, if not really thought through, and we weren’t being malicious about it. We were doing something based out of pure convenience. Like, we were in the dead of winter in Pittsburgh, and Tempe, Arizona, sounds like a really nice place to be collecting data. So we weren’t doing anything malicious. And that holds true for most people who build technology. And so my thesis around June of ’22, when I was talking to Laurene, was that we could watch how these technologies develop, but there’s a high probability that in ten years they become philanthropic problems. Like, some group of people really benefit from these technologies, and some people are left behind, or potentially worse, right? These technologies have a tendency to divide society if not guided appropriately, as opposed to pull society together. And so we need to start thinking right now, as a philanthropic organization, how we could start to bend that curve today and not leave it as a philanthropic problem in ten years. So come 2023, ChatGPT was launched. It’s the fastest-deployed technology on the planet. Effectively, it reached the most people in the shortest amount of time. But that raises real, honest questions of technology, of humanity and of power. Right? Because, like, we’re releasing a technology which in some ways maybe wasn’t ready to be released at the speed it was released, and it had, like, serious open questions. Like, well, some people are going to lose their jobs. I’m like, well, what does that mean? It might hallucinate and tell people lies. I’m like, but we just deployed it to, like, a decent chunk of the planet. 

Raffi Krikorian: So now we have a technology that can just tell lies to a decent chunk of the planet. So all these questions started swirling. But you know, Harry, at my core, I’m a very optimistic person about tech. Like, I love tech. I mean, I’ll tell you stories from my childhood, tell you stories of, like, my grandfather. Like, they’re all tech-related. I love tech. And so what I was also worried about at the time was, there are all these people who are just basically starting to shit on the tech industry, just being like, this is not good. This is really bad. We need to pause all this work. And on the other hand, there are all these people saying, like, don’t worry about all these issues, we just need to go full steam ahead. And my answer to both of them is, like, you’re both wrong. Like, we need to go full steam ahead, but we need to be cognizant of all these issues so we can start working on them and start addressing them. And so, thinking about what’s the way I can make my dent in the world when it comes to this, I thought education was the most important thing. The more people who are educated about the nuances, the better we could be. Like, regular people could start participating in the conversation. Legislators could be smarter: instead of saying dumb things when they’re in Congress, they might be able to say smarter things in Congress. 

Raffi Krikorian: You know, the analogy I like to give is, like, my mother-in-law, who I love dearly. She understands why we wouldn’t change the speed limit on a highway, that it would be fundamentally unsafe in a lot of situations. But she doesn’t understand the trade-off of having a video surveillance doorbell on her front door. Like, she doesn’t understand the nuances of, like, what does that mean for privacy? What does that mean for policing? What does that mean when you have a bug? Like, I don’t remember which one of the video doorbells, but they literally had a bug where, when you pulled open the app, you would see someone else’s doorbell, which just sounds so crazy in a lot of ways. And so, like, I want to make a dent in getting more people educated on the nuances of tech so we can remain optimistic, like we all want. We all want our kids to have the best education possible. Technology is the path, but we need to make sure that we are not marginalizing some kids at the cost of others. We all want climate change to stop. Technology is probably the path, but we want to stop having environmental justice issues in the process. Like, we want all these things. And I think the way we get there is we just have more people educated about it. 

Harry Glorikian: I mean, I couldn’t agree more. I mean, it’s funny, because when I give my talks on whatever it is, I can take both sides of the fence, right? Uh, very quickly. And the same thing with, like, when I put my podcast together. It was like this data-in-biology thing, like 5 or 6 years ago when I started, this was just barely getting going. And I’m like, people need to understand how this is really going to move the needle, and how it can really change health and wellness in the space. Um, but we at that time were not moving at the pace that this stuff is. I mean, there’s 100 papers, I want to say, I know for sure, a week. I want to say almost a day at this point. It’s just, arXiv is, like, blowing up with the number of new papers that are coming out. I mean, I can’t keep up with them. Um, yeah. Which is funny, because you have to use AI to help you make sense of it. 

Raffi Krikorian: In order to keep up with it. I mean, so, like, here’s a funny story. Uh, my wife, a brilliant woman, she’s actually an AI professor at Stanford. Uh, and she was lucky to chair the, um, the International Conference on Machine Learning, ICML. It’s arguably the largest machine learning conference on the planet. So she chaired it last year. And I remember her putting out an edict that no one could use ChatGPT-like things to create papers. I was like, what are you doing? Like, you are literally the conference on machine learning, and you’re saying this? But she was, to your point, just like, we don’t need more, we need higher quality. And she’s just like, I want people to be, like, carefully writing it. I mean, I’m curious to see what they’re going to say this year. But, like, I remember having exactly this conversation last year, of just, like, there’s so much of it that we need machine learning to digest it for us to understand, right? 

Harry Glorikian: Right. No, no. And there’s tons, and it’s just moving so fast. I mean, I was listening to, um, the professor at Stanford that developed the first diffusion models for video. And the question was, did you see Sora coming, right, from OpenAI? And he was like, yeah. He goes, well, you know, I thought eventually, like, we were going to get there, right? Which, well, okay, makes sense, right? You draw a line. But he’s like, I didn’t expect it to be, like, now. 

Raffi Krikorian: That’s right. 

Harry Glorikian: And if I were to guess, we should expect a major motion picture by year end, fully done by Sora. And I think that will just, so anybody that’s, like, predicting where it’s going to go, I’m like, you’re wrong. You know, the chances of you being wrong are probably extremely high, because there are so many variables happening that, yeah, trying to predict where it’s going to go, I’m like, it might be a good guess, but being right on a good guess is just, it’s interesting, right? 

Raffi Krikorian: And we’re also probably on this exponential curve, and, like, humans are really bad at exponential curves. Like, just really bad at them. 

Harry Glorikian: Yeah. Yeah, yeah. I mean, when we were doing the genome, I remember we finished 2% and people are like, oh, 98% left to go. And then in five years we were done. Boom. Yeah. So, um, but okay, your first six episodes, you did, I think it was, a mini-series on artificial intelligence, right? 

Raffi Krikorian: That’s right. 

Harry Glorikian: It’s a really deep dive around issues around generative AI. And you talked to, I think, a lot of the leading thinkers and experts in the field. I strongly advise, like, people to listen to the show 

Raffi Krikorian: Me too. 

Harry Glorikian: And get smarter about this. But I wanted to ask you a few questions. 

Raffi Krikorian: Go. 

Harry Glorikian: Why did you start your podcast there, like, with the topic of, you know, artificial intelligence? I mean, the episodes have covered, um, you know, risks posed by AI, whether and how AI should be regulated, AI in the classroom, AI’s impact on jobs, responsibility, accountability. I’m curious. Does the set of themes track with your own personal concerns about how AI is impacting society? 

Raffi Krikorian: Yeah, 100%. And so, like, you know, I’m fortunate that I sit both at Emerson Collective, and my wife’s at Stanford, so I get to talk to lots of these types of people. And so, like, you know, the thing that I got worried about fairly immediately when ChatGPT came out, I was just like, holy crap, all these people are about to lose their jobs. So I wanted to really dive into, like, what did that mean? So I was really fortunate. I got to talk with another Stanford professor, Erik Brynjolfsson, who, coincidentally, was doing a bunch of studies at the time on call centers. And so he ran a study of just, like, well, let’s just automate half the call center. Let’s just remove the people, put in a large language model and maybe some text-to-speech, speech-to-text, so when people call into the call center, they’re really talking to a chatbot. And then on the other half of the call center, let’s make the call center operators have superpowers. Let’s give them a chatbot so that they can rapidly access information. And what Erik noticed was that in the short term, you would save money by doing the former, like, just remove all the people, automate them. 

Raffi Krikorian: But in the long term, you derive more value from the latter, of just, like, let’s give our call center operators superpowers. Like, people are happier. The call center is just more productive. And so it was nuances like that that I felt I was bumping into, that more people needed to know. Just, like, yes, we should be terrified about jobs, but there’s a path out of this. There’s actually a path that involves AI that gets us out of this and makes us better in a lot of ways. So that was, like, one story I really wanted to tell. And that story repeats itself. Or that pattern repeated itself in almost everything that I talked about. Like, yes, there’s something to be worried about, but there might be a path out of it. But we’ve got to be smarter and purposely choose that path, as opposed to the easier path, which might lead to our doom, kind of thing. 

Harry Glorikian: But it’s interesting. I mean, if I remember that study correctly, it made the first-year technical support person equivalent to, like, a year-three or year-four person, right? So it sort of gave the first-year person a superpower. I don’t know what it did for the third- or fourth-year person. I don’t think it did all that much, because they had all that information, right? But it’s interesting, because the guy who’s been there for four years is now, like, I mean, you’re flattening that dynamic. So there is, like, a fundamental change. But let me ask you, what were your biggest takeaways from the mini-series? I mean, could you name a few things you feel like you learned, or problems you feel clearer about now? 

Raffi Krikorian: Yeah. So I’ll give you a few. And some of them I don’t have answers for, but I now appreciate the question better. So, for example, on question one: we spoke with Justine Bateman. For those of you who watched Family Ties, she was the sister next to Michael J. Fox, who’s now a producer and writer in her own right. And she got me to really understand the concerns of the film industry when it came to generative AI. It’s not just questions of, like, what is creativity? But more so that, look, the film industry is already stuck in a rut in a lot of ways. Like, the Marvel Cinematic Universe has been going on for 20 years. We’re on Fast and Furious 10. Like, we’re already, like, fascinated with doing remakes, like the new Top Gun, whatever it’s called, versus the old Top Gun. Like, we’re already in this process of just regurgitating the past and calling that creative content. And she was terrified that systems like this, which are only trained on the past, wouldn’t generate new things. They’d just generate mishmashes of old things. And so the film industry is obviously concerned about jobs, but also, from, like, a highbrow place, concerned about, like, what does this mean for us as a creative society anymore? So that one I don’t have an answer for, but, like, I have a better appreciation for that position. So that’s one thing. Another thing I learned in talking to regulators and lawmakers: I think we all naively went in thinking that we need to regulate, which I think is still true, but that we needed to regulate by creating centralized control, like, we need a new AI agency, or we need an agency thinking about these emerging technologies. 

Raffi Krikorian: But, honestly, in talking to senators and Congresspeople, in chatting with all of them, I have a more nuanced understanding now. Like, the path forward is we need better talent across the government. Like, we just need to figure out a talent infusion when it comes to AI and emerging technologies across the government, mostly because, what’s a centralized AI agency going to do but be, like, a bull in a china shop? Like, the Labor Department has very specialized needs that NIST does not have, or that NSF does not have, or, name your favorite department, that DHS doesn’t have. So we need to figure out how to get talent across the board. Maybe it’s not breaking news, but it was an interesting, like, oh, we need to think about this slightly differently. And then finally, um, spending time with people like Sal Khan, actually understanding that there is a world that’s not just about intelligent tutors helping our kids, which is very important, we should do that, but there’s also a world where intelligent tutors helping teachers in classrooms is, like, actually a real opportunity for us. Like, if we can make a teacher 10x more effective, so that he or she can use their own internal guidance of, like, I need to spend more time with this child, and it can help me with these other kids for a minute while I go do that. 

Raffi Krikorian: Those can be real unlocks for our society. So, like, finding those key stories in education and labor and politics and creativity was just super interesting and, I think, is now guiding the way that I, like, advise other people. I’ll give you one more story, but this is just much more of a fun curiosity. It turns out the Vatican has a person in charge of emerging technologies. He’s a guy named Bishop Paul Tighe, and he used to be the Archbishop of Dublin, I believe, and then was so good at doing social media that they brought him to the Vatican to manage this stuff. And we had this great conversation of just, like, well, what does humanity mean? Because, like, we as humans can anthropomorphize rocks. Like, literally, we had pet rocks. There’s a whole museum in Japan dedicated to rocks that look like people. Um, and now these rocks can talk. So, like, what does it mean that we’ve anthropomorphized the thing and it can talk to me? So we had this great conversation around the existential question of just, like, what does it mean to be human in the world of these things? And that was also just fascinating and mind-blowing at the same time. So I guess, like, I learned a lot of lessons and got my mind blown a bunch of different ways. This is all a plug that you all should listen to these episodes. But, uh, they were a lot of fun and sort of helped me craft my thinking around all this stuff. 

Harry Glorikian: Well, no, I mean, look, I do the podcast for the same reason, right? I mean, I get to talk to, I mean, the people bring stuff up and I’m like, I just never thought about that, right? And I learn from incredible experts in the field, right? But it’s funny when you mention the regulation around this. I mean, I listen to some of these, you know, hearings, and nobody knows the difference between Google and Facebook. And then they’re going to regulate? Like, that scares the death out of me, right? 

Raffi Krikorian: So, I mean, let me double-click on this for one second. So, like, I was fortunate enough to be at the DNC when the first Zuckerberg hearing happened about social media, and the questions that the senators asked were so mind-blowingly dumb, it was embarrassing to be a part of a political party at that point. I mean, I’m not going to say they’ve learned their lesson, but, like, their staffs are a lot better right now. So, like, you know, I was fortunate to testify in front of Congress about data and privacy just recently, related to the podcast and other things, and the questions that I was asked, like, literally had me pause to think for a second. Like, they were asking the right, deeper questions about this situation. Um, now, I’m not saying that, like, this is uniform across Congress. Like, I had one experience. But I came away impressed from that experience. I think that, like, they got kicked in the butt, and they need to figure this out. And I’ve spoken to some very smart people. Like, I’m a Democrat, obviously, but Representative Obernolte from Southern California, who’s a Republican, um, actually has a degree in artificial intelligence, actually, like, built games and did all these things. And, I mean, I might disagree with some of his policy positions, but they’re well thought through. Like, they’re actually real policy positions. And so, like, I am more hopeful than I was when those first hearings with Zuckerberg happened. Like, I remember talking to Senator Bennett about when Sam Altman appeared in front of Congress. And I was like, well, Senator, you know, Mark Zuckerberg did the thing where he was just like, please regulate us, and now Altman is doing the same thing. And the senator just broke out laughing. He’s just like, yeah, we don’t believe him anymore. Like, they’re just saying that for the show. But, like, we don’t believe him. I’m like, great. I’m not gonna say you’ve caught up, but you’re catching up. Yeah. 

Harry Glorikian: And we need them. Like, seriously. I always tell people, like, policy people, you need to be ahead of this stuff. Like, it’s just moving so fast. You know, trying to put in policy once the cat is out of the bag, like, you’re toast. 

Raffi Krikorian: This is actually one of the bigger things that I think about and try to work on here at Emerson Collective: like, we need people like you, Harry, to go work with government for a few years. I mean, like, you do whatever you want, but, like, we need real technologists who are willing to do, like, a tour of duty to go work in these places, to help educate these institutions, to help uplift these institutions. Because, like, you know, if they don’t figure it out, I think we get to a not-great place. I think the incentives of being in a capitalistic system, look, I’m a capitalist, but, like, the incentives cause you to act in certain ways. Like, when the FTC fined Facebook over Cambridge Analytica, Facebook’s share price went up. So if you’re inside Facebook, you’re just like, we need to do more of this, which I totally get. Like, that’s the way the incentives work, right? And so, like, we need smarter people in government to introduce different incentives on top of this, just to, like, help keep it all in check. 

[musical interlude] 

Harry Glorikian: Let’s pause the conversation for a minute to talk about one small but important thing you can do, to help keep the podcast going. And that’s leave a rating and a review for the show on your favorite podcast player. 

On Apple Podcasts, all you have to do is open the app on your smartphone, search for The Harry Glorikian Show, and scroll down to the Ratings & Reviews section.  

Tap the stars to rate the show, and then tap the link that says Write a Review to leave your comments.  

On Spotify, the process is similar. Search for The Harry Glorikian Show, click on the three dots, then click “Rate Show.” 

It’ll only take a minute, but you’ll be doing us a big favor, since positive ratings and reviews help other listeners discover the show. 

Thanks. And now, back to the show. 

 [musical interlude] 

Harry Glorikian: So let’s step back for a second. I mean, I always wonder about the term artificial intelligence. And with my CTO at one of my companies, we constantly have this back and forth, right, because I think that word is fundamentally misleading. Right. Um, I don’t know. It’s probably too late to change the terms we use around this. Right. But you know, you’ve talked about it in your mini-series, right? Large language models and foundation models only work because they’ve been trained on massive amounts of data that was either generated by a human or labeled by a human. 

Raffi Krikorian: That’s right. 

Harry Glorikian: It’s funny, because this word intelligence that we see from models like OpenAI’s or, you know, uh, Google’s Gemini or Anthropic’s Claude, right? Is it just mimicry and rearranging of ideas humans created? I mean, behind the scenes, these foundation models are doing a lot of very complicated math and statistics very quickly. 

Raffi Krikorian: Sure. 

Harry Glorikian: How do you think about all that? I mean, when we say that GPT-4 isn’t really intelligent, is that true in some profound way that everybody needs to understand? Or are we just doing the classic thing, moving the goalposts around on what intelligence means? 

Raffi Krikorian: I mean, I think both of these can be true. So let’s remember Alan Turing and the Turing test, right? Like, the Turing test was all about, like, can you do a blind taste test: you interact with a person without knowing it, you interact with a computer without knowing it, and then could you choose who is the computer and who is not? And if you couldn’t choose, then that machine was intelligent, because it mimicked what the human could do. So I think we would say that, talking to ChatGPT, it’s not intelligent, but it can do a lot of things. However, ChatGPT would probably pass the Turing test. So yes, in some ways the goalposts probably are moving now. Like, we need to redefine what intelligence really means, because, like, I don’t think we would say ChatGPT is intelligent. I think it’s just, like, doing very sophisticated processing to get stuff in and out of it. But you’re right. Like, a lot of this, you know, there was that term stochastic parrot, right? Like, a lot of this is a machine that, given a set of words in a sequence, is really good at predicting what the next word should be. 

Raffi Krikorian: And so, like, the longer that sequence is and the better it predicts the next word, the closer it mimics what intelligence could really look like, because it’s starting to gather all this stuff that a regular human would say, and can just predict what that next word looks like in a chain of words. So I don’t think any of us, from what I just described, would consider that to be intelligence. However, it does pass the Turing test. And so, yes, the goalposts are moving, but, like, these systems are just trained machines. They’re just playing lots of statistics on us. Now, you know, to the point of Paul Tighe, Bishop Paul Tighe: like, is that what humanity really is? I choose to believe that’s not what humanity is, that we’re not just statistically choosing the next word that I think, Harry, you want to hear. Um, so I think there’s something more in that. And, like, the field needs to spend more time trying to define what we mean by intelligence these days. But yeah, I think you’re right. The goalposts are moving. 
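
To ground the “predict the next word” framing, here is a toy bigram model in Python. It is nothing like how GPT-4 is actually built (real models condition on long contexts with a neural network), but it shows the core move Raffi describes: sample the next word from a table of how often words followed each other. The tiny corpus is invented.

    import random
    from collections import Counter, defaultdict

    # A tiny invented corpus standing in for web-scale training text.
    corpus = "the cat sat on the mat the cat ate the fish the dog sat on".split()

    # For each word, count which words tend to follow it.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word):
        """Sample the next word in proportion to how often it followed."""
        options = follows[word]
        return random.choices(list(options), weights=list(options.values()))[0]

    # Generate a continuation one predicted word at a time.
    word, generated = "the", ["the"]
    for _ in range(8):
        word = next_word(word)
        generated.append(word)
    print(" ".join(generated))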

Harry Glorikian: Yeah. I mean, when I think about these things creating almost a worldview of how things are arranged, right, I think about, like, how we learn as kids, and so on and so forth. And I’m like, how much closer are we getting to where, you know, and I’m not saying it’s going to be equivalent, but boy, it’s. 

Raffi Krikorian: It’s getting there. 

Harry Glorikian: It’s moving. And, I mean, it makes me wonder. 

Raffi Krikorian: I mean, I have lots of friends who are lawyers, but it makes me wonder what a lawyer does all day. Because, like, if ChatGPT can pass the bar exam, it makes you really wonder if all a lawyer does is pattern matching. And, like, is their entire job just, how much of the legal field can they hold in their head and pattern match against? I’m like, I hope not, because that is definitely what a computer can now do. Um, but, you know, at the same time, like, would you trust ChatGPT to argue in front of the Supreme Court? Probably not, right? Like, that requires a level of creativity and a level of reach that goes beyond previous case law. So maybe that’s what intelligence is: being able to make those jumps, um, that aren’t just encoded in, like, what previous case law looks like. Yeah. 

Harry Glorikian: But we put one with the other, and now everybody has, like, a superpower, right? If they use it the right way. Which is funny, because I have this discussion with people, and they’re like, ah, it’s like a hammer, it’s like a tool. And I’m like, no, tools don’t talk back to me. Like, I can have a discussion with this thing and go back and forth with it, and it stimulates me to think about aspects I just maybe hadn’t thought about when I’m putting something together, and it just makes my output better. Right. Um, which sort of brings me to this thing. Like, I was thinking, um, Ethan Mollick, you probably know him, at Wharton, right? He has a great Substack blog, um, where he’s talked about how AI is really good at some tasks that seem hard to humans, but really bad at some tasks that seem easy to humans. And this just seems like another way of framing what we’re talking about, that AI and human intelligence are very different. 

Raffi Krikorian: Yeah. 

Harry Glorikian: But can you see a convergence coming, like, with AI getting better at these things that are currently hard for computers but easy for humans? Some examples, I don’t know: common sense reasoning, creative problem solving, empathy, or emotional intelligence, although I have heard of some systems that are damn near super-empathetic, like, consistently, with customers. But what needs to happen for AI to emulate these skills? Do you think it’s even possible? How hard will it be? How long will it take? 

Raffi Krikorian: Yeah, no, I mean, like, I think that, again, we’re really bad at exponentials. So, like, you know, when I think about emotions, I think about Professor Roz Picard at the MIT Media Lab. She’s spent her entire career on this. She coined the term affective computing, which is, like, getting computers to understand emotions. And they’ve deployed their technology, you know, in other call centers, not the ones Erik Brynjolfsson was talking about. They use their technology to understand how agitated the people calling in are, to help load-balance across the call center operators, so an operator doesn’t get, like, five agitated people in a row, mostly to help with their mental health so that they can help more people. So, like, computers are beginning to learn all these things about humanity in a bunch of different ways. But I think you’re on to something. I think that, like, the path forward is to find these, like, one-plus-one-equals-three situations, where computers are pretty bad at something by themselves, humans are only kind of okay at it themselves, but when you give a human the right tool, or an AI that is tuned appropriately, they actually do way better all of a sudden. And, you know, I think there are some examples in medical imaging right now where that is actually true, of just, like, an AI can actually scan pictures looking for cancer tumors and find ones that a human on a first pass might not have noticed. 
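
The routing idea Raffi attributes to that deployment can be sketched in a few lines of Python. This is a guess at the logic, not Picard’s actual system: the agitation scores, which a real affective-computing model would produce, are hard-coded here, and the operator names and window size are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Operator:
        name: str
        recent: list = field(default_factory=list)   # agitation of recent calls

        def load(self, window=3):
            """Average agitation over this operator's last few calls."""
            last = self.recent[-window:]
            return sum(last) / len(last) if last else 0.0

    def route(caller_agitation, operators):
        """Send each caller to whoever has had the calmest recent run,
        so no one operator absorbs several agitated callers in a row."""
        operator = min(operators, key=lambda o: o.load())
        operator.recent.append(caller_agitation)
        return operator

    operators = [Operator("Ana"), Operator("Ben"), Operator("Cam")]
    for score in [0.9, 0.8, 0.2, 0.95, 0.1, 0.7]:    # model-scored callers
        print(f"caller ({score:.2f} agitation) -> {route(score, operators).name}")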

Raffi Krikorian: But on a second pass, they’re like, oh yeah, you are right, there is something there. So I think those opportunities are worth looking for, and these examples of one plus one equals three do exist. Now, to your other question, your real question, of just, like, is there going to be a point where these all catch up, or they converge, or one passes the other? I mean, this is the thing people are searching for, right? Like, when can we get to AGI, this, uh, artificial general intelligence, where, like, these systems are way better than humans are? And I don’t know when we get there. I don’t think anyone knows when we get there. I think these are all shots on goal, trying to get us there, or trying to explore the space that’s trying to get us there. But, like, you know, is it inevitable? Maybe. Like, maybe we’ll get there at some point. We just don’t know precisely when. 

Harry Glorikian: Interesting. Well, yeah, I mean, I keep seeing the leaps. Um, you know, I always joke with my nephew, who’s in tech in the Valley, like, when am I going to stop saying, oh, shit, right? Like, I see something and I’m like, oh, Jesus. Like, okay, let me try to absorb right now what this thing is doing. Right. Um, but look, okay, on this show we always talk about healthcare and technology, so I wanted to ask one or two questions about healthcare applications. In the second episode of your mini-series, if I remember, you list a bunch of risks associated with the new wave of AI. One of them is being misled when an AI model hallucinates, or, as you call it, confabulates. Right. Um, and healthcare is a field where you really don’t want to be wrong. Although I could debate that, not every doctor is right all the time. Um, but if you’re using a foundation model for some medical application and it turns out that the model was trained on, say, inaccurate data, or it’s making up the answer, how do you think about all this? I mean, would you agree that healthcare, or maybe also drug development, are areas where we need bulletproof standards for authenticity or accountability? And, I mean, can you see any good solutions emerging? Because I’m struggling with bulletproof, because that’s just such a high expectation that I don’t think it’s possible, because we’re not bulletproof. 

Raffi Krikorian: Yeah, exactly. I mean, like, I think about this in terms of, like, how do we measure progress. So, like, some people might argue, and I can take both sides of this argument, that self-driving cars are okay if they still kill people, as long as they’re killing fewer people statistically than human drivers on the road do. So, like, that’s your bulletproof argument: if it’s bulletproof, you need to get deaths from self-driving cars to zero. But, like, if you’re realistic, you just need to get better than humans. Um, and, you know, I think I would set my bar higher than better than humans, because humans are pretty crappy drivers. But, like, I wouldn’t set it at zero, because that’s impossible and, to your point, highly unrealistic. And so I wonder if that framework applies to a bunch of different places. So, like, for drug discovery. Well, you know, for drug discovery, there already is a real framework around how we test these things, how we deploy these things. So maybe it doesn’t matter precisely where it came from, because it still has to enter the exact same pipeline as everything else. So there might be, you know, pre-existing ways to think about stuff there. But when it comes to, like, direct-to-consumer stuff, of, like, can you give people advice? Or, you know, my father was recently discharged from the hospital, and you get, like, a ream of paper to tell you about all the things. 

Raffi Krikorian: I’m just like, the number one thing I want to do is feed this to ChatGPT and just, like, summarize it for me like I’m a six-year-old. Like, I’m not a doctor. How am I supposed to understand this? Um, like, do I need that to be truly bulletproof? No. But do I need to make sure it’s directionally correct in what it’s telling me? Yes. And so, like, I actually think that these are places where either regulation or some industry watchdog group or industry consortiums could actually be helpful, to just, like, help set forth some standards that people need to abide by. Like, how do we all agree upon what good enough really looks like, and not just leave it up to single developers that might be doing things slightly differently? Not, again, because people are malicious. It’s mostly that people will just look at it from different angles and ask very different questions. So I think there are opportunities here for, like, people to be getting together and talking about their systems, red-teaming these systems, like, trying to poke holes in these systems with each other, to just hold each other accountable, in some ways, to build a better system. But I think I agree with you. Bulletproof is an impossible standard. We just need to be better than people in a lot of different ways. 

Harry Glorikian: And in a lot of areas, we already are better than people, which is sort of interesting to me. Um, but, I mean, you know, listen, we could talk for hours, preferably maybe over a glass of wine or something like that, because it would get really interesting. But, just looking at the time, I want to, like, throw in a couple final questions. I’m thinking about, um, another quote from Ethan Mollick, when he said this may be the critical time to assert our agency over AI’s future. Now, I’m not sure how to do that, considering how fast it’s going, but I wanted to sort of get your thoughts. What do you see happening on this front that encourages you? What should we be doing that we aren’t doing right now to assert that agency? And how much time do we have left? 

Raffi Krikorian: We don’t have very much time left. Um, I view this all as, like, there’s an Overton window of what’s acceptable, and it’s clearly moving, right? Like, it’s moving to the point that people now feel like these types of systems are inevitable, that this is the way the world is going to work. And, like, the window is moving almost to escape velocity. And we’re in one of these last chances to, like, rein it in and get control over it. And so, like, I look at, you know, Finland, which in 2018 did this experiment where they tried to get 1% of the population educated at a high level about AI, like, its pros and cons, its nuances, etc. They actually managed to get 10% of Finland to do it. Now, Finland’s a small country. It’s more like a club than a country in a lot of different ways. But because of that, Finland by some measures has the highest number of AI startups per capita in the world. They use AI in every form of their society. Like, government uses it. City planners use it. Literally, plumbers took the class and would write in and say how that class changed how they think about their business. Like, all aspects of their society touched this class, which is crazy. 

Raffi Krikorian: We need to do something like that. That’s the only thing. We need to publicly educate so we can prevent this Overton window from moving too far. I mean, look, I used to work at big tech companies. I might still go work at a big tech company again. Like, they’re not evil. They just operate within an incentive mechanism that’s designed so that, like, they want to get market dominance. They want to have you think it’s inevitable, because it fits their bottom line. I totally get that. So the only recourse against that is for people to really understand: what are they buying, what are they buying into, what’s true, what’s false, what’s inevitable, what’s not. Now, I realize this is a long shot. Like, I realize that educating Americans across the board is incredibly hard. Even if you do, can you actually get them to take action? But I haven’t come up with a better idea yet, Harry. So, like, that’s the one I’m trying. But we need people to be thinking about this and not just thinking it’s inevitable. 

Harry Glorikian: Yeah, I’ve got a friend of mine, I know that his goal is a million people educated, hands-on, you know, an 8-to-10-hour, you know, class on AI, actually using it practically. 

Raffi Krikorian: I would love to talk to him. 

Harry Glorikian: And he’s just rolling. He’s rolling along. And actually, I had my older son take his class. It’s train-the-trainer, um, so that he could go out and train other people himself. Uh, so, Raffi, we can only hope for the best. I mean, yeah, I’m trying to keep up, and I feel like I’m barely holding on to the threads with how quickly it’s moving while trying to do my day job. So, uh, great to have you on the show. Um, I know there’s a million other questions we could have gone through, but I want to be respectful of everybody’s time, so. 

Raffi Krikorian: Oh, no. It’s always fun chatting with you, Harry. We should do this again sometime. 

Harry Glorikian: Okay. Thank you. 

Harry Glorikian: That’s it for this week’s episode.  

You can find a full transcript of this episode as well as the full archive of episodes of The Harry Glorikian Show and MoneyBall Medicine at our website. Just go to glorikian.com and click on the tab Podcasts. 

I’d like to thank our listeners for boosting The Harry Glorikian Show into the top two and a half percent of podcasts globally. 

To make sure you’ll never miss an episode, just find the show in your favorite podcast player and hit follow or subscribe.  

Don’t forget to leave us a rating and review. 

And we always love to hear from listeners on X, where you can find me at hglorikian. 

Thanks for listening, stay healthy, and be sure to tune in two weeks from now for our next interview.