A Flood of Intelligence: Why “Incremental” AI Advances Are More Transformative Than They Appear
My motivation for writing this piece comes from Ethan Mollick’s “Prophecies of the Flood”, which caught my eye for the sheer urgency with which it frames the current state of artificial intelligence. In his article, Mollick highlights that insiders at major AI labs – those closest to developing large-scale models – truly believe a seismic shift is imminent. While we should remain clear-eyed about hype and overblown predictions, it’s difficult to ignore the steady drumbeat of evidence that the recent leaps in AI capabilities are more than just “incremental” steps forward.
A Tidal Wave, Not a Trickle
Mollick cites examples of AI systems surpassing benchmarks once considered out of reach. When a model vaults from mastering only a fraction of math problems to scoring above expert levels on specialized tests, it’s clear that these developments aren’t merely iterative. They represent a fundamental turning point in how we can apply AI across various industries.
From my vantage point – particularly in the healthcare, biotech, and data-driven sectors – it’s obvious that even today’s “reasoner” models are rewriting the rules. These models use more advanced problem-solving strategies than their predecessors, and they’re a reminder that technological progress in AI tends to come in surges rather than in a neat, predictable progression. If you haven’t spent time experimenting with newer models, it’s easy to dismiss this as an academic conversation. Yet hands-on experience quickly reveals how swiftly these tools can reshape workflows, disrupt old business models, and spark innovation that was hard to fathom a couple of years ago.
Narrow Agents, Big Impact
According to Mollick, one particularly striking area of recent progress is in agentic AIs – systems given autonomy to act on a set of goals, such as Google’s Gemini with Deep Research. While I won’t reiterate all his examples here, I’ll emphasize that these agents aren’t just futuristic thought experiments; they’re already producing real value. They can sift through hundreds of websites and compile comprehensive, well-referenced reports in minutes. That capability alone has the potential to remake roles in research, consulting, and healthcare – where information gathering and interpretation are paramount.
Still, as Mollick and others have noted, it’s one thing for technology to exist, and quite another for society to adopt it broadly. Organizational inertia, compliance requirements, and cultural acceptance can all slow down implementation. Even so, small, nimble teams with a deep understanding of what’s possible can race ahead, leveraging perhaps 50% to 60% of these systems’ power while established players barely scratch 10% to 20%. It’s a classic case of asymmetric advantages – only this time, the stakes could be global.
The Urgency of Conversation
One line from Mollick’s piece (Mollick, 2025) resonated especially strongly: the idea that we need to have these conversations now, before AI’s impact becomes so pervasive that we’re left reacting to changes rather than shaping them. Healthcare, for example, could be revolutionized by AI’s ability to accelerate drug discovery, triage patient needs, or personalize treatments. But if hospitals, payers, and policymakers wait until these systems are mainstream to figure out how to integrate them ethically and effectively, they’ll be playing catch-up.
Education is another prime candidate for transformation. Tools that can generate scholarly reports in minutes might cause panic about cheating, but there’s also an opportunity to rethink how we teach critical thinking, creativity, and collaboration. The presence of AI in the classroom doesn’t have to be a threat; it can also serve as a catalyst for innovation in pedagogy.
Beyond the Hype, Toward Integration
Mollick’s cautionary tone about hype is well-justified. AI labs have every incentive to tout revolutionary progress, whether to attract investors or secure their place in tech history. But just because there’s some overstatement in the mix doesn’t mean we should ignore the bigger picture: The capabilities we already have today would take years to deploy fully, even if we stopped inventing new techniques tomorrow. And, of course, research isn’t standing still – there will be more advances, more breakthroughs, and more opportunities to embed AI deeper into the fabric of our lives.
As someone who has spent decades focusing on the intersection of data, healthcare, and technology, I can attest that transformation rarely unfolds with neat predictability. It comes in waves – and right now, AI appears to be creating a tidal wave. The risk isn’t just being unprepared; it’s being left behind. The pace of change means that small, agile players can outmaneuver larger incumbents if they harness these tools effectively. And that’s not just a corporate survival story – it’s a national and geopolitical concern.
Final Thoughts
Mollick’s article served as a catalyst for my own reflections, but it’s just one voice among many signaling that the velocity of AI advancement is unlike anything we’ve seen before. The label “incremental” doesn’t do justice to leaps in performance where AI surpasses human benchmarks in tasks that previously seemed out of reach.
Yes, a degree of healthy skepticism is warranted. But so, too, is a willingness to roll up our sleeves and explore how these tools can be integrated responsibly and effectively. Even if the big names in AI are motivated by their own interests, the underlying trends they’re pointing to are real – and the floodwaters are rising fast. If we don’t start building our bridges and adapting now, we’ll be faced with a wave we may not be ready to ride.
References
Mollick, E. (2025, January 10). Prophecies of the Flood.
Various announcements from Google’s Gemini and OpenAI’s reasoner models (2024–2025).