
The AI Boom Your Dashboard Can’t See

Over the weekend I spent some time digging into papers from the National Bureau of Economic Research’s “Economics of Transformative Artificial Intelligence” workshop at Stanford University (September 18-19). It was a who’s-who of economists and practitioners – organized by Ajay Agrawal, Anton Korinek, and Erik Brynjolfsson – covering measurement, market design, agentic software, labor, competition, public finance, and the information ecosystem. The sessions ranged from Making Artificial Intelligence Count (Diane Coyle & John Lourenze Poquiz) and An Economy of Artificial Intelligence Agents (Gillian Hadfield & Andrew Koh) to The Coasean Singularity? (Peyman Shahidi and co-authors), We Won’t Be Missed: Work and Growth in the Era of Artificial General Intelligence (Pascual Restrepo), and The Impact of Artificial Intelligence and Digital Platforms on the Information Ecosystem (Joseph Stiglitz & Maxim Ventura-Bolet). If you care about how value is created – and how we will even detect that value in the official data – this workshop was a dense read-out. I had several reasons for digging in.

Theme #1 (and the big one IMHO): If you cannot measure it, how do you know it is happening?

A core message: our measurement systems were built for an industrial economy, not a software-defined, agent-mediated one. The Making Artificial Intelligence Count paper explains why early impacts of Artificial Intelligence will look “invisible” in Gross Domestic Product and productivity as we measure them today. Several features drive the gap:

Zero-price and bundled services. When Artificial Intelligence features are embedded in tools people already use – email, search, office software – or offered at zero monetary price, the consumer surplus and quality gains do not show up as output, so macro data may say “nothing happened.”

Intermediate inputs that do not leave a receipt. Much of the value shows up as lower cycle time, fewer errors, and changed workflows. Those are firm-side benefits that today’s statistics often treat as intermediate consumption, not final output – so they are hard to see in aggregate numbers.

Quality change that moves fast. Large language model features can materially improve capability without a price change, which means standard price indices under-adjust for quality and understate real output in the sectors adopting these tools (a small worked example follows this list).

Intangibles that do not fit the ledgers we use. Data are now recorded as assets in the updated System of National Accounts, but valuing data and Artificial Intelligence-generated intellectual property is still unsettled; compute and model improvements also muddy depreciation and capital services.

Public-sector output blind spots. Artificial Intelligence can raise effectiveness in education, health, and administration, yet many public outputs are still valued by input cost – so we miss quality gains even when outcomes improve.
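
To make the price-index point above concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers (a 10% quality gain at flat sticker prices, 2% nominal growth) are invented for illustration rather than taken from the workshop papers; the point is only that the same nominal figures imply very different real growth depending on whether the deflator captures the quality change.

# Hypothetical illustration (all numbers invented): how a price index that
# misses a quality improvement understates real output growth.

nominal_output_y0 = 100.0          # year-0 nominal output of an AI-adopting sector
nominal_output_y1 = 102.0          # year-1 nominal output
observed_price_change = 1.00       # sticker prices unchanged year over year
quality_improvement = 0.10         # assume the new LLM feature makes the service 10% "better"

# Deflator with no quality adjustment: prices look flat.
deflator_unadjusted = observed_price_change
real_growth_unadjusted = nominal_output_y1 / deflator_unadjusted / nominal_output_y0 - 1

# Quality-adjusted deflator: a 10% quality gain at the same sticker price is
# equivalent to roughly a 9% fall in the quality-adjusted price (1 / 1.10).
deflator_adjusted = observed_price_change / (1 + quality_improvement)
real_growth_adjusted = nominal_output_y1 / deflator_adjusted / nominal_output_y0 - 1

print(f"Real growth, no quality adjustment: {real_growth_unadjusted:.1%}")  # ~2.0%
print(f"Real growth, quality-adjusted:      {real_growth_adjusted:.1%}")    # ~12.2%

Same nominal data, roughly a ten-point gap in measured real growth. Scale that across every sector quietly bundling these features and the “invisible” boom in the macro statistics is easy to picture.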

Why I think this measurement gap matters:

Companies:

Valuation narratives can lag reality. If macro data understate productivity, public markets and boards may conclude Artificial Intelligence “is not moving the needle,” even as customer experience, reliability, and throughput improve inside the firm. That misreads where value is actually created. This is an argument I hear all the time, and frankly it drives me bats.

Dashboards miss the story. Artificial Intelligence can raise value through personalization, speed, and error-reduction rather than more units sold. That shows up as lower cost-to-serve and working capital, not always higher revenue.

Policymakers:

Policy dials risk being calibrated to yesterday. If quality gains and time savings do not appear in output or productivity, it is easy to misread inflation-productivity dynamics during adoption waves.

The tax base drifts toward intangibles and compute. As more income accrues to compute, models, and data, traditional tax bases such as wages and tangible investment become less representative of where income is generated – especially in scenarios like those outlined by Pascual Restrepo, where growth tracks compute and the labor share trends toward zero in the long run.

Bottom line: if we do not update how we measure value, we will mistake a re-wiring of production for a productivity slowdown. That is not a bookkeeping quirk – it affects boardrooms, budgets, rates, taxes, and, ultimately, living standards.

Other themes that stood out – if you have time, I highly recommend reading a few of these papers, as their implications are significant. Again – IMHO – so read them for yourself.

1) Agentic software becomes a market participant, not just a tool.

2) Markets redesigned around agents.

3) Compute becomes the bottleneck in growth stories.

4) The information ecosystem is a public-good problem with new failure modes.

Why did I lead with measurement?

Because I believe the narrative we choose depends on what our measurement systems can see. If official statistics cannot register quality improvements, process redesign, or agent-driven activity, firms will look less productive than they are, and macroeconomic policy will be flying on partial instruments. That was what I took away from this Stanford meeting – and it is the right place to start any conversation about Artificial Intelligence and economic impact.
