When Sam Altman admits we're in an AI bubble while simultaneously promising to invest a trillion dollars in data centers, something stranger than usual financial excess is happening. When Geoff Hinton meets with the Pope to discuss whether artificial intelligence threatens human dignity, we've wandered far from the territory of normal market speculation. The question "Is AI a bubble?" turns out to be too simple.
There are at least three distinct bubbles forming around artificial intelligence, each with its own logic, its own risks, and its own potential to reshape our world. The question isn't just whether there's an AI bubble. The question is which bubbles matter, how they interact, and what they mean for the future we're building. We're living through multiple overlapping stories at once: a financial drama, a technical quest, and a cultural revolution. Understanding each bubble on its own terms helps us navigate all three with clearer eyes.
The Economic Bubble: A Familiar Story With Unfamiliar Scale
The economic bubble is the easiest to recognize because we've seen this movie before. It has all the classic ingredients: massive capital expenditures, soaring valuations, and a compelling narrative promising outsized returns. Over the past year, Google, Meta, Microsoft, and Amazon have collectively poured more than $360 billion into capital expenditures, with Amazon alone planning to invest $125 billion this year. NVIDIA recently hit a $5 trillion market valuation. Salaries for top AI researchers have reached stratospheric heights. If the economic outcomes don't match these investments, capital will rush for the exits and there will be a crash.
Yet this bubble has peculiar characteristics that distinguish it from previous speculative manias. For one thing, the money isn't chasing imaginary demand. Google, Microsoft, and Amazon have all reported that they don't have enough computing power to meet customer needs, despite their enormous spending on data centers. This isn't pets.com selling dog food at a loss and hoping to make it up in volume. There's real revenue growth - OpenAI's revenue went from $200 million in early 2023 to $13 billion by mid-2025. The demand is there. The question is whether it can sustain the investment.
The scale of projected losses is also unprecedented. OpenAI expects to lose $45 billion by 2028, a figure that dwarfs the losses of other high-growth companies. The company has arranged intricate financing with chipmakers and cloud providers - circular investments that, while not inherently unsound, add layers of complexity and interdependence to the financial structure. And increasingly, these infrastructure projects rely on debt financing. Even companies that have been printing money can't conjure the trillions they want to spend on AI. It's got to be debt-funded, and even with financial engineering that keeps these debts off big tech's balance sheets, it's someone's debt. The Bank of England has warned that if AI fails to meet expectations or requires less computing power than anticipated, the potential for financial instability could rise significantly.
Peter Wildeford offers a thoughtful defense against bubble alarmism. He argues that what we're witnessing is better understood as an infrastructure bubble, similar to Britain's Railway Mania or the late 1990s telecommunications crash. In those historical episodes, the underlying technology was genuinely transformative, but excessive and overlapping investments created financial instability. The railways did revolutionize Britain; the fiber optic cables did enable the internet age. Many investors just lost their shirts in the process.
Wildeford points out that AI infrastructure offers more flexibility than those earlier examples. Data centers and GPUs can adapt to various workloads, reducing the risk of completely wasted capacity. Unlike railroad tracks laid to the wrong destination or fiber optic cables buried where no one needed them, computing infrastructure can pivot. The hyperscalers don't currently have excess capacity - if anything, they're struggling to keep up with demand. The critical uncertainty is simply whether AI capabilities will advance swiftly enough to generate the economic returns necessary to justify these investments.
So the economic bubble exists, but it may not be irrational. There's genuine demand, demonstrated revenue growth, and adaptable infrastructure. What makes it precarious is the sheer scale of the bet and the debt financing it requires. A market correction remains possible if growth falters, but there's at least a plausible path to eventual profitability.
But the economic bubble isn't the only one we're looking at here. Let me quote Altman again:
“Successful people create companies. More successful people create countries. The most successful people create religions.”
This was written before OpenAI was founded, but it gives you a sense of the metaphysical ambition behind AI.
The Technical Bubble: The Metaphysics of Intelligence
Let's start with the claim that AGI - artificial general intelligence - or perhaps even ASI, artificial superintelligence, is just around the corner: a near future in which machines can do everything we do, but better. This is where Mark Zuckerberg talks about the strategic importance of building AI infrastructure now, preparing for a future where "superintelligence" could transform industries. It's a semi-religious belief that money reliably converts to compute, which in turn converts to capability, and that AGI is merely a matter of scale and investment.
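In its strongest form, that belief rests on the empirical scaling laws. The Chinchilla fit (Hoffmann et al., 2022) - a published result, not anything specific to the companies above - models language-model loss as a smooth function of parameter count N and training tokens D:

L(N, D) ≈ E + A / N^α + B / D^β

with fitted values of roughly E ≈ 1.69, A ≈ 406.4, B ≈ 410.7, α ≈ 0.34, β ≈ 0.28. More money buys more compute, more compute buys bigger N and D, and the loss predictably falls. The leap of faith is the last step: that ever-lower loss keeps converting into ever-broader capability, all the way to AGI.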
In some tasks, the claim is already true. Large language models can do mathematics better than most people, even while making mistakes you or I might not. They can write emails and documents better than most people, at least in English. What we used to call "smart" is suddenly less impressive. Think of it this way: there was a time when people were feted for being fast at arithmetic with Roman numerals. If you've ever tried to add or multiply in Roman numerals, you know how hard it is. The invention of the Hindu-Arabic numeral system, with place value and the number zero, revolutionized calculation so thoroughly that schoolchildren could do what once passed for brilliance.
BTW, the Romans computed with devices like the abacus, so no one was doing complex arithmetic in their heads - but with Hindu-Arabic numerals, you can!
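To make the contrast concrete, here's a toy Python sketch (my own illustration, nothing historical about it). Roman numerals have no place value, so the natural way to compute with them is to convert to positional numbers first - exactly the move the Hindu-Arabic system builds in from the start:

```python
# Toy sketch: "arithmetic" on Roman numerals amounts to converting
# them into positional numbers and adding those.
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s: str) -> int:
    """Subtractive notation: a smaller symbol before a larger one is subtracted."""
    total = 0
    for ch, nxt in zip(s, s[1:] + ' '):
        v = ROMAN[ch]
        # IV = 4, IX = 9, CM = 900, etc.
        total += -v if nxt != ' ' and ROMAN[nxt] > v else v
    return total

a, b = "MCMXCIV", "XXVII"                 # 1994 and 27
print(roman_to_int(a) + roman_to_int(b))  # 2021 - trivial once positional
```

The conversion function is the whole trick: once numbers carry place value, column-wise addition is something a schoolchild can do.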
AI may do something similar to today's notion of intelligence. But recent evidence suggests that AGI is further down the road than enthusiasts think. Over the past few months, expert opinion has been converging on a more skeptical view. A pivotal moment came in June 2025 with an Apple reasoning paper demonstrating that even enhanced reasoning capabilities in large language models fail to overcome the critical problem of distribution shift - performance collapses on problems unlike those seen in training. The arrival of GPT-5 in August 2025, anticipated as a major leap forward, ultimately fell short of expectations.
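"Distribution shift" is abstract, so here's a toy sketch in Python (my own illustration - it has nothing to do with the Apple paper's actual experiments). A flexible model can look excellent on inputs drawn from its training range and fall apart just outside it:

```python
# Toy illustration of distribution shift: fit a flexible model on one
# input range, then evaluate it inside and outside that range.
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy sine samples on [0, pi]
x_train = rng.uniform(0, np.pi, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.05, 200)

# A flexible model: degree-9 polynomial fit
coeffs = np.polyfit(x_train, y_train, deg=9)

def mse(x):
    return np.mean((np.polyval(coeffs, x) - np.sin(x)) ** 2)

x_in = rng.uniform(0, np.pi, 200)           # same distribution as training
x_out = rng.uniform(np.pi, 2 * np.pi, 200)  # shifted distribution

print(f"in-distribution MSE:     {mse(x_in):.4f}")   # near the noise floor
print(f"out-of-distribution MSE: {mse(x_out):.1f}")  # orders of magnitude worse
```

The model hasn't learned "sine"; it has learned a curve that happens to match sine where it was trained. The skeptics' claim is that LLM reasoning degrades the same way once problems drift from the training distribution.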
Gary Marcus has long pointed out the limitations of large language models. BTW, he took part in an earlier takedown of neural networks for language learning alongside his PhD advisor, Steven Pinker. Marcus sayeth:
“LLMs have their place, but anyone expecting the current paradigm to be close to AGI is delusional.”
And he's not alone anymore. Rich Sutton, a Turing Award winner renowned for his work in reinforcement learning and author of the influential "Bitter Lesson," has publicly acknowledged critiques of LLMs and agreed that they are far from achieving AGI. Andrej Karpathy, a respected machine learning expert with experience at Tesla and OpenAI, estimated that AGI remains at least a decade away, emphasizing that current agent-based models are nowhere near the required level of sophistication.
The metaphysical conviction that AGI is near helps sustain today's valuations. There's a mix of snake oil and sincere belief in Silicon Valley about this, and it matters economically because it shapes investment decisions and market expectations.
Interestingly, AGI fever hasn't spread through Chinese AI communities the way it has in Silicon Valley. Chinese entrepreneurs draw on a different intellectual canon - a blend of Western business classics like Peter Thiel's Zero to One with the "Red Canon" of political texts including Mao's selected works and Xi Jinping's writings on governance. These provide tactical guidance on organizational mobilization and survival in fiercely competitive markets. Complementing this is the "Grey Canon" of classical Chinese philosophy - Confucius, Laozi, Han Feizi - which shapes how entrepreneurs navigate power structures and balance innovation with tradition. Near-contemporary works like Jin Yong's martial-arts novels and Liu Cixin's The Three-Body Problem offer frameworks for thinking about loyalty, strategy, and geopolitics.
I read Jin Yong’s Legend of the Condor Heroes series last year for reasons completely unrelated to AI. Very entertaining! And Liu’s Three-Body Problem (I mean the trilogy, not just the first novel) is the best science fiction I have read this century - again, for reasons unrelated to AI.
This produces a distinctly different approach to AI development, one less focused on metaphysical speculation about superintelligence and more grounded in practical applications and alignment with national strategic goals.
The technophilosophical bubble may be culturally specific.
The Cultural Bubble: How We Live With One Another
The third bubble asks whether AI will transform how we live, work, and play - in short, how we relate to one another and the rhythms of our daily lives. This is the bubble most analogous to the internet story.
Yes, there was a dot-com crash that took the Nasdaq and plenty of companies down with it. Yet the internet still reorganized human culture as well as our economies. Take a look at the top 10 companies by market cap in 2000 versus today. Microsoft is the constant, but note how tech has taken over the rankings. That wouldn't have happened without the internet. So even if you invested in pets.com and lost your shirt, the internet still remade the world.
The question is whether AI will follow that pattern - revolutionizing how we live with one another regardless of whether AGI arrives soon, and regardless of whether today's economic exuberance cools. The energy needs alone are reshaping global infrastructure. The demand for data centers is ramping up so quickly that orders for gas turbines are expected to reach over a thousand units in 2025. The three dominant manufacturers - Mitsubishi Heavy Industries, Siemens Energy, and GE Vernova - are struggling to keep pace, leading to wait times of at least three years.
This has global ripple effects.
Data centers require continuous power, and gas turbines are being deployed as a stopgap. The rush has concentrated demand in the United States, which now accounts for nearly half of global turbine orders, sidelining Asian markets like Vietnam and the Philippines that need turbines for their own energy transitions. The AI boom isn't just affecting tech companies - it's keeping the fossil fuel industry alive, complicating climate commitments, and reshaping energy geopolitics. Not good!
Gas turbine technology is one area where China doesn't dominate, adding another layer to great power competition.
But the cultural transformation goes deeper than infrastructure. It's rare - once in a generation? once in a century? once in a civilization? - that there's a technology promising to change how we work, fight, and pray at once. When SEC investigators worried about subprime loans, they didn't meet with the Pope. But AI pioneers have. Pope Leo XIV has warned that AI risks undermining Christian humanism and the inviolable dignity of the person, cautioning against a future where humans become mere functions or algorithms. Drawing on Martin Heidegger's philosophy, some worry that the rise of cybernetics and technological dominance threatens to extinguish the deeper essence of Being, leaving humanity spiritually diminished.
From its beginning at Dartmouth in 1956, AI has prompted fundamental questions about human meaning and purpose, not just economic productivity. It's forcing us to ask what makes us distinctively human (maybe nothing?), what kinds of work and creativity we value, and how we want to structure our societies. These questions matter whether or not AGI arrives, whether or not today's investments pay off.
What Matters in the Long Run
We should think about all three bubbles at once.
The economic bubble draws attention for obvious reasons - fortunes will be made and lost, companies will rise and fall, and financial stability hangs in the balance. The technical bubble attracts fervor from researchers and entrepreneurs convinced they're building god-like intelligence. But the cultural bubble - the ways AI might reshape our shared life - may be the one that matters most in the long run.
The economic bubble will eventually resolve one way or another. Either the investments will generate sufficient returns, or there will be a correction. Either way, capital will find its level. The technical bubble, too, will face reality. We'll discover whether AGI is five years away, fifty years away, or fundamentally misconceived. Time will tell.
But the cultural transformation is already underway and may be largely irreversible. AI is already changing how we write, how we search for information, how we create images and analyze data. It's already reshaping energy infrastructure and geopolitical competition. It's already forcing us to reconsider what we mean by intelligence, creativity, and meaningful work. These changes will compound and accelerate regardless of whether the stock market crashes or AGI arrives.
The internet analogy is instructive. Many investors lost money in the dot-com crash, but that didn't stop the internet from reorganizing human society. Pets.com failed, but Amazon thrived. Webvan collapsed, but online grocery delivery eventually succeeded. The bubble popped, but the cultural transformation continued.
AI may follow a similar trajectory. Today's valuations might be excessive. AGI might remain elusive for decades. But the technology is already useful enough, transformative enough, to reshape how we live with one another. That's the bubble we should watch most carefully - not because it's the most likely to burst, but because it's the most certain to change us, whether we're ready or not.