The LLM Bubble Is Real: Why the AI Hype Cycle Is Collapsing
The artificial intelligence industry is experiencing what can only be described as a magnificent hallucination. Billions of pounds in venture capital have flooded into companies promising to revolutionise everything from customer service to medical diagnosis, all built on large language models that cost peanuts to run and are already showing signs of plateauing performance. The bubble is real, it is inflating rapidly, and smart money is already looking for the exit.
The Numbers Do Not Lie
Let us start with the scale of what we are witnessing, because the figures are genuinely startling. According to research from MacroStrategy Partnership, the AI bubble is now approximately 17 times larger than the dot-com bubble of the late 1990s and roughly four times larger than the global real estate bubble that triggered the 2008 financial crisis. Julien Garran, formerly of UBS and now leading the research team at MacroStrategy Partnership, has been blunt in his assessment: companies have vastly overhyped the capabilities of LLMs, and the adoption rate among large businesses has already started to decline.
This is not scaremongering from tech sceptics. The warning signs are visible to anyone willing to look past the press releases. Ray Dalio, the investor who correctly predicted the global financial crisis, has drawn direct comparisons between current AI exuberance and the dot-com era. The CEO of Baidu made the same connection in October 2024. When the Chinese search giant, itself deeply invested in AI development, is warning about irrational exuberance, we should pay attention.
The AI bubble is 17 times larger than the dot-com bubble and four times larger than the 2008 real estate bubble. The mathematics of unsustainable growth do not care about marketing budgets.
The Fundamental Problem: Overpromised, Underdelivered
Every LLM company promises that their model will transform your business. They cannot all be right, and increasingly, the evidence suggests most of them are wrong. The core issue is straightforward: the technology has not kept pace with the hype, and the cost structures being built around it are based on assumptions that are already crumbling.
Consider what DeepSeek achieved in early 2025. Their R1 model surpassed OpenAI's o1 model in numerous tests while being built in just two months by researchers operating with a fraction of the resources. More significantly, DeepSeek's models are dramatically less energy-intensive to train and use than their American counterparts, yet perform at essentially the same level. This is not an incremental improvement. It is a paradigm shift that makes the entire cost basis of the current LLM ecosystem obsolete almost overnight.
The implications are stark. If advanced AI can be trained for a fraction of the cost and run on consumer hardware, the £200 per month subscriptions being charged by premium AI chatbot providers start to look rather foolish. We have written extensively about this phenomenon at GOOBLR, and the conclusion is always the same: you are paying a premium SaaS markup for technology that costs pennies to operate.
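The scale of that markup is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses entirely assumed figures (per-token inference cost, query volume, and token counts are illustrative placeholders, not measured data), but even generous assumptions about usage leave a markup of two orders of magnitude.

```python
# Back-of-envelope comparison: premium subscription price versus estimated
# raw inference cost. ALL figures below are assumed for illustration.

SUBSCRIPTION_GBP_PER_MONTH = 200.0    # premium chatbot tier (assumed)
COST_PER_MILLION_TOKENS_GBP = 0.40    # assumed open-weight inference cost
TOKENS_PER_QUERY = 2_000              # assumed prompt + response size
QUERIES_PER_MONTH = 1_500             # assumed heavy-user volume

def monthly_inference_cost(cost_per_m_tokens, tokens_per_query, queries):
    """Raw compute cost of serving one user's queries for a month."""
    total_tokens = tokens_per_query * queries
    return cost_per_m_tokens * total_tokens / 1_000_000

cost = monthly_inference_cost(
    COST_PER_MILLION_TOKENS_GBP, TOKENS_PER_QUERY, QUERIES_PER_MONTH
)
markup = SUBSCRIPTION_GBP_PER_MONTH / cost
print(f"Estimated inference cost: £{cost:.2f}/month")
print(f"Implied markup: roughly {markup:.0f}x")
```

Swap in your own estimates; the conclusion is not sensitive to the exact numbers, because the gap between subscription price and compute cost is so wide.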
Anyone who has actually built with these tools already knows the underlying dynamic: the premium chatbot market is built on pricing structures that assume continued scarcity. That assumption is collapsing.
Adoption Is Stalling, Not Accelerating
Despite the relentless marketing, actual enterprise adoption of LLMs is not following the trajectory that investors were promised. Garran's data shows that the adoption rate among large businesses has begun to decline, not because companies are hostile to AI, but because the practical value has not materialised in the ways that were predicted.
This should not come as a surprise. Most LLM implementations today are chatbot wrappers around existing knowledge bases, offering marginal improvements over search functionality at substantial cost. The promised transformative applications in legal analysis, medical diagnosis, and software development remain largely aspirational. The models are impressive in demonstration but inconsistent in production.
The Hugging Face CEO, Clement Delangue, has been notably vocal about this reality, warning that we are heading towards an LLM bubble burst and advocating instead for specialised models designed for particular applications. His argument is compelling: rather than one massive general-purpose model trying to do everything, the future lies in smaller, focused models that excel at specific tasks. A banking chatbot does not need to write poetry. A legal analysis tool does not need to debug code.
The Specialisation Advantage
There is a practical elegance to specialised models that the current LLM gold rush has obscured. Smaller models trained on domain-specific data can outperform their general-purpose cousins on relevant tasks while running on modest hardware and costing a fraction as much to operate. This is not speculative. We are already seeing this pattern emerge across industries.
| Approach | Training Cost | Monthly Inference Cost | Task Performance |
|---|---|---|---|
| General-Purpose LLM | £10-100M+ | £10,000+ | Good at everything, excellent at nothing |
| Fine-Tuned Specialised Model | £50K-500K | £500-2,000 | Excellent at specific domain |
| Open-Weight Small Model | £10K-100K | £50-200 | Sufficient for most business tasks |
The economics are not ambiguous. For most business use cases, the specialised approach delivers superior results at a fraction of the cost. This is precisely why the open-weight model movement is gaining traction, and why companies investing exclusively in proprietary LLM APIs face a difficult future.
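The break-even arithmetic behind the table can be sketched directly. The figures below are illustrative mid-range values taken from the table's bands (not vendor quotes), and the general-purpose API is modelled with no upfront cost, which is the usual trade-off: nothing to train, but high recurring fees.

```python
# Break-even sketch for the table above. Figures are illustrative
# mid-range values from the table's bands, not real quotes.

# (one-off training/setup cost, recurring monthly inference cost) in GBP
APPROACHES = {
    "general_purpose_llm": (0, 10_000),        # API access: no training, high fees
    "fine_tuned_specialised": (250_000, 1_250),
    "open_weight_small": (50_000, 125),
}

def total_cost(approach, months):
    """Cumulative cost of an approach over a given horizon."""
    upfront, monthly = APPROACHES[approach]
    return upfront + monthly * months

def breakeven_months(a, b):
    """First month at which approach `a` becomes cheaper than `b`."""
    month = 1
    while total_cost(a, month) >= total_cost(b, month):
        month += 1
    return month

print(breakeven_months("open_weight_small", "general_purpose_llm"))
print(breakeven_months("fine_tuned_specialised", "general_purpose_llm"))
```

Under these assumptions an open-weight small model pays for itself within months, and even the heavier fine-tuning route beats the general-purpose API inside three years, after which the savings compound.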
What Happens When the Bubble Bursts
The bubble does not need to burst in a dramatic crash for significant damage to occur. More likely, we are looking at a prolonged correction as investors and enterprises recalibrate expectations. Companies that raised billions on promises of AI transformation will face pressure to demonstrate actual returns. Many will fail. The survivors will be those who focused on solving real problems rather than riding the hype cycle.
For businesses currently evaluating AI investments, this environment actually presents an opportunity. The collapse of hype tends to separate genuine utility from marketing vapour. The companies that survive will be those offering concrete value at reasonable prices, not those demanding premium subscriptions for commoditised technology.
We have seen this pattern before in web development, in mobile apps, in countless technology cycles. The initial gold rush attracts speculative capital, which funds inflated valuations, which produces unsustainable cost structures, which eventually correct. The LLM market is following this script with almost mechanical precision.
The Path Forward
Rather than panic, business leaders should approach the AI landscape with clear-eyed pragmatism. The technology is genuinely useful in specific contexts. It is not, however, the transformative force that some have claimed, at least not in its current form. The models are impressive engineering achievements, but they are also tools with clear limitations and rapidly commoditising economics.
Our recommendation at GOOBLR has consistently been to evaluate AI implementations on actual business outcomes rather than technological novelty. If an AI feature genuinely solves a customer problem or reduces operational friction, it is worth investing in. If it is there primarily because it sounds impressive in a sales pitch, the enthusiasm is probably misplaced.
The LLM bubble is deflating. This is not a catastrophe. It is a maturation process, painful as it may be for those who invested at the peak. The technology will continue to develop, the costs will continue to fall, and the businesses that survive will be those who focused on building genuine value rather than riding a wave of speculative capital.
Those who learn from history are not doomed to repeat it. Those who ignore it, however, may find themselves holding expensive subscriptions to technology that became obsolete before the invoice arrived.