The industry’s obsession with AI is heading towards a cliff

A new study from MIT suggests that the largest, most computationally intensive AI models may soon offer diminishing returns compared to smaller ones. By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring big leaps in performance out of giant models, while efficiency gains could make models running on far more modest hardware increasingly capable over the next decade.
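To get a feel for the two trends the researchers are weighing against each other, it helps to put rough numbers on them. The short Python sketch below is only a back-of-the-envelope illustration with made-up growth rates and exponents (they are not figures from the study): it estimates how quickly compounding efficiency gains could let a fixed, modest compute budget match today's frontier training runs, and how a power-law scaling curve makes each additional order of magnitude of frontier compute buy a smaller improvement.

```python
import math

# Back-of-the-envelope illustration of the two forces in play; the growth rate
# and exponent below are assumptions made for the sake of the sketch, not
# figures from the MIT study.

EFFICIENCY_GAIN_PER_YEAR = 3.0   # assumed: effective compute per dollar triples yearly
MODEST_BUDGET_FRACTION = 0.01    # assumed: a lab with 1% of a frontier training budget

# Force 1: compounding efficiency. Years until a fixed modest budget delivers
# as much *effective* compute as today's frontier training run.
years_to_catch_up = math.log(1.0 / MODEST_BUDGET_FRACTION) / math.log(EFFICIENCY_GAIN_PER_YEAR)
print(f"~{years_to_catch_up:.1f} years for a 1%-of-frontier budget to match today's frontier run")

# Force 2: diminishing returns at the frontier. Under a power-law scaling
# assumption (loss ~ compute ** -alpha), each extra order of magnitude of
# training compute buys a smaller drop in loss than the one before it.
alpha = 0.3       # assumed scaling exponent
prev_loss = 1.0   # normalized loss at today's frontier compute
for extra_oom in range(1, 5):
    loss = 10 ** (-extra_oom * alpha)
    print(f"+{extra_oom} orders of magnitude of compute: "
          f"loss {loss:.3f}, improvement {prev_loss - loss:.3f}")
    prev_loss = loss
```

Under these illustrative assumptions, the modest budget catches up to today's frontier in roughly four years, while each additional order of magnitude of frontier compute yields about half the improvement of the previous one.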
“In the next five to 10 years, things will most likely start to shrink,” says Neil Thompson, a computer scientist and professor at MIT involved in the study.
Advances in efficiency, like those seen with DeepSeek’s remarkably inexpensive model in January, have already served as a reality check for an AI industry accustomed to burning through enormous amounts of compute.
A frontier model from a company like OpenAI is, for now, far better than a model trained with a fraction of the compute in a university lab. And although the MIT team’s predictions might not hold if, say, new training methods such as reinforcement learning produce surprising results, they suggest that big AI companies will have less of an edge in the future.
Hans Gundlach, a researcher at MIT who led the analysis, became interested in the question because of how unwieldy state-of-the-art models have become to run. With Thompson and Jayson Lynch, another MIT researcher, he mapped the future performance of frontier models against that of models built with more modest computational means. Gundlach says the predicted trend is especially pronounced for the reasoning models now in vogue, which rely more heavily on extra computation at inference time.
Thompson says the results show the value of honing an algorithm as well as scaling up compute. “If you’re spending a lot of money training these models, then you should absolutely spend some of it trying to develop more efficient algorithms, because that can matter enormously,” he adds.
The study is particularly timely given the current AI infrastructure boom (or should we say “bubble”?), which shows no signs of slowing down.
OpenAI and other American technology companies have signed deals worth hundreds of billions of dollars to build AI infrastructure in the United States. “The world needs a lot more computing,” OpenAI president Greg Brockman proclaimed this week, announcing a partnership between OpenAI and Broadcom on custom AI chips.
A growing number of experts are questioning the wisdom of these deals. Roughly 60 percent of the cost of building a data center goes to GPUs, which tend to depreciate quickly. The partnerships among the major players also look circular and opaque.