As I had mentioned yesterday, at least some of the hype around AI is due to predictions about the imminent arrival of Artificial General Intelligence (AGI). Is it around the corner? Is it just a matter of pouring more money into existing tech and building out more data centers?

Gary Marcus doesn't think so. His prognosis:

LLMs have their place, but anyone expecting the current paradigm to be close to AGI is delusional.

Over the past few months, expert opinion has been converging on the view that Large Language Models (LLMs) will not achieve AGI any time soon. A pivotal moment came in June 2025 with the release of Apple's reasoning paper, which demonstrated that even enhanced reasoning capabilities in LLMs fail to overcome the critical problem of distribution shift. The arrival of GPT-5 in August 2025, anticipated as a major leap forward, fell short of expectations, further dampening hopes of imminent AGI.

Rich Sutton, a Turing Award winner renowned for his work in reinforcement learning (and author of "The Bitter Lesson"), has publicly acknowledged critiques of LLMs and agreed that they are far from achieving AGI. Andrej Karpathy, a respected machine learning expert with experience at Tesla and OpenAI, estimates that AGI remains at least a decade away, emphasizing that current agent-based systems are nowhere near the required level of sophistication.

The dream of LLMs ushering in AGI looks premature, vindicating Gary Marcus. In his words:

Game over. AGI is not imminent, and LLMs are not the royal road to getting there.
First slowly, and then all at once, dreams of LLMs bringing us to the cusp of AGI have fallen apart.
https://garymarcus.substack.com/p/the-last-few-months-have-been-devastating