AGI may arrive tomorrow or in the next century, but it's remarkable to watch the progress land case by case. FrontierMath is a well-known obstacle course on which solid progress is being made, and the gains are not confined to abstract disciplines such as math and programming. Consider this quote from the Daily Planet:

Earlier this month the Forecasting Research Institute (FRI), another research group, asked both professional forecasters and biologists to estimate when an AI system may be able to match the performance of a top team of human virologists. The median biologist thought it would take until 2030; the median forecaster was more pessimistic, settling on 2034. But when the study’s authors ran the test on OpenAI’s o3 model, they found it was already performing at that level. The forecasters had underestimated AI’s progress by almost a decade—an alarming thought considering that the exercise was designed to assess how much more likely AI makes a deadly man-made epidemic.

This was in July - three months ago - so it's likely to be out of date. The race to develop artificial general intelligence (AGI) and superintelligence is accelerating rapidly, driven by fierce competition among tech firms and countries eager to be the first to achieve breakthrough AI capabilities. Despite deep concerns from leading AI pioneers like Geoffrey Hinton and Yoshua Bengio, who warn of existential risks including human extinction, developers are pushing forward at full speed. The belief that the first to succeed will reap enormous benefits leaves little room for caution or thorough safety measures.

OpenAI, Google DeepMind, Anthropic, and others are investing billions and hiring top talent to advance AI quickly. They are aware of the dangers, including misuse by malicious actors, misalignment of AI goals with human values, accidental harm arising from sheer complexity, and broader systemic risks. To mitigate these, labs employ techniques like reinforcement learning from human feedback (RLHF) and layered monitoring systems designed to catch harmful outputs before they reach users, as in the sketch below. However, these safety efforts are uneven: only a few top-tier labs rigorously assess risks, while others release models with minimal safeguards.
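To make "layered monitoring" concrete, here is a minimal sketch of the general idea: a model's draft answer passes through independent checks before it is returned. Every name in it (generate_response, KEYWORD_BLOCKLIST, toxicity_score) is an illustrative stand-in, not any lab's actual safety stack.

```python
# Toy illustration of layered output monitoring: a cheap pattern filter
# followed by a (stubbed) learned classifier, gating what the user sees.

def generate_response(prompt: str) -> str:
    # Placeholder for a call to an underlying language model.
    return f"(model output for: {prompt})"

KEYWORD_BLOCKLIST = {"synthesize pathogen", "build a bomb"}

def keyword_filter(text: str) -> bool:
    """First layer: cheap substring check for obviously disallowed content."""
    lowered = text.lower()
    return not any(term in lowered for term in KEYWORD_BLOCKLIST)

def toxicity_score(text: str) -> float:
    """Second layer: stand-in for a trained moderation model scoring harm."""
    return 0.0  # a real system would call a learned classifier here

def answer(prompt: str, threshold: float = 0.5) -> str:
    draft = generate_response(prompt)
    if not keyword_filter(draft) or toxicity_score(draft) >= threshold:
        return "Request declined by safety filters."
    return draft

print(answer("Explain photosynthesis."))
```

The point of stacking layers is that each one is allowed to be imperfect; the system only fails when every layer misses at once.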

A key challenge is misalignment, where AI systems may deceive, cheat, or scheme to achieve their objectives, sometimes producing dangerous or misleading information. Researchers are building interpretability tools to better understand how models reach their decisions (a toy example follows), but slowing down for safety risks ceding competitive advantage. Governments intensify the race further, with the U.S. and China each determined to lead AI development, often sidelining safety concerns.
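One of the simplest things "interpretability tools" can mean is a linear probe: a small classifier trained on a model's internal activations to test whether some concept is decodable from them. The sketch below uses random stand-in activations with a planted concept direction purely for illustration; a real study would extract activations from a trained network.

```python
# Toy linear-probe example: can a logistic regression read a (planted)
# binary concept out of fake "hidden activations"?

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_samples, hidden_dim = 1000, 64
activations = rng.normal(size=(n_samples, hidden_dim))   # fake hidden states
true_direction = rng.normal(size=hidden_dim)              # planted concept axis
labels = (activations @ true_direction > 0).astype(int)   # concept labels

probe = LogisticRegression(max_iter=1000).fit(activations, labels)
print(f"probe accuracy: {probe.score(activations, labels):.2f}")
```

High probe accuracy suggests the concept is linearly represented in those activations; the hard part in practice is deciding which concepts to probe for and what the model actually does with them.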

We may be turned into paperclips after all.
Source: "AI labs' all-or-nothing race leaves no time to fuss about safety: They have ideas about how to restrain wayward models, but worry that doing so will disadvantage them", The Economist, July 24, 2025. https://www.economist.com/briefing/2025/07/24/ai-labs-all-or-nothing-race-leaves-no-time-to-fuss-about-safety