I think about Jevons' paradox a lot when it comes to AGI. Once intelligence becomes a commodity, capability at the 90th percentile of current knowledge worker competence, in every current knowledge worker category (think: doctor, lawyer, accountant, analyst, scientist), will be available on tap. What will that world look like?

Our friend from the Bitter Lesson has some thoughts on this matter:

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available.

Now imagine a world in which massive computation is available to everyone, so that we have a 90th percentile doctor/lawyer/therapist/analyst available for $20 a month. Forget whether this almost-AGI can discover new theories or write better songs: most of us don't do that either. Nevertheless, what we will have is the collected wisdom (and hatred, and greed, so not just the good things) of humanity available to everyone.

What might that world look like?

Let's say a radiology exam costs $10 instead of $1,000: there will be a lot more use of these tests (that would be Jevons' paradox for radiology), which will lead to more radiologist hires until demand peaks (we don't need an exam every day, after all), productivity improves, and hiring stops.
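The dynamic above can be sketched with a toy constant-elasticity demand model. Every number here, including the elasticity value, is an illustrative assumption, not data: the point is only that when demand is elastic enough (elasticity above 1), a 100x price drop raises total spending on exams even as each exam gets cheaper, which is the Jevons effect.

```python
# Toy constant-elasticity demand model for the radiology example.
# All parameter values are illustrative assumptions, not real data.

def exams_demanded(price, base_price=1000.0, base_quantity=1.0, elasticity=1.3):
    """Exams demanded at `price`, relative to a baseline of
    `base_quantity` exams at `base_price`, assuming a constant
    price elasticity of demand (elasticity > 1 means elastic)."""
    return base_quantity * (base_price / price) ** elasticity

old_price, new_price = 1000.0, 10.0
old_q = exams_demanded(old_price)   # 1.0 exam at the old price
new_q = exams_demanded(new_price)   # ~398 exams at the new price

# Quantity explodes, and total spending *rises* despite the price drop:
print(f"exams:    {old_q:.1f} -> {new_q:.1f}")
print(f"spending: {old_price * old_q:.0f} -> {new_price * new_q:.0f}")
```

With these assumed numbers, spending on radiology roughly quadruples even though each exam costs 100x less; with an elasticity below 1, spending would instead fall, which is why the paradox only bites in price-sensitive markets.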

Note that 90th percentile radiology competence isn't in the distant future: it's already here, and if compute were super cheap and plentiful, we would already be having more exams. That's the future the hyperscalers are trying to build toward. I finally understand the logic of the build-out, even if they may never make money from it. As I said in a comment on a wonderful analysis of the AI bubble:

In a recursive version of the bitter lesson, tech futurists like Negroponte flog their bespoke futures (remember OLPC?) when you would much rather let the general purpose discovery engine (aka the market) figure out what the future will bring. With AI, the human wannabe prophet's existential dilemmas are even more poignant, and will likely be even more wrong. I would just build massive data/energy (over)capacity and let the world figure out the rest.

Key line from today's #DailyPlanet:

Artificial intelligence is rapidly spreading across the economy and society. But radiology shows us that it will not necessarily dominate every field in its first years of diffusion -- at least until these common hurdles are overcome. Exploiting all of its benefits will involve adapting it to society, and society's rules to it.
The algorithm will see you now - Works in Progress Magazine
Radiology combines digital images, clear benchmarks, and repeatable tasks. But replacing humans with AI is harder than it seems.
https://worksinprogress.co/issue/the-algorithm-will-see-you-now/