---
title: Taking AGI Literally
citation: true
bibliography: ai.bib
draft: true
reference-location: document
citation-location: document
date: today
author: Tom Cunningham
engine: python
freeze: false
format:
  html:
    other-formats: false
    lightbox: auto # ← enables click-to-zoom for figures/images
---


I feel like I didn’t really take AGI literally until recently.

I’ve been working on the economics of AI for 2 years, but I feel I never really asked myself what the world would be like if computers could literally do all the things that humans could do.

On reflection I feel: (1) it might happen; (2) if it did happen, things would go bananas.

Of course this is a very common view, and I’ve read (or skimmed) a lot of things making this point, but I feel I didn’t internalize them, so it’s worth rehearsing the arguments to see if I’ve missed something.

I think I’m rehashing a very well-trodden debate. Maybe I’m missing arguments that I should already know, & I’d be very grateful for people to point those out. I list some references at the bottom.

I know smart people who appear to disagree.

Some people seem to believe we could have AGI yet the world would not go bananas. See some examples below: Tyler Cowen, Andrej Karpathy, Seb Krier, Alex Imas, and responses to the FRI survey.

I’m unsure how much is just a difference in how we’re defining AGI. It’s possible they think the type of AGI I’m talking about is vanishingly unlikely. If so then I’d like to understand their reasons.

I normally am pretty sanguine about most things. In discussions about politics or technology I usually irritate people by saying “this too will pass” and finding historical parallels. I would like to say the same about AI but I don’t feel I can. I would be very happy to be talked out of these opinions.

I feel odd writing this. My economist friends will ask why I’m wasting my time on ideas so obviously wrong; my AI friends will ask why I’m wasting my time on ideas so obviously right.

I’ll define AGI as being able to do every task that any human can do, including esoteric skills, and including physical tasks performed through a robot.

My claims:

  1. If you put AGI into standard economic models then things go crazy almost immediately. The economic effects would be unprecedented in all of human history.

  2. If you think about everyday life with AGI, then things go crazy too.

  3. If you think about other parts of society - politics, warfare, communication - things go completely and utterly bananas too.

In some sense these claims feel obvious. If I wake up one day and I check my phone and my phone says back to me “anything you can do I can do better”, then of course the world is going to be utterly different. Maybe this type of AGI is centuries away, & that would be reassuring. But if it’s in my lifetime, or my daughter’s lifetime, then it seems like it would be a tidal wave which would sweep away most things I know.

AGI makes standard economic models go bananas

I will treat AGI as labor.
Suppose we treat AGI as a perfect substitute for labor in a standard economic model. Suppose that we start with 100M human-equivalent AGIs in the world (roughly the number of H100-equivalents likely to exist in 2030),[^1] and they then double every year (a conservative extrapolation of trends). If we assume they’re all in the US, this would mean the effective labor force is roughly doubling each year.

[^1]: https://epoch.ai/data/ai-chip-sales/
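A quick numeric sketch of that claim (the 100M starting stock and annual doubling are from above; the ~170M human labor force is my own assumption, roughly the current US civilian labor force):

```python
# Sketch: effective US labor force if 100M human-equivalent AGIs
# double every year. The ~170M human labor force is an assumption,
# roughly the current US civilian labor force.

human = 170e6
agi = 100e6
prev = human + agi
for year in range(1, 6):
    agi *= 2                             # AGI stock doubles each year
    total = human + agi
    print(year, round(total / prev, 2))  # year-over-year growth factor
    prev = total
```

The year-over-year growth factor climbs toward 2 within a few years, which is what “roughly doubling” means here.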

Putting AGI into standard economic models implies crazy things.

We start with a standard Cobb-Douglas production function, Y = K^(1/3) L^(2/3), with a 2/3 labor share.

  • If effective labor supply doubles each year then, holding capital fixed, GDP will go up by roughly 60% each year (2^(2/3) ≈ 1.59).

  • Of course capital will grow too, because of the increase in labor and output, so pretty soon overall GDP will be roughly doubling every year.

  • TFP will also grow much faster. The Jones model says TFP growth is proportional to the growth rate of R&D labor, so if you multiply R&D labor growth by 20X (say from 5% to 100%) then you’ll also multiply TFP growth by 20X.[^2]

  • Eventually growth must be bottlenecked by fixed inputs: land, energy, minerals.

[^2]: There’s a famous paper arguing ideas are getting harder to find (@bloom2020ideas), but they argue there’s a low ratio between TFP growth and R&D growth, not a declining ratio.
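The first two bullets can be checked directly (a minimal sketch; the Cobb-Douglas form and 2/3 labor share are as above, the normalized units are mine):

```python
# Sketch of the Cobb-Douglas arithmetic: Y = K^(1/3) * L^(2/3),
# with K and L normalized to 1. Illustrative, not calibrated.

alpha = 1 / 3   # capital share

def output(K, L):
    return K**alpha * L**(1 - alpha)

Y0 = output(K=1.0, L=1.0)

# Doubling effective labor with capital held fixed raises GDP by
# 2^(2/3) - 1, i.e. roughly 59%:
print(round(output(K=1.0, L=2.0) / Y0, 3))   # 1.587

# Once capital catches up and doubles as well, GDP doubles outright:
print(round(output(K=2.0, L=2.0) / Y0, 6))   # 2.0
```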

Some objections:
  1. This will take 20 years to diffuse. Suppose actual output lags potential output with a half-life of 20 years (a very conservative reading of @comin2014technologydiffusion). But remember that potential output is doubling every year. After 5 years potential output is 32X the starting output, so closing 3.4% of the gap each year (consistent with a 20-year half-life) will already bring you well above 2X the starting point, and you’ll be roughly doubling every year from then on.

  2. Regulations will slow things down. Regulations surely restrict output below its potential, but given the growth above they would have to bind far more tightly than any historical precedent to keep growth at familiar rates. And if AI is this powerful, the incentives for skirting the regulation or seceding will become incredibly strong.

  3. There are o-rings and bottlenecks to production. We’ve assumed that AI labor and human labor are interchangeable, so an o-ring or bottleneck must mean we’re still in the pre-AGI world. @jones2026pastautomation, @gans2026oring, and @jones2025aird all say AI will have modest effects on growth, but that is because they assume AGI won’t arrive for decades, and I would like to better understand their confidence in that assumption (if they hold it).

  4. Humans are already near the limits to intelligence. Francois Chollet says this. But my thought experiment is just about the quantity of labor, not the quality.

  5. The value of human intelligence is limited. Tyler Cowen says IQ is only loosely correlated with wage. But again in this experiment we’re only increasing the quantity of labor, not the quality.

  6. The value of online intelligence is already low. I used to argue at OpenAI that we can estimate the value of artificial labor by looking at the demand for online labor, and it doesn’t seem enormous. Some activities are outsourced to workers overseas, but it’s only a small share of the economy. If the price of online labor went to zero, would output really explode? This seems to me to be a difference in the quality of labor: many people clearly can do their jobs over the internet, & get paid very well for it.

  7. People prefer human-provided goods. Suppose people have an intrinsic preference for human-provided services: even if they couldn’t tell the difference, they’d still pay a premium for a service provided by humans. Then even if computers can outperform humans in objective qualities, we might still employ humans for jobs like teachers, therapists, etc.

    • This has implications about the income share that goes to labor rather than capital, but it doesn’t say much about what’s actually being produced: the output of every other good is still doubling every year in this story.
    • It seems to me this goes against the overwhelming trend of the last 200 years of exchanging human-made goods for machine-made goods. People used to get goods & services from their local tailor, musician, actor, and furniture-maker. They now get furniture from a catalog, listen to recorded music, & watch television. AI will allow people to substitute away from a greater set of human-provided goods. See Phil Trammell’s excellent essay, which is a pretty thorough discussion of these issues.[^3]

[^3]: https://philiptrammell.substack.com/p/is-labor-a-luxury-in-the-long-run
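The arithmetic behind objection 1 above can be simulated in a few lines (a sketch under the stated assumptions: a 20-year half-life for closing the gap to potential, and potential output doubling annually):

```python
# Sketch of the diffusion-lag objection: actual output chases potential
# output, closing a fixed fraction of the gap each year (20-year
# half-life), while potential output doubles annually.

rate = 1 - 0.5 ** (1 / 20)     # ~3.4% of the gap closed per year
actual, potential = 1.0, 1.0
for year in range(1, 6):
    potential *= 2              # potential output doubles each year
    actual += rate * (potential - actual)
    print(year, round(potential, 1), round(actual, 2))
# After 5 years, actual output is already well above 2x the start.
```

Even with this very slow diffusion rate, actual output passes 2X its starting level within 5 years, and thereafter grows at roughly the rate of potential output.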

AGI would make everyday life go bananas

AGI would be bananas for other things too

AGI would seem to cause other parts of society to go bananas:
  • Asymmetric information. Economists go through .
  • Comparative advantage. We all .
  • Politics. I think it’s pretty orthodox to believe that political structures are largely downstream of economic structures. If this is a radical change to the economic structure then politics seems likely to be affected.
  • Offense-defense.

Objections

Most things make us better off.

It’s a reasonable principle that things generally get better. John Horton says he expects “the economy will be great for workers + consumers generally … leaning on our historical experience with technology (mostly very, very good)”[^4]

[^4]: https://x.com/johnjhorton/status/2050559037738041727

This is a useful first observation, but of course you wouldn’t want to end your analysis there. The US has had a pretty good run for 90 years, but that’s quite an anomaly; history is also full of catastrophically bad things, & so we’d want to dig a little deeper before being so optimistic.

We’ll never have AGI.

This is a separate question. Here I’m just thinking through the conditional: what would happen if we did have AGI.

We can’t predict the future.

John Horton says “I’m not sure economists are all that insightful about very big picture stuff. I’m humbled by the fact that many of the best post-war economists were far less accurate about the ultimate trajectory of the USSR than a rank & file John Birch society member”[^5]

[^5]: https://x.com/johnjhorton/status/2050559037738041727

A recent survey says experts expect moderate AI growth.[^6]

[^6]: https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/69cbb9d509ada447b6d9013f/1774959061185/forecasting-the-economic-effects-of-ai.pdf

Forecasting Research Institute

The experts think that “rapid” AI progress by 2030 (defined below) will cause 1.5% excess GDP growth over 2025-2050, relative to “slow” AI progress. The definition of the “rapid” scenario seems pretty close to my definition of AGI above, and remember that this is the 2030 state; there’s another 20 years of progress to follow:

“In the “rapid” scenario, AI systems surpass humans in most cognitive and physical tasks. Autonomous researchers can collapse years-long research timelines into months or even days. AI systems can surpass all freelance software engineers, customer service agents, paralegals, and clerical workers. Models can write 2025-Pulitzer-caliber books – and negotiate the resulting book contract. Robots can navigate an arbitrary home anywhere in the world.”

A few people have said that their moderate predictions for GDP were because AGI could either increase or decrease GDP. But in that case I think this should be reflected in much wider confidence intervals, yet the p10 predictions appear to be systematically higher in the “rapid” scenario than in the “unconditional” scenario, implying they think rapid progress would have an unconditionally positive effect.

Other examples
  • Tyler Cowen & Andrej Karpathy, discussed here.

Literature

Classic writings on this.
  • @aghion2019artificial say (1) progressively automating tasks can be consistent with ordinary growth rates if each task is a strong complement; (2) in contrast progressively automating R&D tasks could cause explosive growth.
  • @davidson2021could give arguments for explosive growth from AGI.
  • In 2023 Matt Clancy and Tamay Besiroglu debated AI and explosive growth in Asterisk. Matt Clancy’s arguments: (a) slow automation of tasks; (b) bottlenecks from experiments; (c) bottlenecks from regulation.
  • @erdil2024explosive give arguments for explosive growth from AGI.
  • Sam Hammond replies to Davidson and Erdil, but I found his arguments difficult to follow. He spends a lot of time on the returns to scale in R&D, assuming that we can’t get much more efficient than we already are (which would be somewhat surprising), but I didn’t feel he directly addressed the increase in effective labor supply. The arguments about bounded utility didn’t seem relevant.
  • Tyler Cowen (Feb 2025) “Why I think AI take-off is relatively slow”
  • @wiseman2025growth, “We estimate that economic growth will be 3% to 9% higher per year for the 20 years following significant AI automation.” But this is based on a model with slow automation of tasks over time, i.e. it’s not about AGI, it’s about slowly expanding AI abilities.