Draft

The Curve & Economic Impacts of AI.

Author

Tom Cunningham

Published

October 6, 2025

Some recurring conversations at the Curve about the economic effects of AI.

The Curve was an amazing conference (thank you to Rachel Weinberg, Golden Gate, & Manifund). Among dozens of conversations & arguments, here are some things that would keep coming up:

  1. We need more forecasts of economic impacts.
  2. We need more theory of capabilities.
  3. We need more metrics of capabilities.
  4. We need more theory of offense-defense balance.
  5. We need more bottom-up modelling of capabilities growth.

I feel bad saying “we need,” & scolding others for the work they’re not doing, so I’ve tried to add my own very tentative best guesses about each below.

We need more forecasts of economic impacts

We have few forecasts of the impact of strong AI.

A number of people have published explicit forecasts of the future trend of AI capabilities (“timelines”), but there are many fewer forecasts of the economic effects of those capabilities, i.e. the effects on GDP, consumption, employment, wages, asset prices.

The most explicit forecast which allows for strong AI is Epoch’s GATE model; see below.

Having explicit forecasts would be very useful.

What do we expect AI to do to these things?

  • Wages & employment, across sectors and tenure.
  • The price of land, the value of the capital stock.
  • Incomes across different countries.

I feel it’s like early 2020 and COVID: if we’re trying to decide whether to announce a lockdown, the decision should be based on a clear idea of the counterfactual, which includes a lot of equilibrium effects.

Comparison of productivity forecasts from Filippucci, Gal, and Schief (2024)

Most academic forecasts assume no capabilities growth.

I complained about this in my previous post: Acemoglu (2024) and Aghion and Bunel (2024) both give explicit forecasts of AI’s economic impact (0.06%/year and 1%/year respectively), but both effectively assume that AI will not get any better: they simply extrapolate from existing capabilities.

I know of just two very concrete economic AGI forecasts.
  1. Korinek and Suh (2024) forecast that, over 15 years, GDP triples; wages increase a little at first, then collapse once everything is automated, even as GDP continues to increase.
  2. Epoch’s GATE model (Erdil et al. (2025)) forecasts full automation in 2034, by which point gross world product (GWP) has grown 10X. They forecast that wages will at first increase dramatically and then, at some point after full automation is achieved, collapse.1

 

Metaculus: What will be the prime-age (25-54) labor force participation rate in the United States in these years?

Forecasting markets expect big capabilities, small impacts.

Forecasting markets expect rapid progress in AI capabilities. As of Oct 2025, the median Metaculus forecasts are:

However, forecasting markets also expect economic variables to remain relatively flat. Over the next 100 years Metaculus expects:

Both forecasts are relatively smooth over time. The smoothness could still be consistent with expecting dramatic effects, given (1) uncertainty about the timing of capabilities growth and (2) disagreement among forecasters.

The financial markets seem to expect small impacts.

Asset prices do not seem to anticipate dramatic effects from an intelligence explosion, though they are hard to interpret.

Chow, Halperin, and Mazlish (2024) argue that we should expect real interest rates to increase (and they haven’t). Nordhaus (2021) argues that an AI singularity would predict a variety of things, especially an increasing growth rate and an increasing capital share, and he does not find much evidence for these.

Am I missing other forecasts?

How can we get more people to forecast?
One idea that Anna Yelikazova and I discussed: sponsor a couple of dozen econ grad students to write 5-page, very explicit forecasts, and give prizes to the most compelling ones.

We need more theory of capabilities.

We have many projects which are collecting data on AI impacts.

We can organize AI impacts into a waterfall, top to bottom:

  1. Data on AI capabilities – benchmarks with representative tasks from across the economy, e.g. GDPval (Patwardhan et al. (2025)), APEX.
  2. Data on AI uplift – effect on productivity, e.g. Becker et al. (2025).
  3. Data on AI adoption – adoption by occupation, industry, and demographic, e.g. Bick et al. (2024).
  4. Data on AI usage – what types of economic tasks are LLMs used for, e.g. Handa et al. (2025), Chatterji et al. (2025).
  5. Data on AI economic effects – changes in hiring and wages by occupation, e.g. Brynjolfsson, Chandar, and Chen (2025).

Each of these is relatively unopinionated: they try to canvass AI impacts in general.

Collecting data is hard without theory.

I don’t think we have that many opinionated theories on how each of these should move. Theories are important because we’re expecting things to change rapidly, both due to capability growth and adoption growth. If we don’t have an explicit theory then we’re using an implicit theory.

Think of spending a lot of time & resources collecting samples of COVID, but not, at the same time, working on a theory of how epidemics evolve and who’s more susceptible.

We don’t have many theories of AI’s impacts.

Here are the prominent theories of the ways in which AI is likely to be adopted:

  1. Informal observations about LLMs: AI researchers generally say that LLMs are relatively better at tasks that are verifiable, short-horizon, low-context, and text-based.
  2. Indices of task or occupation “exposure” to AI: Frey and Osborne (2013), Brynjolfsson, Mitchell, and Rock (2018), Felten, Raj, and Seamans (2018), Webb (2019), Eloundou et al. (2023). METR’s time-horizon paper (Kwa et al. (2025)) can also be interpreted as an exposure index for tasks.

Nathan Lambert seems to feel the same way.

Nathan Lambert’s post-Curve post says “many AI obsessors are more interested in where the technology is going rather than how or what exactly it is going to be.”

We need more metrics of capabilities

We don’t have a standard way of defining AI capabilities.

We say “strong AI”, “transformative AI”, “AGI”, or “ASI”.

The best concrete metric is probably METR’s time-horizon index. We can then ask: “what happens when AI can do a one-month task?”

The Forecasting Research Institute is working on a set of well-defined capability scenarios.
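To make the time-horizon framing concrete, here is a toy extrapolation. The numbers are illustrative assumptions, not METR’s estimates: a current 50%-success horizon of one hour, a seven-month doubling time, and a “one month task” defined as ~240 working hours.

```python
import math

def months_until_horizon(current_hours, target_hours, doubling_months):
    """Months until the time horizon reaches the target,
    assuming steady exponential growth in horizon length."""
    doublings = math.log2(target_hours / current_hours)
    return doublings * doubling_months

# Hypothetical parameters: 1-hour horizon today, 7-month doubling time.
one_month_task = 30 * 8  # ~240 working hours
print(round(months_until_horizon(1.0, one_month_task, 7.0)))  # prints 55
```

Under these assumed parameters, one-month tasks arrive in roughly four and a half years; the point of the sketch is only that the answer is extremely sensitive to the doubling time, which is why a standard metric matters.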

Our World in Data: Technology Costs over Time.

My favorite metric: frontier cost-efficiency growth.

There are hundreds of cost-efficiency metrics that have been regularly improving over decades: transistor density, corn yield, compression efficiency (see the chart on the right). When AI becomes broadly useful we should expect these metrics to start improving more quickly. Cost-efficiency growth is a useful metric because it’s (1) unambiguous; (2) economically relevant; (3) upstream of other economic impacts like employment.

Existing historical cost-efficiency data:

  • Farmer and Lafond (2016) document progress in 53 technologies (visualized at Our World in Data), but only up to 2013.
  • Sherry and Thompson (2021) document historical trends in algorithmic efficiency across a variety of algorithms.
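The underlying test is simple: compare the compound annual growth rate of a cost-efficiency series before and after AI adoption, and look for acceleration. A minimal sketch, with entirely made-up numbers:

```python
def cagr(series):
    """Compound annual growth rate of an annual cost-efficiency series."""
    years = len(series) - 1
    return (series[-1] / series[0]) ** (1 / years) - 1

# Hypothetical efficiency index (e.g. output per dollar), one value per year.
baseline = [100, 110, 121, 133.1]          # steady ~10%/year trend
accelerated = [100, 110, 121, 133.1, 160]  # final year jumps ~20%

print(f"{cagr(baseline):.1%}")     # prints 10.0%
print(f"{cagr(accelerated):.1%}")  # prints 12.5%
```

A jump in the trailing growth rate of many such series at once would be the cleanest early signal that AI is having aggregate effects.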

We need more theory of the offense-defense balance

Many discussions were about how AI will change the offense-defense balance.

There are dozens of cases where there’s some offense-defense balance, and it’s not immediately clear how AI will affect that balance. Some examples that came up at the Curve:

  • hacking
  • ransomware
  • spearphishing
  • media manipulation
  • drone assassinations
  • drone warfare

In each case it’s clear that AI could help both sides, but arguable how the equilibrium will be affected.

We should have some common theory.

It seems wasteful to treat each of these problems independently; there ought to be some general principles we can apply on how AI will affect the offense-defense balance.

The closest I know of is Garfinkel and Dafoe (2019). They argue that when both sides get sufficiently strong, the balance will generally tend to favor the defender:

“we offer a general formalization of the offense-defense balance in terms of contest success functions. Simple models of ground invasions and cyberattacks that exploit software vulnerabilities suggest that, in both cases, growth in investments will favor offense when investment levels are sufficiently low and favor defense when they are sufficiently high.”

I also have a note from 2023, which argues that AI will favor the defender for “internal” properties (where human judgment is the ground truth), but favor the attacker for “external” properties (where external reality is the ground truth).
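To illustrate the kind of general principle at stake, here is a toy model in the spirit of Garfinkel and Dafoe’s cyberattack example (my simplification, not their actual model): there are `v` software vulnerabilities, the attacker finds each one with probability `find_a`, and the defender has patched each one with probability `patch_d`; the attack succeeds if the attacker finds any unpatched vulnerability.

```python
def attack_success(v, find_a, patch_d):
    """P(attacker finds at least one unpatched vulnerability),
    with v vulnerabilities, each found by the attacker w.p. find_a
    and independently patched by the defender w.p. patch_d."""
    return 1 - (1 - find_a * (1 - patch_d)) ** v

# Scale both sides' capability together from low to high:
# success probability rises, then falls back toward zero.
for p in (0.1, 0.5, 0.9, 0.99):
    print(f"capability {p:.2f}: P(success) = {attack_success(20, p, p):.3f}")
```

Under these assumptions the success probability is non-monotone: growth in joint capability favors offense at low levels (the attacker finds more holes faster than they are patched) and defense at high levels (nearly everything is patched), matching the quoted claim.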

This seems like an incredibly fertile area for economic theory, but I have seen very little engagement from economists.

References

Acemoglu, Daron. 2024. “The Simple Macroeconomics of AI.” National Bureau of Economic Research. https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf.
Aghion, Philippe, and Simon Bunel. 2024. “AI and Growth: Where Do We Stand.” https://www.frbsf.org/wp-content/uploads/AI-and-Growth-Aghion-Bunel.pdf.
Becker, Joel, Nate Rush, Elizabeth Barnes, and David Rein. 2025. “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.” https://arxiv.org/pdf/2507.09089.pdf.
Brynjolfsson, Erik, Bharat Chandar, and Daniel Chen. 2025. “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence.” Stanford Digital Economy Lab. https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf.
Brynjolfsson, Erik, Tom Mitchell, and Daniel Rock. 2018. “What Can Machines Learn and What Does It Mean for Occupations and the Economy?” In AEA Papers and Proceedings, 108:43–47. American Economic Association 2014 Broadway, Suite 305, Nashville, TN 37203. https://doi.org/10.1257/pandp.20181019.
Chatterji, Aaron, Thomas Cunningham, David J. Deming, Zoe Hitzig, Christopher Ong, Carl Yan Shan, and Kevin Wadman. 2025. “How People Use ChatGPT.” Working Paper 34255. National Bureau of Economic Research. https://doi.org/10.3386/w34255.
Chow, Trevor, Basil Halperin, and J Zachary Mazlish. 2024. “Transformative AI, Existential Risk, and Real Interest Rates.” Working Paper. https://www.semanticscholar.org/search?q=Transformative%20AI%2C%20existential%20risk%2C%20and%20real%20interest%20rates.
Eloundou, Tyna, Sam Manning, Pamela Mishkin, and Daniel Rock. 2023. “GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” arXiv Preprint arXiv:2303.10130. https://arxiv.org/pdf/2303.10130.pdf.
Erdil, Ege, Andrei Potlogea, Tamay Besiroglu, Edu Roldan, Anson Ho, Jaime Sevilla, Matthew Barnett, Matej Vrzla, and Robert Sandler. 2025. “GATE: An Integrated Assessment Model for AI Automation.” arXiv Preprint arXiv:2503.04941. https://arxiv.org/pdf/2503.04941.pdf.
Farmer, J Doyne, and Francois Lafond. 2016. “How Predictable Is Technological Progress?” Research Policy 45 (3): 647–65. https://doi.org/10.2139/ssrn.2566810.
Felten, Edward W, Manav Raj, and Robert Seamans. 2018. “A Method to Link Advances in Artificial Intelligence to Occupational Abilities.” In AEA Papers and Proceedings, 108:54–57. American Economic Association 2014 Broadway, Suite 305, Nashville, TN 37203. https://doi.org/10.1257/pandp.20181021.
Filippucci, Francesco, Peter Gal, and Matthias Schief. 2024. “Miracle or Myth? Assessing the Macroeconomic Productivity Gains from Artificial Intelligence.” OECD Publishing. https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/11/miracle-or-myth-assessing-the-macroeconomic-productivity-gains-from-artificial-intelligence_fde2a597/b524a072-en.pdf.
Frey, Carl Benedikt, and Michael Osborne. 2013. “The Future of Employment.” https://www.semanticscholar.org/search?q=The%20future%20of%20employment.
Garfinkel, Ben, and Allan Dafoe. 2019. “How Does the Offense-Defense Balance Scale?” Journal of Strategic Studies 42 (6): 736–63. https://doi.org/10.1080/01402390.2019.1631810.
Handa, Kunal, Alex Tamkin, Miles McCain, Saffron Huang, Esin Durmus, Sarah Heck, Jared Mueller, et al. 2025. “Which Economic Tasks Are Performed with AI? Evidence from Millions of Claude Conversations.” https://arxiv.org/pdf/2503.04761.pdf.
Korinek, Anton, and Donghyun Suh. 2024. “Scenarios for the Transition to AGI.” National Bureau of Economic Research. https://arxiv.org/pdf/2403.12107.pdf.
Kwa, Thomas, Ben West, Joel Becker, Amy Deng, Katharyn Garcia, Max Hasin, Sami Jawhar, et al. 2025. “Measuring AI Ability to Complete Long Tasks.” arXiv Preprint arXiv:2503.14499. https://doi.org/10.48550/arXiv.2503.14499.
Nordhaus, William D. 2021. “Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth.” American Economic Journal: Macroeconomics 13 (1): 299–332. https://doi.org/10.2139/ssrn.2658259.
Patwardhan, Tejal, Rachel Dias, Elizabeth Proehl, Grace Kim, Michele Wang, Olivia Watkins, Simón Posada Fishman, et al. 2025. “GDPval: Evaluating AI Model Performance on Real-World Economically Valuable Tasks.” arXiv Preprint arXiv:2510.04374. https://arxiv.org/pdf/2510.04374.pdf.
Sherry, Yash, and Neil C Thompson. 2021. “How Fast Do Algorithms Improve? [Point of View].” Proceedings of the IEEE 109 (11): 1768–77. https://doi.org/10.1109/JPROC.2021.3107219.
Webb, Michael. 2019. “The Impact of Artificial Intelligence on the Labor Market.” Available at SSRN 3482150. https://www.semanticscholar.org/search?q=The%20impact%20of%20artificial%20intelligence%20on%20the%20labor%20market.

Footnotes

  1. Tom Davidson has a 2021 report on Explosive Growth, and a model of takeoff speeds, but I don’t think either has a central forecast with multiple aggregate economic variables.↩︎

Citation

BibTeX citation:
@online{cunningham2025,
  author = {Cunningham, Tom},
  title = {The {Curve} \& {Economic} {Impacts} of {AI.}},
  date = {2025-10-06},
  url = {tecunningham.github.io/posts/2025-10-06-the-curve.html},
  langid = {en}
}
For attribution, please cite this work as:
Cunningham, Tom. 2025. “The Curve & Economic Impacts of AI.” October 6, 2025. tecunningham.github.io/posts/2025-10-06-the-curve.html.