Ben Jones & Chad Jones on Economic Growth in the Long Run: Artificial Intelligence Explosion or an Empty Planet?

How will economic growth evolve in the long run? This session explored the wide range of plausible scenarios. Aghion, Jones & Jones (2017) analyze how artificial intelligence may supercharge the growth trajectory, causing a potential speed-up in economic growth as either production or the process of innovation itself – often considered the main driver of economic growth – becomes more and more automated. In the limit, these processes may lead to growth singularities. By contrast, Jones (2020) uses a similar framework to show that a markedly different outcome is possible – continuously declining living standards – if population growth turns negative and slows down the production of ideas.

Benjamin Jones is the Gordon and Llura Gund Family Professor of Entrepreneurship at the Kellogg School of Management at Northwestern. He studies the sources of economic growth in advanced economies, with an emphasis on innovation, entrepreneurship, and scientific progress. He also studies global economic development, including the roles of education, climate, and national leadership in explaining the wealth and poverty of nations. His research has appeared in journals such as Science, the Quarterly Journal of Economics and the American Economic Review, and has been profiled in media outlets such as the Wall Street Journal, the Economist, and The New Yorker.

Chad Jones is the STANCO 25 Professor of Economics at the Stanford Graduate School of Business. He is noted for his research on long-run economic growth. In particular, he has examined theoretically and empirically the fundamental sources of growth in incomes over time and the reasons underlying the enormous differences in standards of living across countries. In recent years, he has used his expertise in macroeconomic methods to study the economic causes behind the rise in health spending and top income inequality. He is the author of one of the most popular textbooks of Macroeconomics, and his research has been published in the top journals of economics.

The session was moderated by Anton Korinek (UVA) and featured Rachael Ngai (LSE) and Phil Trammell (Oxford) as discussants.

You can watch a recording of the event here or read the transcript below:

Anton Korinek  00:06

Welcome to our webinar on the governance and economics of AI. I’m glad that so many of you from all corners of the earth are joining us today. I’m Anton Korinek. I’m an economist at the University of Virginia. And the topic of our webinar today is economic growth in the long run, whether it be an artificial intelligence explosion, or an empty planet. And we have two eminent speakers, Ben Jones and Chad Jones, as well as two distinguished discussants, Rachael Ngai and Phil Trammell. I will introduce each one of them when they are taking the stage.

We’re excited to have this discussion today, because the field of economic growth theory has gone through a really interesting resurgence in recent years. At the risk of oversimplifying, a lot of growth theory in the past has focused on describing or explaining the steady state growth that much of the advanced world experienced in the post-war period, which was captured in what economists call the “Kaldor facts.” But in recent years, a chorus of technologists, especially in the field of AI, have emphasized that there is no natural law that growth in the future has to continue on the same trajectory as it has in the past, and they have spoken of the possibility of an artificial intelligence explosion, or even a singularity in economic growth. Our two speakers, Ben Jones and Chad Jones, have been at the forefront of this literature in a paper that is published in an NBER volume on the economics of AI. And Ben will tell us a bit about this today. And since an explosion in economic growth is by no means guaranteed, Chad will then remind us that the range of possible outcomes for economic growth is indeed vast. And we cannot rule out that growth may, in fact, go the other direction.

Our webinar today is co-organized by the Center for the Governance of AI at Oxford’s Future of Humanity Institute and by the University of Virginia’s Human and Machine Intelligence group, both of which I’m glad to be a member of. It is also sponsored by the UVA Darden School of Business. And before I yield to our speakers, let me thank everyone who has worked hard to put this event together: Anne le Roux, Markus Anderljung at the Center for the Governance of AI and Paul Humphreys at the UVA Human Machine Intelligence Group, as well as Azmi Yousef at Darden.

So let me now introduce Ben Jones more formally. Ben is the Gordon and Llura Gund Family Professor of Entrepreneurship at the Kellogg School of Management at Northwestern. He studies the sources of economic growth in advanced economies, with an emphasis on innovation, entrepreneurship, and scientific progress. He also studies global economic development, including the roles of education, climate, and national leadership in explaining the wealth and poverty of nations. His research has appeared in journals such as Science, the Quarterly Journal of Economics and the American Economic Review, and has been profiled in media outlets such as the Wall Street Journal, the Economist, and The New Yorker. Ben, the virtual floor is yours.

Ben Jones  03:49

Okay, thank you very much, Anton, for that introduction. And let me share my screen here. It’s great to be with you to talk about these issues. And thanks, again, to Anton and the organizers for putting this together and for inviting me to participate. So the first paper that I’m going to talk about is actually joint with Chad, your second speaker, he’s gonna appear in both, and this is also with Philippe Aghion. The idea in this paper was rather than sort of a typical economics paper, where you go super deep into one model and do all the details, this was really to kind of step back and look at the kind of breadth of growth models that we have. And then say, well, how would you insert artificial intelligence into these more standard understandings of growth? And where would that lead us? So we actually have a series of sort of toy models here. We’re exploring the variety of directions this can lead us and seeing what you have to believe in order for those various outcomes to occur. So that’s kind of the idea behind this paper. I’m going to do this in an almost non-mathematical way, not a completely math-free way, but I know that this is a seminar with a group of people with diverse disciplinary backgrounds. I don’t want to presume people are steeped in endogenous growth models. So I’m going to try to really emphasize the intuition as I go through the best that I can. I will have to show a little bit of math a couple of times, but not too much.

The idea in this paper is: how would we think about AI? You might think that AI helps us make goods and services, the things that go into GDP and that we consume. It also might help us be more creative. So we’re going to distinguish between AI entering the ordinary production function for goods and services in the economy, and AI entering the so-called knowledge production function, into R&D, where it might help us succeed better in revealing new insights and breakthroughs about the world and the economy. And then there are two very high-level implications we want to look at. First, what will happen to long-run growth under various assumptions: what do you have to believe to get different outcomes in terms of the rate at which standards of living are improving? But also inequality: GDP per capita might go up, but what share of that is going to go to labor, to particular workers? There’s obviously a lot of fear that AI would displace workers, and that maybe more and more of the fruits of income will go to the owners of the capital, or the owners of the AI. And then of course, there’s this other idea, almost more from science fiction, it seems, but taken seriously by some in the computer science community, that we might actually experience radical accelerations in growth, even to the point of some singularity. Anton referenced how growth has been very steady since the Industrial Revolution, but maybe we’re going to see an actual structural break, where things will really take off. And of course, as Chad will show later in his paper, it may potentially be going the other way. We’ll explore that as well.

So how are we going to think about AI? You might think AI is this radically new thing, and in some ways it is. But one way to think about it is that we are furthering automation, right? What are we doing? We’re taking a task that is performed by labor, maybe reading a radiology result in a medical setting, and then we’re going to have a machine or algorithm do that for us. Or take image search on Google: humans used to categorize which image is a cat, and now Google can just have an AI that tells us which image is a cat. If you think about it in terms of automation, that can be very useful, because then we can think about AI in more standard terms that we’re used to, to some extent, in economics. So if you think about the past, the Industrial Revolution was largely about replacing labor with certain kinds of capital equipment, maybe textile looms and steam engines for power. AI is sort of a continuation of that process in this view. And it’s things like driverless cars and pathology and other applications. So that’s one main theme in the work: how we want to introduce AI into our thinking of growth and see where it takes us.

The second main theme that really came out as we developed this paper is that we want to be very careful to think about not just what we get good at, but what we’re not so good at. The idea is that growth might be determined more by bottlenecks: growth may be constrained not by what we get really good at, but by what is actually really important, what is essential, and yet hard to improve. I’ll try to make sense of that intuitively as we go.

I have a picture here: these guys are sugar beet farmers, pulling sugar beets out of the ground by hand, harvesting them. And next to it is a combine harvester type machine that automates that, pulling sugar beets out of the ground with a machine. So that’s kind of like 20th century automation. And then in a lower picture, I’m trying to think about automation as AI. On the left, if you’ve seen the movie Hidden Figures, these are the computers. I always think it’s very interesting: computer was actually the job description. These women were computers at NASA involved in spaceflight, and they were actually doing computational calculations by hand. And then on the right, I have one of the massive supercomputers that have basically replaced that job description entirely. So we see a lot of laborers being replaced by capital, raising productivity, but also displacing workers. And so how do we think about those forces?

Okay, so one way to think about this is to start with a Zeira model, which is the following. Imagine there are just n different things we do in the economy, n different tasks, and each task represents the same constant share of GDP, of total output. To an economist, that would sound like Cobb-Douglas, right? So we have the Cobb-Douglas model. But if you’re not an economist, ignore that; we just imagine every task has an equivalent share of GDP for simplicity. And when we think about automation, what we’re saying is that a task was done by labor, but now it might be done by capital equipment instead. For AI that would be a computer and an algorithm; a combine harvester would be a piece of farming equipment. And so if you think that a fraction beta of the tasks are automated, then the capital share of total GDP is beta. That means labor gets one minus beta, and the expenditure on the capital equipment is a beta share of GDP. Okay? So that’s a very simple model, and it’s very elegant in a way. And it would say that if we keep automating, if you increase beta, we keep taking tasks that were done by labor and replacing them with machines or AI, what will happen? Well, the capital share of income will increase and the labor share of income will decrease. So that sounds like inequality, in the sense that labor will get less income. That might sound very natural; maybe that’s what’s happening today. We have seen the capital share going up in a lot of advanced economies, like the US, and it seems like there’s a lot of automation going on from robots and these new AI-type things. Of course, those two trends may just happen to be correlated, but if we think that AI is causing that rise in the capital share, well, this would be a model in which that could be true.
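To make that concrete, here is a minimal Python sketch of the task framework just described. The function and variable names are illustrative assumptions, not from the paper; the point is simply that under Cobb-Douglas aggregation the capital share tracks the automated fraction beta one-for-one.

```python
# Zeira-style task model under a Cobb-Douglas aggregate (equal task weights).
# With a fraction beta of tasks automated (done by capital) and the rest done
# by labor, factor shares follow directly: capital gets beta, labor 1 - beta.
# All names and numbers here are illustrative, not taken from the paper.

def factor_shares(beta):
    """Capital and labor shares of GDP when a fraction beta of tasks
    is automated, under Cobb-Douglas (unit elasticity) aggregation."""
    return {"capital_share": beta, "labor_share": 1.0 - beta}

for beta in [0.2, 0.5, 0.8]:
    print(beta, factor_shares(beta))
# As automation (beta) rises, the capital share rises one-for-one --
# the implication the talk goes on to question against 20th-century data.
```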

The problem with that model, though, is that if you look backwards, we’ve seen tons of automation, like sugar beets or so many other things, robots in automobile manufacturing, and yet we didn’t see the capital share go up; in the 20th century it was very, very steady. So this simple model just wouldn’t really fit our normal understanding of automation, and it’s not clear that it’s quite the right model. So how can we repair it? A simple way to repair it, one idea that we developed in the paper, is to introduce the so-called Baumol’s cost disease, which is that the better you get at a task, the less you spend on it. So as you automate more tasks, maybe the capital share wants to go up, but something else also happens. If I automate a task, like collecting sugar beets, what can I do? I can start throwing a lot more capital at that task; I can keep getting more and more machines for doing sugar beets. Moreover, the capital I put at the task might get better and better: I first use a pretty rudimentary type of capital, and eventually very fancy machines are introduced, computers, and then computers get faster. If you throw more capital or better capital at it, what’s going to happen? Well, you’re going to get more productive at getting sugar beets or doing computation at NASA, and so the cost of doing the task is going to drop. But if the cost drops, and things are kind of competitive in the market, the price will also drop. So what’s going on? You can do it at greater quantity, but the price of the task you’re performing will fall. So what’s the share in GDP? Well, the quantity is going up, but the price is falling. If the price is falling fast enough, the share in GDP will actually go down, even though you do more of it. So you get more sugar beets, but the price of sugar beets plummets, and sugar beets as a share of GDP is actually declining. And then what happens is that the non-automated, bottleneck tasks, the ones you’re not very good at, actually come to dominate more and more.
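Here is a small numerical sketch of that mechanism, assuming two tasks combined by a CES aggregator with elasticity of substitution below one (so the tasks are complements); the parameter values are illustrative, not from the paper.

```python
# Baumol's cost disease in miniature: two tasks combined with a CES
# aggregator in which tasks are complements (elasticity sigma < 1).
# As the automated task gets more productive, its quantity rises but its
# price falls faster, so its share of GDP shrinks. Illustrative sketch.

sigma = 0.5                     # elasticity of substitution < 1: complements
rho = (sigma - 1) / sigma       # CES exponent, here rho = -1

y2 = 1.0                        # non-automated task output, stagnant
for A1 in [1, 10, 100, 1000]:   # productivity of the automated task
    y1 = A1 * 1.0               # more / better capital -> more task output
    # Under competitive pricing, task i's expenditure share is y_i^rho / sum
    share1 = y1**rho / (y1**rho + y2**rho)
    print(f"automated-task productivity {A1:5}: GDP share {share1:.3f}")
# The automated task's share tends to zero: the bottleneck task dominates.
```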

If you look back over 20th century history, or back to the Industrial Revolution, we see that agriculture and manufacturing have had rapid productivity growth and lots of apparent automation, like sugar beets, and yet agriculture is a steadily dwindling share of GDP, and manufacturing’s GDP share also seems to be going down. So what you get good at, what you automate, interestingly becomes less important with time as it starts to disappear in the overall economy. We’re left with things like health services, education services, government services. This is Baumol’s cost disease point: the things that we find hard to improve, the hard stuff, actually come to take on a larger and larger share of GDP. And if we can’t improve that, then our chances for growth dwindle, because it matters so much to GDP.

So in this view, the capital share is a balance between automating more tasks, which tends to make the capital share go up, and the expenditure share of each automated task declining, which tends to make it go down. So one model we offer in this paper is what we can call “more of the same”: maybe that’s what AI is, maybe AI is just more balanced growth. We keep automating more and more tasks, but they keep becoming a dwindling share of the economy, and we never automate everything. And you can actually show, as we do in the paper, a model where even though a greater and greater share of the set of tasks is being done by capital equipment and artificial intelligence, and a tinier and tinier share is being done by labor, labor still actually gets, say, two thirds of GDP; it still gets the same historical number. And again, why is that? It’s because all the capital stuff is doing more and more tasks, but its price is plummeting, because we’re so good at it. And you’re left with just a small set of tasks being done by labor, which is paid enormously for them. And that may be what’s going on in the economy. That’s certainly consistent with what’s been going on in the 20th century, to a first order, without overstating the case; it’s broadly consistent with the stylized facts of growth. But that would suggest AI is, again, just more of the same. We just keep automating.

Here’s a simulation from our paper. This is steady state growth. If you look on the x axis, we’re looking over five centuries: you get steady state growth, even given what’s happening with automation. Here’s the green line: you’re ultimately automating almost everything, just sort of slowly, and you never quite get to the end. And you just get constant growth, and you can get a constant capital share, not a rising capital share. Actually, this is an idea that I’ve been developing in a new paper, which is almost done, seeing how far we can go along this line.

Okay, but let’s take a different tack, because a lot of people who observe artificial intelligence are excited by the possibility that maybe it will accelerate growth, and many futurists make claims that we could even get some massive acceleration, something like a singularity. So we explore that in this paper as well: what would you have to believe for this to happen? We consider two different typologies of a growth explosion. What we call a type one growth explosion is where the growth rate departs from this steady state early 21st century experience and we see a slow acceleration in growth, maybe to very, very high levels. The other would be a type two, where we mean a singularity in the literal, mathematical sense, where you go to infinity in productivity and income at some finite point in time in the future. Surprisingly, using sort of standard growth reasoning and automation, you can get either of those outcomes. Alright, so the first one is a simple example. There are more, but one example of the first one is when you do achieve complete automation, so not just automating at a rate and never quite finishing; now we’re going to fully automate. Here’s my first equation: Y = AK, where Y is GDP and K is capital. That’s the automation capital: all the combine harvesters, and the supercomputers, and the AI. And A is the quality of the capital, the productivity of one unit of capital. So this is fully automated. In other words, there’s no labor there, there’s no L; labor is now irrelevant to production of GDP, we can do the whole thing just with machines. That’s what that’s saying: it just depends on K and the quality of the capital, which we call A. If you look at that, the growth rate of Y is going to be the growth rate of A, the technology level, plus the growth rate of capital. Now, the thing about capital, which is really interesting and different from labor, which Chad’s going to be going over in his paper, is that with capital, you can keep making more and more of it, right? Because of how you make capital: you invest in it, you build it, and that comes out of GDP. So think about this equation: if I push up capital, I get more output, and then with more output, I can invest more. And more importantly, if I push up the level of technology, I get more and more for every unit of capital; that increases GDP, so I can invest more and keep building more capital. So the growth rate actually turns out to be what’s below; I’m ignoring depreciation. But basically, you can see that as long as you can keep pushing up the level of technology, so you keep improving the AI, you keep improving computers, the growth rate is going to track with A; it’s going to keep going up and up and up. And this is a type one growth explosion. That’s why it’s called an AK model; it’s an early standard model in endogenous growth theory. If we can automate everything, this suggests, in fact, that we can have a very sharp effect on the growth rate. That’s one view, a very strong one, of what AI might do.
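As an illustration of that logic, here is a minimal simulation of the fully automated AK economy just described, with investment out of output and depreciation ignored; the saving rate and the growth rate of A are arbitrary assumptions, not values from the paper.

```python
import numpy as np

# Type I growth explosion sketch: fully automated production Y = A*K,
# investment K' = s*Y, depreciation ignored (as in the talk). The growth
# rate of output is g_A + s*A, so it rises without bound as A improves.
# Saving rate s and technology growth g_A are arbitrary illustrations.

s, g_A = 0.2, 0.02
A, K, dt = 1.0, 1.0, 0.1

for step in range(1, 2001):                  # simulate 200 "years"
    Y = A * K
    K += s * Y * dt                          # capital accumulated from output
    A *= np.exp(g_A * dt)                    # steady technological improvement
    if step % 500 == 0:
        print(f"t = {step * dt:5.0f}: growth rate of Y ~ {g_A + s * A:.2f}")
# The growth rate keeps climbing with A: acceleration, not a steady state.
```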

Interestingly, another place to put AI, as I alluded to in the very beginning, is into creativity and innovation itself. And if you do that, things can really take off. Alright, so this is a knowledge production function: A-dot is the rate of change of the level of technology, the quality of the capital. And if I fully automate how we produce that, again there’s no labor in this equation; it just depends on capital, and then the state of technology itself, A. And that’s going to act a lot like the second equation, which is that the growth in A is going to depend on the level of A raised to some parameter phi. And that’s like positive feedback: I push up A, growth in A goes up, which causes growth in Y to go up, and it keeps going like this, okay? And if you solve the differential equation, it does produce a true mathematical singularity: there will be some point in time t-star, which is definable, at which we achieve infinite productivity. All right. Now, maybe that sounds like a fantasy. And it would be a fantasy if there are certain obstacles. I’ll just go very quickly through a couple.
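To see the knife-edge role of phi, here is a sketch that integrates a growth law of the form dA/dt proportional to A to the power one plus phi, with the fully automated idea-producing inputs folded into a constant; all values are assumptions for illustration. With phi above zero the path explodes in finite time; with phi below zero growth keeps slowing, which anticipates the next point.

```python
import numpy as np

# Knowledge production with feedback: dA/dt = c * A**(1 + phi), i.e. the
# growth rate of A is proportional to A**phi, with the fully automated
# idea-producing inputs folded into the constant c. Values are illustrative.
# phi > 0: positive feedback, finite-time singularity.
# phi < 0: "fishing out", growth keeps slowing, no singularity.

def growth_path(phi, c=0.05, A0=1.0, dt=0.01, T=100.0):
    A, t = A0, 0.0
    while t < T:
        A += c * A**(1.0 + phi) * dt
        t += dt
        if not np.isfinite(A) or A > 1e12:
            return t, A               # knowledge has effectively exploded
    return T, A                       # no explosion within the horizon

print("phi = +0.5:", growth_path(+0.5))   # blows up near t* = 1/(phi*c) = 40
print("phi = -0.5:", growth_path(-0.5))   # stays modest; growth slows forever
# Closed form for phi > 0: A(t) = (A0**(-phi) - phi*c*t)**(-1/phi),
# which diverges at t* = A0**(-phi) / (phi * c).
```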

One obstacle is that you just simply can’t automate everything, right? Both of those models assume you can get to full automation. Maybe automation there is actually very hard. Maybe it was easy to automate sugar beets, but there are just certain cognitive tasks, for example with regard to AI, that are going to be very, very hard to automate. If we never get to full automation, we can still get growth to go up, but we’re never going to get these kinds of singularities in these models, in their simplest form. So if you think that there are some kinds of bottleneck tasks that we can’t automate, then we’re not going to get these labor-free, full-automation singularities. You have to believe, to some extent, that we can truly automate all these things. And of course, it’s an open question with AI how far it can go, in goods and services production and in sort of creative, innovative activity.

A second constraint, and in some sense the latter two constraints come from the universe itself, concerns the differential equation at the top. If that parameter phi is greater than zero, it will give you a singularity: fully automate idea production and you will get one in finite time. But the question then is really whether we believe that parameter phi is actually larger than zero. What does that say? If it’s greater than zero, then when I increase A, when I increase the level of technology in the economy, I make future growth faster. But if phi is less than zero, then when I raise the level of existing technology, I make future growth slower; it takes away that positive feedback loop, and then you don’t get a singularity. And there are good reasons to think that phi might be less than zero. We don’t know, but there are reasons to think it is, because there’s only so many good ideas in the universe, and we came up with calculus, we came up with the good ones early, and the remaining ones are hard to discover, or there just aren’t that many good ones left. So if you think we’re kind of fishing out the pond: think of AI as changing the fishermen. We get better fishermen on the edge of the pond, but if the pond itself is running out of fish, the big fish for us being new ideas, it doesn’t matter how good your fishermen are; there’s nothing left in the pond to catch. And then there’s another idea, which I’ve called the burden of knowledge. But regardless, there are ideas in the existing economic growth literature about science and innovation that suggest phi may be less than zero, and that’s just going to turn off that singularity.

And then the third one, which is somewhat related, is that there just might be bottleneck tasks. This comes back to Baumol’s cost disease reasoning, but more at a task level. So, for example, let’s say that GDP here is actually a combination of the output of all these tasks, and in the most simple form, let’s say it’s the minimum. So this is a real bottleneck: you’re only as good as your weakest link. It’s a simple version of Baumol’s cost disease. If it’s the min function, it doesn’t matter how good you get at every task; the only thing that matters is how good you are at your worst task. In other words, we might be really, really good at agriculture, but at the end of the day, we’re really bad at something else, and that’s what’s holding us back.
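A tiny sketch of that weakest-link logic, with made-up task names and numbers:

```python
# Bottleneck aggregation in its starkest form: Y = min over task outputs.
# Improving the tasks you are already good at does nothing once another
# task binds. Purely illustrative names and numbers.

tasks = {"computing": 1.0, "transport": 1.0, "health services": 1.0}

for boost in [1, 10, 1000, 10**6]:
    outputs = dict(tasks)
    outputs["computing"] = float(boost)   # Moore's-law-style progress in one task
    Y = min(outputs.values())             # only the weakest link matters
    print(f"computing productivity x{boost:>7}: GDP = {Y}")
# GDP never moves: growth is set by the essential task that is hard to improve.
```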

I think that this is actually quite instructive. Think about Moore’s Law: people get so excited about Moore’s Law and computing, and a lot of people who believe in singularities are staring at the Moore’s Law curve. And it’s just incredibly dramatic: exponential, rapid, rapid increase in productivity, which is mind-boggling in a way. At the same time, Moore’s Law has been going on for a long time, and if you look at economic growth, we don’t see an acceleration. If anything, we probably see a slowdown. And that suggests that no matter how good you get at computers, there are other things holding us back; it still takes as long to get from one point on a map to another based on available transportation technologies, and that’s not really changing. I go back to the Baumol theme: if things really depend on what is essential but hard to improve, we can take our computing productivity to infinity, literally, and it just doesn’t matter. It’ll help, it’ll make us richer, it’s good. But it won’t fundamentally change our growth prospects unless we can go after the hard problems.

To conclude, these are a whole series of models; obviously, we do this at much greater length in the paper, if you’d like to read it. You can put AI in the production of goods and services. If you can’t fully automate, you just kind of slowly automate, and it looks like more of the same; that’s sort of a natural way to go. But if you can get to full automation, where you don’t need labor anymore, you can get a rapid acceleration in growth, through what we call a type one growth explosion. When you put AI in the ideas production function, in the creation of new knowledge, you can get even stronger growth effects, and that, in fact, could even lead to one of these true mathematical singularities, sort of as in science fiction. But there are a bunch of reasons in both cases to think that we might be limited: because of automation limits; because of search limits in that creative process, with regard to the knowledge production function; or, more generally, in either setting, because of natural laws. I didn’t say a lot about it, but the second law of thermodynamics seems like a big constraint on energy efficiency, one that we’re actually pretty close to with current technology. And if energy matters, then that’s going to be a bottleneck, even if we can get other things to skyrocket in terms of productivity. And so a theme that Chad and I certainly came to in writing this paper was the kind of interesting idea that ultimately growth seems determined, potentially, not by what you are good at, but by what is essential yet hard to improve. That is important to keep in mind: when we get excited about where we are advancing quickly, and then go back to the aggregate numbers and don’t see much progress, this is a pretty useful way to frame that and begin to think about it. Maybe we should be doing a lot of thinking about what we’re bad at improving, and why that is, if we really want to understand future growth. Okay, so I went pretty quickly, but hopefully I didn’t spill over too much beyond my time. I look forward to the discussions; thanks, Rachael and Phil, in advance. I look forward to Chad’s comments as well. Thank you.

Anton Korinek  24:27

Thank you, Ben. The timing was perfect. And to all our participants, let me invite you to submit questions through the Q&A field at the bottom of the screen. After all the presentations, we’re going to continue the event with a discussion of the points that you are raising. And incidentally, to the speakers: if there are some clarification questions, for example, where you can type a quick response, feel free to respond directly in the Q&A box.

Let me now turn it over to Chad. Chad is the STANCO 25 Professor of Economics at the Stanford Graduate School of Business. He is noted for his research on long-run economic growth. In particular, he has examined theoretically and empirically the fundamental sources of growth in incomes over time and the reasons underlying the enormous differences in standards of living across countries. In recent years, he has used his expertise in macroeconomic methods to study the economic causes behind the rise in health spending and top income inequality. He is the author of one of the most popular textbooks of Macroeconomics, and his research has been published in the top journals of economics. Chad, the floor is yours.

Chad Jones  25:50

Wonderful, thanks very much, Anton. It’s really a pleasure to be here. I think Anton did a great job of introducing this session and pairing these two papers together. As he said, a lot of growth theory historically looked back and tried to understand how constant exponential growth could be possible for 100 years. The first paper that Ben presented looked at automation, artificial intelligence, and possibilities for growth rates to rise and even explode. This paper is going to look at the opposite possibility and ask: could there be an end of economic growth? I think all these ideas are worth exploring. And I guess my general perspective is that part of the role of economic theory is to zoom in on particular forces and study them closely. Then, at the end of the day, we can come back and ask, well, how do these different forces play against each other? So that’s kind of the spirit of this paper.

So a large number of growth models work this way: basically, people produce ideas, and those ideas are the engine of economic growth. The original papers by Paul Romer, by Aghion and Howitt, and by Grossman and Helpman work this way, and so do the sort of semi-endogenous growth models that I’ve worked on, along with Sam Kortum and Paul Segerstrom. Basically, all idea-driven growth models work this way: people produce ideas and ideas drive growth. Now these models typically assume that population is either constant or growing exponentially, and for historical purposes, that seems like a good assumption. An interesting question to think about, though, is what does the future hold? From this perspective, I would say that before I started this paper, my view of the future of global population, which I think is kind of the conventional view, was that it was likely to stabilize at 8 or 10 billion people a hundred years from now, or something like that. Interestingly, there was a book published last year by Bricker and Ibbitson called Empty Planet. And this book made a point that, after you see it, is very compelling and interesting. They claim that maybe the future is actually not one where world population stabilizes; maybe the future is one where world population declines, maybe the future is negative population growth. And the evidence for that is remarkably strong, I would say, in that high-income countries already have fertility rates that are below replacement. The total fertility rate is sort of a cross-sectional measure of how many kids women are having on average. And obviously two is a special number here: if women are having more than two kids on average, then populations tend to rise; if women are having fewer than two kids on average, then the population will decline. Maybe it’s 2.1 to take into account mortality, but you get the idea. The interesting fact highlighted by Bricker and Ibbitson, and well known to demographers, is that fertility rates in many, many countries, especially advanced countries, are already below replacement. So the fertility rate in the US is about 1.8, in high-income countries as a whole 1.7, China 1.7, Germany 1.6, and Japan, Italy and Spain even lower, 1.3 or 1.4. So in many advanced countries, fertility rates are already well below replacement. And if we look historically, we all kind of know this graph qualitatively: fertility rates have been declining. Take India, for example. In the 1950s and 60s, the total fertility rate in India was something like six; women had six kids on average. Then it fell to five and then to four and then to three, and the latest numbers in India, I think, are 2.5 or 2.4. The perspective you get from this kind of graph is, well, if we wait another decade or two, even India may have fertility below replacement. Fertility rates have been falling all over the world, and maybe they’re going to end up below two.

So, the question in this paper is: what happens to economic growth if the future of population growth is negative rather than zero or positive? The way the paper is structured, it considers this possibility from two perspectives. First, let’s just feed in exogenous population growth: let’s just assume population growth is negative half a percent per year forever, feed that into the standard models, and then see what happens. And the really surprising thing that happens is you get a result that I call, in honour of the book, the empty planet result. And that is that not only does the population vanish with negative population growth, but while that happens, living standards stagnate. So this is quite a negative result: living standards stagnate for a vanishing number of people. And it contrasts with the standard result that all the growth models I mentioned earlier have, which I’m now going to call an expanding cosmos result. It’s basically a result where you get exponential growth in living standards at the same time as the population grows exponentially. So on the one hand, you have this sort of traditional expanding cosmos view of the world. And what this paper identifies is: hey, if these patterns in fertility continue, we may have a completely different kind of result, where instead of living standards growing for a population that itself is growing, living standards stagnate for a population that disappears.

Then the second half of the paper, and I only have a chance to allude to how this works, says: well, what if you endogenize fertility? What if you endogenize population growth? Do you learn anything else? And you can get an equilibrium that features negative population growth; that’s good, we can get something that looks like the world. And the surprising result that comes out of that model concerns even a social planner: ask what’s the best you can do in this world, choosing the allocation that maximizes the utility of everyone in the economy. (And with population growth, the question of who counts as “everyone” is itself at issue.) The result there is that a planner who prefers this expanding cosmos result can actually get trapped in the empty planet outcome. That’s a surprising kind of result; it might seem like it doesn’t make any sense at all, but I’ll try to highlight how it can happen.

I’m going to skip the literature review in the interest of time, I’ve already kind of told you how I’m going to proceed. Basically, what I want to do is look at this negative population growth in the sort of classic Romer framework, and then in a semi endogenous growth framework, and then go to the fertility results.

Let me start off by illustrating this empty planet result in a set of traditional models. We make one change in traditional models: instead of having positive population growth or zero population growth, have negative population growth, and see what happens. That’s the name of the game for the first half of the paper. To do that, let me just remind you what the traditional results are in a really simplified version of the Romer model. I’m sure you all know, but this is based on the paper for which Romer won the Nobel Prize in Economics a couple of years ago, so this is a very well-respected, important model in the growth literature. The insight that got Romer the Nobel Prize was the notion that ideas are nonrival. Ideas don’t suffer the same kind of inherent scarcity as a good. If there’s an apple on the table, you can eat it, or I can eat it. Apples are scarce, bottles of olive oil are scarce, coal is scarce, a surgeon’s time is scarce. Everything we traditionally study in economics is a scarce factor of production, and economics is the study of how you allocate those scarce factors. But ideas are different. Once we’ve got the fundamental theorem of calculus, one person can use it, a million people can use it, a billion people can use it, and you don’t run out of the fundamental theorem of calculus the same way you’d run out of apples or computers.

And so that means that production is characterized by increasing returns to scale: there are constant returns to objects, here just people, and therefore increasing returns to objects and ideas taken together. This parameter sigma being positive measures the degree of increasing returns to scale. Then, where do ideas come from? In the Romer model, there’s a basic assumption that says that each person can produce a constant proportional improvement in productivity. So the growth rate of knowledge is proportional to the number of people. And then the Romer model just assumes that population is constant; this is the assumption I’m going to come back and relax in just a second. So if you solve this model, income per person, lowercase y, which is just GDP divided by the number of people, is proportional to the stock of ideas, the amount of knowledge. Each improvement in knowledge raises everyone’s income because of nonrivalry; that’s the deep Romer point. And the growth rate of income per person depends on the growth rate of knowledge, which is proportional to population. So this is a model where you can get constant exponential growth in living standards with a constant population. And if you look at this equation, you realize: well, if there’s population growth in this model, that gives us exploding growth in living standards. We don’t see exploding growth in living standards historically, and we do see population growth, so there’s some tension there. That’s what the semi-endogenous growth models, which I’ll come back to in a second, are designed to fix.
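In symbols, the simplified Romer setup being described can be sketched roughly as follows; this notation is a reconstruction from the talk, not copied from the slides.

```latex
\begin{align*}
Y_t &= A_t^{\sigma} N_t
  && \text{goods production; } \sigma > 0 \text{ measures increasing returns} \\
\frac{\dot{A}_t}{A_t} &= \theta N_t
  && \text{each person yields a proportional improvement in knowledge} \\
y_t &\equiv \frac{Y_t}{N_t} = A_t^{\sigma}
  && \text{income per person rises with the stock of ideas} \\
g_y &= \sigma\,\theta \bar{N}
  && \text{constant population } \bar{N} \Rightarrow \text{constant exponential growth}
\end{align*}
```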

In the meantime, what I want to do is change this assumption that population is constant, and replace it with an assumption that the population itself is declining at a constant exponential rate. So let eta denote this rate of population decline; think of eta as 1% per year or half a percent per year, so the population is falling by half a percent per year. Then what happens in this model? Well, if you combine the second and third equations, you get a law of motion for knowledge, and this differential equation is easy to integrate. It says the growth rate of knowledge is itself falling at a constant exponential rate. And not surprisingly, if the growth rate is falling exponentially, then the level is bounded. That’s what happens when you integrate this differential equation: you get the result that the stock of knowledge converges to some finite upper bound A*. And since knowledge converges to some finite upper bound, income per person does as well. You can calculate these as functions of the parameter values, and it’s interesting to do that; I do a little bit of that in the paper. But let me leave it for now by just saying: just by changing the assumption that population was constant and making population growth negative, you get this empty planet result. You get that living standards asymptote; they stagnate at some value, y*, as the population vanishes. That’s the empty planet.
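The integration being alluded to can be sketched in the same reconstructed notation:

```latex
\begin{align*}
N_t &= N_0 e^{-\eta t}, \qquad
\frac{\dot{A}_t}{A_t} = \theta N_0 e^{-\eta t}
  && \text{growth of knowledge falls exponentially} \\
\log A_t &= \log A_0 + \frac{\theta N_0}{\eta}\left(1 - e^{-\eta t}\right)
  && \text{integrating from } 0 \text{ to } t \\
A_t &\longrightarrow A^{*} = A_0 \exp\!\left(\frac{\theta N_0}{\eta}\right),
\qquad y_t \longrightarrow y^{*} = \left(A^{*}\right)^{\sigma}
  && \text{knowledge and living standards are bounded}
\end{align*}
```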

Now let me look at this other class of models, the semi-endogenous growth class of models. What’s interesting is that in the presence of positive population growth, the Romer-style models and the semi-endogenous growth models lead to very different results, while with negative population growth, they yield very similar outcomes. Okay. So again, let me go through it in the same kind of order as before: let me present the traditional result with positive population growth, and then change that assumption and show you what happens when population growth is negative. Same goods production function; we’re taking advantage of Romer’s nonrivalry here. And I’m making basically one change (if you want, set lambda equal to one; it doesn’t really matter): I’m introducing what Ben described in the earlier paper as this sort of “ideas are getting harder to find” force, the fishing-out force. Beta measures the rate at which ideas are getting harder to find. It says the growth rate of knowledge is proportional to the population, but the more ideas you discover, the harder it is to find the next one; beta measures the degree to which it’s getting harder. So think of beta as some positive number, and then let’s put in population growth at some positive, exogenous rate. Same equation: income per person is proportional to the stock of ideas raised to some power, and the stock of ideas is itself proportional to the number of people. And that’s an interesting finding here: the more people you have, the more ideas you produce, the more total stock of knowledge you have, and therefore the richer the economy is. People correspond to the economy being rich in the long run, by having lots of ideas, not to the economy growing rapidly. That’s what happens here, versus the earlier models. And then, if you take this equation, and you take logs and derivatives of it, it says that the growth rate of income per person depends on the growth rate of knowledge, which in turn depends on the growth rate of people. The growth rate of income per person is proportional to the rate of population growth, where the factor of proportionality is essentially the degree of increasing returns to scale in the economy. And so in this model you can have positive population growth being consistent with constant exponential growth in living standards. So this is the expanding cosmos result: we get exponential growth in living standards for a population that itself grows exponentially. Maybe it fills the earth, maybe it fills the solar system, maybe it fills the cosmos; that’s the kind of taken-to-the-implausible-extreme result of this model.
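The semi-endogenous version, again in reconstructed notation (lambda kept for generality, as he notes):

```latex
\begin{align*}
\frac{\dot{A}_t}{A_t} &= \theta\,\frac{N_t^{\lambda}}{A_t^{\beta}}
  && \beta > 0 \text{: ideas are getting harder to find} \\
A_t &\propto N_t^{\lambda/\beta}
  && \text{along the balanced path: more people } \Rightarrow \text{ more ideas} \\
g_y &= \sigma\, g_A = \frac{\sigma\lambda}{\beta}\, n
  && \text{per-capita growth proportional to population growth } n > 0
\end{align*}
```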

Let’s do the same thing: suppose we change that assumption of positive population growth to one of negative population growth, which again, kind of remarkably, I would say, looks like the future of the world we live in, based on the evidence I presented earlier. So once again, we’ve got this differential equation. You substitute in the negative population growth equation again, and you see that not only does the growth rate of knowledge decline exponentially because of this term, but it falls even faster than exponentially. So of course the stock of knowledge is still going to be bounded. This is another differential equation that’s really easy to integrate, and you get that, once again, the stock of knowledge is bounded. You can play around with the parameter values and do some calculations; in the interest of time, let me not do that.
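A small numerical sketch of both models under negative population growth, with arbitrary illustrative parameters, shows the knowledge stock flattening out either way:

```python
import numpy as np

# Empty planet sketch: feed negative population growth into both idea
# production functions and watch the stock of knowledge flatten out.
# theta: idea productivity, eta: rate of population decline, beta: the
# "ideas getting harder to find" parameter. Values are illustrative.

theta, eta, beta = 0.05, 0.005, 0.5
dt, steps = 0.1, 20000                   # a long horizon, coarse Euler steps

A_romer, A_semi, N = 1.0, 1.0, 1.0
for _ in range(steps):
    A_romer += theta * N * A_romer * dt               # Romer: A'/A = theta*N
    A_semi  += theta * N * A_semi**(1.0 - beta) * dt  # semi-endogenous variant
    N       *= np.exp(-eta * dt)                      # population shrinks at eta

print(f"population remaining: {N:.5f}")
print(f"Romer knowledge stock ~ {A_romer:.0f} "
      f"(closed-form bound exp(theta/eta) = {np.exp(theta / eta):.0f})")
print(f"semi-endogenous knowledge stock ~ {A_semi:.1f} (also bounded)")
# Living standards rise with A, so they stagnate while the population vanishes.
```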

Let me instead just sort of summarize what we see. First, as a historical statement, fertility has been trending downward: we went from five kids to four kids to three kids to two kids, and now even fewer in rich countries. An interesting thing about that is that from the microeconomic perspective, from the perspective of the individual family, there’s nothing at all special about having more than two kids or fewer than two kids. It’s an individual family’s decision, and some families decide on three, some families decide on two, one, zero, whatever. There’s nothing magic about above two versus below two from an individual family’s perspective. But the macroeconomics of the problem makes this distinction absolutely critical. Because obviously, if on average women choose to have slightly more than two kids, we get positive population growth, whereas if women decide to have slightly fewer than two kids, we get negative population growth. And what I’ve shown you on the previous four or five slides is that that difference makes all the difference in the world to how we think about growth and living standards in the future. If there’s negative population growth, that could condemn us to this empty planet result, where living standards stagnate as the population disappears, instead of this world we thought we lived in, where living standards were going to keep growing exponentially along with the population. And so this relatively small difference matters enormously when you project growth forward. The sort of fascinating thing about it is, it seems like, as an empirical matter, we’re much closer to the below-two view of the world than we are to the above-two view of the world. So maybe this empty planet result is something we should take seriously. That, I would say, is the most important finding of the paper.

Let me go to the second half of the paper, just very briefly; I won’t go through the model in detail. It’s admittedly subtle and complicated, and took me a long time to understand fully, but I do want to give you the intuition for what’s going on. I write down a model where people choose how many kids to have. And in the equilibrium of this model, the idea part of kids is an externality. We have kids because we love them, and in my simple model, people ignore the fact that their kids might be the next Einstein, or Marie Curie, or Jennifer Doudna, I guess, now with the Nobel Prize for CRISPR, and that they might create ideas that benefit everyone in the world. The individual families ignore the fact that their kids might be Isaac Newton. The planner, by contrast, recognizes that having kids creates ideas; social welfare takes that into account. And so the planner wants you to have more kids than you and I want to have; there’s an externality in the simple model along those lines. Admittedly, this is a modeling choice. People have been writing down these kinds of fertility models for a while, and there are lots of other forces, and you can get different results. I don’t want to claim this as a general result; rather, I see it as illustrating an important possibility. As I mentioned, the key insight that you get out of studying this endogenous fertility model is that the social planner can get trapped in the empty planet, even a social planner who wants this expanding cosmos, if they’re not careful. I’ll try to say what I mean by “if they’re not careful”. So how to understand that?

In this model, population growth depends on a state variable x, which you can think of as knowledge per person: it’s A to some power divided by N to some power, but just call it knowledge per person. And we can parameterize the model so that in the equilibrium, women have fewer than two kids, and so population growth is negative. If population growth is negative, look at what happens to x. I’ve already told you that A converges to some constant, and N is declining, so x is going off to infinity. So in the equilibrium, x is rising forever. What about in the optimal allocation, the allocation that maximizes some social welfare function? Well, the planner is going to want us to have kids not only because we love them, but because they produce ideas that raise everyone’s income. The key subtlety here is: suppose we start out in the equilibrium allocation, where x is rising and population growth is negative, and ask, when do we adopt the good policies that raise fertility? The planner wants you to have more kids. Do we adopt the policies that raise fertility immediately? Do we wait a decade? Do we wait 50 years, or 100 years? That’s the “if you’re not sufficiently careful”. The point is, if society waits too long to switch to the optimal rate of fertility, then x is going to keep rising, and the idea value of kids gets small as x rises. Because remember, x is knowledge per person: as x rises, we have tons of knowledge for every person in the economy, so the marginal benefit of another piece of knowledge is getting smaller and smaller, and the idea value of kids is getting smaller and smaller. And because we’ve already said that the loving-your-kids force on its own leads to negative population growth, well, even if you add a positive idea value of kids, the planner might still want negative population growth if you wait too long. If you wait for the idea value of kids to shrink sufficiently low, then even the planner who, ex ante, preferred the expanding cosmos gets trapped by the empty planet. So what this says is that it’s not enough to worry about fertility policy; we have to worry about it sooner rather than later. And here’s just a diagram.

I think I’m almost out of time, so let me just conclude. What I take away from this paper is that fertility considerations are likely to be much more important than we thought. This distinction between slightly above two and slightly below two, which from an individual family’s standpoint just barely seems to matter, is a big deal from an aggregate, macroeconomic standpoint. It’s the difference between the expanding cosmos and the empty planet. As I mentioned when I started, this is not a prediction; it’s a study of one force. But I think it’s much more likely than I would have thought before I started this project. And there are other possibilities. Of course, we’ve talked about one, with AI producing ideas so that people aren’t necessary: important in my production function is that people are a necessary input; you don’t get ideas without having people, and maybe AI can change that. That’s something we should discuss in the open period. There are other forces: technology may affect fertility and mortality; maybe we end up reducing the mortality rate to zero, so that even one kid per person is enough to keep population growing, for example. Maybe evolutionary forces favor groups that have high fertility for some reason, maybe selecting for those genes, and so maybe this below-replacement world we look like we’re living in is not going to happen in the long run. But anyway, I think I’m out of time; let me go ahead and stop there.

Anton Korinek  48:33

Thank you very much, Chad. And let me remind everybody of the Q&A again. Our first discussant of these ideas is Rachael Ngai. Rachael is a professor of Economics at the London School of Economics and a research associate at the Center for Economic Performance, as well as a research affiliate at the Center for Economic Policy Research. Her interests include macroeconomic topics such as growth and development, structural transformation, as well as labor markets and housing markets. Rachael, the floor is yours.

Rachael Ngai  49:11

Thank you, Anton. Thank you very much for having me discuss these two very interesting papers. There’s a lot of interesting content in both, but because of time, what I will focus on is the aspect related to the future of economic growth and the roles played by artificial intelligence and declining population growth. Now, when we talk about artificial intelligence, there are many aspects: political aspects, philosophical aspects, which I will not have time to talk about. Today, I will focus purely on the implications for the future of economic growth.

Okay, so economic growth is about the improvement in living standards. When we think about the fundamental source of growth, as both Ben and Chad point out, it’s about technological progress. Technological progress can happen through R&D, or through experience: when we are doing something, we get better at doing it. But the key thing for technological progress is that it requires brain input. So far, for the last 2000 years or so, the main brain input has been the human brain. Here are some examples, already mentioned, of how research output has improved living standards for mankind over the last 2000 years. Now, Chad’s paper is very interesting, and it brings up something that is really important. Here is a figure that basically repeats what Chad has shown us, from the United Nations, about how many children women have. As you’ve seen, in high-income countries it has already fallen below the replacement ratio, which is about two, and for the world as a whole it is also falling. In fact, the United Nations predicts that in 80 years, population growth will be stagnant; there will be zero population growth. And that means that going forward, we may see negative population growth. What Chad has convincingly shown is that when that happens, we might get the empty planet result, which is the result that living standards will be stagnant and the human race will start to disappear.

And this is a really alarming result. The reason for it is that the private incentive for having children (we love children) does not take into account that children produce ideas that are useful for technological progress. So clearly, there’s a role for policy here, which Chad mentioned earlier as well: we could try to introduce some policies that stimulate people to have more children. And the problem is, if we wait too long, then the empty planet result cannot be avoided. So that is something really, really worrying.

That brings me to Ben’s paper, which gives the alternative scenario, which is to say: what if we have the following situation? Suppose we think of the human brain as basically like a machine, so that artificial intelligence can replicate the human brain. In fact, in Chinese, the word for computer translates as “electric brain”. So it’s really asking: can the electric brain replace the human brain? If it can, then we can avoid that stagnation, which is the empty planet result. And even more, we might be able to move through to a technological singularity, where the artificial intelligence can self-improve and growth can explode.

Now, I think we are all more or less convinced that the singularity result seems quite impossible, because one simple thing one can say is that many essential activities cannot be done by AI, and because of that, which is sometimes called the Baumol effect, you will not get a situation where growth explodes. So let me focus on whether AI can solve the problem that Chad mentioned, the stagnation result. How plausible is it, really, that AI can completely replace humans in generating technological progress, meaning that in the R&D production function we do not need humans anymore and can just have AI in it? How might that happen?

So here is a brief timeline of the development of artificial intelligence, which is quite remarkable; it started around 1950, and over the last 70 years a lot of progress has been made, with many great discoveries. But is it enough? And what should we look for in the future? There is a report by Stanford University, the Artificial Intelligence Index Report, and it shows a few points I want to highlight. One is that the human brain itself is still needed for improving AI. Over the last 10 years, from 2010 to 2019, published papers about artificial intelligence increased by 300%, and papers posted online before publication increased by 2,000%. So there has been a huge increase in the effort researchers are putting into improving AI, and at the same time we see many students choosing to go to university to study AI. It looks like we still need quite a lot of human brains devoted to making artificial intelligence capable of replacing the human brain. Progress is being made in many areas, but there are a lot of questions here. AI is good at finding patterns in observed data; that is basically how artificial intelligence works with big data. But can it really work like the human brain on intuition and imagination?

Now, on the right-hand side here, I took one example from this annual report: show a video to the machine and ask it to recognize what is going on. When you show a video of a high-activity task, for example Zumba dancing, the precision rate is very high; the machine picks up the activity very easily. But if you look at other activities, the hardest one shown here is drinking coffee. Presumably, when people enjoy their coffee, they do not make much distinctive movement, so there is no special characteristic for the machine to pick up easily: the precision rate is less than 10%, and there has been very little progress over the last 10 years. My take on this is that it will still take quite a long time for artificial intelligence to completely replace the human brain. And timing matters a lot: if the world is going to have zero population growth in 80 years, do we have enough time to make artificial intelligence replace the human brain? So when you think about future growth, here is the question: which is less costly and more likely, producing human brains or producing human-like artificial intelligence? Can we humans, with the help of artificial intelligence, actually create an Einstein-like artificial intelligence? To me, I don't know, it seems quite difficult. On the other hand, if we go back to Chad Jones's paper, it says we need policy to increase fertility, and that is not easy either. Women today face a trade-off between career concerns and having children. Childcare subsidies and maternity leave are costly policies, and most of the time they might not work. Now, of course, there are lots of theories about fertility; here I am just going to focus on a few things.

What is behind this? If you look historically, how could we have very high fertility in the past, something like five children per woman? A big role was played by family farms. On the right-hand side, here are some data showing how the fraction of women working on family farms has been declining over time. Family farms are very special: they create demand for children, because children can help on the farm, and they allow a woman to combine home production and work. But urbanization and structural transformation have come along with the disappearance of family farms. Today, when a woman goes to work, it really means leaving home, making it hard to combine home production and work. So look at home production. Here I show home production time per day and market production time per day, for women and for men: the first bar is women, the second bar is men, and these bars represent the world. What we see is really striking: women's home production and childcare time is triple men's. For every one hour of home production men do, women do three. That kind of picture may give young women pause when choosing whether to get married and have children, especially as women's education is rising and there is growing concern for gender equality.

So let me conclude with this on future fertility. I hope I have convinced you that artificial intelligence will take some time, but if we don't change anything, in 80 years population growth will go negative. We really need to think about what we can do about fertility; childcare subsidies and maternity leave will not be enough. One possibility that might help women choose to have more children is greater scope for outsourcing home production to the market, but that depends on the development of the service economy. Of course, social norms are important as well: the norm around the role of a mother can play a crucial role in a woman's decision to become one. But social norms themselves change over time, and they will respond to technology and policy. So there is some hope: if these things all work, perhaps we can reverse the fertility trend and bring it back above the replacement level before, or together with, the arrival of artificial intelligence. That is the hope for the future of growth. Thank you very much.

Anton Korinek  60:59

Thank you very much, Rachael. Our next discussant is Philip Trammell. Phil is an economist at the Global Priorities Institute at the University of Oxford. His research interests lie at the intersection of economic theory and moral philosophy, with a specific focus on the long term. As part of this focus, he is also an expert on long-run growth issues. And incidentally, he has written a recent paper on growth and transformative AI together with me, in which we synthesize the literature related to the theme of today's webinar. Phil, the floor is yours.

Phil Trammell  62:32

Thank you, Chad, Ben and Rachael. And thank you, Anton, for giving me this chance to see if I can keep up with the Joneses. Some of what I say will overlap with what's already been said, but hopefully I have something new to add. As Anton said at the beginning, when thinking about growth, economists are typically content to observe, as Kaldor first famously did, that growth has been roughly exponential at 2 to 3% a year since the Industrial Revolution, and so they assume that this will continue, at least over the timescales they care about. Sometimes they do this bluntly, by just stipulating an exogenous growth process going on in the background and then studying something else. But even when constructing endogenous or semi-endogenous growth models, that is, ones that model the inputs to growth explicitly, research and so on, a primary concern is usually to match this stylized description of growth over the past few centuries. For example, the Aghion, Jones and Jones paper that Ben presented is unusually sympathetic to the possibility of a growth regime shift and acceleration. But even so, it focuses less on scenarios in which capital becomes highly substitutable for labor in technology production, ones that overcome that Baumol effect, on the grounds that as long as the phi parameter Ben mentioned is positive, which I think the authors believed at the time, capital accumulation is enough to generate explosive growth, which is not what we've historically observed. Restrictions along these lines appear throughout the growth literature. As a result, alternative growth regimes currently seem to be off most people's radar. For example, environmental economists have to think about longer timescales than most economists, but they typically just assume exponential growth, or a growth rate that falls to zero over the next few centuries. A recent survey of economists and environmental scientists just asked when growth will end, as if that roughly characterized the uncertainty; of those with an opinion, about half said within this century, and about half said never. No one seems to have filled in a comment saying they thought it would accelerate or anything like that. And when asked why it might end, insufficient fertility wasn't explicitly listed as a reason, and no one seems to have commented on its absence.

But on a longer timeframe, accelerating growth wouldn't be ahistorical: the growth rate was far lower before the Industrial Revolution, and before the agricultural revolution it was lower still. Some forecasts on the basis of these longer-run trends have predicted continued acceleration of growth, sometimes in the near future; if growth multiplied by a factor of 20 again, it might be 40% a year or something. Furthermore, radically faster growth doesn't seem deeply theoretically impossible, I don't think. Lots of systems do grow very quickly: if you put mold in a petri dish, it'll multiply a lot faster than 2% a year, right?

So more formally, the Ben paper finds that you can get permanent acceleration under this innocent-seeming pair of conditions. First, you need capital that can start doing research without human input, or can substitute well enough to overcome that Baumol effect. And second, you need phi at least zero, so that the fishing-out effect is not too strong. Just to recap what phi at least zero means: when you have advanced tech, on the one hand it gets easier to advance further, because you have the aid of all the tech you've already developed; on the other hand, it gets harder, because you've already picked all the low-hanging fruit. Phi less than zero means the second effect wins out. So as you can see, these two conditions are basically a way of formalizing the idea of recursively self-improving AI leading to a singularity, and translating it into the language of economics.

That formalization is a great contribution in its own right, but the really nice thing about it is that it lets us test the requirements of the singularitarian scenario. As Ben noted, a recent paper estimates phi to be substantially negative, or in Chad's notation, beta to be positive, implying that even reproducing and self-improving robot researchers couldn't bring about a real singularity, a Type I or Type II one. But they could still bring about a one-time growth rate increase, as long as they can perform all the tasks involved in research.
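
To make the recap concrete, here is a minimal numerical sketch of the idea production function under discussion, A_dot = delta * S^lambda * A^phi, with the research input S fully automated and therefore set proportional to the technology level A itself. The functional form is the standard semi-endogenous one; the parameter values are illustrative, not estimates from any of the papers.

```python
# Minimal sketch (parameters illustrative, not estimated): the idea
# production function A_dot = delta * S**lam * A**phi, with the research
# input S fully automated and set proportional to A itself, i.e. S = A,
# which is the recursively self-improving AI case described above.
# If lam + phi > 1, A blows up in finite time (a genuine singularity);
# if lam + phi < 1, A keeps growing but the growth rate eventually falls.

def simulate(phi, lam=1.0, delta=0.1, A0=1.0, dt=0.001, T=30.0):
    """Euler-simulate the idea stock; stop early if it explodes."""
    A, t = A0, 0.0
    while t < T and A < 1e12:
        A += dt * delta * (A ** (lam + phi))  # S = A, so S^lam * A^phi = A^(lam+phi)
        t += dt
    return t, A

for phi in (+0.5, -0.5):
    t_end, A_end = simulate(phi)
    print(f"phi = {phi:+.1f}: A = {A_end:.3g} at t = {t_end:.2f}")
# phi = +0.5 (lam + phi = 1.5): A passes 1e12 well before t = 30, explosive.
# phi = -0.5 (lam + phi = 0.5): A is still modest at t = 30, no singularity.
```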

In any event, this is just one model; there are plenty of others. Anders Sandberg here put together a summary back in 2013 of what people had come up with at the time, and Anton and I did the same more recently to cover the past decade of economists' engagement with AI. But I think the most significant contribution on this front is the paper that Ben presented. It solidifies my own belief, for whatever little it's worth, that an AI growth explosion of one kind or another, even just a growth rate increase rather than a singularity, is not inevitable, but not implausible. It's at least a scenario we should have on our radar.

This is all very valuable for those of us interested in thinking about the range of possibilities for long-run growth. For those of us also interested in trying to shape how the long-run future might go, though, what we especially want to keep an eye out for are opportunities for very long-run path dependence, not just forecasting. In fact, I think almost a general principle for those interested in maximizing their long-term impact would be to look for systems with multiple stable equilibria that have very different social welfare levels, where we're not yet locked into one, and then to look for opportunities to steer toward a good stable equilibrium. So we have to ask ourselves: does the development of AI offer us any opportunities like this? If so, I don't think the economics literature has yet identified them. As Ben Garfinkel here has pointed out, a philanthropist who saw electric power coming decades in advance might not have found that insight decision-relevant; it just doesn't really help you do good. There could be long-term consequences of the social disruption AI could wreak, or of who first develops AI and, say, takes over the world. And most dramatically, if we do something to prevent AI from wiping out the human species, that would certainly be a case of avoiding a very bad and very stable equilibrium. But scenarios like these aren't really represented in the economics literature on AI.

By contrast, path dependency is a really clear implication of Chad's paper. We may have a once-and-forever opportunity to steer civilization from the empty planet equilibrium to the expanding cosmos equilibrium, by lobbying for policies that maintain positive population growth and thus maintain a positive incentive to fund research and fertility. To my mind, this is a really important and novel insight, and it would be worth a lot more papers tracing out more fully just under what conditions it holds. But I think it's pretty robust. The key ingredient is just that when there is too much tech per person, the social planner can stop finding it worthwhile to pay for further research. For the reasons Chad explained, fertility has proportional consumption costs: to bring about a proportional population increase, people have to give up a certain fraction of their time to have the children. But it would no longer produce proportional research increases, because there's this mountain of ideas you can hardly add much to in proportional terms. As long as this dynamic holds, you'll get that pair of equilibria.
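
One stripped-down way to see the threshold, offered as an illustration rather than the paper's exact setup: suppose each person contributes a fixed flow of ideas delta, so the proportional growth contributed by an extra person is delta/A, while raising fertility costs a constant fraction kappa of consumption.

```latex
% Illustrative only: a toy cost-benefit comparison, not the paper's setup.
% The proportional benefit of an extra person falls as the idea stock A_t
% grows, while the proportional consumption cost of fertility is constant:
\[
  \underbrace{\frac{\delta}{A_t}}_{\text{proportional benefit}}
  \;\ge\;
  \underbrace{\kappa}_{\text{proportional cost}}
  \quad\Longleftrightarrow\quad
  A_t \le A^{*} \equiv \frac{\delta}{\kappa}.
\]
% Once A_t exceeds the threshold A^*, fertility and research stop being
% worth funding and the economy settles into the stagnant equilibrium;
% below A^*, they remain worthwhile and growth continues. Hence the pair
% of equilibria, separated by how much tech there is per person.
```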

So for example, in the model, people's utility takes this quirky form you see here, where c is average consumption at a time and n is how many descendants people have alive at a time. But you might wonder: what if people are more utilitarian, perhaps number-dampened time-separable utilitarians like this? Well, if their utility function takes that form, as Chad points out in the paper, we actually get the same results, and you can see why: the two utility functions are basically just monotonic transformations of one another, so they represent the same preference ordering. Likewise, in the model, people generate innovation just by living, which is equivalent to exogenously stipulating that a constant fraction of the population works as researchers full time. But what if research has to be funded by the social planner, at the cost of having fewer people working on final good output and thus lower consumption? Well then, at least if my own scratch work is right, we still have our two stable equilibria, and in fact, in this case, the bad one stagnates even more fully: research can zero out even while people remain, because it's just not worth allocating any of the population to research as opposed to final good production.

Finally, somewhat as Rachael was saying, I think there's an important interaction between the models. If we're headed for the empty planet equilibrium, the technology level plateaus, but the plateau level can depend on policy decisions at the margin, like research funding or just a little more fertility, even if those don't break us out of the equilibrium. And the empty planet result doesn't hold if capital can accumulate costlessly and do the research for us. So maybe all that matters is making sure we make it over the AI threshold and letting the AI take it from there.

Well, to wrap up: if we care about the long run, we should consider a wider spectrum of ways long-run growth might unfold, not just those matching the Kaldor facts of the last few centuries. And if we care about influencing the long run, we should also look for those rare pivotal opportunities to change which scenario plays out. To simplify a lot, the Ben paper helps us with the former, showing how a growth singularity via AI may or may not be compatible with reasonable economic modeling. And the Chad paper helps us with the latter, showing a counterintuitive channel through which we could get locked into a low-growth equilibrium, somewhat ironically via excessive tech per person, and a policy channel that could avert it. He focuses on fertility subsidies; destroying technological ideas would do the trick too, because it would shrink the stock of ideas per person, but hopefully the future of civilization doesn't ultimately depend on longtermists taking to book burning. And yeah, hopefully all this paves the way for future research on how we can reach an expanding cosmos. Thank you.

Anton Korinek  76:19

Thank you, Phil, and thank you all for your contributions, and to everyone who has posted so many interesting questions in our Q&A. Luckily, many of them have already been answered in writing, because we are at the end of our allocated time. So let me give both of our speakers 30 seconds for a quick reaction to the discussion. Ben, would you like to go first?

Ben Jones  76:51

Sure, I will. Thanks, everyone, for all the great questions in the Q&A, and thanks, Rachael and Phil, for very interesting discussions of this pair of papers. I think the distinction of whether you can automate the ideas production function or not, and what we believe about that, determines which of these very different trajectories we end up on, and that's a super interesting question for research. I guess one last comment. The singularity-type people tell a story something like this: you get one algorithm that's as good as or better than a human, and because you then have huge increasing returns to scale from that invention, you can keep replicating it over and over again as instantiations on computing equipment, and you get very high input growth into the idea production function. That's where you get the really, really strong singularity; it's a more micro statement of what's going on. But the point Chad and I are making, another way to think about it, is that you're not going to replicate the human. It's sort of like: we had a slide rule, and then we had a computer; we have centrifuges; we've got automated pipetting. Research, just like production, is a whole set of different tasks, and probably what's going to happen is that we slowly continue to automate some of those tasks. The more you automate, the more you leverage the people who are left, because you can throw capital at the automated tasks. That path doesn't necessarily get you to singularities, but it potentially gets you past the point Chad is making. And I think this work collectively helps us think about where the rubber hits the road, in terms of what we have to believe and where the action will be for the long-run outcomes.
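
A minimal sketch of the task-by-task leverage Ben describes, assuming for illustration a symmetric Cobb-Douglas aggregator over a continuum of research tasks (a convenient functional form, not the papers' calibration): automated tasks share abundant capital, while the remaining tasks fall to the remaining researchers.

```python
# Illustrative sketch (not the papers' calibration): research as a unit
# continuum of tasks, a fraction beta of which is automated. Automated
# tasks share capital K; the rest share researcher labor L. A symmetric
# Cobb-Douglas aggregator over tasks then gives
#     Y = (K / beta)**beta * (L / (1 - beta))**(1 - beta).

def research_output(beta: float, K: float = 100.0, L: float = 1.0) -> float:
    """Research output when a fraction beta of tasks is automated."""
    if beta == 0.0:
        return L                      # no automation: labor does every task
    return (K / beta) ** beta * (L / (1.0 - beta)) ** (1.0 - beta)

for beta in (0.0, 0.5, 0.9, 0.99):
    print(f"automated fraction {beta:4.2f} -> output {research_output(beta):8.2f}")
# Output rises steeply as beta -> 1: automating more tasks leverages the
# remaining researchers, even though the un-automated tasks still need them.
```

With these numbers, output rises from 1 to about 20, 88, and 101 as the automated fraction climbs from 0 to 0.99: the same single unit of research labor is leveraged further and further, which is the mechanism Ben points to, without any singularity.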

Anton Korinek  78:36

Thank you, Ben. Chad?

Chad Jones  78:38

Yeah, so let me thank Phil and Rachael for excellent discussions; those were really informative. The one thing I took away from the discussion, and from pairing these two papers together, is the point that you both identified, so I'll just repeat it, because I think it's important. An interesting question is: does the AI revolution come soon enough to avoid the empty planet? When you put these papers together, that's the thing that jumps out at you the most. As Phil mentioned, and Ben was just referring to, small improvements can help you get there, so maybe it's possible to leverage our way into that, but it's by no means obvious. As was pointed out, if you've got a fixed pool of ideas, then AI improves the fishers but doesn't change the pool. So I think a lot of these questions deserve a lot more research. Anton, thanks for putting this session together. It was really great and very helpful.

Anton Korinek  79:34

Thank you, everyone, for joining us today and I hope to see you again soon at one of our future webinars on the governance and economics of AI. Bye.
