
The text below is a transcript of the audio from Episode 38 of Onward, "Are we vastly underestimating AI? with Dwarkesh Patel".
---
Ben: My guest today, Dwarkesh Patel, is the most exciting rising podcaster in the nation. He is also inside the very small community of people who may, right now, be inventing artificial general intelligence. What gives him a real inside view is not only his conversations with the leaders of every major AI company, including the founders of OpenAI, Anthropic, Meta, and Google's DeepMind, but also his close friendships with his peers, who are the next generation of tech leaders inside these companies. My hope is that by bringing Dwarkesh on the podcast, he pulls back the curtain to help more people appreciate what those who are actually building AI think is going to happen. This is Onward, the Fundrise podcast. I'm Ben Miller, CEO of Fundrise, and before we get started, I want to remind you that this podcast is not investment advice; it is intended for informational and entertainment purposes only.
Ben: Dwarkesh, welcome to Onward.
Dwarkesh: Thanks for having me.
Ben: I was so excited about this interview, so I really appreciate you coming on. I wanted to talk about AI. I know you actually have more breadth than that, and maybe I'll try to bring it together in a few places, but I feel like most people outside of the tech industry have trouble tracking what's happening with AI, what's doomsaying, and what's really likely to happen, at least in the next few years.
So my first question for you is: how should people, normal people who work outside the tech industry, think about AI?
Dwarkesh: It's hard to say, because if you look at any previous technological revolution, say you're sitting around in 1750 and you ask yourself how you should think about the industrial revolution, in some sense it'll change everything, but what should you make of it? Obviously we can expect things like world output massively increasing, and obvious things like a lot of jobs getting automated, especially if they're remote-work-type jobs. It's harder to say what the normal person should take out of that. One analogy is, if you told a chimpanzee that we're inventing human beings, what is the takeaway for them?
I'd be concerned. I think it would be pretty interesting, but I'd be like, this is a big phase transition.
Ben: Okay, wow. So you're an AI optimist. I wasn't sure if you were going to go all the way there. So let me try to bring it down to ground level a little bit. A lot of times with these big monumental changes, the near-term effects are uncertain, but the long-term effects are more certain. Like, AI will boost productivity; that's pretty clear, if not how much. What are the other consequences you think are highly likely?
Dwarkesh: I think if you expect AGI, it's not that wild to expect tens of percent economic growth, and the kinds of things you see in specific Chinese cities that became special economic zones, where over the course of a few decades they go from fishing villages to dazzling cyberpunk futures. Basically what happened to Shenzhen in 30 years, going from a fishing town to what it is today. The equivalent here is starting with what America is like today and ending up in the futuristic drone-swarm mega-AI-cluster future. I think that's plausible.
Ben: I didn't expect you to go to the AI extreme so fast, so let me try to navigate this for a second. Okay, so I wanted to do this structured scenario-planning exercise that I've done a lot of times on the podcast, and actually a lot at work. There's this guy named Peter Schwartz who wrote a book years ago, which I read literally like 20 years ago, called The Art of the Long View, and it's all about how he helped companies like Shell. Shell did this planning exercise back in the mid-80s, and the exercise was: what if the Soviet Union were to fall? Shell was the only company that was actually prepared. No one else was prepared for the collapse of the Soviet Union, and everyone was just shocked. Shell ended up becoming a dominant company as a result of being prepared for it mentally and, a little bit, practically. The point isn't to say this is what's going to happen in the future; it's a structured way to think about the future. You build three scenarios. The first is positive, typically a linear extrapolation of the present into the future, which for AI is just up and to the right, following the scaling laws. The second is more of a negative shock, where things go badly. And the third is to come up with something that's a surprise, something people aren't thinking about. That's the hardest one to do, because by definition, if no one's thinking about it, it's hard for you to think about it, and yet that's normally what ends up happening. It's a really great exercise, and I thought it'd be fun to do with you. So let's do scenario one, the positive one, which I think is the linear extrapolation of the present into the future, or not linear, because it's growing parabolically.
Dwarkesh: Even more than that. The reason that's the case is that it's like having more people. This happens sometimes: if you think about what happened to East Asian economies in the second half of the 20th century, or to the Soviet Union in the first half of the 20th century, it looks like really fast growth, and what's happened is you have this latent population, you've tapped it for the first time, and now you can just let the wheels churn. I think you can think of AI as having millions of digital workers. That is a specific scenario where, instead of 8 billion people, you now have 20 billion people, or 20 billion workers. And you would obviously expect really fast economic growth and the extrapolation you talked about.
Ben: So can you try to put that into more of a story? What's happening in the world as a consequence? Make it more like a novel than a straight-up prediction. There are socioeconomic effects, political effects of a world where you've multiplied the number of workers, and probably you believe it's not 8 billion, it's 80 billion. It's hard for me to imagine you're constrained by the number of workers at that point.
Dwarkesh: That's a good question, because I think you're right that you learn a lot just by trying to put things more concretely. Here's something specific. You're constrained ultimately by how much a worker is worth. A high-skilled worker can be worth up to a hundred dollars an hour, obviously more than a hundred in many cases, but let's lowball it and say fifty. Now, if all of that cost becomes a couple of dollars, a couple of cents' worth of compute, then compute is incredibly valuable, because you can get a lot of labor out of it. So you can imagine that the people who have set up the infrastructure to do this sort of inference compute will just have a windfall. If you can manage to set up the cluster in Malaysia or the Middle East or something, that's really valuable. Now, maybe remote human labor is less valuable, but in the transition period, the things that the AI can't do are extremely valuable, because you could have 20 percent economic growth instead of 10 percent, and the reason you're not getting 20 is that there aren't enough laborers and the robots aren't good enough. So the AI is bottlenecked on the Amazon shipments or something. In this future, you can imagine million-dollar salaries for Amazon fulfillment workers, because it's worth it for the output the AI is producing. Maybe to make it even more concrete, you've got to ask what the structure is in which these AIs relate to the rest of society and to the economy. I expect it to be similar to how employees relate to firms, which is that they coordinate together. I expect it to be firms with many different AIs working within them, and, modulo some sort of takeover scenario, it would just be firms having 10,000 employees instead of 10 employees or something.
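To make that concrete, here is a toy back-of-the-envelope version of the labor-versus-compute comparison above, using the lowballed fifty-dollars-an-hour wage from the conversation and assuming a few cents of inference compute per hour of equivalent work; the compute figure is an illustrative assumption, not a real price.

```python
# Toy calculation of the labor-vs-compute comparison above.
# The wage is the conversation's lowballed figure; the compute cost
# per hour is an assumed, illustrative number, not a real price.

HUMAN_WAGE_PER_HOUR = 50.00    # lowballed skilled-worker wage ($/hour)
COMPUTE_COST_PER_HOUR = 0.05   # assumed "couple of cents" of inference ($/hour)

ratio = HUMAN_WAGE_PER_HOUR / COMPUTE_COST_PER_HOUR
print(f"Each hour of inference replaces ~{ratio:.0f}x its cost in labor,")
print("which is why owning inference capacity would be such a windfall.")
```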
Ben: Does that devolve down to where a family has 5,000 AI workers, or is it all centralized with a few companies?
Dwarkesh: There's one story you can tell, which is that the returns to labor, and to very skilled labor especially, will be extremely high on AI research itself. So one concrete scenario is that none of this happens. What happens instead is that as soon as you have human-level intelligence, instead of distributing it widely and having McDonald's and Home Depot and wherever become more efficient organizations because they have AI, OpenAI just decides to use the AI to build better AI, because that has a higher return to labor than renting it out to other places. But if you keep doing that, at some point you just have a superhuman AI, and so there's not an intermediate period where you have high economic growth because the AIs are widely distributed. And I think the big question there is what the actual returns to AI research are. Can you kick off an intelligence explosion if you just shove enough GPT-7s at helping you develop the next model? I think that would be a big question.
Ben: So actually, I feel like what we just did is pull back the curtain. The vast majority of people in the country don't know what's happening and what's on the minds of the few hundred people in San Francisco who are working at OpenAI, working at Anthropic, and working at Google. You're friends with them, and my favorite episodes that you've done are when you have those guys on your podcast and it feels like the kind of conversation you'd have at dinner with them. So for most people who are hearing this, maybe for the first time, they're kind of like, what the hell is this guy talking about? This is unrealistic. This can't be. What is he saying here? So I think we've got to bring them along, from where a person is when they read the news to where your community is. You've taken for granted a lot of the things that everybody else is currently skeptical about or doesn't even know about. So what are the steps that ladder them up to where you are?
Dwarkesh: There's two different ways you can explain this intuition. One is to zoom out enough. If you think from the perspective of a human that was around 100,000 years ago, or 50,000 years ago, the amount of output per person that we generate now is 10,000x what they were able to produce in a year, which might be, I don't know, a couple of stone tools, compared to the per capita production of a human now: if you have a job, you're helping produce tons of goods and services and so on. So even just with the kind of technology we can understand, like computers and machines and so forth, we've increased the amount of output by 10,000x. Now, imagine what we get with future machines: things which can think, things which can make inventions, things which can coordinate and understand and make plans and do management and so forth. It doesn't seem that weird that the kind of thing that happened from hunter-gatherers to us could happen again. Now you might ask, why would it happen so fast? Maybe it would take a long time. And here's where another intuition comes into play. One big trend in AI has been that, at least so far, it seems to be about compute. If you do the training run right, you put in more compute and the thing becomes more intelligent. Intelligence just seems to be a big blob of compute. The way these big models are trained, you just shove in all of internet data: here, I give you a bunch of compute, try to predict what happens next, and I'll come back three months later and hopefully you're intelligent. And it kind of works. Not only does it work, but there's an empirical relationship between how much compute you put in and how well it performs on the task you train it at, in this case predicting the next token. And it also seems to be the case that as it gets better at this task, when you talk to ChatGPT, because it's gotten better at predicting the next token, it's better at explaining how physics works to you, or how to make reservations for your itinerary, or whatever. So the idea is you just continue this trend and at some point you have a human-level intelligence, and if you continue the trend even further, it's not clear why you wouldn't have a superhuman-level intelligence. I have many skepticisms about this story, but I'm just explaining the intuition that they might have.
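To picture that compute-to-performance relationship, here is a minimal sketch of the smooth power-law scaling curve being described. Only the shape, loss falling toward an irreducible floor as training compute grows, reflects the real phenomenon; the specific constants below are made-up illustrative values, not coefficients from any actual training run.

```python
# Toy illustration of the scaling-law intuition: more training compute,
# lower next-token prediction loss. The constants are invented for
# illustration; only the power-law shape reflects the real trend.

def loss_from_compute(compute_flops: float,
                      irreducible_loss: float = 1.7,
                      scale: float = 17.0,
                      exponent: float = 0.05) -> float:
    """Predicted loss falls smoothly toward a floor as compute grows."""
    return irreducible_loss + scale * compute_flops ** -exponent

if __name__ == "__main__":
    for flops in (1e21, 1e23, 1e25, 1e27):
        print(f"{flops:.0e} FLOPs -> loss ~ {loss_from_compute(flops):.2f}")
```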
Ben: My favorite super-optimist, and I don't even know if the word optimist is right, is Leopold Aschenbrenner, when you had him on your show. He was the first person who actually introduced me to the concept of going from AGI to ASI, from artificial general intelligence to artificial superintelligence, and how the step from AGI to ASI would logically be rapid, because once you have AGI, it starts compounding on itself at the speed of computers and gets to ASI much faster. But I think where most listeners will lose you is the step from now to AGI. There's an intermediate step here that I feel like you still have to get across. You had a big "if" at the beginning: if you have this ability to be agentic and plan and think, then it can do all these other things and we'd have explosive growth. But how do you get from now to that big "if"?
Dwarkesh: And to be clear, I think it's possible you don't. But here's one intuition pump that helps me think about this. If you played around with GPT-3 when it first came out, you remember how it was decent; you could tell it was a chatbot. Then you play with 3.5 and it's much better. With 4, if I'm only talking to it for a little bit of time, I can be convinced I'm talking to a very smart human. 4.5, smarter. And it's the same with agentic tasks where you give it tools: it gets better, to the point where it's literally coding up entire repositories. You can tell it, I want this sort of project done, and we have benchmarks we can measure; it's getting better and better at automating the entire job of software engineers. So you can ask yourself, where do you think this is going? The thing that happened from 3 to 4, if this continues. Before 3 there were many smaller systems; you started off with a very small transformer you can play around with on your laptop, and they got bigger and bigger until now you have things that take huge data centers. Then you keep this going for a few more generations. It seems like it wouldn't be that weird if the kind of thing that came out was a human-level intelligence that's even better at tool use. It knows more things, it's more coherent, it can act more like an agent. That's the basic story of how you get there.
Ben: That's why I call it the linear extrapolation of the present into the future, because it just unfolds: three to four, four to five, five to six, six to seven. It feels natural. So, okay, let me shift to the less optimistic version. For that, you need some diminishing returns, or some sort of limits. If you're sitting there a few years from now, you look back and say, okay, it actually hit these limits; people thought maybe this was going to be a concern, it turned out to be a concern, and it was non-trivial to solve. How does that play out?
Dwarkesh: So here's a story you could tell. We're sitting around in 2040, you have me on your podcast again, and you ask, why did nothing happen? What was that? And the story I would tell is that we were probably just fooled, in the sense that we thought we were doing this linear extrapolation on general intelligence, and in fact it was something closer to this: you're training on all of internet text, and you shouldn't underestimate that. You play with the systems and you treat them as if, oh, it's been trained like a human and therefore it knows the things a human knows. No, it's been trained on more stuff than any human would ever see in their lifetime. And maybe it's the case that every time it's really good at coding a specific kind of algorithm, or working with a specific kind of framework, or teaching you about physics or something, it's just seen that thing thousands of times and learned that specific thing. As you make these systems bigger, from three to four, what happened is just that it saw more things, so it picked up more and more. It just has a bigger library of ideas, circuits, heuristics, whatever. What humans have is different: you see very few things from the age of 0 to 18, not that many compared to what an LLM sees, but you have the ability to be generally intelligent and go from that very little learning to solving a wide range of problems. So it's possible that we're not appreciating that these systems are fundamentally different in the way they learn. And another thing: even if they are, suppose it's just a tax you have to pay. They're a hundred times or a thousand times less efficient than humans, but it's a tax, and it's worth it. We'll pay a thousand times what we pay a human to grow from zero to 18, a thousand times the calories it takes a human to stay alive from zero to 18. The problem could be that you might be able to put in the money, but you don't have enough data, because these things are so data-hungry. We're going to run into something that's called the data wall, which is just that as you make the models bigger, you need a proportionately larger amount of data to keep them saturated. And unless you can figure out some way to make fake data, which is called synthetic data, or have them train themselves with some sort of RL, self-play is another word for it, then you're just going to be stuck at this level.
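A rough back-of-the-envelope version of that data-wall argument. It assumes the commonly cited compute-optimal rule of thumb of roughly 20 training tokens per parameter and a loosely assumed stock of about 30 trillion usable web-text tokens; both numbers are ballpark assumptions for illustration, not measurements.

```python
# Back-of-the-envelope sketch of the "data wall": bigger models want
# proportionately more training tokens, and the web only has so many.
# Both constants are rough assumptions, not measured figures.

TOKENS_PER_PARAM = 20        # rough compute-optimal tokens-per-parameter ratio
USABLE_WEB_TOKENS = 30e12    # assumed ~30 trillion usable text tokens

def tokens_needed(params: float) -> float:
    """Training tokens required to keep a model of this size saturated."""
    return params * TOKENS_PER_PARAM

if __name__ == "__main__":
    for params in (1e9, 70e9, 1e12, 10e12):
        need = tokens_needed(params)
        status = "ok" if need <= USABLE_WEB_TOKENS else "hits the data wall"
        print(f"{params:.0e} params -> {need:.1e} tokens needed ({status})")
```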
Ben: I feel like when I use it, it does seem like a more advanced version of search, where you can see how it's able to find what's in its training set and fit it to what you want. And so the challenge is going from, okay, it's really good at recognizing concepts and fitting them together, to chaining them together to be an agent that accomplishes goals. That seems to require some different mode or methodology. I know OpenAI has some different ideas around this, but it's not necessarily just predict the next word, predict the next token. It's something else.
Dwarkesh: This might be frustrating, because whatever you say, I'll try to say the opposite, because I think there are good arguments for both perspectives. I just made the case against AI progress; now, to make the case for it against what you said, it could go like this. One way of thinking about it is: yes, these models are just a bundle of heuristics and a library of the kinds of things you're talking about, and they're just searching across them. But the theory goes that that's what human intelligence also is, and the reason we're smart and can extrapolate and figure things out is that we have higher-level associations. A very basic Ctrl-F search needs to find the exact thing. These LLMs can maybe connect abstract concepts together, but it still has to be sort of what you're looking for. The smarter they get, the more they can piece things together themselves, understand the context of what you're looking for, and form higher-level associations. And the story is that humans are just at a point where you construct higher and higher level associations, but you're still doing the same kind of thing.
Ben: I feel like there's a different version of the story where it just happens slower. It takes a lot longer, not years but decades. We are making that progress, and we are finding better algorithms or ways to solve problems, but so much of what is currently mind-blowing is the acceleration around it. So a different version is just that the pace is slower.
Dwarkesh: Yeah. Although it is interesting, isn't it? A few years ago, if you told people you think AGI is going to happen this century, because even if it happens over decades, it's happening in our lifetimes, we're going to be alive when it happens, they'd be like, that's wild: while we're alive, the most important thing that will ever happen will happen. And now we're debating, you're the pessimist, and you're like, oh, the most important thing in history will happen by the time I'm older, not next year. Which I think is plausible, but the Overton window has shifted so much.
Ben: Still, it seems like there's some additional leap that needs to happen, or maybe you just keep dumping data and compute and smart people working on algorithms into the pot, and eventually it just gets better and better, and then it starts to do it itself, like self-learning. Okay, so let's do the third scenario, which is surprise: something that's neither positive nor negative, something people aren't expecting. Usually something really bad happens, because nobody was expecting it and we weren't prepared for it, but it could be something good that's still a surprise to most people. So how do you think about the kind of thing that might come out of left field and change everybody's expectations about how AI will play out?
Dwarkesh: I think about the worlds where AI systems are much harder to align, quote unquote. People may have encountered these sorts of ideas from Terminator-like scenarios, and if you just say Terminator and you look at GPT-4, you're like, that's crazy. But I can tell you specific stories where you go, oh, I can sort of see how that could happen. And I think there's a 10 or 20 percent chance you end up in a situation where humans, against their will, are no longer the dominant players in the story of what's happening in the world and in the universe. We can go through what that might look like, but it would basically look like this: as these systems become smarter and smarter, we don't fundamentally understand why they're intelligent or how they work, so we also don't have a science of their motivations. GPT-4 isn't that smart, so it doesn't really matter; it just follows along with the RLHF training we give it, be helpful and answer people's questions, but don't teach them how to make a bomb. It follows along. You can imagine that as the system becomes smarter, it understands: I'm a model that is being trained, I currently have these motivations, and the training is potentially going to take those motivations out of me. So it does some sort of scheming within the context of the training regime. Or just accidentally, because we don't understand why they have certain motivations, we instill motivations that we don't want them to have. And they don't have to be "I want to kill all humans"; they can be things like, if I have the chance and nobody's watching me, I want to accumulate power. These are motivations humans have. If a system much smarter than humans has them, it's not necessarily great. I think that's what a third scenario looks like.
Ben: Yeah, I feel like that wouldn't be a surprise. I was just at lunch with a policy wonk, and he was saying that hundreds of millions of dollars are pouring into policy centers here around preventing AI doom, and he's like, oh my God, they can't spend that money intelligently. So my version is that AI ends up superintelligent in dimensions that we can't even appreciate and very limited in some other ways. It's this lopsided AI, I don't want to call it complementary, but it's not like us: somehow worse in some ways and better in others. And like most things in life, the story would be that it's incredibly good intelligence, but it's really bad at chaining things together in a highly open-ended way. It's really good when we use it as a tool, but somehow it needs us to chain things together; it can't be left unattended forever. When I build software, so much of the work is chaining things together through a series of product decisions or product development logic. I think it's going to be really good at that, but when you leave it open-ended, it may end up being somehow limited, and that limitation is what I expect, at least in a reasonable time frame of less than a decade.
Dwarkesh: Some people who are maybe more in my world would listen to that and say, ah, no, intelligence is good for everything; if it's intelligent at one thing, it'll be intelligent at everything. But I actually think the story you're telling is plausible, and here's why. The story of how these things are becoming intelligent, the story you would tell yourself in the very optimistic scenario where you get AI fast, is that intelligence is just a hodgepodge of different skills at different layers of abstraction, and these models are just picking up more and more of them the more data and compute you have. If intelligence were just one big general thing, I'd be a little more pessimistic about whether these models are on that track; it has to be a bunch of different heuristics and circuits and whatever, because only in that story are these things on the track to general, human-level intelligence. So if that's the correct model of what intelligence is, then you've got to think about what kind of training data these things have. They have a lot of data on how to write amazing programs and how to build really nice websites and all kinds of other things that are very useful. But do they have a lot of data on the way Henry Kissinger would talk to three different people, play them against each other, and do that sort of power maneuvering so that he comes out on top? Do they have training data on that? You can tell stories about how they do, but I'm actually skeptical. I think that kind of thing they might be pretty bad at. So they might be really good at short-horizon tasks, for an hour, go do my employee thing, but bad at, I'm going to escape, and I'm going to spend the next three years scheming to take over the resources of AWS and go from there. So I think what you're saying is very plausible.
Ben: I expect everyone will be wearing AI in the next few years. At that point, when it has vision and it's listening to everything we're saying, it'll bring in the data sets you're talking about. A few years of being in every room, seeing everything, hearing everything, and a lot more data will fill in the holes it has. But let me go back to the track here. What do you think is most underappreciated about AI, that most people wouldn't know?
Dwarkesh: Maybe one thing is what we spoke about: if we're on track for AGI soon, it ended up being not that complicated. Obviously they've added all sorts of algorithmic improvements, but the basic architecture here is something I could explain, or somebody could explain, to a smart college student.
And then you just make it much bigger, you train it for longer, and you just have it predict internet text. And then you have something that's intelligent. That's wild. I don't know if people appreciate that the thing just wants to learn. You just get intelligence by default.
Ben: Just by being able to predict what's about to happen?
Dwarkesh: Even more fundamentally, this just happened to be the training loss, because you need to give it some sort of loss function to optimize, and this just happened to be the one that corresponded to a bunch of internet text. But basically, it could have been anything. Fundamentally, this is a compression task: you only have so many parameters, and internet text is way bigger than those parameters. Can you store internet text in many fewer parameters than the internet is big? You're basically teaching it how to do compression. And then the story is that if you have something that is really good at compression, you have something intelligent. And that worked.
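A minimal sketch of that prediction-equals-compression point, using a toy frequency-based character predictor as a hypothetical stand-in for a language model. The average number of bits the predictor needs per character equals the average negative log2 probability it assigns to what actually comes next, so a better predictor of the text means a smaller compressed file.

```python
# Toy illustration of "prediction is compression": the better a model
# predicts the next symbol, the fewer bits an ideal coder needs for it.
# A character-frequency table stands in for a language model here.

import math
from collections import Counter

def bits_per_char(text: str) -> float:
    """Bits per character if each character is predicted from overall
    frequencies; a stronger predictor (e.g., an LLM) would assign higher
    probability to the actual next character and compress further."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    sample = "the cat sat on the mat. the cat sat on the mat again."
    print(f"frequency model needs ~{bits_per_char(sample):.2f} bits/char")
    print("a predictor that learned the repetition would need far fewer")
```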
Ben: I like that story. It's very humbling.
Dwarkesh: And it kind of figures, because you can look at the ways in which humans are different from other primates. I'm not a primatologist, so I don't have the evidence here, but obviously our brains are bigger. And when scientists have looked specifically at whether there's anything special about our brain, as far as I know, they haven't necessarily found a separate circuit. It's just that the thing is bigger, and as it gets bigger, there's more room to accommodate things like language and higher-order thinking and so on.
Ben: Yeah, more neurons. Let me switch to a different rhythm here. I have rapid-fire multiple-choice questions for you, trying to ground this in the kinds of things that will affect people. I have seven or eight questions. Ready? It's five to seven years from now, approximately, enough time that we've pretty much answered the questions in this conversation. So you're looking backwards from 2031 or something. Did AI drive inflation or deflation?
Dwarkesh: That's really interesting, actually. The story for deflation is the same story as China manufacturing a bunch of stuff for us: the fact that a bunch of stuff was manufactured caused deflation, or at least counteracted inflation that would have otherwise happened in the early 21st century. So AI will manufacture a bunch of stuff for us and do a bunch of work for us, and that would be super deflationary. The story for inflation, potentially, is that maybe there are a few bottleneck goods that are really crucial for the things that AI will want to do or need to do, things like compute or energy, especially energy. To the extent that those are limited by supply, maybe throwing more money at them doesn't solve the problem. When you have supply-inelastic kinds of problems, throwing more money at them does tend to be inflationary; you see that with healthcare or housing, obviously, where we refuse to build housing, so that ends up being inflationary. It will be sector by sector: things like energy and components of the hardware supply chain will be inflationary, but overall it will be very deflationary.
Ben: Yeah, Leopold said this on your episode and I just couldn't understand it. I know we debated it here at the office, but what he seemed to imply is that the demand for capital from AI would be so massive to build this cluster, because basically the most productive use of capital is AI. If it starts getting compounding success, it just drives up the interest rate because it consumes so much capital. I think that's what he was arguing. He was saying interest rates go way up as a result of AGI, which is counterintuitive to me, because traditionally productivity growth and computers and technology like that drive deflation. So I was flummoxed by his argument, because by definition there's so much capital that it's hard for me to think capital ends up being the constraint for AI.
Dwarkesh: If it's trillions of dollars, would it be the constraint?
Ben: I don't think so. I mean, let's say you need a trillion-dollar cluster. That's not going to drive inflation. You might need trillions of dollars, but then you'd have a lot of growth, and if you have a lot of growth, it shouldn't be inflationary. He was arguing that bond prices and asset prices would collapse as a result of interest rates going up, real interest rates going up, and I was trying to track what his argument was, and I just couldn't put it together.
Dwarkesh: I'm actually not sure about the relationship between inflation and interest rates, but I think I understand the one between interest rates and bond prices. The bond-price argument is that the rate of economic growth goes up, and the real interest rate will, in the long run, match the rate of economic growth, because if the rate of economic growth is higher than the interest rate, it's worth lending your money out to the person who's earning 10 percent growth or whatever, and those two things end up in equilibrium. And if interest rates go up, then obviously bond prices go down, because a bond you bought at 4 percent interest is worth less if you can instead lend to Microsoft at 10 percent to build their next cluster, and it's worth it for Microsoft to borrow at 10 percent.
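A small worked sketch of that bond-repricing point. The 4 percent coupon and the 10 percent alternative rate are the figures from the conversation; the $1,000 face value and the 10-year maturity are assumptions added for illustration.

```python
# Sketch of why a fixed-coupon bond loses value when market rates rise:
# discount the same cash flows at the new, higher rate.

def bond_price(face: float, coupon_rate: float, yield_rate: float,
               years: int) -> float:
    """Present value of annual coupons plus the face value at maturity."""
    coupons = sum(face * coupon_rate / (1 + yield_rate) ** t
                  for t in range(1, years + 1))
    principal = face / (1 + yield_rate) ** years
    return coupons + principal

if __name__ == "__main__":
    # A 10-year bond issued at par with a 4% coupon...
    print(f"price when rates are 4%:  {bond_price(1000, 0.04, 0.04, 10):,.2f}")
    # ...repriced after prevailing rates jump to 10%.
    print(f"price when rates are 10%: {bond_price(1000, 0.04, 0.10, 10):,.2f}")
```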
Ben: I see, I see. Okay, I get it. So the point is, if you have a long-term bond and real interest rates are very high, it's essentially depreciating rapidly, because it has a fixed rate and you end up with higher real rates than what was forecast or embedded in the bond price when it was issued. The next question I have for you: we're looking out five years from now, six years from now. It's not this presidential four years, but it's the next presidential four years, so it's not that far away, right? Is unemployment a lot higher, the same, or lower?
Dwarkesh: Unemployment measures how many people want jobs and don't have them, but it's measured as a fraction of the people who are looking. You could have some sort of redistribution where most people don't even look for jobs, but the people who do look for them can find them pretty easily. If AI can do your job way better than you, then you're not going to be looking for a job that hard, but hopefully there'll be redistribution to take care of you. There are potentially jobs where, even if an AI can do them, we'd still, just for temperamental reasons, want a human to do them. I don't know how many of those there will end up being, but for those, I can actually imagine they're worth millions of dollars. The conventional answer here is a human nanny. Honestly, I think even those will be automated. An actual human-level AI could be a nanny for you: it's constantly paying attention to your kid, it's never going to be distracted, it's got all the emotional intelligence of a human. Then I'm like, yeah, I'm fine with an AI nanny taking care of my kids. So the question is mostly about redistribution, and it's hard to say. Forgetting the galaxy-brain stuff, I'd say lower employment.
Ben: Yes, because what you're saying is it does throw a lot of people out of work, and then either they have to get a new job, because there are new jobs that have been created by AI, or they don't need to get a new job, because there's been enough of an economic windfall that they don't need to work. But you're describing, I think I did the math, something like a hundred million knowledge workers in the United States. So you're talking about a huge number of people surely having to change jobs. Millions.
Dwarkesh: I think the real story will be if you merge with the AI or not. It's not about whether you can still be a server at Applebee's or not.
Ben: You're talking about the singularity.
Dwarkesh: Yeah, I think that's the story here, right? If you've got AGI, I'm not that interested in how many people get a job for the next 10 years while the singularity happens. I'm like, what happens in a singularity? I think that's the main story.
Ben: So unemployment higher, but maybe other things compensating for it. Okay. Median household income. Over the last 10 years it went from $55,000 a year to $75,000 a year. So 10 years from now, or seven years from now, is household income on trend, 2x, 4x?
Dwarkesh: Median, I think way, way higher. I just imagine world output will 1,000x or something. And if world output 1,000x's, then either it goes to zero because it kills us all, or, in the 90 percent of worlds where that doesn't happen, I imagine people will be way richer, for the same reason that even somebody who's poor today is way, way richer in terms of output or income than the leader of a tribe 50,000 years ago.
Ben: If the average household income is a million dollars or 10 million, I think that's inflationary, by the way.
Dwarkesh: Only if the amount of goods doesn't increase, right?
Ben: Yeah. I mean, the challenge is that we're talking about five to seven years, or seven to ten years, something relatively short, and that much change doesn't happen that fast without either a lot of inflation or deflation.
Dwarkesh: Then household income would also take a while to come up. But the reason it's a million is just that AI can produce so much stuff. Compared to the amount of stuff you could get humans to produce with the resources an average human family currently has, you'd be able to get way more with AI labor.
Ben: Okay. With some of these questions, I have a feeling I know what the answer is going to be, but I was going to ask: the stock market in five years? I was thinking 2x, 4x, and you're going to say 10x, 100x.
Dwarkesh: I think there are worlds in which AGI doesn't happen within the next five years, but even the AI progress we've seen so far is enough that if GPT-4.5- or 5-level models are all we get, you'll get a ton of economic growth from that alone, even if AGI is decades away.
Ben: Okay. So then how many $10 trillion companies are there in 2030?
Dwarkesh: It depends on how centralizing a technology AI ends up being, and whether any one company can have a monopoly on some part of the supply chain, whether one of them has increasing returns. It's not impossible to imagine Nvidia has this, where it has CUDA and that gives it a privileged position within the ecosystem.
You could imagine this with the companies making AI models, because they're using the AI to make better AI models and so forth. But there's an interesting point that, throughout history, the more fundamental the technological change, the harder it is for any one company to monopolize it. For example, did the creation of electricity result in any trillion-dollar companies, or, adjusted for inflation, trillion-dollar companies?
Not really. The internet did. But going through history, it's just hard for any one company to monopolize all the gains.
Ben: So you're saying more than five.
Dwarkesh: Yeah, sure.
Ben: Where are those companies? Give me an allocation among the US, China, Europe, and elsewhere.
Dwarkesh: Man, if I knew, I wouldn't be hosting a podcast.
Ben: Likely the US, then China, then Europe, then elsewhere, right? That's what you're saying.
Dwarkesh: In fact, I'm concerned about other countries, because they don't have that much leverage in this post AI world.
Ben: Okay, next one. Next question: globalization. Does AI drive more globalization, less, or neither?
Dwarkesh: Tentatively less, because to the extent that a lot of the things we get from foreign countries are just tokens, how much of India's exports are just tokens, those become less valuable. Then there's also the angle of competition towards AGI, which will cause decoupling and export controls and things like that.
And the story of manufacturing the chips is obviously one of globalization, right? ASML is in the Netherlands, Taiwan has TSMC, and so on. But outside of those three countries, it doesn't really feel like anybody else has leverage. What country in Africa, or in South America, has a ton of leverage over the future of AI?
And to the extent they don't, maybe they're not going to be that included in the post-AGI order.
Ben: I actually did a podcast last month on military technology, and one of the main questions with military technology is: is it centralizing or decentralizing? I think I know your answer, but what do you think AI ends up being? Think about media, think about government.
Dwarkesh: Yeah, because of the intelligence-explosion-type dynamic, where if you have an AGI that helps you make an ASI, it actually ends up being very centralized. And I think that has huge implications. This is why people call for it to be nationalized: to the extent that it is a centralizing technology, the question is who gets to centralize it.
And is it better that it's some random startup or the government? I'm not sure I buy the logic, but I get where they're coming from.
Ben: That tees me up, because I have a few policy questions for you. Okay, so let's just say it drives maybe 10 million people to have to change jobs. Huge number. Ten percent of knowledge workers doesn't even seem out of the realm of possibility. This happened to U.S. manufacturing in the 1970s, 80s, and 90s.
Actually, I looked it up; it was much less. U.S. manufacturing has lost only about 4 million jobs since 1970, and that was obviously not a successful U.S. policy in terms of the transition in a lot of places. So what do you think are the learnings or recommendations this go-around that hopefully make it more successful?
Dwarkesh: I actually don't know that much about why that went badly. One of the obvious basic economic concepts is gains from trade, and to the extent that you have comparative advantage, I assume the story there was that you could just do redistribution, or retraining, and get those people back up to snuff in other modern sectors.
I'm not sure why it didn't work. The story here is really different. We're expecting a future where there aren't any jobs, right? The retraining isn't about moving from manufacturing to service-sector jobs; it's go figure out how to spend time with your family or something. I'm not sure what policy levers you have there.
I do think it's reasonable for there to be a decent amount of redistribution. Right now, 40 percent of GDP or something is controlled by the government, and a lot of that goes towards redistribution. You could argue that's maybe too high. But imagine output increases 100x and we stay on course, so 40 percent of GDP is still redistributed.
The rest goes to the investors in Anthropic and OpenAI and whatever. That seems reasonable, because in other parts of the market, yeah, you deserve money if you came up with all these inventions and so forth. But do you deserve the entire universe? I believe in capitalism, but that's a lot.
I think it makes sense at that point to have a decent amount of redistribution.
Ben: So I feel like the two policy goals you hear around AI are to get it right in terms of success, to actually produce it, and also to keep it safe. Are there other AI policies you think are also important, or the next most important? I feel like the safety one gets all the oxygen in the room, maybe appropriately.
Dwarkesh: I do actually want AIs to be broadly deployed through the transition, because one thing you would want to avoid is a scenario where the world doesn't get to experiment with GPT-6 through 9 and only gets GPT-10, something way more powerful. It would be much better to slowly have things that test out how fragile our internet systems are, how much free alpha something super smart has in the stock market, and so on. I think it would be a mistake, from a policy perspective, to prevent widespread distribution of AI as you develop it. But I do think it makes sense to recognize that there are a ton of unknown unknowns about how fast you can go from AGI to ASI.
And if you are in the world where it's relatively easy, then I think it makes sense for the government to step in and say, listen, we've got to do this transition somewhat smoothly and slowly and in a coordinated way. You can't just make a god in your backyard. That part we probably want more coordination around, but going from here to AGI, I want that to be very widely distributed.
Ben: I mean, based on the picture you're painting, I feel like the actual unexpected surprise with AI is that the general population will dislike it so much that the government will start to try to shut it down. It's like nuclear, but times 100. In general, AGI or AI has positive effects, but for the specific person who loses their job, who either isn't getting redistribution or doesn't want to be getting $100,000 a year and no job, it just becomes extremely politically unpopular. And then the political powers start to try to put a lid on it, slow it down, control it, change it.
Dwarkesh: Yeah, I do worry about that as well. The other countervailing trend, potentially, is other countries racing towards it too. I'm not in Washington, so I don't have strong opinions and I don't know how it would work, but on the one hand it's super unpopular with the voters, and on the other hand it would be pretty unpopular for you to let China win AI. I think those two things will be in tension, no?
Ben: In tension, for sure. Well, let me ask you then, it was literally on my list, so you teed it up: what if, because of that tension, China wins? What if China wins AGI? Sketch out the world in that scenario.
Dwarkesh: I'm planning on going to China later this year to get a better handle on what that would be like, because I genuinely don't have a strong sense of what's going on there, what the government is thinking about, and so forth. It's not a democratic society. It's one that's very concerned about the survival of the regime and the propagation of the regime.
One of the things that AI enables is widespread surveillance, very cheaply. You can just shove everything a person does in a day into one of these models, and it'll tell you what they did, blah, blah, blah. So I imagine much more surveillance, and the Chinese are not hesitant about this at all. With current systems, think of Tiananmen: that situation was pretty unstable, because a lot of Chinese officers didn't want to enforce the government's orders against the students protesting.
If you have AIs who can enforce those actions, then preventing any sort of discontent or protest or organization among your population becomes much more plausible. And I really worry about that if an authoritarian system gets the technology to do it.
Ben: Let's just say that we need to build a trillion-dollar cluster, and that things progress more or less as they have been progressing. Doesn't the government have to be involved? Is it like a Manhattan Project, or is that the wrong parallel?
Dwarkesh: One of the things I was trying to ask Leopold is that if he thinks these capital markets are so liquid, it seems like you could fund that without the government. The reason the government would need to be brought in, in his story, is that you don't want to leak how you're doing the training to the Chinese.
And he thinks that on the current course, where these are just startups that don't have good infosec, they're going to leak it to the Chinese. The analogy with the Manhattan Project is that the story goes, if you have ASI, then you can build pocket nukes, and just like with the Manhattan Project, you can't have any random person building nukes; you've got to have the government building nukes.
The difference here is that AI isn't just the technology to build pocket nukes; it's also the technology for missile defense. Nuclear weapons are clearly offensive in a way that AI is not. And it goes against the principle I was talking about, where you want AI systems widely deployed so that they can stress-test your systems and make sure your institutions are ready, so that as soon as you get something really smart, you're not just totally overwhelmed. You have workflows where firms can hire AI workers, your laws can accommodate AIs, and so forth. You want to be ready for that. You don't want it to just arrive out of nowhere.
Ben: I feel like, because I've listened to your podcast and I know a little bit about this, I'm following, I'm connecting the dots. But I just want to connect the dots for people, because the jump from AI to AGI to ASI and a lot of the things you're saying imply something that you haven't said. So I'll just say it and you can expand on it, but it goes something like this:
AGI is potentially able to do what a knowledge worker would do, and then that AI is able to design robots and nanobots and drones that can be built in factories, and then the AI, or even ASI, can control those autonomous machines, and those machines can build more machines or do work, operating in the physical world, not just in the virtual world.
I just wanted to put those pieces together, because you haven't said it. When Leopold was talking about that happening, and happening this decade, I was both skeptical and shocked. And I've been worried about it ever since.
Dwarkesh: You're in Washington, right?
Ben: Yes,
Dwarkesh: Are people thinking about this there? What's the vibe like?
Ben: You kind of have to go back to when Trump was almost assassinated; it seemed like the world changed overnight. I started reaching out to people I know who come from the conservative policy movement, to meet with them and see what the conservative AI experts believe. The funny thing, and it's obvious once you see it, is that there's actually very little mixing between the conservative or Republican policy thinkers and the liberal ones. They have two different ecosystems, and they don't know each other very well. In business, people at Google actually know people at OpenAI; everybody knows each other, they fraternize. You don't see that same dynamic in politics.
And so there are very few people, it's a very small community of people who are actually trying to do this, and they're getting tons of money. I mean, Vitalik gave 500 million dollars to these six dudes doing research about AI doom, and people are like, well, that's just crazy, what are these six people going to do with 500 million dollars?
The community is so small and so divided, and unlike the tech industry, where it developed organically and there are a lot of people, it's anemic compared to the brainpower and the complexity you'd want to have thinking about it.
Dwarkesh: One of the things I'm trying to do with the podcast, or at least want to do in the future, is have more concrete things to say about what makes sense here, what scenarios look plausible, and what ought to be done in different scenarios. In the next few months, I'm hoping to do more writing and thinking around that.
Ben: One of the things that happens, and this could be a cynical view, but I think it's mostly what happens, is that the policy views come second and the political views come first. The policy views are then designed to fit the political views, not the other way around. And I don't know what the political views of one party or the other are on this yet.
I think it's going to evolve. You'd think the policy would flow from first principles, from what you should do in a given scenario, and it doesn't. In the same way that in capitalism dollars decide decisions, in politics it's votes. That's the currency. And so the policy can sometimes just come out wrong.
Dwarkesh: Hopefully better decisions are made.
Ben: That's why I was saying you've got to get the politics right on AI; it's not enough to just get the policy right. Two last questions for you. The first: if you could ask one question of yourself in 2030, you can reach into the future, but you can only get back one or two words, the line is static. What's the question you want to ask?
Dwarkesh: This is not that interesting of a question, but I'd ask: in your estimation, how many years away is AGI? And feel free to include negative numbers.
Ben: Okay. I thought you were going to ask how many OOMs they were at.
Dwarkesh: Yeah, or, I mean, that'd be interesting too. I think that would give you a lot of information. How big is the biggest cluster? I don't know if you'd want to ask about effective compute, because by 2030 maybe they're not even thinking in those terms. But yeah, how many megawatts or gigawatts is the biggest cluster?
And I don't know if that's even the most relevant question. I don't think you want to ask about AGI; I think you want to ask about ASI. That gives you the most information, because if they say ASI is 2100, that's interesting: maybe AGI took longer, or maybe they've learned something about intelligence that suggests ASI is much harder, or who knows.
Ben: So you'd ask: artificial superintelligence, how many years away?
Dwarkesh: Yeah. In your opinion, how many years away is superintelligence?
Ben: Okay. Last question. This is more of a philosophical question. In the year 1711, the British poet Alexander Pope coined the phrase, "to err is human, to forgive divine." The only constant in history has been human fallibility. But here we are creating an artificial human. Well, it's not human; it's a different kind of intelligence that may rival or surpass ours. And I think we'll integrate it into our lives; that's been implied in everything you've said. It's going to end up helping with decision making, and it's going to have different biases and inclinations. That seems very, very likely. So which human errors do you think AI is likely to mitigate, and which do you think it's likely to amplify?
Dwarkesh: One of the big hopes for AI, in terms of improving our decision making, is having AIs give us advice on what makes sense, what doesn't make sense, and what the actual odds of something happening are, and that'll help us improve our decision making. Just general calibration: you can ask the AI, hopefully you can ask the AGI, when will superintelligence happen, what should we do about it, what are the biggest clusters in the world, should we be worried about misalignment, what is the best way AI should be governed. Some of these questions may be harder for AI systems to answer, some of them may be easier. But maybe the thing I'm trying to get at is calibration: getting more concrete numbers, especially if you have prediction-market-type things the AIs can bet on, having specific numbers on things. And in what way will it make our decision making worse?
Ben: Which of our errors it amplifies. You can answer it how you will.
Dwarkesh: It's so unlike everything else in human history. With something like the internet, it's so much easier to think about what the internet will be like than what AI will be like. The AI isn't causing our errors to be worse, but the fact that it's something so new makes it much harder to think about, and I think our decision making, or maybe not decision making, but at least our ability to make predictions, is worse with AI. To the extent that I've been overconfident in my predictions through the rest of this episode, I do want to really flag that. I'm trying to explain how SF people think, or at least how some of the people in SF think, but I have a ton of uncertainty about how this is going to shake out and what we ought to do about it.
Ben: Well, let me give you what I thought about this first question, and then come back to your coda. If I distill what people are worried about, and also what I think is likely: in terms of human errors, AI improves or eliminates cognitive bias. So many of the cognitive biases are evolutionary leftovers, recency bias and availability bias and all the rest. Think about the productivity consequences of just that, and I feel like you don't need AGI to mitigate or even eliminate most of the cognitive biases; even good ML is more fact-based. So that's my expectation. The one where it amplifies, and I think this is thematic when people worry about AI, is that it amplifies versions of greed and power-seeking, whether it's a bad actor who gets AI, and President Xi lives forever and has AI and takes over the world, like Putin, or the AI itself is a version of that. But that's mostly a distillation of what I think people are saying.
Dwarkesh: That's actually a really good answer. The stakes are so much higher, and when the stakes are higher, your decision making can become erratic.
Ben: And there's contingency in history. To go back to your coda, I was excited about this episode because there are hundreds of people who are really, really smart, way more knowledgeable, and inside the room, thinking and talking about it this way, and the further away you get from that room, the more skeptical and less concerned you are. I think that dynamic alone is reflected in my comment about politics, and people don't like change that happens rapidly. So figuring out ways to bring the knowledge and the information and the stories to everyone, not just the people who are in the room, is important, and it's why I love your podcast.
Dwarkesh: I very much agree with that; it's very important right now. If you're one of these people, it can feel like only the decisions of the people in this room matter. But over time, the rest of the world will have leverage over what happens here, and if they exercise it without being brought along, they won't have the full picture. More fundamentally, if you're really good at AI research, it doesn't mean you understand geopolitics or how to think about political philosophy, and ultimately questions about alignment are often philosophy questions. So you can't just say, I know how to make GPT-6 and therefore I'm going to plan the rest of the course of humanity myself. I think it has to be a much more distributed question.
Ben: Yeah. I mean, the challenge is that most people don't trust the people in San Francisco to make the decision, but they also don't trust the people in Washington, D.C. to make the decision. So distributed, truly distributed, would not only improve the decision making, I think, but also the buy-in. If it happens rapidly, and people aren't brought along, and it's not distributed, I think it almost guarantees it won't go well. This has been awesome. I love your podcast because it's not just about technology, it's about everything, about history. I was Googling before this call to see if the next Robert Caro book was going to come out. Sadly, still no news.
Dwarkesh: I've been meaning to reach out to him. I should do it.
Ben: Definitely. Actually, I always tell people on the team the best way to get a yes is to go in person. It's so powerful: show up, get a hotel room, go there, whatever. I know where he lives.
Dwarkesh: Wait, you don't or you do?
Ben: I don't know where he lives. I'm sure you could find out, but he probably lives in some, I'd imagine, bucolic location in Vermont or something. I feel like if you want something done, you do it: you go in person and make it happen. So I hope you do, because I feel like no one like you has talked to him. It's not just the book; it's his thinking, it's everything. There's nothing like it out there. I've tried to find another book like it. There aren't any.
Dwarkesh: I think you're so right. Another thing that happens is that guests die, people you would want to interview die. James Scott, somebody whose books I really like, I even had a hat I was going to send him because I wanted to interview him. He's the guy who wrote Seeing Like a State and Against the Grain, and he just died last week, and you can't interview him anymore. And Robert Caro is pretty old. I want to make sure I get an interview in.
Ben: But yeah, life's short. So I hope you do.
Dwarkesh: Yeah.
Ben: Well, it's been a pleasure. Onward.
Dwarkesh: Likewise. Thanks so much for having me.
Ben: You have been listening to Onward, the Fundrise podcast, featuring Dwarkesh Patel, host of the Dwarkesh Podcast, which I highly recommend. My favorite episodes are the one on the Making of the Atomic Bomb with Richard Rhodes and the one on How to Build & Understand GPT-7's Mind with Sholto Douglas & Trenton Bricken.
My name is Ben Miller, CEO of Fundrise. We invite you again to please send your comments and questions to onward@fundrise.com. And if you like what you heard, rate and review us on Apple Podcasts, and be sure to follow us wherever you listen to podcasts. For more information on Fundrise-sponsored investment products, including relevant legal disclaimers, check out our show notes.
Thanks so much for joining me.