The text below is a transcript of the audio from Episode 56 of Onward, "Will AI do your job? with Raza Habib, Member of Technical Staff at Anthropic".
---
Ben Miller (00:02):
My guest today is Raza Habib, member of technical staff at Anthropic. Anthropic is one of the leading AI research labs and the creator of Claude and Claude Code. Before joining Anthropic, Raza was the CEO and co-founder of HumanLoop, whose product we used at Fundrise and how I originally met him. Before we get started, I want to remind you this podcast is not investment advice. It is intended for informational and entertainment purposes only. Raza Habib, welcome to Onward.
Raza Habib (00:34):
It's a pleasure to be here. Thanks so much for having me, Benjamin.
Ben Miller (00:37):
Okay, so here we are. You work in one of the major research labs, so I'm interested to know what you know that other people in the wider public don't understand about AI. But before we get into it, I just want to make sure when we talk about artificial general intelligence, that people understand what AGI means. So how would you define AGI before we talk about how revolutionary it might be?
Raza Habib (00:59):
Yeah, it's a great starting question. I don't think that there is a clear or an accepted definition of AGI. Different people say different things. In fact, one of the co-founders of DeepMind, Shane Legg, has an entire chapter of his PhD dedicated to trying to define what intelligence is, and he goes about gathering 30 different definitions from operations research and psychology. And the reality is that I think it's not a well agreed upon term, and because AI has gotten so hypey as well, that's made AGI even more of a moving target. So the definition that Shane Legg came up with in his PhD thesis was that intelligence measures the ability of an agent to achieve its goals in a wide variety of domains. So in that case, you're measuring intelligence on two axes. One is how general is the thing, how many different environments can you achieve your goals in, and the other is how good are you at achieving your goals.
(01:48):
So I quite like that as a definition of intelligence. It still doesn't tell you where the line for AGI is. And so I think maybe a thing that's more useful to talk about is something like economically transformative AI, where you could write down a set of criteria or evaluations of things that the system is capable of and then measure whether or not we've crossed those thresholds. Then I think it becomes easier to say, "Do we have that thing or not?" And you can just say, "Okay, across this range of economically valuable tasks, can the AI do it?" But yeah, the reality is I don't think there is a good definition and AGI has become a bit of a marketing term.
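For reference, Legg and Hutter later formalized exactly this two-axis idea as a single "universal intelligence" measure; a simplified sketch of their formula (Legg and Hutter, 2007):

\[ \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi \]

Here \(\pi\) is the agent, \(E\) is a set of computable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\) (so simpler environments get more weight), and \(V_\mu^\pi\) is the expected value the agent achieves in \(\mu\). Generality shows up as the sum over environments; goal achievement shows up as \(V_\mu^\pi\).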
Ben Miller (02:21):
In a way, it's like, how many jobs can it do?
Raza Habib (02:24):
The other way you could think about this is give me a bucket of tasks that you care about, maybe all the things that you can do at a computer, and then what's the probability that AI can do this better than a human on each task. And maybe when that probability is above some threshold, you say, "I have AGI." There's this idea in machine learning of probably approximately correct algorithms: it's very hard when you're making a classifier to say that it can do everything accurately on all possible data sets. So if you want to prove bounds about learning algorithms, then the best you can say is it's probably approximately correct, and you set some threshold of accuracy and some probability of being right. I think you could probably do something similar and say it can probably approximately solve all the tasks that humans can solve, or something like that.
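A minimal statement of the PAC guarantee he's gesturing at: a learner is probably approximately correct if, for any error tolerance \(\varepsilon\) and failure probability \(\delta\), given enough samples it returns a hypothesis \(h\) with

\[ \Pr\big[\mathrm{err}(h) \le \varepsilon\big] \ge 1 - \delta \]

That is, with probability at least \(1 - \delta\) ("probably"), the hypothesis is within \(\varepsilon\) of correct ("approximately correct"). The AGI analogue he sketches would swap the classifier's error rate for a failure rate across a bucket of human tasks.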
Ben Miller (03:05):
Do we have liftoff?
Raza Habib (03:07):
I think so.
Ben Miller (03:09):
Okay. I need Becky to say yes.
Raza Habib (03:13):
To be clear, because the comms team at Anthropic will get after me otherwise, I'm speaking entirely in my capacity as an individual here and not on behalf of the company I work for. But I think there's a lot of uncertainty and we won't know; making predictions is hard, especially about the future, as the old line goes. But I think there are enough pieces of evidence stacked up that make me feel like, yes, we're on a trajectory where this thing is happening and it's happening quite plausibly soon. For me, the last release of models that we had from Anthropic and also others, Opus 4.5, was a big update where I was like, "Oh no, actually things are still just improving on a very steep trajectory." And I don't see any sign of that slowing down.
Ben Miller (03:54):
Yeah. And we saw that, I mean, as a consumer, with Code Interpreter, and we immediately put it into a bunch of our products and it's awesome. I hope you guys don't move on from that too soon. But anyways, I knew you before you joined an AI research lab, when you were at HumanLoop, and I want to know, are you more AGI pilled? Were you already AGI pilled? What's your evolution personally?
Raza Habib (04:21):
I thought I was AGI pilled before I joined Anthropic. My story is I did a PhD in machine learning something like a decade ago. I came in to the PhD not very AGI pilled and then was consistently just beaten over the head by the success of deep learning. And I kept thinking, "Oh, X thing won't happen for however many years." And then it just kept happening. So by the time I joined Anthropic, I thought I was pretty AGI pilled already. Part of the reason I wanted to join is because I think this technology is likely to be transformative. I am much more AGI pilled now than I was before.
Ben Miller (04:51):
So it's not even like a red pill, blue pill. Apparently it's a spectrum.
Raza Habib (04:56):
Yeah. How powerful and how quickly. My probabilities that we will get transformative stuff soon have gone up for sure.
Ben Miller (05:03):
Well, I want to talk about what that could mean for society, for companies. I have a 14-year-old son. He keeps asking me what job he should do when he grows up because he's worried about AI taking his job.
He's asking, "What's the most resilient to AI?" And he's never been happy with my answer. So how do you respond to that? What should I tell him? He's going to listen to the podcast probably.
Raza Habib (05:29):
Yeah, I get this question a lot. Even before I was at Anthropic, I'd been working in AI for a long time, so people, especially parents, come to me like, "What should I teach my kid? What should people learn?" And I honestly don't feel like I have a great answer either. I can tell you some things that I think will be resilient to automation, but I'm not sure that's the answer you really want to hear. One would be things like professional sports, things that basically have their meaning because they have a large social element. I don't think any of us are going to go watch the Super Bowl with two teams of robots playing each other. I think that is, in my mind, something that's very fundamentally human and will stick around for a long time.
(06:07):
In terms of skills or what to learn, it is so hard to know. But I think that on a time horizon of 10, 20 years, which is what young people might be thinking about, I assume that machines will be able to do all the things that we do cognitively as well or better. Actually, I heard something interesting. I was listening to a podcast, the Reith Lectures with Rutger Bregman, the Dutch historian, and he gave these lectures this year on moral ambition and trying to get the most ambitious people of today to focus on things that will be societally beneficial. I think it's a great lecture series. He had this statistic, I don't know if it's true, but he said, if you go back 20 years or something like that, they did a survey of incoming undergrads at Harvard, and you asked them what they wanted to get out of their educational experience.
(06:52):
And a lot of people said things like an understanding of a philosophy of life or what it means to live well or something like that. You ask the same question today and people say they want to get a high paying job. Maybe some of the answer is revisiting those old questions. The things that will give life meaning and purpose are community and connection to others and craft. And it doesn't necessarily have to be economically valuable work that we end up spending our time doing, but it depends on how well we as a society distribute the benefits that come from the technology.
Ben Miller (07:23):
Yes. Oh, don't get me started. Now you're on my mission. I wanted to do this idea of walking through a timeline. This is obviously super contingent, but it gives, I think, people like me who are on the outside a sense of pace. And I generally find when I talk to people that they're really skeptical of the types of things coming out of San Francisco, coming out of the AI labs. They really are like, "I don't believe you, it doesn't make sense, it's all hype. You guys are just selling your wares." And so I don't know how specific you want to be, but I think a sense of timeline, year on year, the end of '26, end of '27, end of 2030, something like that, I'd be super interested to hear what you think is happening.
Raza Habib (08:16):
Yeah. I hear the skepticism a lot as well. And it's interesting, I often ask the people who are most skeptical whether they're using the tools, and I often hear the answer is no. And so I do think there's no substitute for playing with the models, trying stuff out. But Nat Friedman had this great line. He said they alternate between spooky and kooky. That's increasingly less true, but there are moments of true brilliance and then they make these dumb mistakes, and I think that undermines people's confidence. But yeah, on the timelines point, maybe we can go back a little bit and then do timelines up till this point and forwards, because I think it's helpful to get a sense of the pace of acceleration or how things have been happening. So I would say there was this moment back in the early 2010s when we got convolutional neural networks for images working, and that was considered this huge seminal moment.
(09:05):
People started to get excited about deep learning. The NeurIPS conference started to have more people with titles like VC and other things show up at what was previously a small academic conference. That was a little over 10 years ago now. And if you go back and you look at the things the models could do that got the world so excited, it was like, oh, they could look at a blurry 20 by 20 pixel image of a car and recognize that that thing was a car. And that was mind-blowing to everybody at the time. And then right up until, say, 2020, if you were talking about natural language processing or understanding, if you were buying a legal AI product on the market, what you were getting was something that could do things like classification and named entity recognition. So this is five or six years ago. What it would do is it could take in a contract and it could extract, with some difficulty, and I genuinely mean with some difficulty, all the named entities and the key terms of the contract, and that was really impressive in 2020.
(09:57):
And to build that system, you needed a team of experts to annotate thousands of example documents, and then you would train a specialist model, and the model could do just that thing. That's it. That's all it could do. And by 2022 with GPT-3, now you have models where, okay, you've got one model that can do extraction, can do classification, can do some amount of question answering, but it's still crappy. The early use cases of GPT-3 were basically just writing assistance of various kinds. It was like writing marketing copy; CopyAI and Jasper were the early successes there. And then you go forwards just a few years later, 2025, and you now have systems that are able to take actions on long time horizons and complete full tasks, such that we now have coding assistants where there are senior engineers at Anthropic who say to me they have not written a line of code themselves in a couple of months.
(10:53):
I said, "How are you checking all of this stuff?" And it's like, I still read every line, but I don't really feel like I need to. I do because it's belt and braces, but the fraction of the time that I'm actually finding errors now is pretty close. We've gone from blurry image being classified to, oh, I can reasonably just expect this thing to write an entire software application in the space of a very small number of years. And the progress in the last few years, I would say three or four years has been quite consistent with projections that were made on the basis of scaling models. So it's not just that we've had very rapid progress, but the people who made quantitative predictions on how much intelligence we would have as a function of model size have been borne out now across something like 10 orders of magnitude of scaling.
(11:39):
So that's just to say, it seems like we've been on an exponential for a while and the exponential could continue for a little while longer. So what would happen if it continued for another two years? It's very plausible that by the end of 2026, models can do the work of a senior-level software engineer end to end, completely, as well as any top engineer can, which is already, on its own, quite mind-blowing. I think we'll increasingly have agents that can do long-range, complex, judgment-based tasks. So we'll see a lot of enterprise automation. We might start to see the first early signs of models that are doing creative work or pioneering or solving problems on their own, but it'll still be early. These are my personal predictions. And I think we'll start to see continual learning starting to work a lot more. So today, if you get one of these AI assistants, it's maybe really smart on day one, but by day three, you're getting very frustrated at the fact that it's not gotten any better despite the fact that you've given it so much context.
(12:31):
Models are getting better and better at that, and I expect that to improve in a big way as well. And honestly, 2027 feels like a time horizon that's too far away for me to make any reasonable prediction.
Ben Miller (12:41):
Okay. So 2027 is too far away, which is next year.
Raza Habib (12:45):
I feel like I can see through the haze maybe towards late 2026.
Ben Miller (12:49):
Let me just ground some of the things you said in my experience and then come back to the 2027. So you saw this. How we got to know you originally was we were building a real estate AI product, and we did it very differently than I think a lot of the other AI applications, because we built a huge, huge data set that's updated every day, because it's not enough just to be smart like a person. You have to actually also know the information. You have to know what's the rent of this property.
Raza Habib (13:17):
Yeah, you have to have the right context.
Ben Miller (13:19):
I now tell real estate people, you can go get the smartest person who graduated from Harvard, but if he hasn't spent five years in real estate, he's not going to be that good at real estate. And so in the last 30 days, we now could produce pro formas and underwriting memos end to end; the real estate professional, the analyst, it can do a lot of their work. This is, as you said, the rate of progress. In the last 12 months, it went from a glorified Google search, or something closer to Google search than intelligence to me, to doing whole tasks. The job of a real estate analyst, and I'm using an example other than software because people aren't as familiar with software programming, is looking at a lot of data, synthesizing that down to a conclusion and key data points. It might be five pages of data, turning that into an output like a pro forma or an investment memo, and it can do all of that, all of it.
(14:16):
And that's today. And nobody in the real estate industry is using it yet. We are, but whatever. Basically, it's so new that the diffusion hasn't happened yet.
Raza Habib (14:25):
It's crazy how early it still is. I think a lot of people mistakenly think that the lack of diffusion is a signal that there's snake oil here or that there's no there there in the technology. And I actually just think that part of what's going on is that the rate of change is so fast that you have to revisit the same task every couple of months to have a sense of the capabilities, because you try something two months ago and it's like, oh, it doesn't work. Two months later, it might work. So having an accurate mental model of the frontier capabilities is very difficult unless you're just trying things a lot. What I find now just in my personal day-to-day life is if I have a hard task to do, I will try to give it to the AI first. And even if I don't expect AI to be able to do the thing, I just try it with the model first because I want to have a good intuition of where the boundary is, and I keep being surprised by, oh, that thing I didn't expect it could do yet, now it can.
Ben Miller (15:17):
Is that a special internal model, or are you doing that with the same model I am?
Raza Habib (15:22):
I can't really say.
Ben Miller (15:25):
Okay, forget it. I definitely don't want to cross any lines either.
Raza Habib (15:29):
That's totally fine. You can imagine that we have slightly early access to things internally, but nothing I'm saying I think wouldn't be true were I to be outside.
Ben Miller (15:38):
Yeah, for sure. The edge may be different than my edge. I mean, we haven't adopted ... It's hard when we're in the application layer to change the underlying models for certain tooling at the rapid pace, because you have to change it. You basically have to change a lot. You can't just change the model, because all the prompts and the language, all the things don't work the same way. Tooling ends up being different. That's a minor diffusion problem, a very logistical one. The biggest diffusion challenge is human behavior. It's interesting because it's almost like the big AI companies, they don't even care if the person ... I mean, they do care, but the edge is usually the adoption, product market fit, but in this case it's not product market fit, it's something else. As a company at scale, that's something new.
Raza Habib (16:24):
I agree with that.
Ben Miller (16:25):
Let me just go back to something else you said when you were doing the lead up. I had had this impression that the scaling laws, not laws, but they call them laws, that the scaling progress had diminished, and that what was happening was a lot of application work, all these other things, to make progress. Your opinion, in broad brushstrokes?
Raza Habib (16:46):
I'm definitely painting in broad brushstrokes, but no, I don't think you're misunderstanding. I think that there are many different dimensions along which we can scale. So you're scaling the size of pretraining, you're scaling how much RL you're doing, and then there are continuous small innovations in that whole process as well that allow you to unlock effective scale, because you figure out how to unhobble the model in some way. So I think that you can map all of this onto compute in some way. How much effective compute is the model able to use in its training process across all of these different dimensions? And then as a function of compute, I think we're still seeing power law returns to scale. It was always a power law, so there are still always diminishing returns. You have to scale more to get the equivalent decrease in the loss of the model.
(17:29):
Though the thing that's maybe not so intuitive is, when you're training these models, there's some loss curve that's going down as the model gets better. And the same amount of change in loss, going from loss five to loss three, is not necessarily the same in terms of capabilities as going from three to one. Even though those are both reductions of two, improvements late in training might be small improvements in loss but very big improvements in capability. So it's not that it's exactly a power law in capability; there are some diminishing returns to scale, but it's still giving returns. There's still lots of benefit to scaling.
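One commonly cited form of these empirical scaling laws, from the Chinchilla paper (Hoffmann et al., 2022), given here as a sketch rather than a claim about any lab's internal curves:

\[ L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} \]

Here \(L\) is training loss, \(N\) is parameter count, \(D\) is training tokens, \(E\) is an irreducible loss term, and \(A, B, \alpha, \beta\) are fitted constants. The power-law exponents are why equal loss reductions get more expensive as you scale, even though, as he notes, small late-training loss improvements can still buy large capability gains.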
Ben Miller (18:04):
Okay. So now let's go back to this fuzzy 2027. I'm many layers outside where you are, but I do spend time trying to advocate ... I'm in Washington DC. I try to advocate to people that the policymakers aren't thinking about this the right way. In my little way, I did a podcast in December basically saying, let's just stop talking about whether AI is a bubble and ask: what if they're right? What if this happens?
Raza Habib (18:29):
Yeah. I've been thinking about this a lot.
Ben Miller (18:31):
Yes.
Raza Habib (18:32):
I think if you take the perspective of a policymaker, I understand why they are skeptical. They are faced with this uncertainty. There's a huge amount of uncertainty. There's a lot of noise. There are people on both sides. Some people like myself or others who are saying, "Hey, we think there's going to be this very transformative thing in the very near future." And then you've got other people who are saying it's all hype, it's a bubble, don't worry about it. And they've got these very quotidian problems that their voters are actually caring about today and someone's asking them to take very drastic action on this very uncertain thing for the future. It's very reasonable for them to have skepticism. And so I think there's two things that I think about there. One is the payoffs here in the two scenarios are very different. If you imagine that AI turns out not to be that big a deal and it fizzles out or it's not very extreme and you acted early and you started campaigning for some kind of redistribution or you campaigned for other policy that would help mitigate the economic impact, you probably end up looking pretty embarrassed.
(19:31):
Maybe it's not good for your career. I can understand that there's a risk there, but the risk on the other side, if it turns out it is transformative, that we have huge labor impacts, that we have AI systems that are smarter than most people in the not too distant future, and you don't act? This is potentially quite catastrophic. And so I do think that that asymmetry at least should motivate some amount of action. And then I think the other thing that's really important is I think that the frontier companies and the other people in the space need to bring the data. It's not enough to tell people, "Hey, we think that this stuff is going to be transformative," because I think the skepticism is very warranted. And so I think things like the Anthropic Economic Index or other projects where people are trying to measure what we're seeing already in the economy today can at least give policymakers some cover and some foundation on which to do bold things.
(20:23):
But I think that we need to take much bolder action, and much sooner, than most people realize. I think people should start thinking about it and campaigning now. I think there will be a crisis that will come at some point as a result of this. I forget the exact Friedman quote, but when the crisis happens, the ideas that are lying around are the ones that get picked up. Now is the time to start seeding the ideas that we want people to pick up.
Ben Miller (20:44):
Yes. Let's do government policy a little later because it's most high level. I have to run a fairly large investment business. We have lots of real estate. For me, one of the lessons I've learned, because I've been doing real estate investing and some tech since 1999, that's a long time, I went through 2001, I went through 2008, went through 2020, my takeaway is that the macro is everything. Macro is the most important thing. If you get the macro wrong, everything else is wiped away. You can be a complete idiot, get the macro right, and you look like a genius. And I think the AI consequences are very, very macro; you're saying potentially it would be very extreme. Let me just walk through some easy economic questions because that sets the stage for what the policies would need to be. So is AI inflationary or deflationary?
Raza Habib (21:31):
Why would you expect it to be inflationary?
Ben Miller (21:34):
Well, I don't, but everybody thinks inflation's the problem today and they're obsessed with inflation. And I'm like, don't you see that there's this massive deflationary wave coming? I mean, it was a softball. So okay, what does it mean that it's deflationary? Tell me how that works its way into prices.
Raza Habib (21:50):
I feel like you're taking me now outside of my circle of competence. I don't know whether I have a right to opine here, but my intuition is that if you can do a lot of the work that you were doing before at far lower cost and you can make it more efficient, that should be a deflationary process. It seems to be intuitively hard to see how you manage to do a lot of existing processes more cheaply and drive prices up.
Ben Miller (22:14):
Yes.
Raza Habib (22:15):
But it is complicated. We're also dramatically increasing demand for energy and data centers and chips. One part of the economy is having ... But overall, it feels intuitively that it should be deflationary, but I am not an economist.
Ben Miller (22:27):
I have an economics degree, but this is one of those things that economists always get wrong. And so it's highly uncertain because there are also these reflexive dynamics that surprise people. But yeah, I think it's deflationary. When I think about the internet, what did the internet do? E-commerce, fundamentally just electronic commerce. It ate into retail: the amount of retail square footage, the number of retailers, department stores, they eventually got eaten up by e-commerce. And so I think about AI the same way; fundamentally it does cognitive labor. It will have an effect on the job market. And then there's this other side of the equation, which is that it's productivity driving, it creates wealth, it drives efficiency. In the 2010s, you had both, because you had massive inflation in San Francisco and New York, and you had massive deflation in the Rust Belt, and the Rust Belt was collapsing because of the China trade while there was a huge cloud and mobile technology boom on the coasts.
(23:23):
So we could have both, in America and in the world. Having thought about AI's effect on the world, AI is mostly an American product.
Raza Habib (23:33):
Consumed globally.
Ben Miller (23:35):
Consumed globally
Raza Habib (23:36):
Produced predominantly.
Ben Miller (23:37):
To me, it could become the biggest export of America in a few years.
Raza Habib (23:40):
Yeah, it does seem very plausible. Globally, it feels today like there are essentially only two players, us and China, and China seems somewhat far behind even now. I mean, catching up potentially, and we can't be complacent, but still not at the frontier.
Ben Miller (23:52):
It seems different too. It's hard for me to say here, but it just seems such a different kind of AI ultimately.
Raza Habib (23:59):
But they're also training frontier large language models. I think the politics of China will make it different. But again, I try to stay within my circle of competence, and this is definitely on the boundary of it. But my intuition, again, if I was to give my man-in-the-pub opinion, and people can take from this what they want, is that from what I understand of the CCP, they don't like things that are variance increasing. There's some sense in which things that threaten their supremacy are difficult for them to pursue as aggressively. And I think AI is a challenging one for them, because on the one hand, it has the potential to give enormous power to whoever produces it. And so maybe for that reason, you could expect them to pursue it very aggressively. And at the same time, I think it has the potential to be a bit destabilizing.
(24:41):
And I don't know enough about politics in China to know where that's netting out right now.
Ben Miller (24:45):
Yeah, yeah. Who does? So I was just trying to set up the fundamental economic challenge that the policymaker has to address. And I think there's an economic one, which is that if a lot of cognitive work, which is mostly white collar work, knowledge work, whatever you want to call it, gets done or co-produced by AI, then there ends up being some displacement or job suppression. That's problem one. Then, so much of policy has been downstream of media. Social media had this huge effect on policy, and AI is going to change media radically. So radically, I think, that people can't appreciate it. I don't even know what media is once you have AI.
Raza Habib (25:29):
As I said, I struggle to see beyond the fog of the end of the year. It seems not implausible to me that, maybe not in a couple of years, but not that far away, we have AI systems that are not just smarter than any given human, but smarter than all humans. And again, this is just a very difficult world to reason about when you have the equivalent of a country of geniuses in a data center, which is the line that Dario uses, and I really like that line. It's very hard to make forecasts. I do think the uncertainty is so high on what the impact of that is.
Ben Miller (26:02):
But I haven't heard great ideas lying around that somebody's likely to pick up. UBI feels like a terrible idea to me. It doesn't seem like it addresses what people really want out of their life.
Raza Habib (26:16):
Yeah. What I worry about with UBI as well is that it concentrates power. It feels like it will make the majority of people somewhat powerless or dependent on the state. It's very difficult if you're fully dependent for your livelihood on the state to really have any meaningful freedom or ability to protest or complain. It could be pretty uncomfortable. I guess historically, you have balance of power between elites and everyone else through labor and violence. Violence doesn't go away as a threat, but labor might, and labor is a big negotiating lever here that maintains the balance of power in society.
Ben Miller (26:52):
So what are some of the ideas you think you've got to place strategically around the decision makers for when the crisis happens? Do you have a top three?
Raza Habib (27:00):
I honestly don't right now, and it is something I've been trying to think about, and to speak to people who are policy experts more. The role I think I can play, and where I feel I can personally add value, is helping people who are in politics to understand the state of technology, the rate of change, the likely implications, and motivate some form of action, get people to believe that something drastic is needed. I don't feel like I'm in a good position to suggest solutions, but it feels to me that some form of redistribution from the AI companies themselves, from the people who are generating the huge amounts of wealth, towards everyone else will be necessary. But the details of what that looks like or how you do it, I don't know. Again, I work for one of these labs, so I've got to be a little bit careful about what I say, but in my capacity purely as a citizen, I would love to see some form of redistribution from the AI wealth and profits towards the rest of society.
Ben Miller (27:57):
Yeah. I mean, typically we try to do that through an ownership society, through our 401k.
Raza Habib (28:01):
Yeah. Taking equity is one possible way of doing that.
Ben Miller (28:04):
I actually thought that some AI lab would offer to put some of their equity into the Trump accounts for every kid in America. I sort of imagined that idea is something somebody would suggest for lots of reasons.
Raza Habib (28:16):
Yeah, I can see that, for lots of reasons. Yeah, maybe. I mean, one of the things that I think is amazing, that is true at least for us, is that there's a donation match. A lot of the employees at least, and the founders of the company, have pledged to donate the vast majority of their ... Or the founders, certainly; it's public that all the founders of Anthropic have pledged 80% of their wealth. But if you're an employee, you can donate a significant fraction of your stock and the company will match that. But that's scratching the surface. It's nowhere near enough of what we need to do.
Ben Miller (28:48):
Yeah. And the problem, unless you've had this job, you don't really appreciate: making money, like building things, is a skill, and actually managing money is another. A lot of people who make things end up going bankrupt. They can't manage it after they've made the wealth. And there's an entirely different skill and capability and whatever philosophy to actually being able to give away money successfully.
Raza Habib (29:07):
Oh yeah, absolutely.
Ben Miller (29:09):
And so the challenge is you create a whole new problem that we're bad at, which is, okay, fine, OpenAI has this massive foundation, but we don't actually know how to give away money and be effective.
Raza Habib (29:20):
Yeah. I think it's well documented that there's a big difference in outcomes, or quality-adjusted life years, between the median charity or philanthropy and the stuff that's in the top 1%. It's not like it's 20% better or 30% better. It's often orders of magnitude better. So the difference between doing philanthropy well and doing it badly is often pretty extreme.
Ben Miller (29:40):
And there's a scale challenge too. I think you're saying this, so I might put words in your mouth, but I don't think good ideas come out of politics. I don't think they come out of the people who are in power. So going to them and saying, "You need to come up with some good ideas because this is going to be a big crisis." I grew up in Washington DC. I live in Washington DC. That has never, in my memory, been how I've seen good change happen.
Raza Habib (30:02):
I believe you, and I wish I could come to you and say, "Hey, these are the three things I think we should do." But honestly, I don't have good answers myself right now. It's not something that I feel qualified to answer, and it's unsatisfactory, but it's the truth.
Ben Miller (30:17):
I'm not demanding it, and I don't think anybody's qualified, just to make you feel a little better about suggesting ideas. Let's try to go to another area. So if AI progress were to slow and plateau, why would that be the case? I'm sure people have thought about this, but you're not expecting it.
Raza Habib (30:35):
I'm not expecting that, but there's a bunch of reasons why it could happen, and I still put significant probability mass on that possibility. It's not that I think we're 90% sure to get AGI or superhuman AI, whatever you want to call it, transformative AI, in the next couple of years. I just think the probability is high enough that it's rational to start behaving as if it's going to happen and preparing for that outcome. And the more I see, my probabilities have only been going up, not down, for some time. But I guess if you say, "Hey, Raza, it's 2032. It turns out that AI fizzled out. It didn't happen. What happened? What's your story?" I guess my explanations would be, I mean, one is there is just some genuine science risk. These things that we call scaling laws are empirically observed. And so there's no guarantee that this thing continues to work as we scale.
(31:19):
That's one possibility. I think more likely possibilities are factors from outside of AI end up actually impacting the industry itself. So you have some global unrest. I don't know, China invades Taiwan, or something like this happens that disrupts the semiconductor industry, and then we can't get the compute capacity we need. In those kinds of situations, then yeah, I think AI progress would stall. What else do I think could happen that would make progress stall? Compute and energy constraints really do feel like a big one. There's stuff you have to do in the physical world that takes a long time, and if that becomes a rate limiting step, then it'll take longer.
Ben Miller (31:56):
Yeah, I'm familiar with that part of the world. I was loving it when people were obsessed with AI being a bubble and the amount of data centers people were going to build, whatever, two months ago. I was like, "Have you ever built anything?" Good luck building all of these data centers on the timeline they're talking about.
Ben Miller (32:13):
So hard for so many reasons. There are just a million reasons why building things in the real world in America is hard. Maybe in China, you can just clear mountains and things like that. But here, I mean, it's so hard.
Raza Habib (32:26):
It could be a rate limiting step.
Ben Miller (32:28):
Yeah, slowing. Okay. So do you have a contrarian take right now, given a lot of what AI research lab people say? Do you have something where you're like, "Well, I think that they're right about a lot of stuff, but here's where I think they're wrong"?
Raza Habib (32:44):
It's a question of degree. You asked how AGI pilled are you and where are you on that spectrum. If you buy the most extreme timeline predictions, or some of the most extreme, I don't know, you read some of the superforecasting results, or Dario thinks like maybe 2027, 2028 we get a country of geniuses in a data center. Let's say that you buy that timeline. What happens six months after that? Progress is reasonably fast right now. You've got a few thousand good researchers at the top labs doing their work, but you're rate limited effectively by how many smart people you have. What happens when you've got some form of recursive self-improvement? I find it really hard to make predictions. So I don't know, do I have a take that's particularly contrarian to the labs? Honestly, probably not. There's a reason I fought my way to be here.
(33:30):
I think that this is going to be transformative. I think it's very important that we do everything we can to try and make it go right for society. I think that there is both huge potential upside, but also genuine, very real risk. And trying to capture the upside and mitigate the downside just feels to me like the most important thing we can do in the world right now. And so in some sense, yes, I have the same beliefs as the other people here, because that's why I'm here. I used to be doing something else. I changed that to get here.
Ben Miller (33:58):
In the last couple weeks, the public software as a service businesses, the public SaaS companies, have been getting gored by the stock market and they're down tens of percent. Some of them are down 80%, 60% in two weeks, because the market is afraid that AI is going to eat up a lot of their business. Do you have a more nuanced take on that?
Raza Habib (34:20):
I guess yes and no, in that I think it's directionally correct. I think the marginal cost of producing software is going down. There's this think tank, the Golden Gate Institute, and one of their founders had a blog post that I thought was really excellent. There's this analogy that people have been using for AI for a long time where they talk about what happened to factories when electricity came in, and the fact that it took a long time for electricity to actually make a difference. And the analogy's been kind of crude up till now. The analogy was factories used to have this single central power shaft and everything connected off of it. And so when you brought in an electric motor, you initially just replaced your steam-powered engine and nothing really changed inside the factory, so you didn't really see some benefit until people shrunk down their electric motors to be small enough that every piece of equipment had its own motor, and then the entire factory could change.
(35:06):
And people have been using that as a metaphor to say that AI diffusion will be slow because you have to restructure things. This is the first time where I actually think the metaphor is extremely spot on. Because when you think about building software, building software for an individual or for a small group of people that runs locally is not that hard. There are product challenges, et cetera, but the smaller the group of people and the more bespoke the software is, in some sense, the easier it is, because you're really building for that person. What makes software products hard is partly infrastructure challenges, distributed systems, scaling, and partly product, because you're building for a distribution and you have to understand how to build something that's delightful to a group of people when you can't go and speak to all of them.
(35:50):
And what AI is making possible is you can shrink down that engine, where you had that one big steam engine that was powering the entire factory, and give every person an individualized piece of software. Now, not all software products fit that description. There are some things that are inherently multiplayer, where you really do want something to be in the cloud for the software to work. But for things where you really could imagine there being more bespoke versions within a company, where they might be paying a huge amount of money to an external company for something they could reproduce themselves, it seems very clear to me that that's going to happen. Anytime you would think about build versus buy, build has suddenly become phenomenally more appealing than it used to be.
Ben Miller (36:32):
I know what that's like. Our team, for the first few years, was really biased towards build, and then we learned that we should mostly buy. And so now it's basically, oh, we should actually build again. We're not a normal organization. We have a lot more software programmers than most companies. So you have to get to a place where normal people can become product developers, essentially.
Raza Habib (36:53):
I think we'll get there. It's very interesting right now. I think the roles are all blurring. The boundary between PM, designer, engineer, all of these are starting to overlap way more.
Ben Miller (37:03):
I love it.
Raza Habib (37:03):
And it's much worse. Yeah.
Ben Miller (37:05):
For me, it's great.
Raza Habib (37:05):
I think for the high agency ideas person and for the CEO of the company or whatever, it's wonderful. You'd be amazed how many C-suite executives I speak to who are like, "I'm all over Claude Code every weekend" or whatever.
Ben Miller (37:17):
This is something I texted you about. How do you build software in an AGI-pilled way? Is that a thing?
Raza Habib (37:23):
Yeah, I think so. And I think you've probably experienced this a little bit yourself. The most important part of it is, one, to be building with the expectation of significant improvement in the models. What does that mean? You should be able to ask yourself: someone hands me a model from six months in the future that's significantly smarter. Do I need to rearchitect my product to get the benefits, or will my product just instantly improve? What that means is, if you are doing a lot of hard-coded workflows with a lot of deterministic logic, where you're relegating the AI model to making small decisions and you're not allowing it to orchestrate the flows or take key decisions because you're trying to mitigate risk, then you are going to have to re-architect a lot to get the benefit when a new model comes in.
(38:08):
Whereas if you trust the fact that the models will get smarter, you put more of that logic into the model, you allow agents to run the show, say, and you put the guardrails and the safety in, then I think you're going to end up with a thing where, as the model gets smarter, your system just naturally improves. So I think that's one way to be AGI-pilled in how you think about product development. I think the other way is in how you think about pricing as well. We're used to pricing software in ways where we charge on the basis of seats or things like this that are somewhat fixed, and that makes it hard for you to constantly be adapting. Then you have to really worry about margins. You have to start thinking about whether or not to adopt the smartest model, because usage of that model goes into your COGS.
(38:50):
And I think that shifting towards more usage-based pricing, and thinking about how do I price my product in such a way that I can pass on some of those costs to my user, actually means I'll be able to give my user a better experience they're willing to pay for, because I can always be using the most frontier model. Whereas if I'm trying to pack everything into an all-you-can-eat price of some kind, I'm going to constantly be thinking about optimizations on cost. So that's another way in which I think you can be AGI-pilled in product development. Honestly, the way I think about it a lot is how do I get out of the model's way? How do I get the right context into the model? How do I make sure the model has the right permissions to access data in a way that is trusted and secure, where it has sandboxed execution environments?
(39:33):
How do I put the scaffolding in place so that then I can safely trust the model to get on with things and benefit from the exponentially improving capabilities of the model?
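A minimal sketch of the pattern he describes, assuming a hypothetical model API; every name here (MODEL, call_model, the tool names, the pricing rates) is a placeholder for illustration, not a real library or real prices:

```python
# The "AGI-pilled" loop: the model, not a hard-coded workflow, decides the
# next step; guardrails constrain it; usage is metered for cost pass-through.

from dataclasses import dataclass

# Swapping in a smarter model is a config change, not a re-architecture,
# because the orchestration logic lives in the model itself.
MODEL = "frontier-model-latest"  # hypothetical model identifier

# Guardrail: the agent may only invoke tools we explicitly allow.
ALLOWED_TOOLS = {"read_data", "draft_memo"}

@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0

def call_model(model: str, messages: list, usage: Usage) -> dict:
    """Stub for a real LLM API call; returns the model's chosen next action."""
    usage.input_tokens += sum(len(m["content"].split()) for m in messages)
    usage.output_tokens += 5
    # A real model would pick tools over several steps; the stub just finishes.
    return {"action": "finish", "result": "draft memo complete"}

def run_agent(task: str, max_steps: int = 10) -> str:
    usage = Usage()
    messages = [{"role": "user", "content": task}]
    step = {"action": "finish", "result": ""}
    for _ in range(max_steps):
        step = call_model(MODEL, messages, usage)
        if step["action"] == "finish":
            break
        if step["action"] not in ALLOWED_TOOLS:
            # Guardrail, not workflow logic: refuse and let the model retry.
            messages.append({"role": "user", "content": "Tool not allowed."})
    # Usage-based pricing: meter tokens and pass costs through with a margin
    # (illustrative rates), instead of eating them inside a flat seat price.
    cost = usage.input_tokens * 1e-5 + usage.output_tokens * 5e-5
    print(f"Billable cost for this run: ${cost * 1.2:.6f}")
    return step["result"]

if __name__ == "__main__":
    print(run_agent("Draft an underwriting memo from the attached rent roll."))
```

The design choice the sketch illustrates: because the model chooses the actions, a smarter model improves the product without re-architecting, and because usage is metered per run, adopting a more expensive frontier model becomes a pricing question rather than a margin crisis.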
Ben Miller (39:44):
Yeah. Right now, my biggest challenge with our real estate AI is that the user struggles with how to use AI. They're not used to using language as the interface. They're not used to blank screen problems. There are all these challenges around, again, maybe this is just the state of AI, but the people are the bottleneck.
Raza Habib (40:04):
You'd be amazed at how good the models can be at elicitation as well. And actually you can make this the AI's problem rather than the person's problem. Going to a prompt box and having to figure out what to put in is very daunting, but if you can put in something that's just a starting point or there's some suggestion and then the model can elicit from you more information, ask good questions, make proposals, I think you can get a lot further. And actually the models are pretty good at this already.
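As a sketch of what that elicitation loop can look like in practice; ask_model is a placeholder standing in for a real model call, and the question and task text are invented for illustration:

```python
# Make elicitation the AI's problem: instead of a blank prompt box, the model
# asks a few clarifying questions before committing to the task.

ELICIT_PROMPT = (
    "Before doing the task, ask the user up to 3 short clarifying questions, "
    "one at a time. When you have enough context, reply with FINAL: <plan>."
)

def ask_model(history: list[str]) -> str:
    """Stub for a real LLM call; a real one would send `history` to a model."""
    if len(history) < 4:
        return "What's the property type and target hold period?"
    return "FINAL: draft a 5-year pro forma for a multifamily asset."

def elicit(task: str) -> str:
    history = [ELICIT_PROMPT, task]
    while True:
        reply = ask_model(history)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        history.append(reply)               # the model's clarifying question
        history.append(input(reply + " "))  # the user's answer

if __name__ == "__main__":
    print(elicit("Help me underwrite this deal."))
```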
Ben Miller (40:32):
Yeah. I don't want to get too far down this little rabbit hole, because people find when the model's asking them for stuff, they get annoyed. They're like, "No, no, just leave me alone."
Raza Habib (40:40):
You can definitely put too much on the user. You see great examples of this in products like Claude Code that have, for example, plan mode. They've thought of good ways to get feedback from the user in a high bandwidth way, sort of when to come ask for permission versus just do stuff. But those are also products that ask you to give a lot of trust.
Ben Miller (40:59):
As we sum this up, the thing I feel like I'm going to try to articulate, well, I almost want you to articulate it, but I'll take a shot. People don't like inequality, not just in wealth, but if a thousand people are making all the decisions; it's power, wealth, popularity, things like that. And you're talking about a world where that's becoming ... The nature of the technology amplifies certain people way more than others. And so we're heading into a world that's not going to be well liked for this transition period. When you get to the other side, maybe there's enough abundance, maybe we figure something out, but we've got to get there. The societal consequences you keep bringing up. But do you have any parting thoughts, recommendations for individuals, companies? It doesn't have to be some magic policy prescription.
Raza Habib (41:50):
At the start of the conversation, you said to me, there are still a lot of people who are very skeptical, or just not that many people who know yet. And so I feel like the first thing we need to do is persuade people that this thing is real. And persuading yourself as well, because if we're going to get the politicians and others to take the kind of drastic action that I think will be needed, then they have to believe that there is a crisis unfolding. And I don't think we're quite there yet. I think there's still a lot of skepticism, and the persuading can come bottom-up as well. So I think we need to persuade more people that this technology is real, that it's not snake oil, that the consequences are real, and we're going to have to bring the receipts to do that.
(42:30):
So to the extent that we can communicate to others with data and with examples and really show that this is actually happening, I think that's a very powerful thing in and of itself. And then at the individual level, I find it very unsatisfactory. I think that we need to try and find ways to democratize control or decision making about what's happening, but a prerequisite of all of that, in my mind, is first accepting the situation we're in and persuading more people of the reality, because people are just not very well oriented right now.
Ben Miller (43:00):
When I listen to all the things you're saying, it reminds me of every major war we ever got into. World War II, people were fighting against us getting into it until Pearl Harbor. World War I surprised everybody, even though in retrospect, you look back and see it coming. So it feels like there's no getting ahead of the crisis realistically, and there are just things you wish you'd done ahead of time when the crisis happens. Now, maybe this is too pessimistic, but I think the lesson of history is that you generally don't see governments get ahead of these types of crises.
Raza Habib (43:34):
All of this makes it sound very doom and gloom. And I think that there is reason to be concerned and there are things that we need to do. And I am personally doing things to try and get more people aware and to drive advocacy. At the same time, I do think it's worth thinking about what the potential upsides are if this goes well. And there are so many things that I'm very personally excited about, in terms of being able to give every person on earth access to, in their pocket, potentially an angel of their better nature, something that can allow them to self-actualize their own ambitions, that can act as that little coach and partner that nudges you in the right way, that helps you learn things. Your 14-year-old son is trying to figure out what he wants to do with his life. In some sense, it's a question of discovering your values and finding out how to get there.
(44:21):
And the teachers that I remember or the people who've had the most impact on my life were very emotionally aware. They helped me self-actualize in some way, but it's a very rare thing. You have to be very lucky to have someone who can see you in that way, who can understand what you care about and who knows when to push you or not push you and give you that nudge, et cetera. We can maybe give that to everyone. I think we could potentially see huge impacts in healthcare, potentially see huge impacts in science and drug discovery. Those ones will need progress, not just in AI, but in data and in other ways as well, but there are huge upsides here as well. And also it might be very deflationary, just a huge amount of wealth might be created in general. We just need to figure out how to distribute it.
(44:59):
So I try to keep both sides of this equation in balance and yes, we need to take pretty drastic action, but if we succeed in this, I think there is a lot of upside on the other side too.
Ben Miller (45:11):
Yeah. I mean, it's not like war; it's a golden swan, not a black swan.
Raza Habib (45:14):
Yeah.
Ben Miller (45:15):
Essentially, you mess it up because you're about to get this massive amount of abundance, and the mess up is that you don't figure out how it's shared in a way that everybody wins.
Raza Habib (45:25):
Yeah, that seems right to me.
Ben Miller (45:27):
I mean, that's definitely what I believe. And I'm super excited for you. I'm so glad you're just crushing it, even working on some new product, whatever it is.
Raza Habib (45:36):
I'm excited to be able to talk about it soon.
Ben Miller (45:38):
Yeah, you've got to let me know. Well, hopefully this podcast did a little bit of work in the advocacy. Onward.
Raza Habib (45:45):
Yeah, absolutely. Thanks so much for having me, Ben. And hopefully the podcast raised awareness a little bit, and I do think that people should start thinking about what are the mechanisms for distributing the benefit.
Ben Miller (45:57):
You've been listening to Onward featuring Raza Habib, member of technical staff at Anthropic. My name is Ben Miller, CEO of Fundrise. We invite you again to please send your comments and questions to onward@fundrise.com. And if you like what you've heard, rate and review us on Apple Podcasts. Be sure to follow us wherever you listen to podcasts. For more information on Fundrise sponsored investment products, including relevant legal disclaimers, check out our show notes. Thanks so much for joining me.