The text below is a transcript of the audio from Episode 41 of Onward, "AI Policy under the Trump Administration, with Neil Chilson".

---

Ben: Today's guest is Neil Chilson. Neil is head of AI policy for the Abundance Institute and one of the foremost policy leaders likely to advise the incoming Trump administration on AI policy. He was the CTO at the Federal Trade Commission, a senior research fellow at Stand Together, an attorney, and a software developer.

With a master's in computer science, so he's not just a policy wonk. Before we get started, I want to remind you that this podcast is not investment advice. It's intended for informational and entertainment purposes only. Neil, welcome to Onward.

Neil: Thrilled to be here.

Ben: This is exciting. The way I thought about starting this conversation is by you and me having the main argument that is playing out in the public domain about AI policy. I'm going to take a position, not that I necessarily believe this view, but I'm going to argue it to try to see if we can tease out some of the challenges of both positions.

There's a position, and you're the expert here, but I would say it's most acutely held by the EU with the European AI Act, by the UK, and to some extent by the Biden administration. They're treating AI as so important, its potential so big, that they have to regulate it before gen AI can be broadly commercially adopted. And that to me seems very different, more like treating it like FDA drug development than like a computer or the internet.

Neil: Yeah, I think there is a sort of, I would call it a precautionary model that the EU in particular is taking, but that's reflected more broadly across some other policymakers as well, as you pointed out. And this has a pedigree that's, you know, earlier than AI; the precautionary principle is an old sort of approach.

The principle would say that before a new business model or a new technology gets a market test, first the government should vet it, try to examine the risks that might be there, and try to put a legal framework in place to make sure it's safe before we let it out into the market and see how people adopt it and use it.

The flip of that is the sort of permissionless principle, which is this idea that entrepreneurs and innovators should create a new technology, that they should be able to deploy that technology and see how the market responds to it, and then we'll deal with the problems that come up as they come up.

And so these two different approaches, I think you can sort of classify the EU as a more precautionary system and the US historically, especially in the software space, as much more of a permissionless space. That's not to say that applies to every specific application of software. You pointed out drug regulation at the start, and we do have frameworks around, say, medical devices that use software. And in some of those areas, we might say that, hey, before you can put a new technology on the market, we are going to have a rigorous set of tests that apply to it.

And so the question is whether or not AI fits into that framework, that type of category of a medical device, or if it fits into the general software category of I develop an app and I don't have to get the government's permission before I upload it to Apple. I might have to convince Apple that it's a good app, but I don't have to convince the government. And so I think those two frameworks are a good way to think about this.

And I think in AI, the challenge is that it's a general purpose technology. And so at the very abstract level about all AI, it's hard to make that distinction. It's hard to claim that all AI uses somehow fit into the drug or the medical device category. Some of them very clearly fit much more into the app or general software category.

And so approaching AI as if every use were essentially a drug or a medical device, I think, is overly precautionary. And the big problem there is that you might stop new innovation from happening just because you slow down the process of getting it out to market, and on the margin that reduces innovation.

Ben: I want to argue this with you, even though I'm very sympathetic to your point of view, but just to put a finer point on it: we have a change of administration coming from Biden to Trump, and that's likely going to mean a very different AI policy. I want to get to the question of how you think they're going to be different, but let's stay higher level for a moment and think about these two approaches and where they might fit.

So let me see if I can find a place where you might think the precautionary principle could apply to gen AI, because it's easy to imagine why, oh, Netflix uses an AI algorithm for picking what movies to recommend, or Spotify for what music to recommend, and that doesn't scare anybody.

But even Elon Musk has said that there's a 10 to 20 percent chance of AI being an existential risk for humanity. Most of the real AI experts have thought about that question a lot and come down in different places on it. And since nobody knows the answer, doesn't that mean we should be more precautionary?

Neil: Again, this is a really good question, and I would say I'm not an absolutist on this. I do think that the precautionary approach makes sense when there is a high risk of catastrophic, irreversible harm, and where we have some pretty good certainty that's the situation that we're in. To your point, I think there's a lot of different priors that people bring to this question around existential risk.

That's what makes this particular technology somewhat different than, say, the internet, if we had taken a precautionary approach to it: very early on in the history of generative AI, we have people really disagreeing about whether or not this is going to turn into something that causes catastrophic harms.

And there's a couple different ways to tackle this. You can disagree on the priors, and a lot of that is speculative. We're basically saying, Hey, I have my priors about how this technology is going to evolve. Because I think there's general agreement that at the current point, the current technology, the current generative AIs that we have, these are not creating existential risks.

This is not the sort of out of control artificial intelligence that's going to kill humanity. This is not the Terminator scenario. And so it all is speculative. Attacking that prior might make some sense. Generally speaking, I think we're still very much in the situation where we don't really know what the distribution of risk is here.

And to me, that suggests holding off on the precautionary approach. The strongest argument in the other direction is that if we don't do it now, the thing is going to spiral out of control. And that's the type of argument that requires pretty strong evidence, I think, when we talk about the potential benefits that we're shutting down here.

All of the mechanisms by which people talk about existential risk right now, at least the ones that I hear, are use cases of the models going out of control. So it'd be basically like somebody is going to use these models to generate new synthetic biological weapons, or somebody is going to use this to automate a bunch of things and then this is going to be a super persuader that's going to be able to convince anybody to do anything, and therefore the AI will be able to take over. I think those are pretty strong claims. I think they need pretty strong evidence to shift us from a permissionless framework into a precautionary one.

Even if we shift into that precautionary one, we might want to shift to it in specific application areas. So we might look and say, when models are being used in a synthetic biology environment, that we might need to have some guidelines around that. And do we have guidelines already on access to the raw materials that you might need to create synthetic viruses?

And maybe we need some new rules around that, especially in the context of automated design. So that's how I respond to that. More fundamentally, I think that a lot of the people who are concerned about existential risk overvalue intelligence as a means to get things done in the world.

No surprise, these are really smart people. And so they tend to think that intelligence is the primary mechanism by which things happen in the world. I think that's pretty wrong. The way we get things done in the world is collaboration with many other people. And so the smartest person in the world still honestly can't get that much done without getting other people to help them.

And so we as a society, we already have superintelligence, I guess is what I'm saying. It's the mass group intelligence of human society. We get so much done by our specialization, not because any one of us is super smart, that helps in some cases, but primarily because we have figured out mechanisms and institutions that coordinate our intelligent activity to the benefit of many other people.

We already have super intelligence, and I'm not really that worried that if we develop computers that are equivalently intelligent that they will somehow be able to capture that system because I think they'll have to, first of all, they're trained essentially on the outputs of that system, and they'll have to figure out how to negotiate and navigate that system if they're going to have any effectiveness at all.

And so, to the extent this somewhat science fiction future comes about, I think they'll look in many ways like components of the current very complex system that we have of society and economy.

Ben: I love the point you're making that intelligence is overrated because as an entrepreneur, you really learn that most of the things that happen are surprising, unpredictable, and you figure it out by doing, figuring out by doing rather than thinking is way underrated, not just the idea of being collaborative, but just that I don't care how smart you are, the world's still unpredictable.

But a lot of executives have a bias to action and they're very aggressive. If you watch it, people do very aggressive things and yet it works out for them, even though you feel like it shouldn't. It's because there's this power in action that I think is underappreciated intellectually. So I love that point.

I'm trying to steel man this argument that AI should be heavily regulated, because most of the actual regulators seem to have that opinion. I'm talking about Europe, the UK, even China to some extent has taken a very hands-on approach, and the Biden administration did too. So I'm going to keep trying to argue it, but let me note for a moment that most of the regulation has not been about the catastrophic risk. I went and read the EU AI Act, and the unacceptable risks as defined by the EU have almost nothing to do with AI becoming superintelligent and taking over the world. The things that they describe as unacceptable risks, which means they're not permitted, you can't do this at all, are cognitive behavioral manipulation.

Which, by the way, arguably TikTok and Facebook and media and my wife all do to me constantly. Predictive policing, which also seems like it could be smart in instances where the likelihood of somebody committing certain crimes might be worth knowing, not something to ban outright under the precautionary principle. Emotion recognition in the workplace to me was the strangest one.

I want to know if my team's happy. I want to know what kinds of problems are arising so we can try to take care of them. So much of organizational management is about emotions, not about information, yet that's banned as an unacceptable risk. It just seemed like the headline risk is not what the substantive details are about.

You could also talk about the Biden executive order. I'm switching teams for a moment so that I can highlight these issues and you can talk about them more fully.

Neil: Like I said, the pedigree of people worrying about how computers affect us is very old. So a lot of these concerns are ones that have been raised for a while.

In fact, if you go back to the very first, what I'll call it, the very first AI law in the U. S., the Fair Credit Reporting Act, if you listen to or if you read the transcripts of the hearings, it was basically like computers far away are making really important decisions about my life. And that seems unfair or unaccountable.

We want to make sure they're fair and they're accountable. And the rhetoric around that sounds very familiar to anybody who's listened to similar concerns about AI making decisions about us today, but that was in the 60s and 70s. This idea has an old pedigree. The challenge that it always faces is that it never asks: compared to what?

Take the Fair Credit Reporting Act. It turned out that if you were a minority, or just a disliked person, or you ran a business that your local banker didn't like, it was basically impossible for you to get a loan, because you only had that one guy who knew who you were and might possibly give you a loan. It turns out that having a more objective rating system, based on factors that were not your personal relationships or the color of your skin, was actually much preferable for a vast majority of people to the local banking officer system that we had in the past.

And I think that's true largely about algorithmic decision making. And I think you're right that a lot of the concerns that, for example, the EU Act is trying to address are sort of dreamed up hypotheticals about how companies might misuse this without really thinking about like, what might actually happen in the market?

What incentive do people have, to your point? What incentive do employers have to do sentiment analysis, except to try to make sure that their employees are happier? You can come up with all sorts of hypothetical reasons, Black Mirror type episodes, about how people might misuse this stuff, but what's likely to happen out there?

And to your earlier point, we honestly don't know a lot about how these technologies are going to be deployed until people try them out. And we'll see what some of these effects are, both the positive sides and some of the negative sides. And I think that's a better situation to make good decisions. And if we're not dealing with something that is like an existential risk, that makes sense.

We can figure out other ways to address the harms that might come about, or to limit them, if they're not catastrophic and irreversible. And so I think the precautionary approach here really risks losing a lot of the potential benefits from deploying these technologies.

Ben: Some of the arguments I've heard, again, pro regulation, when I listen to the tone of it, it actually merges in my mind with some of the degrowth arguments.

The degrowth argument is that this insatiable desire for progress is hurting us, that the actual net impact on humanity is that people are less happy and less healthy, and that we really have to understand the long-term impacts of something before we broadly commercially roll it out. Again, I'm just making these arguments.

Plastics, where it turns out microplastics are in everybody's bloodstream, leaded gasoline, global warming. There's lots of examples where the market really has failed. Usually the failure is where there's just such a huge impact. The bigger the impact, I think, the harder it is for the market to address.

The smaller the impact, the better the market handles it. But with these big impacts, we have concentrated benefits and diffuse costs, and the market struggles with reconciling that. That's why we have the Clean Water Act. But what's the argument for that here? Because it seems very hypothetical at the moment. That's the problem: the internet is in very widespread use, but how many people use gen AI? Maybe they use ChatGPT, that's probably it, or Copilot. So considering how little it's used in commercial applications, the regulations seem disproportionate. But then I'm letting my bias sneak in here.

Neil: I think it's a really great point. The traditional rationale for government intervention, if you ask law and econ folks, is in those cases that you talked about, where there are externalities that aren't really priced into the market. The fact that leaded gasoline leaches into the ground and causes lead exposure, people didn't anticipate that, and they certainly didn't charge the companies for it or use the product thinking about it. And if you had brought a precautionary approach to gasoline early on and asked, what are the safety effects of gasoline, I don't know that lead would have hit anybody's radar. They would have said, oh, it's an explosive substance, right? There's lots of harms that are much more obvious than the lead in gasoline. Global warming is another one that would have been very hard for somebody to think of as a potential effect of widespread deployment of internal combustion engines.

And so that is my main challenge to the idea of precaution up front. Those externalities, by their very nature as externalities, you can't anticipate them. If you could, we would have regimes that would easily price them in; the market would be able to. They're the sort of thing that has to be discovered through progress.

And that's not to say they aren't real harms. They are, obviously. The challenge is we can't anticipate all the harms up front. We can't anticipate all the benefits either. And so the question is, should we have stopped all gasoline production or internal combustion engines given what we know now? It's a nice hypothetical, but it doesn't help us much in deciding about AI, because we're in the situation where people were at the start of the internal combustion engine.

We're not a hundred years later, seeing all the benefits and all the costs that that sort of thing brought about. Human progress is never over. To your degrowther point, there's this sort of idea in degrowth that we can have a safe world, and safe is somehow defined as a binary: we have an unsafe world or we have a safe world.

No, we can have safer worlds and we can have less safe worlds. And I think technology solves problems, and then we have new problems to solve. That's the history of human progress. I think that if you look at it objectively, that history is extremely satisfying from a human welfare standpoint. When you look at the past 200 years, at life extension and the health benefits that we have, I think the story of technology has been one where, yes, it creates some problems, but it solves more problems than it creates.

And human life has gotten happier, wealthier, and healthier over those past 200 years. And so I think that will probably, very likely, be the story of AI. There's a lot of opportunity in this space. And so I don't think we should approach it with some different frame that assumes this time, with a precautionary approach, we'll somehow be able to predict the harms and still keep the benefits.

Ben: Let's come to the present for a moment, and then we'll go back to the principles later. So here we are, weeks away from a new administration. Most people aren't that familiar with Biden's AI policy, which was very different than Europe's, and I don't know what Trump's policy is going to be. I can imagine how it might be different, but it'd be interesting if you could contrast them, both generally and maybe with some specific ideas of what they would probably disagree about.

Neil: You're right to point out that the Biden administration did not take the EU AI Act approach. First of all, an executive order is not legislation, so there are limits to what the president can do in an executive order. I would say the vibe in the Biden administration executive order, and in the AI Bill of Rights, which is another document that the Biden administration put out, the emphasis is very much on how do we prevent the harms that might come from this technology?

And the harms that they're thinking about are often in the form of bias or privacy or safety and content censorship, misinformation, disinformation, a lot of those types of concerns. And I think the main difference that we'll see in the Trump administration will be a move away from those algorithmic harm ideas. Elon Musk might be worried about existential risk.

But I don't think the Trump administration is going to emphasize these types of bias or misinformation concerns that are pretty prevalent in the Biden approach. And so I see a real shift away from that type of misinformation, disinformation concern. Anybody who's listening probably heard at least once during the run-up to the election that AI was a big misinformation risk in this election, and that just didn't appear at all.

So I think there's going to be a shift away from that. There's going to be a shift towards national competitiveness. I think that the Trump administration is really worried about us maintaining our technological edge compared to China, especially in the AI space. And so I think you'll see more of, hey, we have these American champions.

We have these American innovators in this space; we shouldn't be knocking them down, because that's a favor to China. Now, obviously the Trump administration is no fan of some of these big tech companies, primarily for social media reasons, so there is a sort of cross-cutting current there. But I think there will be more of an emphasis on our need to stay competitive vis-a-vis China compared to the Biden administration, which in some cases was, for example, going to Europe to help them out with antitrust investigations against American companies because U.S. laws weren't strict enough to bring those types of cases here. I think you'll see less of that. I think you'll also probably see a lot more interest in how AI affects national defense.

How does it affect border security? What might be some applications in that space? Some of those raise, to my civil libertarian mind, some concerns. I worry a little bit about how government might use these technologies to police its own people. That's something that China does a lot. Hopefully we won't, but I think you'll see more exploration of those issues under a Trump administration.

So that's just some of them. It's hard to predict presidential positions on this stuff. And I would say the Trump administration is maybe harder than most to predict on some of this. But those are some general trends that I see.

Ben: To some extent, they're lucky because Gen AI's near future is a lot clearer than it was six months ago.

There seem to be scaling challenges recently. That means that we have more time, which I think is the most valuable asset in existence, and so all the more reason to take a lighter touch with regulation, because the chance that you have artificial general intelligence living in a computer within 24 months or something now seems very unlikely.

One thing I do worry about, and I wonder, maybe this is misplaced, but you've heard Bezos talk about it and you've heard Zuckerberg talk about it, is that there could be more targeted use of regulations by special interests to advance their own gains. Obviously all politicians do this to some extent, but OpenAI is the leader.

We've actually invested in OpenAI. We also invested in Anthropic. We've invested in these AI companies. I worry that this is a very personal administration in a way; it doesn't seem as technocratic as the last one, and there are some pros and cons to that. How do you worry about, or how do you think about, special interests in the coming administration?

Neil: That's a great point. And I think there have been concerns that people have raised about the sort of protectionist approach: we're the big company, we have the ability to comply with a bunch of complicated regulations that maybe startups don't, or that maybe open source developers don't, and therefore we might want those regulations.

Honestly, I think that's a little bit rare among the AI companies. Maybe some of the bigger tech companies, the Microsofts of the world, are much more familiar with that type of playbook. But when you hear a call for regulation from some of these companies, I think a lot of them are coming to it from a good-natured, well-intentioned place. I still think that can have the effect of raising barriers to entry, even if it's well intentioned. I do have worries about that. But to your point about the administration, you're right that this is perhaps a more transactional administration than some in the past have been.

I do think that there are limits to what any one administration can do. Real regulatory regimes in this space have to come from Congress, and Congress does not move fast. They certainly don't move fast on this kind of stuff. But there is some need for Congress to do something, because the states are moving fast.

There are 700-plus pieces of state legislation that have regulatory effects on AI companies. And so it does make sense, I think, for the federal government to step in here and talk about what some of the key principles are. And we could get those kinds of key principles around permissionlessness out of a Trump executive order on AI, which would actually line up nicely with some of the executive orders that he put out in his previous administration.

I think that limits the ability to come in and ask for that, or it damps down the appetite for it. And I think there will be an increasing appetite even from the big companies to say, hey, the federal government needs to step in here so we don't have a patchwork of 50 different heavy red-tape policies that we have to comply with in order to even get our company off the ground.

And I'm not super concerned about that regulatory capture angle at the federal level, just because that's a hard thing to do at the congressional level. At the agency level, I do have more concerns. To the extent that these companies are bringing applications to the FDA, or applications to the Department of Transportation for transportation-like things, you always have those concerns at the agency level.

And so it's just something I think people will need to keep an eye out for. We live in a world where exposing those types of problems is easier than it's ever been, because the megaphone you can get from social media is pretty loud. Yeah, I think we can push back against those types of attempts, but that's something we always need to watch out for.

Ben: I want to come back to special interests, because I think that's the biggest risk of the permissionless approach. I'll just articulate it a little bit. I think there are two kinds of ways special interests can play out in that space. One is that the special interests are worried about protecting themselves from the impact of AI.

So that could be, obviously, people who are worried about bias, but also people who are worried about companies going out of business. There's actually a college education company, I think it's called Chegg, and they lost 99 percent of their value, a public company, because ChatGPT replaced this company's products.

And my general impression of people outside of tech is that AI is mostly viewed very negatively. It seems scary, it seems like it's going to hurt their jobs, maybe have negative effects on their life. I can see it becoming very unpopular once gen AI starts having impacts on company employment, when you roll out some kind of new product and you lay people off.

If that starts happening widespread, which I think is more than likely, you could see special interests, all different kinds of special interests, jumping on that bandwagon to get what they want and protect their own turf.

Neil: I totally agree. In fact, we're already seeing it. The dock workers strike: U.S. ports are already the least efficient in the world for a major country, and in part that's because we have all these provisions about who has to be able to work and the limits of automation. I totally think that is a giant threat. It's also a way under-discussed one, the labor impacts. They're not very well theorized; we don't really know how this stuff is going to be deployed.

A lot of it will really depend on whether there's one bad guy to point at, right? You're totally right, and our polling, the Abundance Institute has done some polling on this too, shows that if you talk about AI in the abstract, people don't like it. They're pretty worried about it. And it's interesting, especially in America, because we're a very tech-savvy country overall and we tend to be early adopters.

But on AI, Japan loves AI as a concept and Americans fear it. I think that's in part cultural. It's also what are the cultural narratives? What do we think about? We think about Terminator. We think about other things like that. 2001. But that's because it's in the abstract. When you point out that, oh, there's all types of AI algorithms on your phone, people are like, oh yeah, I like those.

I wouldn't want those to go away. And so to me, the sort of backlash will really depend on how these technologies roll out. How fast do they roll out? What are the effects? And do they roll out in a way that is identifiably AI? Or is it more like the effect that word processing had on typist pools?

They just disappeared. It took a little bit of time, and it also wasn't obvious that there was one big bad entity that was responsible for that. If everybody's job just slowly transforms into something that's much more assisted by AI, then I think we won't see that same backlash. But I think it's really going to differ based on sector.

We've seen some really interesting research already that suggests that in call centers, for example, the least skilled and the newest employees benefited quite a lot from having an AI agent helping them out. In fact, it raised their performance a lot, but the highest skilled people didn't really benefit that much from having the extra help.

In contrast, in places like materials research, there was some interesting research showing it was the scientists who were the best, who knew how to screen the AI's suggestions for plausible avenues of research, who benefited the most. It made them like 10x better than they already were.

And the people at the lower end didn't benefit as much. So those distribution effects, I think are going to really vary depending on the application. And I think the reactions to them will vary quite a lot. But you're right. That is the type of backlash that could really force special interest carve outs in certain sectors.

The other area that's already very active is in copyright. And a lot of the Media Actors Guild, Screen Actors Guild, and others are very worried about this replacing their jobs and are very actively pursuing avenues to limit the use of this technology in their industry. We're going to see that over and over, and I don't have good solutions for that other than it is the natural way that new technologies play out.

I just hope that we can have a culture in the U.S. that embraces the innovative effects, because the one distributed interest that isn't well represented in any of these special interest fights is, you know, the average consumer, who is largely going to benefit from automation and other types of technologies that drive down prices and expand the range of products that they have access to.

So it's a big problem.

Ben: It's interesting for me when I listen to, say, Cass Sunstein, is that his name? There's a number of new think-tanky types who are pro-labor.

Neil: You might be thinking of Oren Cass.

Ben: Oren Cass. I think the tariff policies basically come out of this view that we need to help American workers, help American productivity, and tariffs will help American workers.

And I've even heard some of his arguments espoused, I think, by J. D. Vance, which is that globalization failed the American worker, even though a lot of people point to globalization when I think it had as much, if not more, to do with technology. The Trump administration looks like it's going to be more deregulatory, more business friendly.

It's not obvious to me that's actually how it will play out, if the impacts are as you say, or as they could be. So how do you think about it? Let's say you're sitting here as the AI policy czar for the Trump administration. You're trying to balance the political capital and the actual regulations that need to be put in place, or legislation and regulations.

What do you do? Maybe think of it as a ladder, a three- or four- or ten-year ladder. How do you take the steps? What do they look like, if you could be that czar?

Neil: It's a really great question. And actually Trump named an AI policy czar last night, David Sacks, who is a great guy, a very interesting guy. I listen to his podcast, the All-In podcast, quite regularly.

Ben: Quick fun fact, I actually pitched my Series A fundraise for Fundrise to David Sacks. I think I made a favorable impression, but he would only fund us if we moved the company to San Francisco.

Neil: I'm glad you stuck around in D.C. Great to have entrepreneurs here.

Ben: Yeah, he made an interesting case.

He argued that you could start a company in D.C., but you couldn't scale it in D.C. You had to scale it in San Francisco because the long tail of talent was there.

Neil: I would love to hear more about that sometime, because I think that's really interesting. I worry about that too, being a tech-adjacent guy who lives in D.C. The talent pool in D.C. is challenging.

Ben: Well, going remote was the way we ended up scaling. What he never predicted when I pitched him back in 2017 or '16, or whatever year that was, was that you could hire globally. We've started hiring globally, and the talent we can get even outside the United States is just unbelievable.

So anyways.

Neil: But back to your question, it's a great point. The U. S. actually has a lot of manufacturing, but a lot of it's automated now. It doesn't have the employment effect that it did in the past. It's very interesting because I think the AI employment effects are really challenging to predict, obviously.

On one hand, a lot of the threat to employment that AI, especially these generative models, poses right now is focused on knowledge workers, who are not the sort of blue collar workers that Vance, Cass, and Trump are worried about and appealing to. And so you see the Hollywood actors getting up in arms about this.

That's interesting. I don't think there's a lot of sympathy for Hollywood actors from the average Trump voters.

Ben: I haven't thought about it, but actually AI could disrupt the bluest of blue: software and tech, software engineers, journalists. Yeah. And the movie industry.

Neil: We're talking about content generation, right? That's not what the average blue collar worker is doing. On the flip side, these AI models require enormous data centers, which means construction, plumbing, energy; the energy sector is facing a lot of demand here, and that could include natural gas. I was just in Fargo speaking at North Dakota State University, and they're super excited up there.

They have weather that is very conducive to having data centers and they have a lot of natural gas, so they're like, ah, we're sitting pretty here. I do think it's not obvious exactly how this is going to play out distributionally. And then you have China banning the export of certain rare earth minerals, which we can mine here if we get the paperwork out of the way.

So who knows, maybe we'll have a mining boom here and all of a sudden it'll be a real boon to blue collar jobs. It's not obvious how this is going to shake out. But to your point, if I was trying to scaffold this out, I'd say: we don't really know right now, but the main thing that matters is that we keep the innovation here in the US.

If I was in the Trump administration, I think that's a message that really resonates. We keep it here and we keep it on the cutting edge. And what do we need to get there? We need energy. We need talent. To your point, we should resource that globally, and I don't know how that's going to play out in the sort of immigration space, but those are the two things that I hear constantly from the companies.

Their big gatekeeper resources are energy and talent. And so I think those will be appealing arguments to a Trump administration; I think they should be appealing. The distributional effects, we don't really know yet. We should find ways to correct them if we need to through policy, and we can do that.

But it's going to take experimentation first and finding out exactly how this plays out in the market.

Ben: Going back to special interests, I think that there are two kinds, the one we were talking about, which is the ones who are resisting change. And then eventually, as AI becomes very economically successful, that you'll have new special interests that represent the AI companies who then lobby against whatever it is that would constrain them.

Today, there's not a lot of power there, but we've seen with Google and Amazon and Facebook, now Meta, that they went from being the innovators to being considered big tech, potentially needing antitrust actions, and not well liked by a lot of parties for a lot of reasons. And yet the cat's out of the bag; it's really hard to regulate social media or any of these companies now that they're so large and so powerful.

How do you think about this: you want to let the technology blossom, but then when it starts to have harms that are actually manifest, not just in theory, you need to be able to control for them, and at that point you're up against a different countervailing force. Even today, there's obvious impacts. I think of social media as being something that has huge impacts, and I think there's no way to regulate it at this point in practice. You could tell me I'm wrong. Isn't that the risk of waiting too long on regulation?

Neil: Maybe. It's hard. Hindsight's 20/20. I think part of the challenge, the reason it's hard to regulate social media right now, there's a couple of reasons.

One, the First Amendment makes it quite challenging in many cases to regulate content. We've actually seen that it's not that people aren't trying: Florida and Texas both passed laws, and they went up to the Supreme Court last year. The court knocked them back down to the circuit courts, saying, hey, you need to review these under this First Amendment rubric.

And I think those laws are going to end up looking very different than they started. So that's one reason. The second reason is that you're right that social media companies are politically unpopular, but they're politically unpopular on opposite sides for exactly opposite reasons. You can say, more or less, that Democrats don't like social media companies. It started in 2016, because they basically blamed Facebook for getting Trump elected, even though there's not really any strong evidence that was largely a social media phenomenon. So the Democrats' big problem with social media is that it spreads misinformation, that it has too many Republican ideas on it, and Republicans feel the exact opposite about it.

They both think social media is a problem, but they're trying to build exactly opposite solutions. I think those are the main reasons. Some of those apply to AI. They're not in the AI sphere yet, but I'm actually working on a piece right now about the free speech issue coming to AI, especially to these content generation platforms. We've already seen some debates; that fight is coming to AI. It's going to look pretty different than social media in the dimensions of the problem, but I think people have that playbook already: oh, we're going to look for whether this is woke AI. Musk talks about this already.

And so I think those fights are coming, but I don't know that I really answered your question. There's a risk of not regulating early enough, but the problem is, if you don't understand what the problem is, it's hard to write good regulation ahead of time. I have heard people say, hey, we missed the boat with social media, we didn't regulate it in time. But then when you ask those people what the problem was that they would have fixed, like I said, they often totally disagree on what the problem is that they would have fixed. That doesn't suggest to me that even if they had identified the problem early on, they would have been able to do anything in particular about it.

And then I guess I would just say, I actually think social media has generally been a boon to humanity. I think allowing people to communicate is a powerful thing. We all have industrial-scale, quality, instant communication around the world. That's pretty awesome. And for the most part it's free, or zero price as the economists would say. That's pretty great.

And so I'd just push back against that, even though I know it's popular to bust on social media. One other thing that adds an interesting dynamic here is that both Trump and Musk now have social media platforms.

Ben: They both have billions of dollars of personal investment in social media. And then probably AI soon enough.

Neil: Musk is already deeply, obviously invested in AI.

Ben: As I think about winding this down, looking forward in the near term, are there certain signals that would help you know better how you think it's likely to play out? Or, if you saw something, like if Elon Musk started targeting OpenAI, that would be a very different thing than what you expect.

So what is it that we should all watch for to see if things are going according to the way you would imagine?

Neil: Yeah. So on the policy front, the first early indicators are appointments. Obviously we won't really see action until after January 20th, but appointments, who's in charge of what, as Trump makes those announcements.

I think you start to read some tea leaves, and the GOP platform right now has a plank that says they're going to repeal the Biden AI executive order. The signals that would concern me about that plan would basically be if they picked some of the people who are more on the pro-regulatory side of AI to lead various functions of government.

For example, the Office of Science and Technology Policy. They haven't done that yet. Sacks is not in that vein. Even the recent appointment for the head of the DOJ's antitrust division is skeptical of big tech, but not of artificial intelligence generally, I don't think. And so I think we'll see.

The signal I'm taking right now is that Trump will be much more in the vein of: the U.S. should lead on AI, enthusiastically lead on AI, and AI is a great opportunity for humanity, not the other direction. When it comes to signals from industry, I don't know what to look for there. There could be big surprises.

OpenAI is in its 12 days of Christmas releases, or whatever they're doing, and just released the full version of o1, and I've been using it. It's pretty good, but it doesn't seem like a total step change.

Ben: OpenAI has been prolific in their product development; I've never seen anything like it. Maybe Anduril is the only other company I've ever seen put out more product in a short amount of time.

Neil: I guess they're working together now too. I guess that makes sense. If there was a transformational new model that came out, I think that would change a lot about the policy dynamics, but that hasn't seemed to happen yet. It seems like incremental progress. Though that's not to say AI isn't going to have a giant effect that we can't really predict.

I think basically, if all progress on making models more advanced stopped right now, what we have has not yet worked its way through the economy, and it is extremely powerful. And so I think there are going to be giant effects of this technology, even at the current stage. I'm pretty excited about those.

I've been doing tech policy for 20 years or so now, and AI is just repeating all of the tech policy fights that I've had for 20 years all at once. And so it's swallowed the tech policy world. The only one thing that is quite different is the existential risk factor and the people who are really worried about that.

That dynamic did not really exist in the internet age, in internet policy. There weren't really people making that argument. And so I don't know how that's going to play out. That is the one wild card in all of this. There's a lot of money and funders and advocates on that side, even though I think they're a minority, a small group if you count the numbers. But they have loud voices and they have some very prominent advocates, so I pay a lot of attention to that voice as far as thinking about how they might affect policy going forward. I think they preferred a Biden-Harris administration, and so I think they're trying to figure out how exactly to fit into a Trump administration.

And Musk is maybe an in there, but a bit of a wild card as well. I'll be looking to see how closely Musk is involved in, say, AI policy. Obviously he has some other interests: rockets, communications, he's doing this DOGE thing. So he's got a lot on his plate, but the guy seems to get a lot done.

That's the other signal I'd be looking at. How closely involved is Musk in AI policy?

Ben: You've seen lots of money and new people show up in D.C. around existential risk of AI. When people show up in D.C. with money, they're going to put that money to work. You've now almost created a special interest, because there are people who are funded to run around and talk about that and manage that.

And so they have to have a position that gets promulgated some way or another in order to continue to have a job. And not that that's wrong, but I would say that's probably my biggest concern. I'm not really concerned about the other problems, which I think will get managed by the institutional friction that exists in America.

So how do you imagine, what's your guess of what you think they'll end up doing in a Trump administration? Do they have a policy that they'll end up coalescing around and asking to be included in a Trump executive order?

Neil: A lot of these groups are technologically deeply savvy. A lot of the people come from like a tech background, but don't have a deep policy background.

And I think that you've seen that sort of play out in a couple arenas. And so they're still getting experienced in how DC works and the speed of DC, which is often hurry up and wait and then offer a solution right when there's a crisis. And so I think that I don't know exactly what they might offer specifically.

I think they're trying to figure out how to work the China angle, because there is a big chunk of them whose solution to a bunch of these existential risk problems relies on international agreements, where the question was always, how are you going to get China onto these things? And they never really had great answers.

Sometimes they would point to China's regulatory approaches as ones that we should adopt in the U.S., and I don't think that's going to work very well with the Trump administration. So I think they're trying to figure out new angles. The one that I'll say I worry about the most, and I'm a little worried about saying this publicly because I don't want to give a roadmap to people that I think are pushing policy ideas that are bad.

But if you think of them primarily as trying to slow down the pace of development, a lot of these red tape bills in the states are a great way to do that. And so I worry a little bit about the existential risk people, the AI safety group, aligning more with the AI ethics group, which is more of the algorithmic discrimination folks, who look at governing these things a lot of the way that we looked at privacy and think of big, comprehensive reporting requirements and algorithmic audits and other things as the primary way to govern this technology.

And they may line up together, with the existential risk people being like, oh, red tape might be a good way to slow down these companies and push back the existential risk that we're worried about. That hasn't happened a ton yet, but I could see it very easily happening, because there are a lot of established pathways to spend money on that side of the space.

There are a lot of think tanks who are worried about ethics in computing and have been working in that space for a long time, coming up with legislative approaches to it. So I'm a little worried about that.

Ben: As a real estate person, I've built in many cities in America and I've gone to Congress around different laws. I was around regulation; for example, I testified on the JOBS Act. It's pretty easy and pretty cheap to change laws in states compared to the federal government. You show up and nobody else wants to talk to the state senator; it's really remarkable. I always saw it as a positive, because it's very accessible and there's not as many layers. But I can also see it as a negative, because it makes it very easy to get stuff done, and to the extent that you have to lubricate change with political donations, that's also not very expensive.

Neil: You were talking about interest groups earlier. There is a sort of roadmap for this type of effort. Online privacy fights, policy fights, have been going on forever. Congress hasn't been able to do anything for 10 years on this, but there's been a lot of action at the states. And there are groups of privacy compliance professionals, the IAPP, which are basically big lobby groups now for privacy legislation.

And they are moving into the AI space, and we've seen some bills in Colorado that passed. There's even a draft bill in Texas that looks very much like privacy compliance, but it's called artificial intelligence, and it would create these big auditing requirements and reporting requirements that would give a lot of lawyers jobs.

And so these bills are being shopped around to the states. And so to your point, I think that's where a lot of the action is going to be in 2025: at the state level, around these types of compliance bills.

Ben: That's going to be true with almost everything: if you push more stuff down to the states, which has been a conservative political philosophy for a while, then the fight happens at the states, whether it's AI or abortion.

But I think for AI, the problem with that is that big companies can afford red tape and small companies can't, and so the unintended consequence is concentration of power. Anyways, we've gone over time here. It's been great, I really appreciate it, and it's been a fabulous conversation. Thank you.

Neil: It was really fun. And I need to find the forum to sit down and ask you a bunch of questions about how you're thinking about things. This has been a really fun conversation, so I really enjoyed it.

Ben: Yeah. Thanks again. Onward. Onward. You've been listening to Onward, the Fundrise podcast, featuring Neil Chilson, head of AI policy for the Abundance Institute.

My name is Ben Miller, CEO of Fundrise. We invite you again to please send your comments and questions to onwardfundrise.com. If you like what you've heard, rate and review us on Apple Podcasts. Be sure to follow us wherever you listen to podcasts. For more information on Fundrise sponsored investment products, including relevant legal disclaimers, check out our show notes.

Thanks so much for joining me.