On Monday, I got to talk with Matthew Yglesias, who runs one of the Substacks I read most often, Slow Boring. We jumped off from the OpEd / Journal Entry I put out last week on how AI companies are stealing people’s content and data, why that could lead to a pretty dystopian future, and what can be done about it. That led us on a path through his home court of the current political landscape, the merits of compromise, the missing ingredient in today’s capitalism, and inevitably, Super Pumped: The Battle for Uber, in which I played the former Uber CEO, Travis Kalanick.
We had some technical difficulties, so no visuals this time, but the audio sounds good, and that’s what matters anyway, right? And if you’d rather read than listen, here’s a transcript:
—
Matt Yglesias: So, you wrote an op-ed recently. You're getting into my line of work, my bread and butter.
Joseph Gordon-Levitt: Right. You're a much better writer than I am, though. But thank you.
Matt Yglesias: No, I don't know. I don't think so. So let's tell the people about it. What's on your mind?
Joseph Gordon-Levitt: Yeah, okay, cool. This is an op-ed about the thing everyone's talking about with AI. But there's a part of AI that I think deserves more attention, and that's the way these products that get called AI are built. It's not just some quote-unquote artificial intelligence. In fact, I think that term is a little misleading. The way these products are built is they take content and data from millions and billions of people and throw all of that data into an algorithm. Our writing, our photos, our videos, our voices, our everything. And that gets crunched up and sort of probabilistically spit out as new content. And I'm not against the technology itself. I think it's pretty amazing the way it works. I just think, and kind of know for a fact, that the products wouldn't be able to do anything if it weren't for all this content that they've hoovered up into their algorithm. They call it training data. And they've been doing this for years now, just taking people's stuff without asking permission and without offering compensation. And I think if they continue to do this, we're going to land in a place where there's no longer any, I guess you could say, economic incentive there. No one will ever be able to be paid for having a creative idea anymore, because as long as one of these companies can hoover up all the ideas into their model, not pay anything for it, and then spit out quasi-new versions of the ideas or the content, there's never going to be any sort of business case for paying creative people. And of course, I could go on about how this doesn't just apply to creatives and artists. It really applies to kind of everybody, as more and more of our economy, more and more of our jobs, are automated through this technology. So I've been talking about it and writing about it, and I'm really curious to talk to you about it, because you know more than I do about what it takes to have laws and policies put into place that can stop this kind of behavior.
Matt Yglesias: Yeah. So, I mean, you know, one way people think about this sometimes is through analogies, right? So you could say, well, this is the same as piracy, right, as Napster. You know, different kinds of copying. And then what you hear from the industry is, no, no, no, it's not like that at all. This is like how people learn, right? I mean, there's nothing that we do in a creative space that's 100% original. People have seen other movies, they've read other books, and they are in part responding in a cultural sense to what's come before, but they're also picking things up. They're quote-unquote training themselves by reading. I didn't pay a fee to the columnists who I read, who I sort of learned my craft from. But then I kind of think that this reasoning by analogy is a mistake, right? Like, we write laws that have reasons for them, right? Copyrights have specific terms, specific durations. We have specific ideas about fair use. And the idea, I mean, not that we get it perfect, but the goal is to write laws, to make policies, that will have good outcomes for society. And I think that's the question here, right: how cheap and easy do we want to make it to do AI training? And one frame that's been very powerful lately is this idea of an AI race. So the Trump administration's policy statements about AI said that they were going to win the AI race. And the idea of a race is that it will advantage America to advantage American AI companies, and the best way to advantage American AI companies is to let them do whatever they want in terms of training data. So, you know, I guess, what part of that logic do you reject?
Joseph Gordon-Levitt: Yeah, well, I think there are sort of three things you said there, and I'd respond to all three of them. One is comparing the AI to a person. There's an argument made, and this is very common in Silicon Valley, and it was echoed by Donald Trump the other week when he spoke about this for the first time at his Winning the AI Race summit. He said one of these models is no different than a person. And when a person reads a book, like you said, they take inspiration, maybe work that into their next piece of work. They don't have to pay the author of the book that they read. And in a way, that's logical. But what underlies that logic is a comparison between an AI and a person, and basically saying that this AI should be treated under the law equally to a person.
Matt Yglesias: Mm-hmm.
Joseph Gordon-Levitt: Now, as soon as we start doing that, as soon as we start saying an AI is just like a person under the law, I think we are asking for dystopia. AIs are not people. They imitate the writings of people. They seem like people. More and more of us are having very people-like relationships with them. We talk to them. In fact, it's interesting to note that the large language model GPT-3.5, which could technically do all of these things, didn't blow up until they called it ChatGPT and put it into a chat interface that made it feel like you were texting with a person. Once they did that, it became the most popular, fastest-growing, fastest-adopted tech product ever in history. Because there's something very compelling about sort of pretending that these things are people. And frankly, there's a whole debate here. There's a whole bunch of folks who will argue, oh no, they are people, they deserve rights, they deserve moral standing, they deserve what's called patienthood, just as much as people. But I think it's really dangerous for us to trick ourselves, to fall for the trick that these tech products are people.
Matt Yglesias: Well, and of course, I mean, you know, the companies that are making them have limits to how many human rights they exactly want to give their employees.
Joseph Gordon-Levitt: Right. If they're people, then we should be paying them like people, right? If they're people, then they're slaves or something, right? Look, there are so many differences between what this technology does and what a human does when a human reads things and takes inspiration. When a human reads things and takes inspiration, a human can read maybe a few books at a time and maybe remember some of them. What these companies have done is not like what a human does. They've taken all the text that they can possibly scrape up, all the everything that everyone has made, and just scraped it all up. A human can't do that, obviously. A human can't take in that much of it, and a human can't do it that fast. And even just technically, what's going on with this algorithm that's crunching ones and zeros is not the same as what's going on in your brain. They call this technology a neural net because it was sort of loosely inspired by how neurons work. And a lot of folks in Silicon Valley I've had conversations with will say, no, it's basically the same as a human brain. They're neurons. A neural net has silicon neurons and you have meat neurons in your meat brain, but it's basically the same. But anytime I manage to get into a room with a neuroscientist, one of the things I like to ask is: computer scientists will say that these neural nets are the same as the human brain. Is that true? And most neuroscientists that I've talked to sort of roll their eyes and say, look, there are certain similarities. Yes, your brain has neurons, and yes, a neuron might fire or not fire. But that's only a small sliver of what's going on in the brain. And frankly, we, the scientific community, don't even understand everything that's going on in the brain.
So it's really kind of cartoonishly simple to compare these neural nets to a human, and especially silly to say that these neural nets deserve kind of equal rights under the law to a human. And so to me, it's not a real argument. It's an excuse, you know, to optimize a business.
Matt Yglesias: And so I thought another good point you made toward the end of your op-ed is that, you know, this race framework has some merit to it, right? I mean, we do need to care about what happens in foreign countries and hostile governments. But also, whether it's OpenAI or Google or X, these companies are just not working out of a sense of patriotism and national obligation.
Joseph Gordon-Levitt: That's exactly right. They're just responsible to their shareholders. They're just businesses that have to make maximum money. Comparing that to a national security effort... one thing that always happens is this gets compared to the Manhattan Project, the race to create the nuclear bomb toward the end of World War II. The idea was that whoever got there first was going to win the war. And that is what happened: the United States managed to do it first, the United States therefore managed to win the war, and that set the table for the next however many years, what, 80 years or something, so that instead of the world falling into the hands of the Nazis ...
Matt Yglesias: And so if we were talking about a Manhattan Project or an Apollo program to create American superintelligence, right, well, the point of that wouldn't be to ... replace journalism and Hollywood and novels without compensation. You would work something out. You might say, in the national interest, we need to put these resources at the disposal of the project. But you would also do something on the compensatory side.
Joseph Gordon-Levitt: Right, that's exactly right. When I was showing a draft of this op-ed to a lawyer, the lawyer brought up that it reminded them of the Fifth Amendment, because there's this thing in the Fifth Amendment called the Takings Clause, which says the government is not allowed to take your property for the public good without compensating you. The government can take it, but it has to compensate you. And so in this case, if there were, like you said, some kind of Manhattan Project where the government said we need to build the best AI in the interests of national security, and that were really what was going on, then theoretically the government would say, okay, we need to take all your data, all these books that we're ingesting into our model, all these movies, everyone's writing. There's some, you know, economic value to all of this, and the government would have to compensate people for it. The government can do that. But that's not what's happening, because ultimately, I don't really buy that they're building this in the interest of national security. I don't buy that the leaders of these big tech businesses have any particular loyalty to America's national security. It's not because they're bad people or because they're unpatriotic people. It's just not what they're set up to do. The Manhattan Project was led by generals. It was led by people working for the government. These tech companies are not.
Matt Yglesias: Right, exactly. So, you talk about this bill, this legislation that Josh Hawley, a Republican, is doing with Richard Blumenthal, a Democrat. And, you know, that's good. I mean, we were both being a little critical of the Trump administration's statement on this, and I don't think either of us voted for President Trump, on various other grounds. But I think if we want to sort of get on top of the AI revolution, get a handle on this in a reasonable way, it's going to have to be a bipartisan effort. You don't want a stark partisan polarization. So it's good to see a Democrat and a Republican working on this. What does this bill do?
Joseph Gordon-Levitt: So I'm not a policy expert, but from what I understand, this bill would bar AI companies from using copyrighted works to train their AI models. And copyrighted works, that could be a book that an author published, or it could be something someone wrote on Reddit or Quora or wherever it is they wrote it on the internet. If you wrote it, you have the rights to it. And this bill would say, you can't just suck that up if you don't have permission. And if that does happen, then people have the right to sue. And as far as I understand, that comes with some amount of transparency, meaning: the tech companies right now have these huge sets of training data that they use, but they keep it a secret. They don't tell anybody what exact content and data they've stolen and used. They say, we can't share this with anybody. And so there's no way for any of us to know, well, did you take my shit or didn't you? And it seems pretty obvious that they have, because a lot of people right now, whether they're writers or visual artists or whatever, can show, like, hey, obviously you took my shit. You can enter my name into a prompt and it'll draw in my style. So, like, obviously you did it, but I don't have any proof. This act that the two senators are putting forward would say, no, all of that has to be transparent. It needs to be public knowledge what information went into these AI models. And if they took something that didn't belong to them, then the person has the right to sue.
Matt Yglesias: Interesting. So, you know, I wonder what the prospects for anything like this are. It touches on one of the most frustrating things that's emerged in politics over the course of my career.
Joseph Gordon-Levitt: This is what I want to ask you about. You know more about this than I do. Is there any world where this could actually happen? There's a Republican and a Democrat, right?
Matt Yglesias: I mean, on the one hand, you've got a bipartisan bill. On the other, polling also shows that the mass public just does not share this super enthusiasm for AI. I think people are interested in technology, lots of people are using LLMs, it seems very promising. But most voters have very serious concerns about this. So you would think that Congress would be open to acting, and it seems like they ought to be. But movement seems so hard to imagine, because Congress has taken such a passive role over so many elements of life.
Joseph Gordon-Levitt: Especially in technology. Since the rise of the internet, there's been no legislation about social media, nothing.
Matt Yglesias: Well, exactly. Everything that's happened on tech has been kind of, like, taking these square pegs of issues that emerge and trying to pound them into round holes. So we have this body of antitrust law from 100 years ago, and people are worried about how social media is impacting their kids, so it's like, can we make this an antitrust issue? And, you know, I've been very frustrated that we don't see more initiative to say, look, these are whole new technologies. They've been transforming our lives for decades now. It's only going to continue. We need some new regulatory frameworks. We need some new regulatory entities to take these things on, rather than trying to do everything by analogy, you know, like, is this like a railroad? What's it like?
Joseph Gordon-Levitt: The technology is new. There's never really been anything quite like this before: a technology that could ingest all the writing ever, scramble it up algorithmically, and spit out these really impressive outputs. There is no framework for that. It's not like a VCR and a movie. It's not like a professor quoting a piece of a book in their lecture. It's not like any of that. It's a whole new thing. And this is one of the things you said a minute ago that I really liked: we ultimately just need to think about how we want this to work in our society. What ultimately do we value? And to me the problem is there's a mentality, and I don't think it's just the Republicans who are guilty of this. Trump certainly exemplifies it, but the Democrats are guilty of it, too: if it's making money, if it's good for business, then that's the answer. Then that should be right.
Matt Yglesias: And I think we've seen that in the kind of backlash that a lot of people, and we're both parents, you know, have been experiencing about technology and kids and social media. Like, innovation is great, technology is good, but we still need to ask specifics about these things. And when I talk to people in the industry, right, and ask, what's good about AI? They talk about improving the productivity of our workplaces. They talk about using artificial intelligence to do new drug discovery. And that all sounds great, right? But then if you ask, well, do we really need short-form videos that are even more optimized than the current set of short-form videos, so that they're even more compulsive and also cheaper to produce? Do we need that? And so you can tell the difference between the public-facing case for why you should be excited about this technology and some really obvious use cases on the current track that we're on. Nobody says that what we need right now is cheaper and more ubiquitous scrolling on your phone.
Joseph Gordon-Levitt: Which is what these companies are going to offer. Not because they're bad people, but because that's the quickest route to making the most money.
Matt Yglesias: We know it makes money. But we don't need to make that as low-cost and accessible as possible for them, because it has nothing to do with, like, the public interest in advancing this technology, right?
Joseph Gordon-Levitt: But this is the big question, right? I don't consider myself anti-capitalist. I started a business. I raised venture capital. I employed a number of people. I'm not against the system that we have entirely, in principle. But it does seem like there are these excesses, and if you just have it be purely profit-driven, we get all these bad outcomes. In the media, you see it maybe worse than anywhere, although I guess you see it throughout various parts of the economy, whether it's healthcare or the finance industry or whatever else: as long as companies are ultimately just chasing maximum profits, maximum shareholder value, they're actually not doing what's good for the world or even their customers. They're just gaming the system to make more and more money, and we get all these bad things in our world.
Matt Yglesias: Well, you know, this has always been, I think, part of life, right? When we realized that cigarettes were doing a lot of harm to people, we didn't abandon capitalism as a result. But we also said, look, we need to regulate this industry. We need to redirect what it is people are doing. And when the internet was new, right, it was this incredibly unregulated sort of playground. And some of the thought from the policy community was, you know, this is a really new space, we want to give people the opportunity to experiment, it's all just ones and zeros, it's intangible, so the downside risks seem really, really low. But now, you know, five of the top five companies in America are in the computer industry. All of our lives take place this way. It's just too important to treat as this kind of total regulatory black hole. And I say that, I mean, I've seen some people in the chat down here, they're like, what are you talking about, Matt? People think I'm too deregulatory. I wrote an article today about how I thought Uber and Lyft are really good, and it's good that they were able to change the taxi industry. But, you know, you've got to ask, what is the product, right? And I don't even think that AI is bad or that we don't want that product, but this particular thing of just duplicating the written and increasingly visual and audiovisual content, it's a low-upside, high-downside zone of the economy. And there's no need that I can really see to be, quote-unquote, racing with it.
Joseph Gordon-Levitt: Can we go back to Uber and Lyft?
Matt Yglesias: Yeah, yeah.
Joseph Gordon-Levitt: I was in a show all about Uber. I played, you know, the former CEO of Uber.
Matt Yglesias: I know, yeah. Not... the most sterling human being who has ever existed in our lifetime. But I just think that at the end of the day, you can go out, have a couple of drinks, and then get a ride home conveniently instead of driving yourself. Like, that has a value that is easy to sort of explain to the world.
Joseph Gordon-Levitt: A lot of these AI products have a lot of value.
Matt Yglesias: Sure. I mean, there's some real, real value there. And, you know, that's a great show, by the way, if people haven't seen it.
Joseph Gordon-Levitt: But to me, this maybe gets to the crux of the issue. And this is where I actually wouldn't put the blame on Travis Kalanick, the CEO, although maybe he deserves some. To me, there's a larger thing that's worth talking about. I use a ride-sharing app sometimes, and it is really convenient, and it is better than the old system of hailing cabs or having to call a cab company. But they did it in a way that was predatory and abusive and bad, especially for the drivers, and the folks who earned their living as drivers got screwed in a pretty problematic way. And couldn't we have had that same advance, that same better thing, an app on our phone where we can hail a ride in a better way than we used to be able to hail a taxi, but done in a way that doesn't have these predatory, excessive, bad side effects? I think we could have. But the problem is, if you do it that way, you don't become an overnight billion-dollar unicorn. And if you want to be a fast-growing billion-dollar unicorn, you have to do these kinds of optimal things. When I say optimal, I mean, let's take it to the very edge of the very most profitable we can possibly make it. And that's where you end up screwing people over, or screwing over the environment, or screwing over your customers, or whatever it is. That's where you get these sort of predatory business practices. But if Travis and Uber hadn't done it, his competitors would have. So this is where I get to this question: it seems like the whole way this business ecosystem is set up is a problem. We're setting these CEOs up to do the bad thing, to do the wrong thing, because if they don't, their competitors do, and they just end up failing.
Matt Yglesias: Well, that I strongly agree with. I mean, I've talked to executives at different AI companies over the years, and what's disturbing to me is that as time has gone on, their actual behavior, their takes, it's gotten worse. I think that most of these people got into AI originally for very idealistic reasons. But they are in a competitive environment. They need to raise capital to build out their infrastructure. To raise capital, you need to make promises to investors. And to compete, you need to compete, right, in a very sort of cutthroat kind of way. And that, again, is why you need the government. You can't just trust companies or their investors or their executives to sort of do the right thing. And in terms of the Uber analogy, I think an important thing to know in life is that even things that are good on balance have downsides, meaningful downsides for people. In this case, a lot of very sympathetic cab drivers and license holders wound up kind of getting screwed. And I've seen, a couple of times on Twitter, Vice President Vance, who has a track record of being a thoughtful person, be kind of breezy about the idea of AI displacing people's jobs. He's just saying, well, it's growing productivity and that's good. But he has written, I think, really sensitively in the past about the impact of trade and job losses on communities in Ohio. And those transitions, letting companies go to lower-cost outsourcing and so forth, improved productivity. That has real benefits, but it also has big costs, big human downsides, big problems for society. And I think it's really irresponsible for the government to just sort of race full speed ahead and say, well, I'm sure it'll all work out in the end, right? We've been talking about the copyright issues facing people in the lines of work that you and I are both in, but this same process is coming for more and more and more people. It could be very broadly beneficial, but it's only going to be beneficial if we write rules that make it be beneficial.
Joseph Gordon-Levitt: That's right, to me. But oftentimes when you say this kind of thing, doesn't the government need to step in, if we just let it be driven by business we're going to be in trouble, people will say, yeah, but I don't trust the government. The government can't do anything right. And I have to admit, I also kind of buy that. Especially now, I don't trust the government. But I also didn't have a ton of trust in the government even before Trump got back into office, really throughout my whole adult life. Like, the first presidential election I voted in, George W. Bush won, and I didn't really particularly trust him for the eight years that he was in. And then when Obama was in, I probably trusted him more than I did Bush, but Obama also did things that I didn't like. And so, I don't know, I understand why people say, I don't trust the government to help with this. What do you say to that?
Matt Yglesias: Yeah, I mean, look, I agree that on some level, I'm not saying what we really need is, you know, Trump AI, MAGA AI, or whatever to come control everything. But we are a democracy. The government, on some level, is our elected representatives responding to what we are saying and what we want to do. It would be nice to have a functioning legislative branch of government that was actually passing laws, right? I mean, this Hawley-Blumenthal bill that you're talking about by no means solves everything, but I would love to see it get to the floor, get a markup. That's, you know, when they take a bill to a committee, right, and the different members of the committee get to have their input, so on and so forth. And you're not going to solve everything or get it all right on one try. But it's just crazy to me that we are now, you know, it's 2025, we're 25 years into the 21st century, we're years into the AI revolution, and we have not had a significant piece of legislation about digital technology since probably 1998, you know, the Digital Millennium Copyright Act.
Joseph Gordon-Levitt: And Section 230, which basically said that the companies aren't liable for anything, right?
Matt Yglesias: Well, Section 230 was trying to create a safe harbor for moderating internet forums, essentially. And I think it was a good idea, if you talk to the people who were involved in it about what they were trying to do. But the way it's written and the way technology has evolved, it's become a complete blank check for giant social media enterprises, etc. And it's an example of the fact that there's a race, right, between legislation and technology. You see a set of practices and technologies and institutions, and you write rules that you think are going to make sense. But then, as the situation changes, you need to keep updating what the rules are. Or else you're going to end up ...
Joseph Gordon-Levitt: The law is supposed to evolve and be living, and our laws can't adapt fast enough. Is that it? I mean, it seems to me like ...
Matt Yglesias: I don't know that they can't, but they haven't been.
Joseph Gordon-Levitt: But can it, though? Like, do we need some kind of big overhaul to how the government works, so that it could adapt more quickly, be a lot more effective, and pass more laws that are tailored to what's actually going on now as opposed to 20 years ago?
Matt Yglesias: Well, you know, the traditional way this has been done is that the government would create sort of commission-type agencies, right, that had rule-writing authority and would do fact-finding and could hire technical experts. So for a long time, the railroad industry was very heavily regulated along these lines. And we stopped doing that sometime in the 70s, I think for good reason, on the thinking that the technology once again had changed, that freight railroads faced competition from trucks, and so we could move off that regulatory paradigm. But I think that, in addition to looking at specific legislation, we need to think about creating something like the FCC, right, but for artificial intelligence, that can make decisions about, you know, in what sense do we want to say that an AI agent is like a human being? Who is going to have legal liability when something malfunctions badly? I see some people have noticed, right, we're not on video here, and that's because we are having some technical problems with our chat, and we wanted to go forward with it anyway. And that's fine. But we also know that technology products malfunction in different ways. Bugs happen. Things happen. Sometimes people say, well, because of the Chevron Doctrine, you can't do this. But that's not right.
Joseph Gordon-Levitt: Wait, what's that? I've heard that term, the Chevron Doctrine, but I admit I don't really know what it means.
Matt Yglesias: So the Chevron Doctrine was that courts should defer to a regulatory agency's judgment if statutes were ambiguous. And the new Supreme Court says, no, we don't want to have that level of deference to regulatory agencies. But fine. Congress can still write a law specifically empowering an agency to make different kinds of decisions. That's been done from time immemorial on different kinds of subjects. We just don't have an agency that has the right kinds of authorities, right? What we have is these different little bits and pieces of it. So the FCC was really created to regulate the radio industry and then expanded to broadcast television, but, for example, it has never regulated the movie industry, because movies didn't interact with the airwaves. And now elements of technology kind of pass into FCC purview, but not in a relevant way. It's not the right kind of agency to tackle this.
Joseph Gordon-Levitt: Why is it that back when radio was invented, back when there was a new technology, the government was able to spin up an agency to deal with it, and we're not able to do that now?
Matt Yglesias: It's a great question. Robert Putnam looked at this Progressive period of a century or more ago, and one of the things that happened back then was that we saw an increase in people's engagement with their communities and things like churches, other organizations that were not partisan politics, right? And it was out of those kinds of community institutions that people started saying, hey, we need to tackle issues in our society that are changing. Because if you look at partisan politics, which I'm very involved with and think is important, it's organized around certain topics, right? Abortion rights, taxes, healthcare. These are important subjects, and it's important that we continue to argue about them and debate them. But we have these new issues, and the existing people in politics don't want to put new issues on the menu. They want to talk about their old issues and the things that they are most expert in. And, like, people, we need to organize as citizens and, to the extent possible, reach out to people who are in the other party, right, but who may share our concerns about these kinds of things. Because that's how you get new kinds of action on this stuff.
Joseph Gordon-Levitt: Do you feel like there's really an ability to reach out to the other party in that kind of good faith way and say like, hey, let's actually work together and try to accomplish something that's helpful to people?
Matt Yglesias: I mean, it feels hard.
Joseph Gordon-Levitt: I feel, like, so cynical right now. It just feels like that's ... And this isn't just this year. It felt the same three years ago, and it felt the same, I don't know, ten years ago.
Matt Yglesias: I agree. I mean, for a decade or more, it has felt in so many ways hopeless. Although I will say, on another topic I cover, housing policy, Tim Scott and Elizabeth Warren, who are not, you know, particularly moderate members, just worked together and produced a big federal housing policy overhaul. It passed the Banking Committee unanimously. So I take a lot of inspiration from that. If guys like Josh Hawley and Richard Blumenthal are serious about this... you know, I have all kinds of problems with Senator Hawley. But the fact is, he represents an incredibly conservative state. Anybody who gets elected from Missouri is going to have all kinds of positions that I don't agree with. And I think it's really great and really admirable that he is taking a little bit more seriously some of these MAGA-type concerns about the trajectory of the country and families' desire to have control over the universe. I know that a lot of people, evangelical Christians, serious cultural conservatives, are not particularly excited for a world in which disembodied AIs control our entire culture. And so I think people like you and me, Joe, need to find ways to talk to people like that and raise these concerns in different kinds of venues and different kinds of places. Because I do think that President Trump has gotten off in the wrong direction here, but for reasons that are not that connected to, I don't know, the main themes of his politics, or what I think his voters are interested in.
Joseph Gordon-Levitt: I agree, but it does seem like a main theme, not just for Trump, but for a lot of the right wing, and really for a lot of the American government for quite a while, has been: let businesses do whatever they want. As long as the business is making money, it should be allowed to do that. That seems to me like kind of the core of what really needs to change. Like, we all need to sit up and recognize, hey, if we just let businesses do whatever they want to make as much money as they can, we end up with all kinds of bad shit happening all the time. And we need to have laws in place that say to those businesses, hey, do your thing, make a successful business, make a product that people like, but there are going to be some guardrails, because we know that if you're allowed to just make as much money as you possibly can, you're going to end up doing these bad things. And I don't know if it started with Reagan, who's in lots of ways held up as a more heroic Republican president compared to what we're seeing today. But Reagan was famous for that line, I was just reading this. He said the nine scariest words in the English language are, "I'm from the government and I'm here to help." And this notion that the government just can't do anything good, that the government is always going to be bad. I don't know. Like I said earlier, on the one hand, I don't have a lot of faith in the government either, but I know we've got to have something besides just businesses running wild, allowed to do whatever they want. And it seems like the only other thing we can do to try to rein in these businesses is have laws. So this is why I get interested in reading Substacks from folks like you. I don't come from the government. I didn't study the government. I remember being 18 years old and watching the presidential election in the year 2000 go the way it did, when Bush was sort of handed the election by the Supreme Court over Gore. And I didn't vote for Gore. To be honest, that year I voted for Ralph Nader when I was ...
Matt Yglesias: So this is all your fault. The whole ... The whole trajectory of the 21st century.
Joseph Gordon-Levitt: To be fair, I voted in California, knowing full well it would go blue. If I were living in Ohio or Florida or something at the time, I might have voted differently. So I recognize that lack of trust in the government, but I'm finding myself wanting to understand it better, because it feels like this is sort of a task for our generation. When I was young, I sort of turned my back on the government and said nothing good can happen there. And by the way, back then, there was also this big ... valorization of the technology industry, especially.
Matt Yglesias: I think that was very real 25 years ago. And, you know, the world needs to move on. Okay. So we got to a little bit of a late start, and I'm about out of time here.
Joseph Gordon-Levitt: We should wrap up. Can you tell me, what's one thing you think that I should do? And, you know, for listeners, maybe you could also think of something that they could do. But I'm actually really curious: what do you think I specifically should do if I want to help this bill from Hawley and Blumenthal? Is there any hope that the bill can actually come to pass and be real? And what could I do?
Matt Yglesias: You know, I think there is hope, right? Passing laws is always hard, but it is also always possible, and things do happen. I think that working to talk to as many people as possible about it, in a way that avoids partisanship and polarization, is the most important thing to do. For anyone who's listening to this, if you're on the more Republican side, try to take cues from what progressives are saying about this. And people like you and me, Joe, we need to look at what the Republicans who we think are doing something good here are doing, and what we can learn from them to reach people on the other side.
Joseph Gordon-Levitt: Well, hopefully this conversation has helped in some way, shape, or form. I always really enjoy getting to talk to you. And I definitely recommend, for anybody listening who hasn't read Slow Boring: first of all, I love a good punny title.
Matt Yglesias: Okay.
Joseph Gordon-Levitt: Slow Boring is a great title. I find your writing simultaneously not boring, but perhaps boring in all the right ways, in that it does the hard work of looking at the actual specifics of things. Anyway, thanks, Matt.
Matt Yglesias: Thank you. That's very kind. I hope we'll talk again soon.
Joseph Gordon-Levitt: Yeah, I hope so, man. All right. Take care. Bye.