Transcript

AI companies want to legalize theft

Big Tech’s newest products would be worth $0 without everyone’s data, but they don’t want to pay for it.

I’m not a computer scientist, but I’ve been paying close attention to AI for quite a while now. I’ve always been fascinated by the future of technology, plus my wife works in the AI space (she served on the board of OpenAI for five years). And yes, I’ve probably paid more attention lately, the more likely it seems that the movie industry—the industry I’ve worked in all my life—becomes one of the first that AI destroys. Though perhaps not the last.

Of course, new technologies have destroyed established industries throughout history. Like, the invention of the automobile put the blacksmiths out of business. It’s just progress. But there’s something different happening this time. Because this current wave of AI is not just a new invention—it’s also more like an amalgamation.

I talked about this a while back in a Washington Post op-ed, but to refresh: Today’s popular AI products are built out of huge, huge troves of data. Basically, these data sets get fed into an algorithm that finds mathematical patterns in the data and generates outputs that follow those patterns. Now, where does all this data come from? It comes from people—it’s people’s writing, people’s art, people’s voices, people’s photos, even yours. For an AI company, all of this precious and personal stuff is just data. They scrape up as much of it as they can, and they use it to build their models. And here’s the important part… they don’t usually pay those people, or even ask their permission.
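To make that "finds mathematical patterns and generates outputs that follow them" idea concrete, here's a deliberately tiny sketch: a toy model that counts which character follows which in a scrap of text, then emits new text obeying those statistics. The corpus string is a hypothetical stand-in — real systems ingest billions of documents and use far more sophisticated math, but the shape of the pipeline (scrape data in, extract patterns, generate outputs) is the same.

```python
from collections import defaultdict
import random

# Hypothetical toy corpus standing in for the "huge troves of data".
corpus = "the cat sat on the mat. the cat ate."

# The "mathematical patterns": for each character, record what follows it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(seed: str, length: int = 20) -> str:
    """Emit text that statistically follows the patterns in the corpus."""
    out = seed
    for _ in range(length):
        out += random.choice(follows[out[-1]])
    return out

print(generate("t"))  # gibberish, but gibberish shaped by the training data
```

Note the key point of the essay in miniature: the model itself is a few lines of generic machinery. Everything interesting about its output comes from the corpus — that is, from what people wrote.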

So, it’s no surprise that AI companies are being sued left and right by the people whose data they’ve stolen. At the moment, their lawyers are claiming the theft is totally legal because of a thing in copyright law called “Fair Use.” I’m not a copyright lawyer, but if you look into what Fair Use means, this claim sounds pretty questionable. To be fair, none of these myriad lawsuits have been decided yet, so we’ll have to wait for the courts on that one. But it doesn’t seem like the AI companies are all that confident they’ll win. Because while their lawyers are sticking to the Fair Use argument in court, their policy teams are now trying something else.

Recently, two of the biggest AI companies submitted policy proposals to the US Office of Science and Technology Policy, advocating for new laws that would ensure their right to build their products out of people’s data without consent or compensation. One proposal called this legalization of theft “freedom to learn” and the other called it “fair learning.” I could write a whole post about why it’s wrong to claim that a Large Language Model deserves “freedom,” or that digital data processing is comparable to human “learning.” But philosophy aside, what does this all mean for people’s jobs?

As you might expect, Hollywood responded to these policy proposals in opposition. Whether we’re talking about writers, filmmakers, musicians, or influencers, streamers, podcasters, whatever—anybody creating content for a living should see these proposed policies as a potential career death sentence. Because as long as an AI company can copy all of our content into their model at no cost and spit out quasi-new content for close to no cost, there’s no longer a logical business case for paying creators.

But let’s be crystal clear—this is about so much more than content creators. This is about architects, engineers, designers, professors, people who work in marketing, people who work in logistics, people who work in finance, and so many more. It’s about anyone whose work can be delivered digitally.

For example, lots of great professors out there have recorded their lectures and made them available for the public to hear. And probably soon enough, an AI company will scrape up a ton of those recorded lectures, feed them into an AI model, and allow customers to have interactive educational conversations with some kind of professor-bot. If done well, this could make in-depth education way more accessible to way more people. And let me emphasize—I think that’s a beautiful, wonderful, very, very good thing. The question I’m asking is: when these new professor-bot products generate revenue, should 100% of that money go to the tech companies, and 0% go to the professors? That doesn’t make sense, does it? After all, these professor-bots would be 100% worthless if not for all the human professors’ work, skill, ingenuity, and care contained in these valuable data sets.

And it might be sooner than you think that this same principle applies throughout much of our economy, as more and more of our world becomes mediated through software, and especially as autonomous hardware, vehicles, and robotics come into wider use. If these companies get what they want now, then we’ll be living in a future where any valuable work done by any human being will become fair game for a tech company to hoover up into its AI model and monetize, while that human being gets nothing.

If you believe in an economy that’s driven by people working hard, competing with each other, participating in a free and fair marketplace of ideas, then you should want those hardworking people to be compensated when they produce something of value. If, however, you believe in an economy where capital and control are centralized in the hands of a few giant tech companies, then I guess you should be a fan of this new Orwellian pro-theft policy that Google has named “fair learning.”

Now. There’s one more point to make. I think these companies know they’re on the wrong side of this thing. Which is why both of these new policy proposals invoke the most unassailable of all arguments—you guessed it—National Security. They warn that burdensome rules and regulations about consent and compensation will slow them down in the AI arms race against autocratic regimes like China. And honestly, I don’t think they’re entirely wrong here. I agree with them about the real threat of rising global autocracy. And I am absolutely rooting for the technologists of democratic nations to arm our national security apparatus with the tools needed to face that threat.

But there’s a bit of dishonest hand-waving happening here. These AI businesses are asking us to see our national security and their bottom line as one and the same. But they’re not one and the same. Ask yourself: What if we let these companies use everyone’s data free of charge for purposes of national security, but as soon as they started making money in any other industries, that money had to be shared with the people whose data they used? Would they agree to that? Of course not. Because ultimately, these companies are not set up to prioritize our national security. They’re set up to make money.

And look, I have nothing against money. I have nothing against AI. I believe AI has the power to transform our world in beautiful, nourishing, life-affirming ways. I don’t want us to “pause,” and I don’t even expect us to slow down. I want to see us roll up our sleeves and build big, bold, innovative things. Couldn’t our country set up a system of technology and policy that incentivizes human ingenuity by compensating people for their hard work? I know it’s a tall order, but couldn’t we meet that challenge with inspiration and pride? If we do this wrong, we’ll be handing unprecedented power to a small clique of unaccountable businesses. But if we do it right, we might just see this historic moment of AI revolution truly benefit all of humanity.