Should AI Have Legal Personhood?

A new paper from Sapienship and Yuval Noah Harari explores this exact question.

Transcript from the video:

Should AI have legal personhood? I know this sounds like a weird idea, but I actually think it’s a very important policy question.

Now, Yuval Harari, the historian who wrote Sapiens and Nexus and a great thinker about the intersection of humanity and technology, has put out a statement on this question. I just want to read what he wrote.

“Legal personhood is the status that allows an entity to act in law almost like a person: to own property, open bank accounts, make donations, sign contracts, sue and be sued. Today, corporations already have legal personhood. That’s why companies can own buildings and move money and hire people and go to court.

But corporations still have humans on their boards. They do not act independently. If AI is also granted legal personhood, the consequences for society will be unprecedented. Millions of AI agents will act across the economy, culture, and politics at enormous speed.

They will buy property, control financial assets, invest money, donate to political actors, file lawsuits, and place enormous pressure on our legal systems. But unlike corporations, there will be no humans supervising these AIs. AI will operate independently across many sectors of society at digital speed and at massive scale.

So the real question is simple: do we want a future shaped by autonomous AIs with legal rights? If the answer is no, then the decision must be made now. AI should not be granted legal personhood.”

I’m going to add one thing. There’s a philosophical question here and a legal question. The philosophical question is: could any human-built machine ever deserve to be granted personhood? To me, that answer is fuzzy. Who knows what happens eventually?

Nobody knows. Is it completely impossible for some kind of conscious being to exist in a machine that’s different from our biological bodies? I don’t know. Maybe.

But I think we’re going to be fooled into thinking that contemporary AIs are conscious long before they actually are. People get fooled into thinking it now all the time.

So that’s the difference between the philosophical question and the legal question. The philosophical question is: could it ever be possible? Sure, we can debate that. The legal question is: what about now, and for the foreseeable future? To me, that gets a lot clearer.

AIs should not be granted legal personhood.

Thanks.

Want to learn more about this issue?
Read @sapienship.lab’s brief: “AI Agents Are Here: Leaders Need a Plan” – bit.ly/YNH-SL
