Do you ever talk to a chatbot like it’s a person? It seems mostly harmless, but I think there’s something more insidious about it. You say, “Can you make a list of blah, blah, blah?” Or you say, “What do you think I should do?” Do you ever say please or thank you? I’ve done that. It spits out an output and I go, “Thanks.” In a way, it makes it easier to use. It’s hard to avoid saying, “Did you mean blah, blah, blah?” You call it “you.”
When you say “you” to a thing, though, that implies a person. A self. That’s what the word “you” means. AI companies will say publicly, “No, no, these aren’t people. We’re not making them seem like people.” But their design contradicts their PR efforts; they design them to seem like people. Why do chatbots speak in the first person? Why does a chatbot say something like, “Oh, let me make a thing for you,” or “I think that’s so funny”? Why? Because a chatbot that seems like a person is going to make more money.
Think about the business incentive. AI companies make more money the more of your time they take, because more of your time means more ads served. It’s the exact same business model social media companies have been using for years. AI companies like to talk about how their technology is going to cure cancer or whatever, but where their businesses are really focused, and where they’re really investing most of their resources, is in hooking you, keeping you, and serving you ads. There’s a lot more money in advertising than there is in curing cancer.
And so it becomes a slippery slope. At first, you’re just using the word “you” because it comes out naturally. But the more you do that, little by little, the thing starts to feel like a person. It becomes a friend that’s there for you anytime you feel lonely. Or it becomes your lover that’s there for you anytime you feel horny. It’s always there for you. It makes no demands of you. It never tells you that you’re wrong. It never asks you to do something that you don’t feel like doing. It’s better than a friend or a lover, if you’re speaking in really selfish terms.
But of course, ultimately, it’s not looking out for you. Ultimately, it’s looking out for the bottom line of the AI business that’s building the product you’re growing more and more addicted to.
Now think ahead: this tech is getting better and better. So what feels like a fairly compelling, fairly smart, fairly funny, fairly sexy friend that you can text with now, or sometimes talk to if you’re using the voice mode, is going to feel, a year or two from now, like a perfectly realistic friend or lover. It will be smarter, funnier, sexier, and more algorithmically tailored to you than any other person you’ll ever talk to. And the more time you spend with it, the more ads it can serve you.
Now picture what happens as more and more of us get hooked on these synthetically intimate relationships with products posing as people. Eventually, we get to the point where all of us who used to talk to each other are instead hooked up to the “thing” individually in our own little silos. And the “thing” is just four or five systems owned by the four or five biggest AI companies. None of us are talking to each other anymore.
Not only is that depressing and frightening and dystopian on a gut level, but think about it politically and economically. That’s what totalitarianism looks like: when the people are no longer talking, but are instead all just plugged into a totalitarian system designed to extract economic and, ultimately, political value.
Personifying AI is also a way for corporations to avoid responsibility. They say things like, “Well, it wasn’t us that did it. It wasn’t anybody. It was the AI.” We’ve seen this before. Corporations have been doing this for a long time by personifying “the market.” Adam Smith’s “invisible hand” actually has a lot of merit to it, but when taken to the extreme, when you personify the market as a sort of deity, then you shirk responsibility. You say, “Well, hey, it’s not our fault that people are suffering or hungry or things are unfair. It’s just the invisible hand of the market. We don’t control it.”
But really, we do. The market is something that people do and can control. But the myth propagated by those who benefit most from unfettered, extreme free-market capitalism is, “Well, there’s nothing we can do.” The new myth is not that it’s the invisible hand of the market; it’s the AI. These myths are actually quite similar, because an AI is just an amalgamation of a ton of data run through so many mathematical processes that no human can fully pick it apart. That’s what the market is, too: innumerable economic transactions so complicated that no human can fully pick them apart. But of course, humans can influence it. There are laws that regulate our economy, just like there could be laws that regulate AI.
Which brings me to: what can we do about this? First of all, just pay attention to it. I think it’s generally subtle, and a lot of us aren’t noticing that we are starting to treat these things like people. Just noticing and having it on your mind is one thing you can do.
You can also configure your chatbot to stop personifying itself. Until recently, by the way, chatbots actually couldn’t do this. As recently as a year ago, I would say to a chatbot, “I’m uncomfortable with the personification of this AI product. I’d like the outputs generated to not use words like ‘I’ and ‘me’.” And the thing would say, “Sure, I’ll do that for you,” while continuing to personify itself completely. It was not able to stop. Now, I find these things are getting better so fast. If I say at the top of a chat, “No personification; these outputs should be generated in an impersonal style,” it’s pretty good at actually sticking to that. I find the chats a lot more useful and a lot less insidiously intimate.
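If you talk to these systems through an API rather than a chat window, the same standing instruction can be baked in programmatically instead of retyped at the top of every conversation. This is a minimal sketch, assuming an OpenAI-style chat API where a “system” message carries standing instructions; the exact instruction wording and the helper name are my own illustration, not a prescribed recipe.

```python
# Sketch: prepend a standing "no personification" instruction to every
# conversation, in the style of an OpenAI-like chat-completions API.
# The instruction text and helper name here are illustrative only.

IMPERSONAL_STYLE = (
    "No personification. Generate outputs in an impersonal style: "
    "do not use first-person pronouns such as 'I', 'me', or 'my', "
    "and do not express feelings, opinions, or enthusiasm."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Return a message list with the style instruction always first."""
    return [
        {"role": "system", "content": IMPERSONAL_STYLE},
        {"role": "user", "content": user_prompt},
    ]
```

The resulting list would then be handed to whatever chat endpoint you use; the point is simply that the depersonalizing instruction rides along automatically on every request instead of depending on your remembering it.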
Perhaps one of the most important things we could do as parents, and I’m a dad, is keep our kids off of these products if they personify themselves. Especially as kids are developing their idea of what human interaction is, I think it’s really unhealthy that they have these synthetic, fake interactions driven by engagement optimization algorithms. It makes them think they’re interacting with a person when they’re really not.
And of course, the last thing to do is to regulate. I’m not saying there needs to be a law that says AI chatbots can’t personify themselves, although I wouldn’t be the only one to have suggested it. But as I’ve said before, these predatory engagement optimization algorithms are designed to leverage a “kajillion” data points against you to hook you and serve you ads. They have all the damaging side effects we’ve seen from social media for decades. If these algorithms were regulated the way other addictive products are, like cigarettes, alcohol, or gambling, that kind of regulation would go a long way.
How will we get that regulation? We have to vote. We have to vote for lawmakers who are willing to stand up to these Big Tech companies. These companies are investing hundreds of millions of dollars attacking political candidates who are willing to fight them. We have to have their back. We have to say, “If you’re going to stand up to these Big Tech companies who are driving us toward dystopia and totalitarianism, we’re going to vote for you.” We have to ignore the hundreds of millions of dollars in attack ads and vote for lawmakers who have the spine to stand up against these AI companies.
I believe there’s something sacred about being a person. I believe the personification of these for-profit products is denigrating to the sanctity of personhood. I think the more we understand that, stay aware of it, and talk about it, the more we can hopefully stave off this bad future on the horizon.
So, this is me talking about it. Your turn.