There’s something sexy about acceleration. There’s this scene in The Rum Diary where Johnny Depp and Amber Heard are driving this beautiful, red, 1959 Corvette, and she’s got on her lipstick, and he’s got on his shades, and she offers him a flirty bet: if he floors it and gets his beast of a car going as fast as it can possibly go, he will scream before she does. Then she runs her hand along his leg, and pushes down on his knee to make him hit the gas harder. (A few smart people advised me not to bring up Johnny Depp and Amber Heard in a post about AI, but I think it’s funny, and this is my journal, so fuck it.)
The fantasy of just going as fast as possible, consequences be damned—it’s intoxicating. Much in the same way that a scene with Johnny Depp and Amber Heard in a red Corvette is intoxicating. It might turn you on in that adolescent boy kind of way. But ultimately, it’s sorta corny, and it obviously does not end well. Luckily, almost nobody in real life actually drives like this. And if anyone does, they get pulled over for speeding, or running a red light, or breaking any number of other laws that govern how we drive. That’s what laws are for. So the Johnny Depps and Amber Heards of the world don’t crash their Corvettes into the rest of us.
In Silicon Valley, this fantasy of limitless acceleration has a name: it’s called Effective Accelerationism, or “e/acc” if you want to put it at the end of your username on X. The core premise, as I understand it, is that the most effective solutions to the world’s most challenging problems—like poverty, or disease, or climate change—will require advanced AI, and therefore the tech companies building AI need to grow as big and lucrative as possible, as fast as possible, with no rules or regulations or laws blocking their way.
The problem, of course, is that the companies building AI should have to follow some rules. It might not seem that serious today, because AI at this point just looks like a bunch of chatbots. But if you listen to a lot of the foremost experts in the field, we could see some serious downsides pretty soon in terms of economics, misinformation, mental health, terrorism, and worse. It gets dark.
One especially vocal Effective Accelerationist is Marc Andreessen, the billionaire tech investor behind the gigantic venture capital firm Andreessen Horowitz. His unintentionally hilarious “Techno-Optimist Manifesto” reads like an adolescent boy wrote it, though to be fair, I agree with plenty of what he says. For example, I agree with him that the US should be investing in nuclear energy, and we ought to admit it was a mistake not to for the last four decades. Of course, any nuclear power plants we build will need to follow rules and regulations that lower the risk of them melting down into toxic disaster. But “risk management” is literally listed as an “Enemy” in Andreessen’s Manifesto, as are “social responsibility,” “trust and safety,” and a litany of other “bad ideas.”
Hot take, right? Clearly, only weak-minded normies care about being responsible and safe. He’s right that when governments try to manage risk, it sometimes results in corruption or failure. But it’s dishonest and self-serving for a powerful businessman who can protect himself with his own wealth to insist that all public-interest rules and regulations are bad. A lot of them work so well, we just take them for granted, like the aforementioned speed limits and traffic lights. In fact, pretty much every major industry requires some governance, whether it’s food, finance, aviation, or telecommunications. When it comes to AI, it’s so new that there are almost no rules and regulations yet. And this leaves us…dare I say it…at risk.
Right now, in the state of New York, lawmakers have put forward a number of bills setting up rules for AI, and I think two of them are pretty good. The RAISE Act would set up some basic safety protocols for the biggest AI companies, and the AI Training Data Transparency Act would force AI companies to be open about the data they’re stealing (cough, cough) using to build their models. I’ve written about both of these issues here, here, and here.
I hope both of these bills become law. But, you know who really doesn’t want that?
Yes, Effective Accelerationist extraordinaire Marc Andreessen, who is spending his copious wealth lobbying against them. And while my disagreements with the dude’s philosophy are matters of debate, I also have to call out the objective lies he’s spreading to support his e/acc agenda. For example, the lobbying group he founded, which laughably bills itself as representing “Little Tech,” makes the vague claim that the RAISE Act would harm small tech companies, when in fact, the bill very specifically applies only to companies who have spent more than $100,000,000 on compute power training their AI models. Probably only something like the ten biggest tech companies in the world meet the bill’s criteria; they’re definitely not small businesses, and they can afford to comply with some safety requirements.
It’s no small thing to me to publicly call someone a liar. But in this case, someone’s trying to pressure lawmakers with blatantly false information on something I care a lot about, and it’s just not okay with me.
The truth is that we don’t have to choose between being ambitious and being responsible. We can be both. We don’t have to choose between going fast and staying safe. We can build big, bold, beautiful technology that bestows untold benefits on society and at the same time avoids the massive harms heading our way right now. We can rally the best and brightest minds to tackle this generational challenge without lying to them. We don’t have to choose between “acceleration” and “deceleration.” We can keep hitting the gas while choosing to steer our car towards a destination that’ll be good for everyone. 🔴