Picture yourself in an algebra class.
The air is thick with the scent of wood lacquer and boredom, the clock ticks slowly towards freedom, and the teacher, who you quietly suspect is not a real teacher, writes the following problem on the board.
2 + x = 4. Solve for x.
It’s a mixed-ability group, so the answers vary. “Three!” shouts one student, insisting that any other answer is white supremacy. “It’s one,” yells another with inexplicable certainty. “No, it’s six,” says a third, assuring everyone it’s a trick question.
And you, a lone voice in the wilderness, say, “Two.”
When the other students challenge you, you happily explain your reasoning. You demonstrate the answer by subtracting two from both sides, you generalise the result by plotting it on a graph, you even help them visualise the solution using little coloured blocks.
They’re still not convinced.
“Two just doesn’t feel right to me,” says one. “Whatever, sheep. Two is what ‘they’ want you to think,” sneers another. “Well, what the hell do you mean by ‘two’?!” demands an elderly gentleman in an elaborate suit. “And ‘solve’! And ‘x’!”
The teacher listens carefully to everyone’s input but shows no interest in weighing in. “There’s only one way to settle this,” she says, finally, “we need to hear more answers.”
“But what good will more answers do?!” you ask. “Shouldn’t we be more interested in the correct answer?”
The teacher smiles patiently; she’s seen this mistake a thousand times before. She leans in close, her bourbon-soaked breath stinging your eyes.
“Ahh yes,” she whispers, shuddering at the profundity of her insight, “but tell me, who gets to decide?”
A couple of weeks ago, Mark Zuckerberg announced that the fact-checkers at Meta (the parent company of Facebook, Instagram and Threads) will no longer get to decide if posts are factual. It’s a drum that fellow billionaires like Elon Musk have been beating for a while now.
And I, for one, can foresee no downsides to this decision.
I mean, who gets to decide if, as one Facebook post recently claimed, Mark Zuckerberg is the recipient of “the world’s first rat penis transplant”? Who gets to decide if he once tried to sell a baby on Facebook Marketplace? Who gets to decide if he “killed Jeffrey Epstein but still misses him every day”?
Zuckerberg also announced that Meta is updating its hateful content policies to remove restrictions on hate speech. And again, thank God!
Imagine the secrets we’ll unlock now that Facebook users are allowed to call women “household appliances.” What truths might have remained unknown without the freedom to call black people “farm equipment”? I don’t even want to imagine a world where calling someone a “Jew rat” and signing off with a swastika violates Twitter’s terms of service.
Frankly, I think they should go further!
According to a 2019 report, Facebook’s content moderators were being driven to despair as they sifted through the hours of child abuse and beheadings their users upload. But why should these “censors” get to decide what pedophiles and psychopaths can post, right?
Why not leave decisions like these to a “community notes” style system that can take up to 70 hours to appear on posts, by which time the offending content has already been shown to millions of users? Why not embrace a world where signal is so outmatched by noise that voters have given up on facts and just follow party lines? Why not let CEOs abdicate responsibility for the lies on their platforms instead of forcing them to make their systems better and more transparent?
For all the things we can’t know for sure, we know this: none of these billionaires trying to rebrand themselves as free speech crusaders care about free speech. They care about profits.
They care about the fact that offloading responsibility for fact-checking onto users is far cheaper than employing a team of professionals. So even though community notes are slower and less effective than the current system, they make more money.
They care about the fact that divisive, rage-inducing lies are more likely to go viral than honest, nuanced analysis. And more viral content, you guessed it, makes them more money.
They care about the fact that lying about stolen elections and cat-eating Haitians (notice that these pressing concerns have gone conspicuously quiet since the election) helps distract voters from real issues like wealth inequality and health insurance and price-gouging, issues they’d rather we didn’t think about because fixing them would cost them (or their billionaire friends) money.
The erosion of public discourse doesn’t hurt billionaires, it hurts us. Being told that it makes us freer only adds insult to injury.
Freedom, as George Orwell put it, is the freedom to say that two plus two make four.
It is the freedom to notice, without fear of torture or imprisonment, that the Earth revolves around the Sun.
It is the freedom to tell the truth, even if it’s unpopular. These freedoms are essential to a free and functioning society.
But contrary to what our tech overlords would have us believe, we are not better off because we can no longer hold them responsible for the disinformation on their platforms.
Our society doesn’t function better when our discourse is dominated by morons and sycophants.
We are not freer now that Russia-funded trolls and fragile egomaniacs can operate with impunity.
Because all the angst around “who gets to decide” overlooks the fact that there are plenty of things we’ve already decided.
We’ve already decided that swastikas and Nazi salutes were a bad idea the first time around.
We’ve already decided that if you think you’re being “silenced” because you can’t call LGBT people “freaks” on Facebook you probably have nothing of worth to say.
We’ve already decided, through years of hard-won progress, that facts are more useful than vibes.
All that’s left is to decide which we prefer.
I think that the problem with fact-checkers is that, in the past, accurate information and sincere questioning have been removed and written off as the work of conspiracy theorists (https://www.tabletmag.com/sections/news/articles/invasion-fact-checkers and https://commonplace.org/2025/01/17/good-riddance-facebook-fact-checkers/ are articles I've read recently detailing some issues; the second is explicitly a response to Facebook's policy change). Part of the advantage of community-sourced corrections is that, although they may take longer, people tend to receive them more openly since there's not as much of a "Ministry of Truth" vibe. There's also less risk of partisanship, since anyone can contribute.
On the second development, I certainly don't want to see more hate speech online, but I think that the First Amendment approach isn't necessarily a bad thing. I'd like to see people discuss issues openly on the internet, and the cost of that is that some people will unfortunately say gross filth. At the end of the day, words are just words--how we respond to them is up to us. I think that part of the reason why people post inflammatory things online is for attention, so censoring them only confirms their perception of their own language as powerful and intimidating. That's just my two cents, though!
The internet comes with no discernment; you must supply your own. This has always been true.
https://www.accesstoinsight.org/tipitaka/an/an03/an03.065.than.html