When I Became the Speech Referee

"Free speech" is one of those phrases that has been repeated so often it has almost stopped meaning anything.

You hear it from people who want to say whatever they want without consequence. You hear it from people who are trying to justify things that have no business being said in public. You hear it as a shield. You hear it as a weapon. And somewhere in the middle of all that noise, you have to figure out what you actually believe — and whether you're willing to back it when it's inconvenient.

I've been on both sides of this conversation. The person saying things that make rooms uncomfortable. And the person deciding whether those things get to stay.

I Have Experience in This Room

Here's what most people debating free speech online have never actually done: sat in the chair. Made the call. Looked at a piece of content or a live moment and decided — this stays or this goes — knowing that either answer is going to cost you something.

When you're in that chair, free speech stops being a talking point fast. You realize very quickly that moderation isn't about silencing — or at least it isn't supposed to be. The legitimate reason to moderate is safety. Real safety. The kind where something being said poses an actual, measurable threat to real people. Not: this makes us uncomfortable. Not: this is off-brand. Not: this person is saying something I disagree with. Actual harm. That's a different standard, and most of the moderation debates I've watched online collapse because nobody is willing to agree on what that standard even is.

Where Moderation Goes Wrong

The problem is that moderation rarely stays at that line. What starts as a genuine safety policy tends to drift — slowly, quietly — toward risk management. And risk management isn't safety. Risk management is: what might make us look bad? What might trigger a complaint? What might cost us an advertiser?

And once you're moderating for optics instead of safety, you're not protecting anyone. You're just deciding which voices are commercially convenient. That's not free speech. That's not moderation. That's gatekeeping dressed up in policy language.

I've watched it happen. I've watched people get silenced — not because they were dangerous, but because they were loud and inconvenient and the algorithm was nervous. And that's where I draw the hard line. Because I believe in free speech not as a slogan, not as a constitutional argument, not as a trend — but as a personal value. One I've had to defend in rooms where defending it cost something.

What I Actually Believe

Uncomfortable speech is not the same as dangerous speech. Disagreement is not the same as harm. And the moment a platform or a person or a policy starts blurring that line — because it's easier, because it's safer, because it plays better — they've stopped protecting people and started protecting themselves.

I'm not interested in a sanitized version of the world where nobody says anything that makes anyone flinch. That's not safety. That's a performance of safety that makes us weaker, not stronger.

What I believe is this: truth is sharp. Accountability is uncomfortable. Real ideas challenge people. And if we build systems that eliminate everything sharp and uncomfortable and challenging — we haven't protected free speech. We've killed it while calling it something else.

I've been the referee. I know what the rulebook looks like in practice. And the most important thing I took from that chair is this: the rules have to serve the people, not the other way around. The minute the rules start serving the institution, you've already lost something you can't get back.

Keith Bilous built and sold ICUC for $50 million, led 400+ people, and worked with Coca-Cola, Disney, Netflix, and Mastercard. In 2023, he created Mornings in the Lab, a daily LIVE morning format. Over 1,000 episodes later, he writes Format Notes to document what he is learning about format design, accountability infrastructure, and building the morning.