Substack’s founders — Chris Best, Hamish McKenzie, and Jairaj Sethi — wrote a thoughtful piece on how the company is approaching content moderation as their platform grows. Overall, it’s a solid post, and it outlines how Substack differs from the big social media platforms. Namely, Substack’s business model is built on people paying for subscriptions, not advertising, which lets them focus on building a platform that centers writers (and their subscribers), not “engagement”.
When engagement is the holy metric, trustworthiness doesn’t matter. What matters more than anything else is whether or not the user is stirred. The content and behaviors that keep people coming back – the rage-clicks, the hate-reads, the pile-ons, the conspiracy theories – help sustain giant businesses. When we started Substack to build an alternative to this status quo, we realized that a tweak to an algorithm or a new regulation wouldn’t change things for the better. The only option was to change the entire business model.
We already know what happens when platforms optimize for engagement — President Donald J. Trump, for one — and we know how it twists the incentives for creators.
All of this is a good sign that Substack understands the fundamentals of what makes social media so toxic and has a plan in place to avoid those dynamics. Even as they push into that world with their Reader app, which will bring some discovery elements along with it, they seem to have built a framework to guide their product development.
Substack’s content guidelines are pretty much exactly what you’d expect from their values statement — no hate speech, no porn, no spam, no doxxing. They’ve said they’re dedicated to a small-l liberalism of free speech, but without the absolutist “free speech wing of the free speech party” rhetoric that sent social media off the rails a decade ago.
We are aware of the history here, of how initial hopes about the internet’s ability to promote healthy and productive discourse have been disappointed. Look around you: the internet is broken. But we are not convinced that the solution lies in more censorship; nor do we think the problem is that almost anyone can publish on the internet. The major issue, we think, is that business models based on engagement have created a class of wildly successful media products that distort online discourse. It is increasingly difficult to participate in reasonable discussions on these platforms.
On the whole, I think this is a good framework for thinking about building a content platform on the internet today. The biggest piece missing from Substack’s content guidelines is that they don’t address disinformation or conspiracy theories — there’s nothing in there that would explicitly forbid QAnon, for example, or even Holocaust denial (the latter almost certainly qualifies as hate speech). What Substack has in its favor, though, is that it creates just enough friction to keep the darker, more trollish sorts from showing up in the first place.