False Flags – Why content moderation can’t solve Section 230

In the paper Platforms Are Not Intermediaries (2018), Gillespie carefully outlines the challenges of regulating online platforms that are both conduits of communication and content curators. He suggests a number of actions for balancing the power and freedom that social media platforms and internet service providers (ISPs) enjoy against their public obligations: increased transparency, standardized moderation practices and best practices, government oversight, shared moderation data, and legislative reconsideration of Section 230. By setting agreed-upon public obligations in the form of standardized moderation, Gillespie argues, the public would be better able to understand how social media platforms moderate content and shape public discourse.

What Gillespie fails to address with these particular solutions is that much of the public that regularly interacts on social media platforms already understands the moderation systems in place and uses reporting features in bad faith to punish or retaliate against other users. In an earlier paper, Crawford and Gillespie note that flags, a form of publicly driven content moderation, can be used “…as a playful prank between friends, as part of a skirmish between professional competitors, as retribution for a social offense that happened elsewhere, or as part of a campaign of bullying or harassment – and it is often impossible to tell the difference” (2014, p. 9). While a standardized or transparent moderation system may improve public understanding of how these private social media giants shape public perception, it also has the negative effect of making it easier for public groups to manipulate public discourse themselves, potentially across multiple social media platforms.

Another issue with implementing standards and oversight for moderation of social media platforms is the question of “Who is this content being moderated for?” Much of the legislation surrounding internet communication has emphasized that laws like Section 230 exist to protect children from indecency or pornographic material on the internet. Regulation of internet communication, in other words, has been crafted with a particular age demographic in mind. When creating standards of moderation (or even loose guidelines) to be implemented across social media platforms, which demographics of age, sex, race, gender, sexual orientation, class, and so on are being privileged? Who are we willing to protect on the internet at the expense of others? The shoddy implementation of FOSTA and SESTA has given us an answer: as a public, we are more willing to sacrifice the lives and well-being of sex workers in order to pretend that we are protecting the victims of sex trafficking. In short, people who intend to harm others online will do so either directly on social media platforms or through those platforms’ reporting functions; standardizing moderation procedures will only empower those who do harm, with few consequences for the social media companies themselves.

One thought on “False Flags – Why content moderation can’t solve Section 230”

  1. Thanks for your post. I think your criticism of Gillespie – at least as far as the piece on intermediaries is concerned – is valid. Transparency, standardization, and best practices have regrettably become synonymous with all-around improvement, and that’s clearly not the case. The FOSTA/SESTA case shows how notions of accountability and security can be implemented in ways that sideline other groups in need of protection.

    I’m reminded of Virginia Eubanks’ claim in Automating Inequality (2018) that the cry for more transparency as a cure to widespread blackboxing is quite naive and misses an analysis of power. She thinks that even if we knew more about how, for instance, social service algorithms function, practices that reinforce inequality would not cease as a result.

    I think many of Gillespie’s ideas about regulating FB and other profit-oriented platforms are worth considering, but it’s possible that his account of “the public” and the implicit notion of democracy are inadequate to protect the livelihoods of underserved groups.
