Content moderation, while posing a possible threat to the freedom of information and expression, nonetheless plays an important role in regulating what is posted on internet platforms. While the internet was initially imagined as a free community for sharing information and ideas, some measure of moderation must exist if digital spaces are to serve as safe sources of information. Maintaining that moderation, however, is intensely difficult: voluntary moderation tends to be sporadic, and, as Roberts notes, the people employed specifically to moderate digital platforms are repeatedly exposed to disturbing images and content while occupying low-paid, low-prestige positions. Algorithmic moderation, meanwhile, can stifle not only free speech but also valid sources of information, as Facebook’s temporary elimination of articles from The Atlantic shows. In light of this incident, human labor seems like the most reasonable solution to online moderation. Yet such labor is blatantly undervalued, and those who perform it are sufficiently out of the public eye that the dynamics of their work, and even its importance, remain largely unknown. Furthermore, attempts to gamify online moderation, while providing incentives for volunteers, would not adequately compensate those who take on the responsibilities of professional moderators: they would likely be exposed to the same material and left with the same psychological scars.
These issues have been exacerbated by the onset of the COVID-19 pandemic: with social distancing and stay-at-home orders in place, the population’s dependence on digital media for news, entertainment, and work is greater than ever. This has led to increased online activity and, as people seek information about the virus, an immediate need for access to reliable reports and trustworthy news outlets, since false information about the virus or about government policies could have severe effects. Moderators are thus under intense strain, and the weaknesses of algorithms are being exposed as online traffic increases. This dynamic could diminish the quality of moderation or, given how small the existing pool of human moderators is relative to the surge in online traffic, produce a shortage of moderation altogether. It is also possible that these new stresses, combined with the pre-existing stress that moderators experience, could deepen the toll their jobs already take on them. In this regard, COVID-19 poses a unique challenge to online moderators, and while it may be too late to adapt now, these circumstances may inform future decisions regarding both the refinement of algorithms and the treatment of human moderators.

One thought on “Moderation in the Age of Global Pandemics”

Thanks for your post. I wonder what would need to happen in order for content moderation to become a well-paid, well-regarded profession. There are obvious security concerns with content moderators working remotely, but I suspect the real reason we’re not seeing an uptick in that line of work lies with the companies: they simply don’t regard it as a worthwhile investment to improve the quality of their feeds if it means recognizing moderators through full-time employment and full benefits. It’s a shame, really, because this type of digital work could help significantly with the inevitable rise in unemployment in the upcoming months and beyond. More than that, there could be adjacent economies, such as mental health programs specializing in the kinds of trauma associated with CCM.
