The past week of discussions has been incredibly timely; given current events, it is important to understand and analyze the ways in which content moderation can be manipulated to craft a specific narrative. As humans work behind the scenes to code and deploy the algorithms that generate public content, I wonder how this form of automation might influence what we read and what is accessible for us to view. At what point might a content generator value high-volume traffic over factual, objective data? For example, would a private company, whose financial goal is (usually) to turn a profit, use algorithms to promote content that is more likely to be viewed, even when more thorough, factual articles are available but less viewed? Are developers encouraged through gamification to moderate content in ways that are most financially successful rather than most informative and objective? Unlike much of academic writing and analysis, content moderators often do not expose their algorithms and methodology, largely to maintain the integrity of their content moderation (and to prevent manipulation of the user experience). As such, we the users are not privy to the algorithmic decisions being made that would allow us to better adjudicate and understand the automated content modifications. Scholars can use footnotes and sources to better understand an academic work. We, however, are largely left in the dark when it comes to internal gamification and content moderation.

Moreover, in “Platforms Are Not Intermediaries,” Tarleton Gillespie writes about how content moderation shapes and determines the ways in which platforms conceive of their user base. We are no longer simply consumers of a medium. Instead, we, the users, become both a dataset and a signal for the platform, and at the same time the prison guards of the entire system. Gillespie argues that this happens through user moderation and tools such as flagging. Does this give the user moderator a greater sense of control over what remains a largely automated system? Does a medium appear more or less trustworthy to the public when there are user content moderators rather than an entirely algorithmic system? This is a form of labor, as Gillespie points out, but one that often goes unrecognized.

In short, until content moderation is required to be transparent, we the users unfortunately have a startling lack of control over both the content we view and our curated experience. Gamification will often determine which algorithms are used to produce the content that generates maximum revenue. Until there is a system of enforcement and transparency, and (as Gillespie suggests) access for researchers to dig deeply into specific examples of content moderation, we the consumers are left with a user experience that is largely curated outside of our control.

One thought on “Content Moderation and the User Experience”

  1. As Gillespie writes in the introduction to the article, users and CEOs alike were surprised by the prominence platforms now have as economic, political, and cultural entities. Mark Zuckerberg doesn’t strike me as overly political, and if it weren’t for a decade of scandals, I doubt he would care much about steering the company toward a better public reputation. In fact, even now FB seems reluctant to apply moderation according to fact-checking and basic journalistic protocols if that means losing out on revenues generated through user engagement. At this point, content moderation (broadly construed) at FB probably includes customization according to taste predictions, relevant ads, and a minimum of misinformation filtering.

    Academia is not entirely immune to these trends either. Of course there is an established tradition of peer review, citation standards, and other safeguards, but we can’t ignore trends of gamification and, more generally, the integration of social media logics into hiring practices, tenure reviews, and so on.
