Content Moderation and the User Experience

The past week of discussions has been incredibly timely; given current events, it is important to understand and analyze the ways in which content moderation can be manipulated to craft a specific narrative. As humans work behind the scenes to code and use algorithms that generate public content, I wonder how this form of automation might influence what we read and what is accessible for us to view. At what point might a content generator value high-volume traffic over factual, objective data? For example, would a private company, whose financial goal is (usually) to turn a profit, use algorithms to promote content that is more likely to be viewed, even when more thorough, factual data and articles are available but less viewed? Are developers encouraged through gamification to moderate content that will be most financially successful rather than most informative and objective? Unlike much of academic writing and analysis, content moderators often do not expose their algorithms and methodology, largely to maintain the integrity of their content moderation (and to avoid manipulation of the user experience). As such, we the users are not privy to the algorithmic decisions being made that would allow us to better adjudicate and understand the automated content modifications. Scholars can use footnotes and sources to better understand an academic work. We, however, are largely left in the dark when it comes to internal gamification and content moderation.

Moreover, in “Platforms Are Not Intermediaries,” Tarleton Gillespie writes about how content moderation shapes and determines the ways in which platforms conceive of their user base. We are no longer simply consumers of a medium. Instead, we, the users, become both a dataset and a signifier, but also the prison guards of the entire system. Gillespie argues that this happens through user moderators and tools such as flagging. Does this give the user moderator a greater sense of control over what remains a largely automated system? Does a medium appear more or less trustworthy to the public when there are user content moderators versus an entirely algorithmic system? This is a form of labor, as Gillespie points out, but one that is often not recognized.

In short, until content moderation is required to be transparent, we the users unfortunately have a startling lack of control over both the content we view and our curated experience. Gamification will often determine which algorithms are used to produce the content that generates maximum revenue. Until there is a system of enforcement and transparency that, as Gillespie suggests, enables researchers to dive deeply into specific examples of content moderation, we the consumers are left with a user experience that is largely curated outside of our control.

Social Media, Content Moderation, and Agency

A couple of years ago, a friend shared with me this article by former Google design ethicist Tristan Harris. Although it is somewhat alarmist about social media, as is his website advocating for more “human” tech design, last week’s readings and discussion on platforms and content moderation, which called into question who controls what content is stored and presented, brought to mind the sorts of discussions Harris’s article is engaged with. His basic premise is that social media platforms, not only by what content they choose to make available but also by their very design, are taking away the agency of those who engage with them. In other words, he argues that these platforms are designed to moderate and alter both what content we have access to and what content we want access to.

Although I waver back and forth a bit, at times feeling his alarmism more strongly than at others, I find that I do basically agree with Harris’s argument that content moderation at the design level of these platforms has a not insignificant impact on our agency by working to alter our psychological desires for certain content. I find this moderation of what content we want access to even more problematic than the sorts of “censoring” described by Gillespie and Roberts, because at some level it affects whether we even care or notice that certain content is missing or censored.

In the current plague state, I am finding this more wearying. Many of us are spending significantly more time on our computers and social media platforms looking to be fed more information both about the pandemic and about anything other than the pandemic. Social media platforms have been feeding us a false sense of control over what content we are sharing and accessing and, in a time when we are feeling a lack of control, we are leaning into the perceived control granted us by social media.

But, I think many of us are feeling the tension between desiring more and more content and recognizing that we don’t know what information we can trust. We are realizing that more information doesn’t necessarily give us more control over the situation. I hope this tension will lead us to a more thoughtful engagement with these platforms so that we can prevent our agency from being so easily usurped by those who actually have control over the content moderation and we can better advocate for access to more inclusive content.

False Flags – Why content moderation can’t solve Section 230

In the paper “Platforms Are Not Intermediaries” (2018), Gillespie carefully outlines the challenges in regulating online platforms that are both conduits of communication and content curators. He suggests a number of actions for balancing the power and freedom that social media platforms and internet service providers (ISPs) enjoy with public obligations. These include an increase in transparency, standardized moderation practices and best practices, government oversight, shared moderation data, and legislative reconsideration of Section 230. By setting agreed-upon public obligations in the form of standardized moderation, Gillespie argues, the public would be better able to understand how social media platforms moderate content and shape public discourse. What Gillespie fails to address with his particular solutions in this paper is that much of the public that regularly interacts on social media platforms already understands the moderating systems in place and uses reporting features in bad faith to punish or retaliate against other users. In an earlier paper, Crawford and Gillespie note that flags, a form of publicly driven content moderation, can be used “…as a playful prank between friends, as part of a skirmish between professional competitors, as retribution for a social offense that happened elsewhere, or as part of a campaign of bullying or harassment – and it is often impossible to tell the difference” (2014, p. 9). While a standardized or transparent moderation system may improve public understanding of how these private social media giants shape public perception, it also has the negative effect of allowing public groups to partake more easily in their own manipulation of public discourse, potentially across multiple social media platforms.

Another issue with implementing standards, oversight, and the like for the moderation of social media platforms is the question “Who is this content being moderated for?” Much of the legislation surrounding internet communication, including Section 230, has been justified as protecting children from indecent or pornographic material on the internet. This means that regulation of internet communication has been crafted with a particular age demographic in mind. When creating standards of moderation (or even loose guidelines of moderation) to be implemented across social media platforms, which age, sex, race, gender, sexual orientation, class, and other demographics are being privileged? Who are we willing to protect on the internet at the expense of others? The shoddy implementation of FOSTA and SESTA has given us an idea: we are more willing as a public to sacrifice the lives and well-being of sex workers in order to pretend that we are protecting the victims of sex trafficking. In short, people who intend to harm others online will do so either directly via social media platforms or via their reporting functions; standardizing moderation procedures will only empower those who do harm, with few consequences for the social media companies themselves.

Future Historians’ Data…

For this post, I would like to focus on Mary Gray’s video on the hidden costs of ghost work in algorithms. As she points out, even with machine learning techniques, there are still humans who perform the work upon which artificial intelligence algorithms rely. This takes the form of data entry, data labeling, and content review. Gray’s main topic is what she labels “human-in-the-loop” services, which require humans to work on the back end of the algorithms to ensure that they run smoothly.

She goes on to describe how the process works: “requesters” on the left interact with the “platform” through the internet, and any number of human workers with accounts on the platform then supply the labor required to complete the initial request. It is through this process, Gray argues, that workers are devalued and isolated due to an over-reliance on code.
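To make that flow a little more concrete, here is a minimal, purely hypothetical sketch in Python (the names, task, and payment are my own invention, not Gray’s) of a requester posting a piece of work to a platform and an anonymous worker claiming it:

```python
# Illustrative sketch only: a toy model of the requester -> platform -> worker
# flow Gray describes. All names and numbers here are hypothetical.
from collections import deque
from dataclasses import dataclass


@dataclass
class Task:
    requester: str       # who posted the work
    description: str     # what the human is asked to do
    payment_cents: int   # piece-rate pay for completing it


class Platform:
    """The intermediary: holds posted tasks until a worker claims one."""

    def __init__(self):
        self.queue = deque()

    def post(self, task: Task):
        # A requester submits work through the internet to the platform.
        self.queue.append(task)

    def claim(self, worker: str):
        # Any worker with an account can pick up the next waiting task;
        # requester and worker never interact directly.
        if not self.queue:
            return None
        task = self.queue.popleft()
        print(f"{worker} completes '{task.description}' "
              f"for {task.requester} at {task.payment_cents} cents")
        return task


platform = Platform()
platform.post(Task("image-recognition-startup",
                   "label 100 photos as cat / not cat", 50))
platform.claim("anonymous-worker-17")
```

Even in this toy version, the worker appears only as an interchangeable account name behind the platform, which is part of what Gray means by devaluation and isolation.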

Around the 6-minute mark, Gray introduces an “online-to-online” process in which companies access data online and contextualize it with other data sets to maximize profit. Thinking about the process(es) companies go through to obtain and contextualize such data led me to wonder how future historians will grapple with this same data.

For future historians working on the economic or labor issues of the early 21st century, what digital sources and data might they discover in their research? How will this information be archived, organized, and preserved over time? How will they be able to link the human experience (whether as worker or consumer) with the different processes that Gray addresses in her lecture?

In the piece by Lara Putnam that we read earlier this semester, she explored the shadows that digitized sources can cast over certain historical subjects. What can we say about the current digital processes that inherently cast shadows over human laborers? Like historians now, perhaps future historians will get a glimpse into these working conditions through ethnographies, personal testimonies, and second-hand accounts. Or perhaps the data collected, analyzed, and contextualized will be stored in a way that keeps it accessible decades and centuries from now.

What of all this will stay with us in the future?

Hi everyone, it was difficult for me to focus during this past week, so I only managed to write my reflection on the readings and discussion in a series of not very well-connected paragraphs. I apologize and thank you for your understanding.

In the first session of the seminar, we discussed how content on the Internet, although accessible from all over the world, is created in a specific place. The same consideration could be extended to online platforms. People around the globe use online social media or service platforms where they create their own culture-specific content, but these platforms were also designed in a particular cultural and temporal context. We know that platforms constantly evolve and readapt in response to multiple factors, but I wonder if the original way and place in which they were designed can help transform into global culture certain elements that were previously specific to a given culture. I have in mind the concept of the school yearbook and its derivative form, Facebook. But I also think about how the now normalized use of emojis and gifs has replaced the verbalization of ideas and feelings, a phenomenon that decades ago was associated mostly with technology users in East Asia.

The video lecture “Algorithmic Cruelty and the hidden costs of ghost work” brought feelings of empathy towards those workers trapped in the mechanisms of on-demand online platforms 🙁 Not only because I can imagine their precarious situation, but also because many of the traits of these jobs are common to other productive activities, one of them being graduate life, and perhaps academic life more generally. Not having a 9-to-5 shift, nor ever being able to finish your workday, is the most evident connection. Grant writing and competition can be seen as task-based work, in which we are also required to hunt constantly for grants and calls for papers on our own. Most of the time, our chances of receiving a grant depend on who else applied for it, but institutions will never reveal this information to the applicants. We are kept in a state of isolation and ignorance about this process, not knowing who exactly gets to choose or hire us and why.

A recent Pittwire email on Zoom protocols reminded me of the flagging system in social media, and of how the role of the moderator needs to be better defined, as does the expected behavior of participants. I was disappointed not to find among the protocols a suggestion to avoid having any direct source of light, such as a lamp, in one’s background; it could be replaced with an interesting but not too distracting object toward which other zoomers could direct their gaze when they get tired of looking at people’s faces. Nor was there anything encouraging attendees to change into more appropriate attire, even if only for the Zoom session, or only from the waist up.

Temporarily, virtual technology mediates and shapes all our social and work interactions, but this process will impact the more permanent life and work of the future. Just like Alison, I want to be optimistic and believe that we will develop an aversion to this type of communication that will push us to avoid it, but it is hard not to consider another scenario. In a virtual conversation with my father about this possible future where more and more activities are transferred into the virtual world, his deepest hope was that “church could be one of them.” As graduate students, should we start developing our online pedagogical skills and portfolio more seriously?

User-Generated Content and Academic Communities

“The style of moderation can vary from site to site, and from platform to platform, as rules around what UGC is allowed are often set at a site or platform level, and reflect that platform’s brand and reputation, its tolerance for risk, and the type of user engagement it wishes to attract.”

Sarah T. Roberts, “Content Moderation,” in Encyclopedia of Big Data, eds. Laurie A. Schintler and Connie L. McNeely (Berlin and Heidelberg: Springer, 2017), 1.

In reflecting on our readings and our discussion this week, this quote by Sarah T. Roberts stood out to me, and it brought to mind several related thoughts about user-generated content and user interactions by academics and others on online platforms.

Through my personal Facebook account, I am a member of several private groups related to my professional interests, including Friends of the International Congress on Medieval Studies (2,218 members), the International Society for the Study of Medievalism (541 members), and Teaching the Middle Ages (2,765 members). With the exception of the second group, these groups are not officially affiliated with professional organizations, though they do consist of academics and others with interests in these topics. Each group has at least one admin or moderator who is at least nominally responsible for moderating user-generated content and managing the community of members.

As Roberts has noted, the “style of moderation” of sites and platforms, and of groups hosted by platforms, can and does vary, and I am interested here in considering private Facebook groups such as those I mentioned that I am a member of as spaces in which users generate more or less “academic” or “professional” content through their personal social media accounts. These private groups can be, and have been, contentious spaces, with posts, conversations, and, at times, arguments about prejudice and discrimination faced by members of these groups, particularly medievalists of color, having resulted in reminders by the groups’ admins of acceptable behavior within the group and in tensions among members both in these online spaces and in the field more generally.

Considered more broadly, what implications and impacts might the moderation of user-generated content and user interactions on online platforms, particularly social media, have for academic communities?

With pressure to be professionally “visible” and “engaged” in online spaces, including social media platforms, as part of professional networking, gaining field recognition, and improving one’s metrics for the hiring and tenure processes—I am thinking here of our conversations and work earlier this semester with Michael Dietrich—are there ethical concerns in asking or expecting academics to build an online professional presence, particularly in regard to graduate students, early career scholars, and those who are unaffiliated, given that such a presence requires continuous work and that this work is likely to be uncompensated and unacknowledged?

Online Labor and Covid19

I’m musing today about the way that Covid19 will ultimately affect the economy and how technology, and automation, will play a part in that. Part of what scares me about shutting down all “non-essential” business (though I completely agree that we should do this because human life > money) is that what is deemed “non-essential” may very well not appear again—especially if we find technological workarounds—depending on how long this distancing lasts. Furthermore, with more and more people working from home (though we should add the caveat that there are class, gender, racial, and socioeconomic factors involved in who “gets” to stay home, as well as who “gets” to be safe), I ultimately wonder how this distancing will render more work devalued and hidden (I’m echoing Gray’s words here). How will employment models change as the economy changes? How will labor structures change as we decide what is essential, and who fills in the employment crevices between tech and humanity?

In terms of tech companies and the Gillespie piece, he suggests that “platforms do not just mediate public discourse: they constitute it” (199). I’m thinking of the new Covid19 button on Facebook that you can push to get curated “news” about the virus. Facebook does not make it obvious how such content is curated, who is in charge of the curation, or what the company gains from giving you access to this information. I’m thinking of a friend who recently told me that she didn’t watch the news anymore because Twitter told her all she needed to know about Covid19. And I’m thinking about Gillespie’s claim that “the public problems we now face are old information challenges paired with the affordances of the platforms they exploit: . . . misinformation buoyed by its algorithmically-calculated popularity” (200).

It seems that information about Covid19 is spreading faster than the virus itself—with interesting labor implications following in its wake. I’ll talk about this more in class next week, but I’m thinking of switching my final project to something along these lines. I’ve recently become aware of Kaggle, a data science community that is running competitions for folks to come up with data models regarding Covid19. Kaggle is making datasets available for anyone who wants to play with machine learning and respond to its call—and offering financial incentives to do so. I wonder what the company gains from such crowdsourcing (in the guise of “helping the community”). Indeed, deep learning may very well help produce “answers” to issues related to Covid19. However, I genuinely wonder how such answers will be monetized and who will benefit from that monetization.

In any case, the Covid19 epidemic is pulling the veneer off of many of the things that our society struggles with—socially, economically, and informationally. Consider the ways that misinformation has spread (can I take ibuprofen if I think I have coronavirus?) and the question of who even has access to information in the new home offices to which many of us are relegated.