A couple of years ago, a friend shared with me an article by the former Google design ethicist Tristan Harris. Although the article is somewhat alarmist about social media, as is his website advocating for more “human” tech design, it came to mind during last week’s readings and discussion on platforms and content moderation, which called into question who controls what content is stored and presented. His basic premise is that social media platforms, not only through the content they choose to make available but also through their very design, take away the agency of those who engage with them. In other words, he argues that these platforms are designed to moderate and alter both what content we have access to and what content we want access to.

Although I waver a bit, at times feeling his alarmism more strongly than at others, I basically agree with Harris’s argument that the content moderation built into the design of these platforms has a significant impact on our agency by working to reshape our psychological desires for certain content. I find this moderation of what content we want access to even more troubling than the sorts of “censoring” described by Gillespie and Roberts, because at some level it affects whether we even notice or care that certain content is missing or censored.

In the current plague state, I am finding this all the more wearying. Many of us are spending significantly more time on our computers and on social media, looking to be fed more information both about the pandemic and about anything other than the pandemic. These platforms have been feeding us a false sense of control over what content we share and access, and, at a time when we feel a lack of control, we are leaning into the perceived control they grant us.

But I think many of us are feeling the tension between desiring more and more content and recognizing that we don’t know what information we can trust. We are realizing that more information doesn’t necessarily give us more control over the situation. I hope this tension will lead us to a more thoughtful engagement with these platforms, so that we can keep our agency from being so easily usurped by those who actually control content moderation and can better advocate for access to more inclusive content.

One thought on “Social Media, Content Moderation, and Agency”

  1. Harris is an interesting case, and it’s hard to deny that social media and application designs are extremely manipulative and insidious: the red notification badges, the veil of choice, the endless scroll, and so on. What I find less convincing is his notion that a turn to ethics, a so-called human-centered design, will alleviate technological ills. It implies that mechanisms of social control somehow began with Google and that what we need now is simply more ethics boards. I suspect that when critiquing technological design, it’s also important to keep in mind histories of computation (who was excluded?), Silicon Valley-style utopianism (what problems was the digital revolution going to solve?), and neoliberal financialization (how is big tech making money?).

    I think you’re on to something with your second point regarding the tension between control and (more) information. It’s just such a mess right now. My best guess is that critical thinking skills and a solid liberal arts/humanities education will continue to help us ask the right questions.
