False Flags – Why Content Moderation Can't Solve Section 230

In the paper Platforms Are Not Intermediaries (2018), Gillespie carefully outlines the challenges of regulating online platforms that are both conduits of communication and content curators. He suggests a number of actions for balancing the power and freedom that social media platforms and internet service providers (ISPs) enjoy with public obligations: increased transparency, standardized moderation practices, government oversight, shared moderation data, and legislative reconsideration of Section 230. By setting agreed-upon public obligations in the form of standardized moderation, Gillespie argues, the public would be better able to understand how social media platforms moderate content and shape public discourse. What Gillespie fails to address with these particular solutions is that much of the public that regularly interacts on social media platforms already understands the moderating systems in place and uses reporting features in bad faith to punish or retaliate against other users. In an earlier paper, Crawford and Gillespie note that flags, a form of publicly driven content moderation, can be used “…as a playful prank between friends, as part of a skirmish between professional competitors, as retribution for a social offense that happened elsewhere, or as part of a campaign of bullying or harassment – and it is often impossible to tell the difference” (2014, p. 9). While a standardized or transparent moderation system may improve public understanding of how these private social media giants shape public perception, it also has the negative effect of allowing public groups to more easily conduct their own manipulation of public discourse, potentially across multiple social media platforms.

Another issue with implementing standards and oversight for moderation of social media platforms is the question of “Who is this content being moderated for?” Much of the legislation surrounding internet communication, including Section 230, has been framed as protecting children from indecency or pornographic material on the internet. Regulation of internet communication has thus been crafted with a particular age demographic in mind. When creating standards of moderation (or even loose guidelines) to be implemented across social media platforms, which demographics of age, sex, race, gender, sexual orientation, class, etc. are being privileged? Who are we willing to protect on the internet at the expense of others? The shoddy implementation of FOSTA and SESTA has given us an idea: we are more willing as a public to sacrifice the lives and well-being of sex workers than to pretend we are protecting the victims of sex trafficking. In short, people who intend to harm others online will do so either directly via social media platforms or via their reporting functions; standardizing moderation procedures will only empower those who do harm, with few consequences for the social media companies themselves.

Co-Authorship and Dental Anthropology

When I first approached this problem, I wanted to look at the field of Dental Anthropology, a sub-specialization of biological anthropology. I decided first to look at the common terms used in the abstracts and titles of papers in dental anthropology to see if I could find any patterns.

Looking at the network analysis, it appears that the red section relates most closely to age estimation using dental eruption, green to identifying human remains in forensic cases and forensic dentistry, blue to dental traits and morphology, and yellow to a miscellaneous category including site descriptions and journal names. This analysis was interesting because so many words refer to the same phenomenon; for example, the acronym ASUDAS was counted separately from its full name, the Arizona State University Dental Anthropology System. Excluding some overly general terms like man, woman, etc. would likely have resulted in a clearer network.
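The synonym problem noted above (ASUDAS counted separately from its full name) can be handled by collapsing variant terms to one canonical label before building the co-occurrence network. A minimal sketch in Python; the synonym map and stopword list here are hypothetical placeholders, not the actual term lists used in the analysis:

```python
from collections import Counter
from itertools import combinations

# Hypothetical synonym map collapsing variant terms to one canonical label
SYNONYMS = {
    "arizona state university dental anthropology system": "ASUDAS",
    "asudas": "ASUDAS",
}
# Overly general terms to drop before counting
STOPWORDS = {"man", "woman", "human"}

def normalize(term):
    """Lowercase a term and map it to its canonical label if one exists."""
    return SYNONYMS.get(term.lower().strip(), term.lower().strip())

def cooccurrence(documents):
    """Count how often each pair of normalized terms shares a document."""
    pairs = Counter()
    for terms in documents:
        cleaned = {normalize(t) for t in terms} - STOPWORDS
        for a, b in combinations(sorted(cleaned), 2):
            pairs[(a, b)] += 1
    return pairs

edges = cooccurrence([
    ["ASUDAS", "dental morphology", "man"],
    ["Arizona State University Dental Anthropology System", "dental morphology"],
])
# Both abstracts now contribute to the same ASUDAS edge
```

With this normalization, the two ASUDAS variants strengthen a single node in the network instead of splitting into two.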

I also wanted to look at the countries publishing work in dental anthropology and whether co-authorship in the field transcended national borders, which resulted in the following map.

The strongest co-authorship connection was between the US and England, which wasn’t surprising considering anthropology tends to be rooted in colonial empires. What was interesting was that the US and Germany were basically on top of each other, sharing a nearly identical co-author network, which suggests a close-knit group of academics regularly publishing together in a field this small.

Quantifying Worth

Wood, J. W., Milner, G. R., Harpending, H. C., Weiss, K. M., Cohen, M. N., Eisenberg, L. E., … & Katzenberg, M. A. (1992). The osteological paradox: problems of inferring prehistoric health from skeletal samples [and comments and reply]. Current Anthropology, 33(4), 343-370.

1a. Total number of citations? 8,226 (excluding self-citations)

1b. What can you learn about the number of citations to this article per year since it was published?

The paper was modestly cited until the year 2000, when the number of citations began to increase steadily each year, indicating that the article may be part of the core literature of a field. Alternatively, this could reflect general trends in academia, like the increasing number of scholars within academic disciplines or the increasing number of publications produced per author, both of which have grown dramatically in the digital age.

1c. What can you learn about who cited this article? What are their disciplinary identifications?

Of the 5,167 citing articles, the majority were categorized under anthropology or a related subfield. Interestingly, paleopathology, the field from which the paper originated, is not listed as a category in this report.

2a. Total number of publications? 225

2b. What is the H-Index? 38

2c. Average citations per item? 20.08

2d. Which of these numbers would you prefer to have used in evaluations for hiring and tenure? Why?

You would have to consider whom you are comparing for hiring and tenure; an individual just coming out of a postdoc may have low scores on all of these numbers simply because their publications are so new. The paper I chose is part of the foundational literature of a field but was cited fewer than 10 times per year in its first few years after publication. An average number of publications per year would at least measure some form of consistent productivity, but these are all poor measures for hiring and tenure unless your goal is departmental prestige.

2e. Is this kind of analysis appropriate for all academic fields? Why or Why not?

Given my previous answer, I obviously have some issues with these measures. This style of evaluation disproportionately benefits students in lab-based programs where a lead professor gives students smaller research projects to work and publish on as contributions to an overall larger project, ensuring early publication and continued citations as long as people are still publishing on the head PI’s project. These measures hurt people coming from small programs without large research projects, people from academic fields with independently led (and often slow) research, and people whose research requires long periods of data collection in the field, which is often slowed by politics and bureaucracy (see ecology, etc.).
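As an aside on the metrics themselves: the h-index reported in 2b is the largest h such that h of the author's papers have each been cited at least h times. A minimal sketch of that definition (the citation counts here are invented for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers each have at least h citations."""
    h = 0
    # Walk the counts from most- to least-cited; the i-th paper (1-indexed)
    # contributes to h only if it has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

h_index([10, 8, 5, 4, 3])  # 4: four papers each have at least 4 citations
```

This also makes the critique above concrete: an early-career researcher with a handful of recent papers is capped at a low h no matter how strong the work is, since no paper has had time to accumulate citations.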

Education and Unemployment

When I initially began this research, I wanted to look at post-graduation placement of women who acquired a doctoral degree in a science, technology, engineering, or mathematics (STEM) field. Unfortunately, the data contained within the Woodrow Wilson Center’s portal was insufficient to answer this question. As a result, I broadened my research to compare rates of unemployment (reported as a % of the total labor force) between women in general and women with advanced degrees. Given that education is often touted as “the great equalizer,” it would follow that women who attained advanced degrees would have lower rates of unemployment than women in the general population. In order to test this assumption, I pulled two reports covering women’s unemployment: the first was a 2014 report from the International Labour Organization (ILO) containing unemployment data on women aged 15-64 from 88 countries; the second was a 2015 report from the World Bank containing unemployment data for women with advanced degrees from 65 countries. Fortunately, both the ILO and the World Bank use the same definition of unemployment, which is as follows: “Individuals without work, seeking work in a recent past period, and currently available for work, including people who have lost their jobs or who have voluntarily left work. Persons who did not look for work but have an arrangement for a future job are also counted as unemployed.” Advanced education was defined as “…short-cycle tertiary education, a bachelor’s degree or equivalent education level, a master’s degree or equivalent education level, or doctoral degree or equivalent education level” in the World Bank report. To clean up the data, countries that were not on both lists were removed, leaving 56 countries for cross comparison.
Of those 56 countries, 13 reported a higher unemployment rate for women with advanced degrees than for women in the general population.
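The country-matching step described above can be sketched with plain Python set operations; the country names and rates below are invented placeholders, not values from either report:

```python
# Hypothetical excerpts of the two reports: country -> female unemployment (%)
general = {"CountryA": 6.0, "CountryB": 9.5, "CountryC": 4.2}    # ILO, women aged 15-64
advanced = {"CountryA": 4.1, "CountryB": 11.0, "CountryD": 3.0}  # World Bank, advanced degrees

# Keep only countries that appear in both reports
shared = sorted(general.keys() & advanced.keys())

# Flag countries where women with advanced degrees face HIGHER unemployment
worse_off = [c for c in shared if advanced[c] > general[c]]
```

Applied to the full reports, this intersection leaves the 56 comparable countries, and the final list corresponds to the 13 countries flagged above.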

Unfortunately, the reports and data provided by these large organizations tend to aggregate data in ways that can mask confounding variables. In this case, when comparing general unemployment rates for women to those for women with advanced degrees, I had no way of breaking the data down by age group; this led to the inclusion of young women (ages 15-22) who likely could not yet have finished an advanced degree. Aside from differing sociocultural norms, it could be that these 13 countries with higher unemployment for women with advanced degrees merely had a large population of employed young women skewing the data. These unemployment estimates also often exclude forms of informal labor, including seasonal labor (like agricultural or pastoral work) and household labor. Additionally, while unemployment rates are often used as indicators of economic stability, they can mask other economic issues like chronically low wages, wage inequality, and quality of life. Social scientists can use these reports to guide inquiry into why women with advanced degrees may face higher rates of unemployment in different sociocultural contexts. However, considering this endeavor began as a way to look at post-graduation placement of women with STEM degrees, the limitations and biases baked into the reports make any critical and contextualized analysis using them frustrating, if not impossible.

Data and the Past

This past week we met with Dr. Melanie Hughes to consider the practical issues of data production, management, and analysis within the social sciences. One thing that struck me was that, despite our ability to identify issues in the categorization systems presented in class, many of us were unable to propose solutions to mitigate these gaps in the data. For example, while discussing the readings, one of my main concerns was the binary presentation of gender in gender statistics. This criticism ignored both the practical issues of increasing the number of variables under study (and weakening the power of the statistical measures used) and the risk that increasing the visibility of minority groups exposes them to unnecessary violence (Bailey and Gossett, 2018). Here is where I struggle most: as social scientists we use the available data not only to inform us of the present but also to reconstruct our ideas of the past. The lack of data on groups that exist outside the constructed norm (or outside our knowledge at the time) allows people to act as if these groups are new fads or phases that never existed in the past. The anti-vax movement is predicated on this lack of data, with many suggesting that conditions like autism didn’t exist prior to modern vaccines. Of course, the lack of data on autism in the past is due to advances in knowledge, new diagnostic criteria, and new ways of recording and sharing data rather than the condition itself not existing. More specific categories and more data probably won’t solve this issue, so how do we carefully contextualize the data we collect while maintaining the ability to compare it widely across vastly different contexts?

Colonialism and the Violent Academy

The readings of the past two weeks have defined digital humanities and outlined the ways this field can uphold or challenge colonialism and sexism through careful contextualization of data (Risam 2018), collaborative stewardship (Christen 2018), and critical reflection on the histories of constructed categories of data (Aronova et al. 2017; Noble 2018; Radin 2017). In Decolonizing the Digital Humanities in Theory and Practice, Risam warns of the risks of disingenuous decolonization efforts wherein assembling a diverse body of researchers is seen as the endpoint of decolonization in academia rather than the dismantling of colonial epistemologies and practices. This “add and stir” approach echoes colonialism in that researchers belonging to minority groups are either expected to conform to the structures of the academy and act as figureheads for decolonization efforts or expected to transform a violent and oppressive system from the inside out with no support. The visibility of these researchers both within and outside academia exposes them to additional violence in an increasingly accessible digital world (Bailey and Gossett 2018). This violence is especially clear in Bailey’s section, where contributors to the development and proliferation of the term misogynoir were removed from Wikipedia due to their lack of academic credentials or publications, despite the fact that many of the individuals who edit Wikipedia lack these same qualifications. Fortunately, many of the authors have provided meaningful methodological changes that include and center knowledge originating outside academic institutions. Christen (2018) provided the most straightforward approach by outlining ETHICS, a series of steps for reflexive archival practices. Here, digital archives are created from communities’ stated needs, with the power to view, modify, and change the digital record resting with the people these data were taken from. In a similar vein, Risam (2018, p. 82) suggests that the emphasis on the local “…demands acknowledgement that there is not a single world or way of being within the world but rather a proliferation of worlds, traditions, and forms of knowledge.” While these works provide methods to practice decolonization, rather than just speak to it, it is unclear to me whether methods like ETHICS can effectively be used to decolonize “big data.”

The move to decolonize has spread across multiple disciplines in the humanities, often with detrimental effects on researchers of color. Particularly in anthropology, which has a long legacy as an investigative tool of colonial powers, researchers of color are regularly expected to engage in integral decolonization work in addition to (and often in lieu of) the academic labor that departments use to measure progress. For example, Savannah Martin, a Siletz researcher (@SavvyOlogy on Twitter), was criticized by her department for not meeting the writing benchmarks for her dissertation despite being an invited speaker on multiple panels challenging colonial narratives in anthropology. Similarly, Shay-Akil McLean, a queer trans man (@Hood_Biologist on Twitter) who founded decolonizeallthethings.com and has been an invited speaker on multiple panels covering decolonization in anthropology, left anthropology for more supportive humanities departments after facing racial discrimination during his time as an anthropology PhD student. While the move to decolonize theory and practice in the digital humanities is excellent, I am unsure (as I am unfamiliar with the discipline) whether these efforts have extended to department-level initiatives that adamantly support the people actively challenging colonialism in academia.

Alysha’s Intro

Hi All!

I am Alysha Lieurance, a second-year PhD student in anthropology here at Pitt. I received a Master’s in Anthropology from East Carolina University studying social inequality and urbanization at Petra, Jordan, but have recently moved areas (and time periods) to Late Antique and Early Medieval Germany.

I work at Straubing, a small site along the Danube where Romans encountered and interacted with various migrating populations, often referred to as barbarians. Scholarship on the Late Antique and Early Medieval periods disagrees on the nature of the fall of the Roman Empire: some researchers argue that it was the result of ruthless attacks and disruption by roaming barbarians, while others argue that the transition was relatively smooth and characterized by new forms of social integration and assimilation as well as the formation of new identities along the borderlands. My research explores local shifts in demography, diet, health, and mortuary treatment at Straubing to assess how the individuals there negotiated new ideas and expressions of community within the Roman hinterlands.

One of the major issues that needs to be addressed within these contexts is testing whether archaeologically created categories of difference (like ethnic groups or polities) were readily identified and acted upon by people in the past. Digital methods like GIS and spatial analysis of mortuary spaces can help address this question; however, uncritical application of these methods often reinforces archaeological assumptions rather than challenging them. I am in this class in the hope that I can better contextualize and apply digital methods in my future research.