I’m musing today about the way that Covid19 will ultimately affect the economy and how technology and automation will play a part in that. Part of what scares me about shutting down all “non-essential” business (though I completely agree that we should do this, because human life > money) is that what is deemed “non-essential” may very well not appear again, especially if we find technological workarounds, and depending on how long this distancing lasts. Furthermore, with more and more people working from home (though we should add the caveat that class, gender, racial, and socioeconomic factors shape who “gets” to stay home and who “gets” to be safe), I ultimately wonder how this distancing will render more work devalued and hidden (I’m echoing Gray’s words here). How will employment models change as the economy changes? How will labor structures change as we decide what is essential, and who fills in the employment crevices between tech and humanity?

In terms of tech companies and the Gillespie piece, he suggests that “platforms do not just mediate public discourse: they constitute it” (199). I’m thinking of the new Covid19 button on Facebook that you can click to get curated “news” about the virus. Facebook does not make it obvious how such content is curated, who is in charge of that curation, or what the company gains from giving you access to this information. I’m thinking of a friend who recently told me that she didn’t watch the news anymore because Twitter told her all she needed to know about Covid19. I’m thinking about Gillespie’s claim that “the public problems we now face are old information challenges paired with the affordances of the platforms they exploit: . . . misinformation buoyed by its algorithmically-calculated popularity” (200).

It seems that information about Covid19 is spreading faster than the virus itself, with interesting labor implications following in its wake. I’ll talk about this more in class next week, but I’m thinking of switching my final project to something along these lines. I’ve recently become aware of Kaggle, a data science community that is running competitions for folks to come up with data models regarding Covid19. They are making datasets available to anyone who wants to play with machine learning in response to their call, and they are offering financial incentives to do so. I wonder what they gain from such crowdsourcing (in the guise of “helping the community”). Indeed, deep learning may very well help to come up with “answers” to issues related to Covid19. However, I genuinely wonder how such answers will be monetized and who will benefit from that monetization.
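To make the scale of that “play” concrete, here is a minimal sketch of the kind of baseline model such a call invites: ranking paper abstracts against a research question with TF-IDF. This is purely illustrative; the file name, column names, and query are my own assumptions, not the competition’s actual schema.

```python
# Hypothetical sketch: rank Covid19 paper abstracts by relevance to a question.
# "metadata.csv", the "abstract"/"title" columns, and the query are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = pd.read_csv("metadata.csv")          # assumed dataset of titles and abstracts
abstracts = papers["abstract"].fillna("")

# Turn each abstract into a sparse TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english", max_features=50000)
doc_vectors = vectorizer.fit_transform(abstracts)

# Score every abstract against an example research question.
query_vector = vectorizer.transform(["incubation period of the virus"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

# Show the five closest matches.
top_five = scores.argsort()[::-1][:5]
print(papers["title"].iloc[top_five])
```

That a few lines of off-the-shelf code can pass as a plausible contribution is part of the point: the crowdsourced labor is cheap for the platform to solicit and easy to aggregate.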

In any case, the Covid19 epidemic is pulling the veneer off of many things that our society struggles with socially, economically, and informationally. Consider the ways that misinformation has spread (can I take ibuprofen if I think I have coronavirus?), and consider who even has access to reliable information in the new home offices to which many of us are relegated.

One thought on “Online Labor and Covid19”

  1. You raise a number of important points in your post. We’re already seeing the ramifications of content moderators not clocking into work as much, because their ability to work remotely is often hindered by security concerns. Increasingly, algorithms are moderating social media feeds, and we can expect more glitches and bugs. I suspect that you are right: many companies will use this situation as accelerated testing for remote and, in some cases, nonhuman labor systems.

    In my view, Facebook, like most other tech companies, wants to be a one-stop shop for social experience, which includes news services. The controversial news feed might be gone, but that ambition remains, and the coronavirus pandemic again challenges Facebook’s self-image as either a conduit or a media company.

    I saw the Kaggle challenge, and it’s a great example of two issues that you point out: crowdsourcing computational labor (for a good cause, but no doubt under the auspices of large tech companies) and the problematic assumption that AI + coronavirus research necessarily amounts to the future of epidemiology. As research in the digital humanities shows, text mining and topic modeling are limited approaches, and scaling spurious research can have severe consequences. Think of all the non-peer-reviewed medical studies on pre-publication servers that might be misappropriated for various AI methods.
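    To illustrate how low the barrier is, here is a rough sketch of that topic-modeling workflow using scikit-learn’s LDA on a stand-in handful of abstracts (the corpus below is a placeholder, not real data). The ease of producing “topics” this way is precisely why scaling spurious research is a worry.

    ```python
    # Illustrative only: topic modeling over a placeholder list of preprint abstracts.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Placeholder corpus; in practice this would be thousands of preprint abstracts.
    abstracts = [
        "Estimated incubation period of the novel coronavirus in early case reports",
        "Modeling transmission dynamics under social distancing interventions",
        "Ibuprofen use and reported symptoms in a small observational sample",
    ]

    # Bag-of-words counts, then a small LDA model over them.
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(counts)

    # Top words per "topic": suggestive word clusters, not clinical findings.
    terms = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top_terms = [terms[j] for j in topic.argsort()[::-1][:6]]
        print(f"topic {i}: {', '.join(top_terms)}")
    ```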
