
WARNING: The (Open) Secret lives of Content Moderators


Tick Yes or No: ‘I understand the content I will be reviewing may be disturbing. It is possible that reviewing such content may impact my mental health, and it could even lead to post-traumatic stress disorder (PTSD).’[i]

Last year, a sixteen-year-old Malaysian girl posted a poll on Instagram asking her followers whether she should live or die.[ii] 69% voted for death, and she took her own life. The followers who voted that she should die neither took action to protect their ‘friend’ nor expressed any empathy or concern.

‘People are awful. This is what my job has taught me,’[iii] says a former Facebook content moderator who recently sued the social media giant after experiencing psychological trauma as a direct consequence of his work. The Wall Street Journal recently described content moderation as ‘the worst job in the US’,[iv] and the same applies in other countries, as this article elaborates.

Very little is known about the role, the mental health toll or the broader work experiences of content moderators. They may work for YouTube, Facebook, Google and other such platforms that we are all pretty much ‘addicted’ to.

A few studies are now looking into the working conditions of the people[v] who determine what ‘material’ or ‘content’ can be posted to Facebook, Twitter or YouTube. Their job is to decide whether content adheres to the ‘community guidelines’ of online platforms. They work day and night so that we, the users, are spared exposure to videos of graphic violence or child abuse, as well as hate speech, amid the constant stream of user-generated material uploaded to social media feeds.

There are thousands of content moderators, who are paid to view objectionable posts and decide which need to be removed from digital platforms. Many are severely traumatized by the images of hate, abuse and violence they see on a daily basis so that we, our families and children get to see ‘WARNING: The following post or content may be disturbing to some viewers.’

The heavy mental health toll on content moderators who are hired on a ‘freelance’ or ‘gig’ basis should not be underestimated.

Never-ending Uploads and Ever-Expanding Platforms

A staggering three hundred hours of video are uploaded to YouTube every minute, over ninety-five million photos[vi] are uploaded to Instagram each day, and over five hundred million tweets are sent on Twitter daily (roughly 6,000 tweets per second). It is therefore virtually impossible for human moderators to vet every piece of content before it goes live (and some of it will go ‘viral’). Popular platforms such as these serve user-generated content uploaded by a global community of contributors.

The uploaded content is just as diverse as the user base, which inevitably means that a significant amount of it is offensive to most users and, by extension, to the platforms. Users routinely upload (or attempt to upload) content such as child abuse, animal torture and disturbing, hate-filled messages.

Facebook outsources the hiring of content moderators and provides office space. Its sites are largely outside the United States – mainly in south, south-east and east Asia – but operations have expanded into the US, specifically California, Arizona, Texas and Florida.[vii] Content moderators work at a computer workstation, reviewing a steady stream of text posts, images and videos. These can range from random personal musings to information with ramifications for international politics. Some of it may seem rather benign – just words on a screen that someone didn’t like – while the worst is incredibly disturbing. On a regular basis moderators have to witness beheadings, murders, animal abuse and child exploitation. One might therefore wonder: what toll does this take on their mental health?

One previously unreported aspect of a moderator’s job is the numerical quotas these subcontractors[viii] are forced to meet: each moderator is required to screen thousands of images or videos per day in order to keep their job.

Facebook alone has an army of about 15,000 people in 20 locations[ix] around the world, who decide what content should be allowed to stay on Facebook and what should be marked as ‘disturbing’ – whether execution videos from terrorist groups, murders, beatings, child exploitation or the torture of animals. In addition to the stress of exposure to disturbing images and videos, there is also the pressure to make the right call about how to mark the content. A wrong decision taken under stress carries penalties – financial ones for the worker – and may also have mental health consequences for other people.

Platforms, as we know them, reserve the right to police user-generated content through a clause in their Terms of Service (which none of us read – or do we? Should we?), usually by incorporating their Community Guidelines by reference. For example, YouTube’s Community Guidelines prohibit ‘nudity or sexual content’, ‘harmful or dangerous content’, ‘hateful content’, ‘violent or graphic content’, ‘harassment and cyberbullying’, ‘spam, misleading metadata’, ‘scams’, ‘threats’, videos that would violate someone else’s copyright, ‘impersonation’ and ‘child endangerment’.

‘Now you see me’

The Cleaners, a recent documentary, features interviews with several former moderators who had worked for a subcontractor in the Philippines. The interviewees described their experiences of filtering the very worst images and videos the internet has to offer. In the Philippines, workers operate out of jam-packed malls, where they spend over nine hours a day moderating content for as little as $480 a month.[x] With few workday breaks and no access to counselling, many of these individuals end up suffering from insomnia, depression and post-traumatic stress disorder.

Records also show that the average pay of a full-time online content moderator in the US is around $28,000, but globally the bulk of hiring is done through outsourcing and on a temporary basis. In Ireland, research shows that a content moderator working on Facebook’s behalf is typically paid a basic rate of €12.98 per hour,[xi] with a 25% bonus after 8pm, plus a travel allowance of €12 per night – the equivalent of about €25,000 to €32,000 per year. Yet the average directly employed Facebook staff member in Ireland earned €154,000 in 2017.

The workload typically involves moderating about 300 to 400 pieces of content[xii] – called ‘tickets’ – a night. On a busy night, the queue might hold 800 to 1,000 tickets. The average handling time is 20 to 30 seconds – longer if it’s a particularly difficult decision.
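
Taking the mid-points of those ranges, a little back-of-the-envelope arithmetic (purely illustrative, not figures from any particular employer) shows how much of a shift is spent in continuous exposure to flagged material:

```python
# Rough arithmetic using the mid-points of the ranges quoted above;
# illustrative only, not data from any specific employer.
SECONDS_PER_TICKET = 25  # mid-point of the 20-30 second handling time

for label, tickets in [("typical night", 350), ("busy night", 900)]:
    hours = tickets * SECONDS_PER_TICKET / 3600
    print(f"{label}: ~{hours:.1f} hours of continuous review")

# Roughly 2.4 hours of nonstop review on a typical night, over 6 on a busy one.
```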

‘We are trash to them, just a body in a seat,’ shares one content moderator. Every minute of the working day is strictly accounted for.[xiii] Harsh working conditions, characterised by timed bathroom breaks and a meagre nine minutes of wellness time, engender a stress that is exacerbated by employers’ downplaying of the importance of mental health care.

The continuum of content in those quotas ranges from tone-deaf jokes and kids dressed up as history’s great dictators (which may constitute hate speech), through nude images and images of domestic violence, to the really graphic and inhumane material that inevitably surfaces. Content moderators have about twenty-four hours[xiv] within which to classify each post as bullying, hate speech or another category, as appropriate.

As with other forms of gig work, digital reputation and future work orders depend on high ratings. Several former moderators felt pressurised to achieve a 98% quality rating, meaning an auditor would have to agree with 98% of the decisions they took on a random sample of tickets. Moderators are therefore scrutinised for the smallest mistakes. An unending stream of extremism, violence, child sexual abuse imagery and revenge porn does not give moderators time to consider the more subtle implications of particular posts.

Artificial Intelligence (AI) cannot nail this one… just yet!

Moderators are human beings, so mistakes are inevitable. However, to shatter one misconception on this front: Artificial Intelligence (AI) cannot help much in this field. Automated systems currently act as triage: they push suspect content to human moderators and weed out some unwanted material on their own, but AI cannot solve the online content moderation problem without human help. Typically, AI either uses visual recognition to identify broad categories of objectionable content, or matches content against an index of banned items (illicit materials, child abuse imagery, terrorist content and so on): each banned item is allocated a ‘hash’, or ID, so that if it is detected again the upload can be blocked automatically. But guess who needs to set the parameters before the automation can work?
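
To make the ‘hash and index’ idea concrete, here is a minimal sketch of how such a filter might work, assuming a simple exact-match index built with a cryptographic hash (the directory and file names are purely hypothetical). Production systems such as Microsoft’s PhotoDNA instead use perceptual hashes, which also catch re-encoded or slightly altered copies, but the triage logic is the same: known banned items are blocked automatically, and everything else still lands in a human moderator’s queue.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_banned_index(banned_dir: Path) -> set:
    """Hash every known-banned file once into a lookup set (the 'index')."""
    return {sha256_of(p) for p in banned_dir.iterdir() if p.is_file()}

def screen_upload(upload: Path, banned_index: set) -> str:
    """Triage a new upload: block exact re-uploads, queue the rest for a human."""
    if sha256_of(upload) in banned_index:
        return "block"          # known banned item: upload disabled automatically
    return "human_review"       # unknown content still goes to a moderator's queue

# Hypothetical usage:
# index = build_banned_index(Path("banned_content/"))
# print(screen_upload(Path("new_upload.jpg"), index))
```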

Automated systems using AI and machine learning still have a long way to go before they can carry out content moderation independently (free of human help that is). We are surely not there yet.

Content moderation is arguably one of the most important tasks that business process outsourcing (BPO) firms perform today, fulfilling outsourced contracts for social media giants ranging from Facebook and TikTok to Live, among many others. This has led to a process-driven BPO[xv] industry that has become the refuge for quick-fix content moderation based on subjective criteria. Add to this that many of the moderators are young (their average age is under thirty), some joining before they have finished their college degrees, and the problems begin to add up.

The Need for (Content Upload) Speed and…Training!

One might have assumed that the US companies that hire moderators would have a good understanding of these issues, but it turns out they really don’t. It has been reported, for instance, that Facebook doesn’t provide ongoing cultural education for these moderators. The one exception is when a particular issue goes viral on Facebook and there is a sudden need to bring everybody up to speed in real time. With this laissez-faire approach, it is unsurprising how many court, Senate and Congressional hearings Mark Zuckerberg has had to attend over the past four years (and not just for the Cambridge Analytica scandal).

One former moderator shared with me how he witnessed images of child sexual abuse[xvi] and bestiality while weeding out content that was unsuitable for the platform. He suffered psychological trauma as a result of these working conditions and a lack of proper training.

Accenture is one of the companies that hire contract workers to review content for big networks like Google, Facebook and Twitter. There is a well-documented history of content moderators reviewing[xvii] graphic and disturbing imagery, with the job taking a significant mental health toll and leading to psychological trauma.

In order to share more of what goes on during content moderation, the freelancers first have to break their non-disclosure agreements, and this is an area where journalistic investigation and research work are still pending. One of the burning questions is whether the company has anything to say about the psychological and emotional impact of the brutality, pornography and hate that moderators have to look at on a daily basis.

Some Debt Cannot be Repaid

Facebook has already agreed a $52 million settlement with content moderators suffering from mental health problems such as Post-Traumatic Stress Disorder (PTSD).[xviii] In light of repeated allegations and the seriousness of the situation, the company has agreed to compensate American content moderators and provide extra counselling during their tenure. The social media giant will pay a minimum of $1,000 to each moderator.[xix] The settlement covers 11,250 moderators – a glimpse of the colossal number (in the hundreds of thousands) of moderators involved in this work globally.

‘I know it’s not normal, but now everything is normalized,’[xx] said a moderator who declined to share his name and other details because of the confidentiality clause he signed when he took the job. Non-disclosure agreements are non-negotiable for moderators, and are imposed by the platforms. For example, YouTube content moderators are reportedly being told they could be fired if they don’t sign ‘voluntary’ statements acknowledging their jobs could give them PTSD.

Reports also show that Accenture managers repeatedly coerced site counsellors into breaking patient confidentiality.[xxi] Although these allegations were refuted by Accenture, such fault lines between workers and management are bound to affect organisational morale.

It remains unclear whether companies such as Accenture are shifting the responsibility for mental health care onto individual employees, and thus avoiding liability in the face of increasing lawsuits from former moderators. In response to growing allegations, certain social media giants have reiterated their commitment to safeguarding their employees’ mental health and have clinical psychologists on call.

The Valley of Uploads

While some of the specifics remain intentionally obfuscated, content moderation is done by tens of thousands of online content moderators, mostly employed by subcontractors in India and the Philippines, who are paid wages well below what the average Silicon Valley tech employee earns. We need more studies and investigations into this work, as our hunger for ever more ‘tailor-made’ media feeds continues to grow.

The general assumption is that the large tech companies can easily hide the worst parts of humanity, otherwise freely available on the internet. There is no easy solution. With billions of users and unending uploads, there will never be enough moderators to check everything before it is shared with the world.[xxii]

Legal challenges and new methods of reporting abuse help to narrow the risks, but the task is nonetheless Sisyphean. The complexities are ongoing, ever-growing and multi-faceted. Any ‘quick fix’ to these myriad issues would still create a dispersed range of unintended externalities for the stakeholders involved: users, content moderators, companies, lawmakers and the legal systems monitoring these behemoth digital platforms.

[i] Madhumita Murgia, ‘Facebook content moderators required to sign PTSD forms’, Financial Times, January 26th, 2020, https://www.ft.com/content/98aad2f0-3ec9-11ea-a01a-bae547046735

[ii] Jamie Fullerton, ‘Teenage girl kills herself ‘after Instagram poll’ in Malaysia’, The Guardian, May 15th, 2019, https://www.theguardian.com/world/2019/may/15/teenage-girl-kills-herself-after-instagram-poll-in-malaysia

[iii] Marie Boran, ‘Life as a Facebook moderator: ‘People are awful. This is what my job has taught me’’, Irish Times, February 27th, 2020, https://www.irishtimes.com/business/technology/life-as-a-facebook-moderator-people-are-awful-this-is-what-my-job-has-taught-me-1.4184711

[iv] Jennifer O’Connell, ‘Facebook’s dirty work in Ireland: ‘I had to watch footage of a person being beaten to death’’, Irish Times, March 30th, 2019, https://www.irishtimes.com/culture/tv-radio-web/facebook-s-dirty-work-in-ireland-i-had-to-watch-footage-of-a-person-being-beaten-to-death-1.3841743

[v] ‘Managing and Leveraging Workplace Use of Social Media’, SHRM, January 19th, 2019,  https://www.shrm.org/resourcesandtools/tools-and-samples/toolkits/pages/managingsocialmedia.aspx

[vi] Daisy Soderberg-Rivkin, ‘Five myths about online content moderation, from a former content moderator’, R Street Institute, October 30th, 2019, https://www.rstreet.org/2019/10/30/five-myths-about-online-content-moderation-from-a-former-content-moderator/

[vii] ‘Inside Facebook, the second-class workers who do the hardest job are waging a quiet battle’, Washington Post, May 8th, 2019, https://www.washingtonpost.com/technology/2019/05/08/inside-facebook-second-class-workers-who-do-hardest-job-are-waging-quiet-battle/

[viii] Terry Gross, ‘For Facebook Content Moderators, Traumatizing Material Is A Job Hazard’, NPR, July 1st, 2019.

[ix] Ibid., O’Connell, March 30th, 2019.

[x] Ibid., Soderberg-Rivkin, October 30th, 2019.

[xi] Ibid., O’Connell, March 30th, 2019.

[xii] Ibid., O’Connell, March 30th, 2019.

[xiii] Prithvi Iyer, Suyash Barve, ‘Humanising digital labour: The toll of content moderation on mental health,’ Digital Frontiers, April 2nd, 2020, https://www.orfonline.org/expert-speak/humanising-digital-labour-the-toll-of-content-moderation-on-mental-health-64005/

[xiv] Ibid., O’Connell, March 30th, 2019.

[xv] Prasid Banerjee, ‘Inside the secretive world of India’s social media content moderators’, LiveMint, March 18th, 2020, https://www.livemint.com/news/india/inside-the-world-of-india-s-content-mods-11584543074609.html

[xvi] Kelly Earley, ‘Irish content moderators prepare lawsuit against Facebook and CPL’, Silicon Republic, December 4th, 2019, https://www.siliconrepublic.com/companies/irish-content-moderators-facebook-cpl-recruitment

[xvii] Paige Leskin, ‘Some YouTube content moderators are reportedly being told they could be fired if they don’t sign ‘voluntary’ statements acknowledging their jobs could give them PTSD’, Business Insider, January 24th, 2020, https://www.businessinsider.in/careers/news/some-youtube-content-moderators-are-reportedly-being-told-they-could-be-fired-if-they-dont-sign-voluntary-statements-acknowledging-their-jobs-could-give-them-ptsd/articleshow/73594478.cms

[xviii] ‘Facebook to pay $52m to content moderators over PTSD’, BBC, May 13th, 2020, https://www.bbc.com/news/technology-52642633

[xix] Ibid.

[xx] Elizabeth Dwoskin et al., ‘Content moderators at YouTube, Facebook and Twitter see the worst of the web — and suffer silently’, Washington Post, July 25th, 2019, https://www.washingtonpost.com/technology/2019/07/25/social-media-companies-are-outsourcing-their-dirty-work-philippines-generation-workers-is-paying-price/

[xxi] Sam Biddle, ‘Trauma Counselors Were Pressured to Divulge Confidential Information About Facebook Moderators, Internal Letter Claims’, The Intercept, August 16th, 2019, https://theintercept.com/2019/08/16/facebook-moderators-mental-health-accenture/

[xxii] Ibid., Soderberg-Rivkin, October 30th, 2019.


About Author

Dr. Boidurjo Rick Mukhopadhyay, DSc, graduated Summa Cum Laude with a BA (Hons) in Economics, following which he received an MA from the Institute of Development Studies (UK) and a PhD from the University of Sussex (UK). Rick is an International Development and Management Economist working extensively with government ministries, the higher education industry, and think tanks across the UK, EU and China. He is currently researching, consulting and advising in the areas of Development and Environment Economics, the Gig Economy/Collaborative Consumption, the Future of Work, Social Innovation and Entrepreneurship. Rick sits on the editorial and reviewer boards of over a dozen top international peer-reviewed journals and also serves as non-executive director for various nonprofits and charities. Besides publishing his research and speaking internationally, Rick also has experience in leadership development workshops, leading international summer schools (at Sussex, LSE, and several other universities in the EU), quality assurance visits, accreditations (EQUIS, AMBA), blended learning (with Pearson), and motivational training. He is currently a Senior Lecturer at WIUT.
