Preventing Future Misinfodemics | Cassandra Voices

The Legal Challenge of Preventing Future Misinfodemics in the Age of Digital Activism


Throughout the COVID-19 pandemic, the world has seen a deluge of misleading advice, false rumours, and coordinated attempts to contradict expert guidance. Over the years, it has become popular to refer to all of this collectively as 'fake news'.

The term gained traction during the 2016 presidential election in the United States, and has been a popular buzzword ever since. It was even the Collins Dictionary Word of the Year in 2017,[1] highlighting its place in the cultural zeitgeist. While the phenomenon itself is not new, its current incarnation carries a significant digital difference: technology can spread false stories with unprecedented speed and efficiency.

This was demonstrated during the Irish referendum to repeal the Eighth Amendment. Researchers at University College Cork showed that when people were presented with both true and fabricated stories about events in the run-up to the referendum, participants on both sides reported false memories of the fabricated stories, particularly those concerning the other side.[2]

More recently, the discussion in Ireland has begun to probe the role of digital platforms in perpetuating the dangerous spread of ill-founded claims. It appears that in the time preceding each government announcement about COVID-19, instant messaging apps are flooded with false, misleading, and potentially harmful whispers.

As these concerns grow, so too do calls for legal intervention. While intervention is both necessary and likely, important details must be hammered out in relation to the risks associated with the delicate task of regulating this area. In many cases, harmful false information does not affect everyone equally.

For elderly and vulnerable groups in particular, the accuracy of medical information and advice can genuinely be a matter of life and death. The same consideration applies to the way far-right campaigns attempt to weaponise digital platforms to lay the blame for COVID-19 on immigrants.

Numerous claims with misleading and xenophobic undertones have come to characterise the social media landscape, with Chinese citizens being disproportionately targeted with abuse online.[3]

In light of this, an important detail is emerging. Misinformation can be weaponised to target groups that are already marginalised. This must be acknowledged as regulatory solutions to misinformation continue to develop. While law could have a critical role in curtailing future misinfodemics online, marginalised voices must be protected within these efforts. Within this protection, the potential for social media to platform underrepresented voices must be considered.

The Importance of Digital Activism in Democracy

The objective of combating misinformation must not be viewed in isolation. While much more needs to be done in order to preserve the accuracy of information and news online, the Internet’s democratic potential must not be undermined. Much of this potential is grounded in creating areas of unprecedented accessibility, where diverse and pluralistic voices can be amplified. This can be seen through the expansion of digital activism.

Public opinion now widely regards digital platforms as a valuable vehicle for initiating social change. In the United States, a 2018 Pew Research study found that 69% of citizens feel that social media platforms are critical in 'getting politicians to pay attention to issues', while 67% feel they are important for 'creating sustained movements for social change.'[4] As well as enlarging the scope for democratic deliberation and participation, social media has opened new forums for activism, eroding previously robust structural barriers and allowing citizens to amplify issues of public interest more directly.

In recent years, digital platforms have fuelled numerous activist movements addressing racial and gender-based societal problems. Two flagship social movements have showcased the potential for digital platforms to be a generative environment for social change. The Black Lives Matter movement brought attention to systemic racial injustices, while #MeToo drew global eyes to a wide spectrum of sexual harassment.

While these two movements had separate social motives, both were operationalised by digital platforms that helped to consolidate harmonised messages, and mobilise international solidarity.

The 2013 acquittal of the man who shot and killed teenager Trayvon Martin in 2012 sparked a hashtag that drew attention to unjust treatment and persecution by law enforcement and the criminal justice system. After further police shootings in 2014, protests led to civil unrest, spurring use of the hashtag and galvanising a number of international 'chapters' that adopted the same slogan.

In doing so, social media platforms were instrumental in spawning an umbrella activist movement. After the 2014 shooting of 18-year-old Michael Brown, the hashtag resurfaced: in the three weeks following that incident, #BlackLivesMatter appeared on social media an average of 58,747 times per day. On November 25th 2014, the day the grand jury's decision not to indict the police officer responsible for Brown's death was announced, the hashtag was used 172,772 times.[6]

In the three weeks following that decision, the hashtag appeared 1.7 million times across popular digital platforms. Through its ability to focus attention on specific incidents and the wider social problems connected to them, #BlackLivesMatter demonstrated the role of social media as a powerful mechanism for broaching politically sensitive but socially pressing topics.

Digital platforms have also facilitated impactful discourses around gender-based violence and harassment. Following accusations levelled at high-profile figures, #MeToo gained viral traction in late 2017, with a wave of related stories shared by celebrities and ordinary users alike recounting instances of harassment.

Many of these users would not have had their stories heard before accessible platforms gave them an audience. In this way, technology and the surrounding digital architectures have revolutionised discussion of stigmatised issues. The hashtag #MeToo has been used tens of millions of times since the initial 2017 tweet from actress Alyssa Milano, which prompted women to share their experiences.[7]

The power of 'hashtag activism' to foster a social movement around a cohesive message can be seen in its ability to hold power to account. Public pressure on foot of the hashtag and related discussions produced numerous tangible consequences.

Despite its particular focus on America and the English-speaking world, #MeToo gained significant international traction, aided by social media's ability to transcend borders. By fostering environments where victims of sexual harassment and abuse could report and publicise personal anecdotes reinforcing the movement's broader message, it encouraged the exposure of personal and often relatable stories. In this sense, social media can act as an engine for creating empathy.

Movements such as #MeToo also forced a discussion that questioned and challenged existing structural flaws in how harassment complaints were handled. This exposed unacceptable standards and worrying loopholes, and did so under a universal and recognisable framing. In this way, social media can carry important social capital, especially for those who need it most.

This point should be threaded through legal discussions of intervention prompted by misinformation concerns. As a policy objective, misinformation must be minimised while striving to maintain and expand the internet's democratic capabilities.

The Backfire of Censorship as a Response to Misinformation

In light of social media's role as a vehicle for social activism, legal responses to the problem of misinformation online must be delicately handled. If regulatory intervention in this area is driven by an obsession with cancelling out anything other than mainstream voices, it could have harmful consequences for digital activism.

Recent examples around the world demonstrate how this can manifest. In Hungary, legislative developments allowing Prime Minister Viktor Orban to 'rule by decree' include criminal sanctions for spreading false or 'distorted' information. These sanctions can match, and even exceed, the punishments for defamation and slander under Hungary's criminal code.[9]

In India, misinformation led to a confused exodus of migrant workers amid rumours over lockdown restrictions. Many of these workers were desperately attempting to leave their place of work and return home, fearing they would be trapped by a prolonged lockdown. This underscores the reality that misinformation can harm those who are most vulnerable and already marginalised.

In response, the Supreme Court issued advice to the central government, noting how harmful the spread of 'fake news' online can be. The Court was correct in identifying the problem, but its proposed solutions carried worrying implications. It ultimately advised that media outlets be prohibited from publishing information 'without first ascertaining the true factual position', with that factual position to be verified by the government.[10]

This is a problematic solution when recognising the need for governments to be held accountable. It is especially troubling during a crisis such as the COVID-19 pandemic. If the government becomes a self-appointed arbiter of truth, what happens when that same government is faced with information that is true, but that is also unfavourable?

Social media has a unique role to play in bolstering movements that expose injustices, mistreatment, and neglect of marginalised and disaffected groups. Unfortunately, it is also a space where misinformation thrives.

This presents a future quagmire if and when more serious regulatory measures are initiated in response to this infodemic. These are difficult interests to reconcile. However, a holistic approach grounded in human rights can help to disentangle the problem and offer proportionate solutions.

How Should Irish Legal Responses Proceed?

The current Irish legal framework has seen a number of encouraging developments in response to this issue, which is often, correctly, framed as an electoral problem. More broadly, a major legal issue has been the growing pains of political advertising law in the digital age. Regulation of political and issue-based advertising has not been fully extended to digital advertisements, and appears outdated in light of increasingly sophisticated technological capabilities.

Proposals have been floated to legislate for more secure elections by increasing restrictions on political advertising online, including the Online Advertising and Social Media (Transparency) Bill 2017, which would, for example, prohibit the use of automated accounts.[11]

2018 saw the report of the Interdepartmental Group on Ireland's 'Electoral Process and Disinformation.' The report found that while the risk posed to Ireland's electoral process was 'relatively low', online developments exposed glaring vulnerabilities. In particular, potential 'cyber attacks' and 'the spread of disinformation online' were identified as 'substantial risks.'[12]

This was followed by the International Grand Committee on Disinformation and 'Fake News', which convened in Dublin on November 7th 2019. This was a promising development that recognised the need for a holistic approach to the problem. Signatories from eight countries agreed to advance measures to curb the spread of disinformation, while also acknowledging the need to protect fundamental rights.

How this delicate balance can be achieved is a question requiring lengthy discussion. Viewed amid the current crisis, it may appear that regulation should intercede quickly and heavily. It would be far better, however, to take a step back and develop long-term, human rights-proofed solutions.

Adopting a human rights approach from the initial stages carries two valuable benefits. First, it grounds discussions in a thorough recognition of the rights and civil liberties that need protection while misleading and harmful information is combated online. The right to non-discrimination, the right to free speech, and the right to free and fair elections must all be taken into account. This balancing act is achievable when human rights are used as a guide.

Second, human rights can inform how that balance is achieved. Principles such as proportionality, and the well-established requirement that legal interference with free expression be 'necessary in a democratic society', provide highly useful guidance.[13] Such guidance is especially important given the tendency of governments to use extreme events to usher in draconian legal measures.

Some of the most invasive and harmful legislation has been rushed in on the back of a crisis. As the introduction of the Patriot Act after 9/11 showed, a time of emergency is rarely an ideal time to craft laws that protect civil liberties. This must be borne in mind when deciding how to intervene to stem the flow of false claims online.

Human Rights Central

The current crisis demonstrates that vulnerable groups are among the first victims of the misinfodemic that has accompanied COVID-19. Accordingly, robust steps are needed to debunk and mitigate the spread of rumours and falsehoods that single out marginalised targets in events such as these.

Going forward, however, this problem must be seen more broadly. A crucial step for legislators is to ensure that human rights are central to responses to misinformation. Comprehensive and routine consultation with human rights groups is needed when considering how activist voices and social movements can be protected while solutions are advanced.

Civil society and non-profit organisations must be engaged to inform Irish lawmakers on how to construct effective prevention of misinformation while insulating digital activism from censorship. Hopefully, the severity of the COVID-19 pandemic can kick-start this complex but critical legal discussion.

[1] ‘Collins Dictionary Announces “Fake News” as the 2017 Word of the Year’ (Collins, 2017).

[2] Gillian Murphy, ‘False Memories for Fake News During Ireland’s Abortion Referendum’ (2019) 30(10) Psychological Science 1449–1459.

[3] Dan Whitehead, ‘“You deserve the coronavirus”: Chinese people in UK abused over outbreak’.

[4] Monica Anderson, ‘Activism in the Social Media Age’ (Pew Research, 2018).

[5] Anke Wonneberger, Iina R. Hellsten and Sandra H.J. Jacobs, ‘Hashtag activism and the configuration of counterpublics: Dutch animal welfare debates on Twitter’ (2020) Information, Communication & Society.

[6] Emanuella Grinberg, ‘What #Ferguson stands for besides Michael Brown and Darren Wilson’ (CNN, November 19, 2014)

[7] Monica Anderson and Skye Toor, ‘How social media users have discussed sexual harassment since #MeToo went viral’ (Pew Research, 2018).

[9] Colm Quinn, ‘Hungary’s Orban Given Power to Rule by Decree With No End Date’.

[10] ‘Supreme Court Asks Government To Curb Fake News On Virus, Set Up Portal Within 24 Hours For Real Time Information’ Bloomberg Quint (31 March 2020).

[11] Online Advertising and Social Media (Transparency) Bill 2017 Part 1, 2.

[12] ‘Overview: Regulation of Transparency of Online Political Advertising in Ireland’ Department of the Taoiseach (14 February 2019), last accessed 11 October 2019.

[13] In particular, when the European Court of Human Rights assesses interferences with free expression, a key question the Court asks is whether the interference was necessary in a democratic society and predicated on a pressing social need.


About Author

Ethan Shattock is a PhD Researcher at Maynooth University.
