Article by Alina Vianne Barr (✉ alina.vianne.barr@univie.ac.at)
The omnipresence of hateful content on social media can lead users to develop a distorted perception of what counts as normal or acceptable behavior. Frequent exposure to digital hate can, on the one hand, heighten users' awareness of the problem and their willingness to take action; on the other hand, it can also foster indifference and the normalization of hate.
The present study distinguished between three types of digital hate – incivility, intolerance, and threats. Incivility is reflected, for example, in vulgar language or a harsh tone, while intolerance is expressed through discriminatory or exclusionary statements. When dealing with digital hate, users may intervene in different ways depending on platform features. In general, interventions can be categorized into direct and indirect actions, based on their level of visibility and potential personal consequences. Indirect interventions, such as reporting posts, are typically considered a preferred, low-risk option. The willingness to intervene often depends on the nature and intensity of the hateful content.
With regard to content moderation, the study differentiated between contextualizing measures (e.g., warnings or notices) and punitive measures (e.g., content removal or suspension of user accounts). The study was conducted as part of a multi-part two-wave panel survey in Austria. Data for the first wave were collected between July 27 and August 5, 2023, while the second wave was conducted between September 27 and October 6, 2023. Each wave involved a sample of the Austrian general population aged 16 and older.
Participants were asked about their experiences with incivility, intolerance, and threats on social media – both how frequently they encountered such content and how severe they perceived it to be. In addition, the study assessed their personal attitudes, including views on freedom of expression, their own intervention behaviors, and their preferences regarding content moderation. The results show that the frequency of exposure to digital hate had no effect on perceptions of its prevalence. This suggests that such perceptions are shaped not by repeated exposure alone, but also by factors such as personal involvement or the behavior of one's social environment.
Regarding the willingness to intervene, the distinction between direct and indirect actions proved crucial. The more frequently users were confronted with uncivil, intolerant, or threatening content, the more likely they were to engage in direct interventions (e.g., countering via comments or direct messages) – an indication of sensitization. It also became apparent that users tend to resort to indirect measures (such as reporting content) particularly when the hate is perceived as especially severe.
Although respondents generally expressed strong support for measures against all forms of hate, a selective desensitization toward uncivil content emerged: the more frequently users were exposed to incivility, the less they supported moderation measures against it. In contrast, intolerance and threats consistently prompted clear demands for intervention, even among frequently exposed users.
Finally, the study found that attitudes toward freedom of expression play an important role in support for restrictive moderation measures, especially in response to uncivil and threatening content. In summary, digital hate has become an unavoidable part of everyday life on social media and now represents a significant societal challenge. Effectively countering its consequences cannot rely solely on moderation by platform operators – users themselves must also take active steps against hateful content.
As Rinat Meerson, one of the study's authors, concludes: "Contrary to common media narratives suggesting that the public has become accustomed to online hate, our findings paint a different picture: many users respond actively and directly – through comments, reports, or private messages – when confronted with hate speech."
About the authors
Rinat Meerson is a Predoctoral Researcher within the ERC-funded research project Digital Hate: Perpetrators, Audiences, and (Dis)Empowered Targets in the Department of Communication at the University of Vienna.
Kevin Koban is a Postdoctoral Researcher within the ERC-funded research project Digital Hate: Perpetrators, Audiences, and (Dis)Empowered Targets in the Department of Communication at the University of Vienna.
Jörg Matthes is Professor of Communication and Vice-Chair of the Department of Communication at the University of Vienna. Further, he is the principal investigator of the ERC-funded research project Digital Hate: Perpetrators, Audiences, and (Dis)Empowered Targets.