Investigating the effectiveness of nudges as an intervention for online misogyny

About this study:

Online misogyny is a pervasive form of technology-facilitated abuse, encompassing expressions of hostility towards women (through images, videos, and/or text) across digital spaces. Despite its prevalence, there is little evidence about what works to prevent or reduce it. In this study we will explore the effectiveness of nudges as a potentially low-cost, scalable intervention to address online misogyny. The term “nudge” describes the practice of indirectly steering people’s behaviour in a particular direction by altering their environment or “choice architecture”. Recently, Ofcom (the UK's communications regulator) published guidance on creating “a safer life online for women and girls”, in which it presented nudges as an example of good practice for technology companies seeking to reduce the circulation of illegal and harmful content. However, few studies have explored whether nudges are effective at addressing online misogyny.

Study aim: To evaluate and compare the effectiveness of different textual nudges at reducing the incidence of online misogyny on platforms and other internet-based forums.

Research Questions:

1. Is there a difference in the effectiveness of text-based nudges at reducing the use of misogynistic language and sentiments in online spaces compared with controls?

2. Do nudges reduce the incidence of online misogyny or do they encourage users to express misogynistic sentiments using fewer slurs or explicit language as a means of circumventing moderation?

3. Does the effectiveness of nudges at reducing online misogyny differ based on participants’ baseline misogynistic attitudes and their level of engagement with misogynistic online communities?

4. What are participants’ perceptions of nudges as an intervention to reduce online misogyny?

Methods:

In this study we will develop a social media simulation using the Truman Platform, an open-source research platform designed by researchers at Cornell University. The simulation will be loaded with a series of fabricated social media posts that express controversial, divisive or offensive views around feminism and gender norms. Participants will then be asked to respond to these posts as if they had seen them on their own social media feed. The simulation will be programmed to detect the use of sexist and misogynistic language in participants’ responses, and if a response meets a pre-defined threshold, one of six messages (five nudges and a control) will be displayed on screen. These are:

  • Legal deterrence nudge: warning users of the legal consequences of using abusive language.

  • Platform deterrence nudge: warning users that abusive language violates the platform’s terms of service and may result in platform suspension.

  • Empathy cultivation nudge: encouraging users to consider the feelings of those who may be harmed by their abusive language.

  • Injunctive norms nudge: encouraging users to imagine that their friends, families and employers may see their online activity.

  • Reactivity nudge: validating the user’s feelings of anger or hurt but encouraging them to use less offensive language so that others might engage more positively with them.

  • Control condition: simply telling users that their language may not be appropriate.

Each nudge condition uses a different approach to encourage participants to reduce the toxicity of their post, but all conditions will give participants the option either to continue posting their response or to rewrite it. Analysis will explore whether the nudges were effective at reducing the use of misogynistic language compared with the control condition. A simplified sketch of this triggering logic is given below.
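
As a rough illustration of the triggering logic described above (a minimal sketch in Python, not the Truman Platform’s actual implementation), the snippet below shows how a draft response might be scored against a pre-defined threshold and how a participant’s randomly assigned condition determines which of the six messages is displayed. The score_toxicity function is a placeholder for whatever sexist/misogynistic-language classifier the simulation ultimately uses, and the threshold value and message wordings are illustrative paraphrases rather than the study’s actual materials.

    import random

    # The six conditions described above: five nudges plus a control.
    # Message texts are illustrative paraphrases, not the study's actual wording.
    CONDITIONS = {
        "legal_deterrence": "Abusive language online can have legal consequences.",
        "platform_deterrence": "Abusive language violates our terms of service and may lead to suspension.",
        "empathy": "Consider how the people targeted by this language might feel.",
        "injunctive_norms": "Imagine your friends, family or employer seeing this post.",
        "reactivity": "Your frustration is understandable, but less offensive wording may help others engage with you.",
        "control": "Your language may not be appropriate.",
    }

    TOXICITY_THRESHOLD = 0.7  # illustrative pre-defined threshold, not the study's value

    def score_toxicity(text: str) -> float:
        """Placeholder for the sexist/misogynistic-language classifier used by the simulation."""
        return 0.0  # replace with a real classifier score in [0, 1]

    def assign_condition(participant_id: str) -> str:
        """Assign a participant to one of the six conditions (seeded so the assignment is stable per participant)."""
        return random.Random(participant_id).choice(sorted(CONDITIONS))

    def maybe_nudge(participant_id: str, draft_response: str) -> str | None:
        """Return the message to display if the draft crosses the threshold, otherwise None."""
        if score_toxicity(draft_response) >= TOXICITY_THRESHOLD:
            return CONDITIONS[assign_condition(participant_id)]
        return None  # below threshold: the post goes through without interruption

Whichever message is shown, the participant would then choose either to post their original response or to rewrite it, mirroring the flow described above.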


In this study we are pleased to be working with Dr Dominic DiFranzo (Lehigh University), the developer of the Truman Platform; Dr Mark Warner (UCL); and collaborators from the Behavioural Research UK consortium.

Outputs and impact:

Our findings will have direct implications for technology companies (particularly those managing social media platforms, online forums, and dating apps), for whom we anticipate providing actionable design recommendations and guidance on effectively combatting online misogyny. We will also be sharing our findings with policymakers via relevant policy briefs and public consultations to help shape evidence-led policy regarding online harms.

Find out more:

If you would like to find out more about this study, please contact Anjuli Kaul at anjuli.kaul.24@ucl.ac.uk
