Monthly Report March 2025
Data for March was based on 3,260,867 messages in 11 languages, across six social media platforms: Reddit, X, 4chan, Gab, YouTube, and Facebook.
Content warning: Presented data may contain disturbing language related to online hate speech.
Average toxicity
The average monthly toxicity score for March was 0.2, with very few fluctuations over the course of the month.
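As a rough illustration only, the sketch below shows how a monthly average and its day-to-day fluctuation could be computed, assuming a table of per-message toxicity scores in the 0 to 1 range with timestamps; the column names are hypothetical and do not reflect the actual monitoring pipeline.

```python
import pandas as pd

# Hypothetical input: one row per message, with a toxicity score in [0, 1]
# and a timestamp. Column names are illustrative, not the pipeline schema.
messages = pd.DataFrame({
    "created_at": pd.to_datetime(["2025-03-01", "2025-03-02", "2025-03-02", "2025-03-15"]),
    "score": [0.05, 0.31, 0.12, 0.84],
})

# Monthly average toxicity (the kind of figure reported above, ~0.2 for March).
monthly_average = messages["score"].mean()

# Daily averages show how much the metric fluctuates over the month.
daily_average = messages.set_index("created_at")["score"].resample("D").mean()
fluctuation = daily_average.max() - daily_average.min()

print(f"monthly average: {monthly_average:.2f}, daily range: {fluctuation:.2f}")
```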
Baseline channel analysis
The baseline analysis shows patterns consistent with those observed in February. Messages related to antisemitism were the most toxic, with an average score of 0.33, followed by anti-LGBTQ+ content, which averaged 0.28 but decreased to 0.26 at the end of the month. Messages containing toxic words against the Roma community fluctuated the most, reaching a toxicity score of 0.33 in the second half of the month. Anti-Muslim hate maintained a relatively stable average of 0.26. Anti-refugee and sexist sentiments had lower toxicity levels, averaging 0.22 and 0.17, respectively.
Baseline Sexism
Volume: 1,765,000 posts identified, with 353,667 classified as toxic content.
Toxicity: The average toxicity score was 0.17. Posts with extreme toxicity (0.8 or higher) totalled 19,838, predominantly on 4chan.
Themes:
Sexism appeared in 47% of toxic content.
Violence-related content featured in 12% of messages.
Common toxic keywords included gendered slurs (bitch, slut, whore), the Arabic term "حد" (punishment in sharia law), and derogatory terms like freak.
Key Insight: The disproportionate prevalence of sexist content and gendered slurs across platforms highlights the persistent challenges in addressing gender-based harassment in online spaces, particularly on platforms like 4chan, where high-toxicity content appears concentrated.
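The per-baseline figures above (total volume, toxic posts, extreme-toxicity counts, and the platform where extreme content concentrates) follow a common pattern; a minimal sketch of how they could be derived is shown below. The record layout and the 0.5 "toxic" threshold are assumptions, as the report only makes the 0.8 "extreme" cut-off explicit.

```python
from collections import Counter

# Hypothetical records: (platform, toxicity_score). The schema and the 0.5
# "toxic" threshold are assumptions; only the 0.8 "extreme" cut-off is stated
# in the report.
posts = [
    ("x", 0.12), ("4chan", 0.91), ("reddit", 0.55),
    ("4chan", 0.84), ("gab", 0.33), ("x", 0.62),
]

TOXIC_THRESHOLD = 0.5      # assumed
EXTREME_THRESHOLD = 0.8    # cut-off used in the report

volume = len(posts)
toxic = [p for p in posts if p[1] >= TOXIC_THRESHOLD]
extreme = [p for p in posts if p[1] >= EXTREME_THRESHOLD]

# Which platform contributes most of the extreme-toxicity posts?
by_platform = Counter(platform for platform, _ in extreme)
top_platform, top_count = by_platform.most_common(1)[0]

print(f"{volume} posts, {len(toxic)} toxic, {len(extreme)} extreme "
      f"(most on {top_platform}: {top_count})")
```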
Baseline Anti-LGBTQ+
Volume: 307,141 posts identified, with 95,603 classified as toxic content.
Toxicity: The average toxicity score was 0.28. Posts with extreme toxicity (0.8 or higher) totalled 4,443, predominantly on Gab.
Themes:
Sexism appeared in 77% of toxic content.
Violence-related content featured in 9% of messages.
Common toxic keywords included the Arabic term "شاذ" (abnormal), gendered slurs (bitch, slut, whore), and the Portuguese abbreviation "pqp" (holy shit).
Key Insight: The prevalence of sexist content across platforms, particularly on X, which dominates the dataset, indicates that targeted harassment based on gender remains a pervasive issue. The concentration of high-toxicity content on Gab suggests this platform may serve as a nexus for particularly harmful communications.
Baseline Anti-Muslim
Volume: 172,495 posts identified, with 39,456 classified as toxic content.
Toxicity: The average toxicity score was 0.22. Posts with extreme toxicity (0.8 or higher) totalled 1,458, predominantly on 4chan.
Themes:
Politics appeared in 62% of toxic content.
Violence-related content featured in 16% of messages.
Common toxic keywords included the Arabic term "حد" (punishment in sharia law), racial terms (negro), and migration-related terminology in multiple languages: immigrati clandestini (illegal immigrants), "migrants", and fachkräfte (skilled workers).
Key Insight: The prevalence of migration-focused terminology across multiple languages (Italian, English, German) combined with the dominance of political content suggests coordinated cross-border discourse targeting migrants. The concentration of high-toxicity content on 4chan, despite X dominating volume, indicates the platform continues to function as an amplifier for more extreme political narratives.
Baseline Anti-Refugee/Migrants
Volume: 191,294 posts identified, with 129,804 (23%) classified as toxic content.
Toxicity: Average score was 0.22. Posts with extreme toxicity (0.8 or higher) totalled 1,497, with the highest proportion found on 4chan.
Themes:
Politics (48%), Racism (37%), and Violence (16%) were the most prominent hate categories. This trend holds when observing toxic messages only.
Common keywords in toxic messages included multilingual terms: fachkräfte (skilled workers, used ironically), "migrants", immigrati clandestini (illegal immigrants), negro, and vergewaltiger (rapist).
Key Insight: While this channel has fewer posts than previous datasets, the consistent combination of political content, racism, and violent messaging indicates targeted toxic discourse around immigration.
Baseline Antisemitism
Volume: 424,662 posts identified, with 171,578 classified as toxic content.
Toxicity: The average toxicity score was 0.34. Posts with extreme toxicity (0.8 or higher) totalled 11,846, predominantly on 4chan.
Themes:
Racism appeared in 75% of toxic content.
Violence-related content featured in 21% of messages.
Common toxic keywords included Arabic terms (الصهاينة/Zionists, قتل/kill), ableist slurs (retarded), racial slurs (niggers), and explicit violent threats (kill jews).
Key Insight: The alarming confluence of racist content (75%) and violent messaging (21%) indicates a dangerous pattern of targeted hatred, particularly with explicit threats against Jewish communities appearing among the most common toxic keywords. The notably high average toxicity score (0.34) relative to previous datasets suggests this channel contains particularly severe forms of hate speech.
Baseline Roma
Volume: 50,855 posts identified, with 18,882 classified as toxic content.
Toxicity: The average toxicity score was 0.28. Posts with extreme toxicity (0.8 or higher) totalled 968, predominantly on 4chan.
Themes:
Racism appeared in 84% of toxic content.
Violence-related content featured in 8% of messages.
Common toxic keywords included terms for Roma people in multiple languages (gitanos/gitano in Spanish, zigeuner in German, os ciganos in Portuguese) and terms for North Africans/Muslims (moros).
Key Insight: The overwhelming focus on Roma communities across multiple European languages suggests coordinated targeting of specific ethnic minorities. While violence content is lower than in previous datasets (8%), the extremely high proportion of racist content (84%) indicates a concentrated pattern of ethnicity-based hate speech particularly directed at Roma communities.
This graph illustrates the distribution of toxicity levels (Neutral, Low, Medium, and High) across our monitored baselines. Antisemitism had the highest share of highly toxic (2.8%) and medium-toxicity (37.6%) messages, followed by anti-Muslim (2.7%) and anti-Roma (1.9%) content. At the other end, the majority of messages related to sexism were not toxic (60.8%), followed by messages about refugees (46%).
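A minimal sketch of how messages could be bucketed into these four levels follows; apart from the 0.8 cut-off used for extreme toxicity, the bucket boundaries are illustrative assumptions.

```python
# Hypothetical toxicity-level buckets. Only the 0.8 cut-off for extreme/high
# toxicity is stated in the report; the other boundaries are assumptions.
BUCKETS = [
    ("Neutral", 0.0, 0.2),
    ("Low",     0.2, 0.5),
    ("Medium",  0.5, 0.8),
    ("High",    0.8, 1.01),  # upper bound just above 1.0 so a score of 1.0 is included
]

def toxicity_level(score: float) -> str:
    """Map a toxicity score in [0, 1] to one of the four reported levels."""
    for label, low, high in BUCKETS:
        if low <= score < high:
            return label
    raise ValueError(f"score out of range: {score}")

scores = [0.05, 0.31, 0.62, 0.84, 0.91]
distribution = {label: 0 for label, _, _ in BUCKETS}
for s in scores:
    distribution[toxicity_level(s)] += 1

# Share of messages per level, as percentages like those cited above.
shares = {label: 100 * count / len(scores) for label, count in distribution.items()}
print(shares)
```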
Hate speech by category
This table outlines key themes in hate speech across antisemitism, anti-Muslim sentiment, anti-LGBTQ+, sexism, anti-refugee sentiment, and anti-Roma narratives. Racism features most prominently in antisemitic (76%) and anti-Roma (51.7%) messages, reflecting persistent racial biases targeting these communities. Religious hostility is most evident in anti-Muslim (67.1%) and antisemitic (49.9%) narratives, underlining how faith-based identity remains a focal point for online hate. Political discourse dominates hate speech related to refugees (46%) and antisemitism (41.4%), indicating how these topics are often weaponized in ideological debates. The LGBTQ+ community faces the highest levels of sexism (59.3%). Meanwhile, threats are especially present in antisemitic (20.9%) and anti-Muslim (18.5%) speech, and ridicule is used widely across all categories, notably against LGBTQ+ individuals (12.5%) and women (11.2%).
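A cross-tabulation of this kind could be produced along the lines of the sketch below, assuming each toxic message is tagged with one baseline category and a set of theme labels; the field names and values shown are hypothetical.

```python
import pandas as pd

# Hypothetical labelled messages: one baseline category per message and a list
# of theme labels. Field names and values are illustrative only.
rows = [
    {"category": "antisemitism", "themes": ["racism", "threat"]},
    {"category": "antisemitism", "themes": ["politics"]},
    {"category": "anti-muslim",  "themes": ["religious hostility", "threat"]},
    {"category": "anti-lgbtq+",  "themes": ["sexism", "ridicule"]},
]

# One row per (message, theme) pair.
df = pd.DataFrame(rows).explode("themes")

# Percentage of each category's messages that mention each theme,
# analogous to the figures cited in the table above.
counts = pd.crosstab(df["category"], df["themes"])
totals = pd.DataFrame(rows)["category"].value_counts()
percentages = counts.div(totals, axis=0) * 100

print(percentages.round(1))
```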