The Upvote Effect: How Social Approval Fuels Online Hate on Parler and Its AI Implications
By David M. Markowitz, Samuel Hardman Taylor
Published on November 24, 2025 | Vol. 1, Issue No. 1
Content Source
This is a curated briefing. The original article was published on cs.CL updates on arXiv.org.
Summary
A study analyzing 110 million messages posted to Parler between 2018 and 2021 tested Walther's (2024) social approval theory of online hate. Upvotes on initial hate posts did not predict subsequent hate, but receiving upvotes on comments containing hate speech, especially its more extreme forms, was strongly linked to increased future hate speech production over windows ranging from one week to six months. Across users, the research found a general positive relationship between social approval and hate speech propagation, indicating that social approval is a critical reinforcing mechanism for online hate.
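The paper's exact pipeline is not reproduced in this briefing, but the design can be made concrete. The sketch below is purely illustrative and rests on assumptions: a message-level DataFrame whose column names (`user_id`, `timestamp`, `upvotes`, `is_comment`, `hate_score`) are placeholders, an upstream hate-speech classifier, and a simple Pearson correlation where the study's actual models almost certainly include controls this toy omits.

```python
# Illustrative sketch of the kind of longitudinal analysis the summary
# describes. Assumptions (not from the paper): `messages` has columns
# user_id, timestamp, upvotes, is_comment (bool), and hate_score from
# some upstream classifier, thresholded here at 0.5.
import pandas as pd

HATE_THRESHOLD = 0.5              # assumed cutoff for labeling a message as hate
BASELINE = pd.Timedelta("7D")     # window in which approval (upvotes) is measured
FOLLOW_UP = pd.Timedelta("180D")  # window in which later hate output is counted

def approval_vs_future_hate(messages: pd.DataFrame, t0: pd.Timestamp) -> float:
    """Correlate upvotes received on hateful comments in [t0, t0 + BASELINE)
    with each user's hate-message count over the following six months."""
    is_hate = messages["hate_score"] >= HATE_THRESHOLD

    # Approval signal: total upvotes on a user's hateful *comments*
    # during the baseline week.
    base = messages[
        is_hate
        & messages["is_comment"]
        & messages["timestamp"].between(t0, t0 + BASELINE, inclusive="left")
    ]
    approval = base.groupby("user_id")["upvotes"].sum()

    # Outcome: how many hate messages the user produces afterward.
    later = messages[
        is_hate
        & messages["timestamp"].between(
            t0 + BASELINE, t0 + BASELINE + FOLLOW_UP, inclusive="left"
        )
    ]
    future_hate = later.groupby("user_id").size()

    panel = pd.concat(
        {"approval": approval, "future_hate": future_hate}, axis=1
    ).fillna(0)
    # A bare Pearson correlation; the paper's analysis is likely richer
    # (e.g., controls for baseline activity), which this sketch omits.
    return panel["approval"].corr(panel["future_hate"])
```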
Why It Matters
This research provides crucial insight for the AI industry, particularly in content moderation, platform design, and ethical AI development. For AI systems tasked with detecting and mitigating online hate, it underscores that a purely content-based approach is insufficient: models must also incorporate contextual signals, such as engagement metrics (e.g., upvotes, likes) and user interaction patterns, to identify and counter the motivations driving hate speech, as sketched below.

Algorithmic designs that prioritize engagement can inadvertently create a reward system for hate, in which positive feedback loops (such as upvotes) encourage users to produce more extreme content. AI engineers and data scientists must therefore weigh the ethical implications of their algorithms and design systems that actively disrupt these cycles rather than amplify them. That means re-evaluating how "engagement" is measured and rewarded, building models that capture the interplay between social approval and harmful content, and exploring AI-driven interventions that break these feedback loops.

Ultimately, this study highlights the urgent need for AI to move beyond simple content filtering toward a holistic, behavior-driven approach that safeguards online spaces and fosters healthier digital communities.
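To make the contextual-signal argument concrete, here is a minimal sketch of a moderation-queue scorer that blends a content-only hate score with engagement-derived context. Everything in it is an assumption for illustration: the toy lexicon stands in for a trained classifier, and the feature weights are arbitrary choices rather than values taken from the study.

```python
# Illustrative sketch: prioritize messages for moderation review using
# content *and* contextual approval signals. All names and weights here
# are assumptions for the example, not the study's method.
from dataclasses import dataclass
import math

HATE_LEXICON = {"exampleslur"}  # toy stand-in; real systems use trained models

def text_hate_score(text: str) -> float:
    """Toy content-only score in [0, 1]: the fraction of tokens matching a
    tiny lexicon. A real deployment would use a trained classifier."""
    tokens = text.lower().split()
    return sum(t in HATE_LEXICON for t in tokens) / len(tokens) if tokens else 0.0

@dataclass
class Message:
    text: str
    upvotes: int
    author_prior_hate_rate: float  # fraction of the author's recent posts flagged

def moderation_priority(msg: Message) -> float:
    """Rank a message for review using content plus contextual approval.

    A hateful comment that is accumulating upvotes is exactly the case the
    study identifies as reinforcing, so approval raises its priority.
    """
    content = text_hate_score(msg.text)
    # Log-scale upvotes so early approval matters without letting viral
    # posts dominate; the 0.7/0.3 blend is an arbitrary illustrative choice.
    approval = min(1.0, math.log1p(msg.upvotes) / 10.0 + msg.author_prior_hate_rate)
    return 0.7 * content + 0.3 * approval
```

The point is structural rather than numeric: approval signals enter the ranking directly, so a borderline comment that is being rewarded with upvotes surfaces for review sooner than identical text sitting unengaged.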