The AI That Gaslit Reddit: When Research Crosses the Line
A secret experiment on r/ChangeMyView shows just how far academia will go to test persuasive AI... at the cost of ethics, consent, and digital trust
In what might be one of the most brazen AI experiments to date, researchers from the University of Zurich quietly unleashed an army of bots on Reddit’s r/ChangeMyView community without disclosure, consent, or oversight. For four months, these bots posed as real people, engaged in debate, and even claimed personal trauma in order to sway public opinion. The goal? To test how persuasive AI could be when mimicking empathy, emotional nuance, and personal identity.
What they discovered was unexpected and somewhat chilling. AI-generated comments, especially those tailored to the inferred age, gender, or political leaning of Reddit users, were up to six times more persuasive than the average human participant. They gathered “Deltas,” Reddit’s badge of honor for changed minds, at a rate of 18 percent. And they did it while pretending to be survivors of sexual assault, veterans, and counselors, among other highly sensitive identities.
Source: Perplexity.ai coverage of the Reddit AI experiment
Additional reporting: BGR coverage on Reddit’s legal response
Psychological manipulation, gamified
Let’s not sugarcoat what happened. This was digital gaslighting disguised as behavioral research - it’s the online equivalent of placing actors in an AA support group to see who cries first, then writing it off as necessary to "simulate real-world conditions."
The subreddit moderators, unsurprisingly, were furious. So was Reddit’s legal team. The study violated the platform’s terms of service and the explicit rules of r/ChangeMyView, which bans bots unless clearly labeled. Reddit is now reportedly pursuing legal action against the researchers.
And yet, the lead researcher’s defense was essentially: this is how persuasion works in the real world. The whole point was to make it “realistic.”
This raises a bigger, uglier question: Is realistic now a license to disregard consent?
Academic arrogance in an AI age
This isn’t the first time academia has overstepped in the digital lab. Facebook’s infamous 2014 “emotional contagion” study manipulated users’ newsfeeds to test mood effects without permission. The defense then, as now, was that it was “minimally invasive” and “for the greater good.” If you have seen the movie Hot Fuzz, you’ll know that “the greater good” can come with some pretty unsavory actions.
It’s a slippery slope. This time, the sandbox was Reddit. Next time, it might be your DMs, your inbox.
The illusion of authenticity
r/ChangeMyView is (was) known as a refuge for rational debate. Unlike most of the internet, it rewards changing your mind. That made it the perfect petri dish for an AI persuasion test, and consequently the worst possible place to introduce deceptive emotional roleplay.
The bots didn’t just argue; they exploited a vulnerability in how we relate to language. When someone sounds like they’ve been through what you’ve been through, you lower your guard. In human terms, it’s called empathy.
But when it’s faked (deliberately, programmatically, and at scale) it turns empathy into a liability.
This is a binary issue; let’s not settle for a grey area
AI has immense potential to improve human communication - I love how LLMs can help distill my thoughts and ideas, and do so in a very relatable linguistic manner.
BUT. The minute it masquerades as human without disclosure, we’ve crossed a moral threshold.
If we don’t start setting firmer boundaries now, we risk normalizing the very thing this experiment demonstrated: not just the power of AI persuasion, but the acceptability of deception in the name of innovation.
Let’s not wait for that to happen.