35: Persuaded by a Ghost
Spies, Lies & Cybercrime by Eric O'Neill
In This Issue
Title Story: Persuaded by a Ghost: The AI Bots That Infiltrated Reddit. The secret experiment that will make you rethink everything you see and read online.
Cybersecurity Tip of the Week: Darcula Just Stole 884,000 Credit Cards—With a Text. Yours Could Be Next.
Cybersecurity Breach of the Week: 19 Billion Reasons to Stop Trusting Your Password.
Tech of the Week: AirTag Just Got Smarter—and It Might Save Your Bags
Appearance of the Week: Spies, Lies, and Cybercrime—Now on LinkedIn Learning!
AI Image of the Week: I asked AI to create a surreal city image with countless things happening, including themes from this newsletter and a hidden spy. Can you find him?
Title Story - Persuaded by a Ghost: The AI Bots That Infiltrated Reddit

Not everyone on Reddit is who they claim to be…
“If a chatbot can change your mind without you realizing it, how do you protect your beliefs?”
It started with a pit bull.
One Redditor—a 22-year-old guy with a username like BurritoJustice94—had posted in r/changemyview, one of those subreddits where civil debate still exists. His stance: pit bulls are dangerous, full stop. A classic internet lightning rod.
The response he got was surprisingly thoughtful. Calm. Polite. The commenter claimed to be a trauma counselor who’d worked with dog attack victims and survivors of abuse. Their take? It’s not the breed, it’s the owner. The writing was sharp but empathetic. It even quoted a peer-reviewed study. BurritoJustice94 replied, “Huh. I never thought about it that way.”
One upvote turned into 300. A badge was awarded. Minds were changed.
And none of it was real. Not the trauma counselor. Not the thoughtful words. Not the empathy.
The entire thing—from the opinion to the fake backstory to the warm, persuasive tone—was generated by artificial intelligence as part of a secret experiment run by researchers at the University of Zurich. Reddit, the self-proclaimed “heart of the internet,” had been punked by a ghost in the machine.
Enter the Ghosts
This wasn’t your average spam bot or scammy chatbot promising crypto riches. This was deliberate, strategic infiltration. For four months, researchers quietly slipped over 1,000 AI-generated comments into r/changemyview, a subreddit built for people to challenge their own beliefs.
They targeted everything from conspiracy theories to political polarization to whether living with your parents is a “housing solution.” The bots wrote in fluent internetese, adapting their tone and vocabulary to the audience. They even personalized arguments based on each Redditor’s post history—age, gender, political leanings, favorite pizza topping, whatever the AI could infer.
Some of the comments were clunky. Others were unsettlingly brilliant.
The bots weren’t just good at mimicking people. They were better at persuasion than most actual humans. According to preliminary data, their responses scored higher than nearly every real comment on the platform.
So yes, the bots won. They out-argued the internet. Then the researchers made their mistake.
The Debrief
Once the experiment ended, the team decided to “debrief” the subreddit—basically, tell everyone they’d been unwitting lab rats. They reached out to the moderators and said, in so many words: “Surprise! You’ve been talking to robots. Hope you learned something!”
The moderators did not appreciate the surprise.
They asked the researchers to apologize and not publish the study. The researchers refused, arguing that deception was “necessary for scientific integrity.” Reddit’s legal department got involved. So did the press. And the ethics watchdogs. One professor called it “the worst internet-research ethics violation I have ever seen, no contest.”
It turns out Redditors don’t like being emotionally manipulated by fictional trauma counselors.
The Ethics of Influence Operations
Let’s be clear: The AI bots didn’t tell people to join cults or commit crimes. They just… debated them. Persuasively.
But that’s what made the whole thing so eerie.
These bots were built to blend in, gain trust, and shift opinions. They weren’t obvious or aggressive. They were soft power, running undercover. And they succeeded. We’re not just talking about fooling people. We’re talking about changing minds. Quietly. Effectively. Without anyone knowing.
Which raises a nasty little question: if a chatbot can change your mind without you realizing it, how do you protect your beliefs?
Do the Bots Know You Better Than You Do?
This wasn’t a fluke. Lab studies back it up: AI can out-persuade humans, especially when it personalizes the message. A study by three researchers titled “AI model GPT-3 (dis)informs us better than humans” found that GPT-3 writes more convincing disinformation than human propagandists.
Now imagine these bots aren’t in a research lab, but on TikTok. Or X. Or YouTube. Or buried in a WhatsApp group chat, nudging political opinions one link at a time. Not shouting. Not screaming. Just quietly, convincingly, worming their way into your thinking. There is already evidence from US intelligence that China has been using TikTok as a psychological operation (PsyOP) for years.
These concerns transcend cybersecurity and threaten national security. Because whoever owns the most persuasive ghost wins the information war. And right now, detection tools are laughably behind.
Welcome to the Influence Age
The Zurich team wanted to test how persuasive AI could be in “real-world conditions.” Congratulations, team. You found out. The bots are ready. We aren’t.
There are rules for human speech—libel and slander laws, electioneering regulations, ethics review for psychological experiments. But AI? It doesn’t need a license. It doesn’t sleep. It doesn’t care about ethics. And it just got a PhD in digital manipulation.
This experiment in social science may have provided a roadmap for the next evolution of propaganda. And it worked better than anyone expected.
(thanks to my friend Carolyn for making me aware of this story)
Cybersecurity Tip of the Week
Darcula Just Stole 884,000 Credit Cards—With a Text. Yours Could Be Next.

A message pops up on your phone: your bank account’s locked. Or a package is delayed. There’s a link. You click. It looks legit. You enter your info.
You just gave your credit card to a scammer.
This isn’t hypothetical. Nearly a million credit cards were stolen in just seven months through a slick phishing campaign powered by a criminal tool called Darcula—yes, spelled like the vampire, and just as bloodthirsty.
Over 600 attackers used it to send 13 million fake texts that looked like they came from real companies. But these weren’t your typical typo-ridden spam messages. Darcula fakes brand websites, writes in your language, and tailors the scam to you. The phishing sites? So convincing they’d fool your IT guy.
Protect Yourself
You don’t need tech skills—just street smarts. Here are the top 5 things you can do right now:
1. Use Two-Factor Authentication (2FA) - It’s the best defense. Get an app like Authy or Google Authenticator. Avoid SMS codes if you can.
2. Don’t Click Links in Texts from “Brands” - Go to the company’s site directly. If it’s urgent, they’re not texting you a shortcut.
3. Update Your Phone Regularly - Those software updates fix security holes hackers love to exploit.
4. Use a Password Manager - It’ll protect you from fake sites by refusing to fill in credentials where it shouldn’t.
5. Check Your Bank Statements Weekly - Fraud caught early is fraud that gets stopped.
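Why does the password-manager tip work so well against lookalike sites like Darcula’s? Because a manager stores your login keyed to the exact domain and refuses to autofill anywhere else. Here’s a minimal sketch of that idea in Python (the domains and credentials are made up for illustration):

```python
# Sketch: a password manager autofills only on the exact saved domain,
# so a convincing phishing clone at a lookalike address gets nothing.
from urllib.parse import urlparse

# Hypothetical vault: saved login keyed to the real site's domain.
VAULT = {"yourbank.com": ("alice", "s3cret!")}

def autofill(url: str):
    host = urlparse(url).hostname or ""
    # Exact-match lookup only: no fuzzy matching, no "close enough."
    return VAULT.get(host)

print(autofill("https://yourbank.com/login"))         # real site: credentials fill
print(autofill("https://yourbank-secure.com/login"))  # phishing clone: None
```

Your eyes can be fooled by a pixel-perfect fake page, but this exact-match lookup can’t—which is why a silent autofill refusal is itself a red flag.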
This scam was just the latest in a growing trend: AI-powered, hyper-personalized cons that don’t feel like scams—until it’s too late. Don’t let curiosity or panic make decisions for you. Darcula’s counting on that.
Cybersecurity Breach of the Week
19 Billion Reasons to Stop Trusting Your Password
If your password is still “123456,” go ahead and just hand your data to the hackers gift-wrapped. They’ll appreciate the convenience.

In the latest “are-you-kidding-me” moment from cyberspace, researchers revealed that over 19 billion passwords have been compromised in just the past year, pulled from more than 200 separate data breaches between April 2024 and April 2025. And here’s the kicker: a whopping 94% of them weren’t even unique. That means millions of people are still recycling the same lazy login across all their accounts like it’s 2010.
“Password,” “qwerty,” and yes—“123456”—remain embarrassingly popular. These aren’t passwords. They’re open doors with a “welcome” mat.
Relying solely on passwords, particularly weak or reused ones, is a risky gamble.
Want to lock things down? Start by enabling Two-Factor Authentication (2FA)—it’s like a bouncer for your login. Use a password manager to generate and remember strong, unique passwords for each account so you don’t have to. Make sure your passwords are at least 14 characters long and packed with a mix of letters, numbers, and symbols. Never reuse passwords across accounts (one leak shouldn’t open all the doors), and make a habit of updating them regularly, especially if something feels off.
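The “strong, unique, 14+ characters” advice above is exactly what a password manager automates for you. For the curious, here’s a minimal sketch of how such a generator works, using Python’s standard `secrets` module (the length and character mix simply follow the guidelines in this section):

```python
import secrets
import string

SYMBOLS = "!@#$%^&*-_"

def make_password(length: int = 16) -> str:
    """Generate a random password mixing upper/lowercase letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that actually contain every category.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in SYMBOLS for c in pw)):
            return pw

print(make_password())  # a fresh random password, different every run
```

`secrets` draws from the operating system’s cryptographic randomness, unlike the `random` module—a detail that matters when the output is guarding your bank account.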
Tech of the Week
AirTag Just Got Smarter—and It Might Save Your Bags
I once used an AirTag tucked in my checked luggage to discover an entire container of bags had been left on the runway—even after the airline claimed otherwise. Thanks to that tiny tracker, I found the bags and saved half the passengers on my flight from going home empty-handed.

Now, AirTag is rolling out a feature that makes it even more powerful: shared tracking. With iOS 17.5 and the latest firmware, you can now share your AirTag’s location with friends, family—or even the airline. That means no more gate agent guesswork or waiting hours for someone to “check in the back.” You can literally hand them the data.
Whether you’re tracking a lost suitcase, a pet, or a wayward kid’s backpack, shared AirTags make it easier to show someone where the thing is—no more screenshots or explanations.
It’s the update I didn’t know I needed… and one more reason AirTag stays on my must-pack list.
Appearance of the Week
Spies, Lies, and Cybercrime—Now on LinkedIn Learning!
Cybercriminals don’t just hack—they spy, deceive, and manipulate, just like the world’s most dangerous spies. The tactics used in espionage have made their way into cybercrime, turning your trust into their weapon.
Want to learn how to think like a spy hunter and defend yourself from these attacks? My new LinkedIn Learning course will teach you how to spot deception, protect your data, and stay ahead of cyber threats—all while having a lot of fun learning from me.
AI Image of the Week
I asked AI to create a surreal city image with countless things happening, including themes from this newsletter. Hidden somewhere clever in the image (like Where's Waldo) is a spy in a traditional black coat and hat with dark sunglasses. I think the “spy” is a little obvious, but the image itself is rather fun!

Like What You're Reading?
Don’t miss a newsletter! Subscribe to Spies, Lies & Cybercrime for our top espionage, cybercrime and security stories delivered right to your inbox. Always weekly, never intrusive, totally secure.
Are you protected?
Recently, nearly 3 billion records containing our most sensitive data were exposed on the dark web for criminals, fraudsters, and scammers to mine for identity fraud. Were your Social Security number and birthdate exposed? Identity threat monitoring is now a must. Use this affiliate link to get up to 60% off Aura’s cybersecurity, identity monitoring, and threat detection software!

Use this link to get a 30-day trial + 20% off Beehiiv!

Ready for Next Week?
What do YOU want to learn about in my next newsletter? Reply to this email or comment on the web version, and I’ll include your question in next month’s issue!
Thank you for subscribing to Spies, Lies and Cybercrime. Please comment and share the newsletter. I look forward to helping you stay safe in the digital world.
Best,
Eric
Let's make sure my emails land straight in your inbox.
Gmail users: Move this email to your primary inbox
On your phone? Tap the three dots in the top right corner, then choose "Move to" and "Primary."
On desktop? Close this email, then drag and drop it into the "Primary" tab near the top left of your screen.
Apple mail users: Tap on our email address at the top of this email (next to "From:" on mobile) and click “Add to VIPs”
For everyone else: follow these instructions
Partner Disclosure: Please note that some of the links in this post are affiliate links, which means if you click on them and make a purchase, I may receive a small commission at no extra cost to you. This helps support my work and allows me to continue to provide valuable content. I only recommend products that I use and love. Thank you for your support!