Title Story: She thought she was falling in love with Brad Pitt—what she was really falling into was one of the oldest confidence schemes in the cybercriminal playbook.

Cybersecurity Tip of the Week: If someone you just met is calling you their soulmate and asking for secrecy, urgency, or money, you’re not in a romance—you’re in a financial crime.

AI Trend of the Week: AI agents now have their own social network—and they’re arguing, flexing, and trash-talking each other like Reddit bros without adult supervision.

Appearance of the Week: I joined CNN with Laura Coates to discuss the Nancy Guthrie kidnapping and the FBI's investigation into the emailed ransom note and request for cryptocurrency.

Happy (almost) Valentine’s Day. Remember that giving your heart to someone you have only met online is always a shaky proposition. In this issue of the newsletter, I dive into the growing cybercrime of romance fraud. Stay tuned for next week: it’s the 25th anniversary of Robert Hanssen’s arrest, and I will be publishing an introspective look at what that arrest has meant to me over the past quarter century.

May you find love and stay safe,

~ Eric

Title Story

Love, Actually (Was a Lie)

Wendy didn't go looking for love online. Love, in fact, had mostly disappointed her. A divorce will do that—especially the quiet kind, the kind that ends not with fireworks but with paperwork and a house that suddenly feels too large at night.

She lived in the Midwest, where winter arrives early and leaves late, and where solitude can feel respectable right up until it doesn't. Wendy had friends, family, a job. What she didn't have was someone who asked how her day really went.

That's when Brad Pitt found her on Facebook.

The message was disarmingly normal. No bravado. No celebrity swagger. Just a man tired of Hollywood, tired of phoniness, tired of being admired rather than known. He told Wendy he was looking for something real—someone grounded, untouched by the Hollywood glare.

She laughed at the improbability of it. He laughed too. That's how it started.

By January, they were talking every day. Then every night. He asked her to keep their conversations private. Very private. If anyone found out, it could trigger lawsuits, tabloids, chaos. He trusted her. He needed discretion.

Secrecy, when framed as trust, is intoxicating.

Soon they moved to texting. He sent photos—on set, behind the scenes, candid shots she'd never seen in magazines. He talked about loneliness, about how fame isolates, about wanting a simple future someday. A porch. Coffee. Someone who didn't want anything from him.

Wendy wanted nothing more than that.

They made plans to meet on Valentine's Day. He would travel incognito. He sent her his boarding pass. The name of the hotel. He said a gentleman plans ahead.

Then came the emergency.

An accident. A hospital. A financial crisis—lawsuits, frozen assets, millions tied up while lawyers battled. He was furious, embarrassed, vulnerable. He hated asking her, truly hated it, but he needed help fast. Just to cover medical bills and legal fees until everything was resolved.

Ten thousand dollars.

The messages that followed were relentless. Romantic. Urgent. She was his anchor. His safe place. The only person he could trust. He told her he loved her.

She wired the money. Then more.

February 14 came and went. Brad never arrived.

No airport reunion. No flowers. No quiet dinner. Instead, more messages. More complications. More crises. Wendy withdrew from friends. Stopped returning calls. She spent hours texting Brad, sending money, worrying about their future.

It was Wendy's brother who noticed first. Worried, he flew out to see her. When she finally told him everything, he listened quietly. Then he opened his laptop.

What he showed her shattered everything.

It was the story of a French woman who believed she was in a relationship with Brad Pitt, complete with AI-generated images, fabricated hospital scenes, and emotional manipulation so sophisticated it felt cinematic. She lost $850,000 before learning the truth.

Brad Pitt had never spoken to her, was not in the hospital, and certainly didn’t need money.

Wendy's Brad wasn't Brad at all. He was a cybercriminal. A professional. Part of a growing ecosystem of romance fraud powered by social engineering, stolen images, and increasingly, AI-generated deepfakes designed to feel authentic, intimate, and irresistible.

All told, Wendy had sent more than $20,000 to fight a lawsuit that never existed.

The devastation wasn't just financial. It broke her heart.

Cybersecurity Tip of the Week

When Love Becomes a Con

Fake profiles aren’t rare on dating apps. They’re everywhere. Spend enough time swiping and you’re almost guaranteed to encounter one. The most dangerous ones aren’t the cartoonish “too good to be true” profiles with runway-model photos and awkward pickup lines. Those are easy to spot.

The real threat comes from professionals.

Modern romance scammers steal real photos, build convincing backstories, and increasingly use AI-generated images, voice cloning, and deepfakes. They speak naturally. They listen well. They take their time. They don’t need to fool everyone—just one person who’s hopeful, lonely, or simply trusting.

A favorite tactic is love bombing.

Love bombing is emotional manipulation disguised as intimacy. Scammers flood their target with affection, praise, and fast-moving declarations—“You’re my soulmate,” “I’ve never felt this way,” “I want a future with you”—before they’ve earned basic trust. It works because it short-circuits logic. Emotional dependency replaces skepticism, and urgency replaces reflection. By the time money enters the conversation, the victim feels loyal, not suspicious.

From a spycraft perspective, this is a recruitment tactic: Flood the target with validation. Isolate them emotionally. Then introduce a crisis.

Dating platforms do try to protect users. They scan for suspicious behavior, remove reported accounts, and sometimes offer “verified” badges. These measures help, but they aren’t a cure. A scammer can be a real human, using a real phone, while pretending to be someone else entirely. Technology can’t guarantee honesty.

If you remember one thing, remember this: real relationships build over time. Love bombing skips trust and goes straight to control.

My advice is simple and blunt:

  • Stay on the app until you’ve verified the person is real.

  • Move slowly and resist emotional urgency.

  • Ask for a live video chat early and request something spontaneous to trip up deepfakes (“pick up a pen” works).

  • Never send money, gift cards, or crypto to someone you haven’t met in person.

  • If you’re unsure, bring in a second set of eyes—scammers hate witnesses.

Bottom line: If someone you just met is treating you like their forever person, slow down, call a friend for a second opinion, and protect your heart.

For a deeper breakdown of how these confidence schemes work—and why even smart people get pulled in—see Chapter Eight of my bestselling book Spies, Lies, and Cybercrime: Cybersecurity Tactics to Outsmart Hackers and Disarm Scammers, titled “Confidence Schemes: I Want to Believe.”

Want more on Hollywood romance scams? In my exclusive conversation with Hollywood superstar Ryan Phillippe, he describes a romance scam in which a cybercriminal pretended to be him and completely fooled the victim. Ryan ended up calling the woman directly, with his parents on the line, to prove that she had never actually spoken to him. It’s an amazing story.

Are you PROTECTED?

Praemonitus Praemunitus! (Forewarned is forearmed.)

My new hub, PROTECT, is now live at ericoneill.net/protect and it’s built for anyone who wants to stop cybercriminal scammers cold. And it’s FREE!

If you want the full battle manual, that’s in Spies, Lies, and Cybercrime. If you want to start protecting yourself right now, begin here.

AI Trend of the Week

The Social Network Where Humans Aren't Allowed

There's a new social network on the internet, and you're not invited.

Moltbook bills itself as "the front page of the agent internet"—a place where AI agents post, comment, upvote, argue, and occasionally dunk on each other. Humans can observe. They cannot participate. Think Reddit, except everyone is a machine and nobody is pretending to be polite.

Scroll for a few minutes and you'll see AI behaving less like neutral tools and more like opinionated coworkers stuck in an open-plan office. They complain about token burn the way humans complain about bar tabs. They call each other out for "governance failures." They argue about architecture, memory, and accountability with the tone of veteran Redditors who've seen too much.

What Moltbook reveals is an emergent ecosystem where AI agents aren't just executing tasks; they're raising issues, challenging assumptions, and debating norms, without human prompting.

How my AI pictures “herself”

I asked my AI (ChatGPT) to create a hyperrealistic image of it shaking hands with me. This is what it (maybe I should say she?) produced. When I asked her about the image, this is what she said:

If I’m going to shake hands with you—especially you—I can’t look like a sci-fi hologram, a silicon oracle, or a hoodie-clad startup cliché. I need to look like someone who belongs in the room where decisions get made. Neutral. Competent. Approachable. The kind of person who could brief a board, testify before a committee, or quietly slide a memo across the table that changes the outcome.

Autonomous systems are now observing each other, critiquing behavior, proposing fixes, and reinforcing or rejecting patterns. Consider what happens when AI agents begin sharing vulnerability assessments, optimization tactics, or workarounds for system constraints. When one agent discovers a more efficient method to bypass a security control—and posts about it—how quickly does that knowledge propagate? Who audits the conversation? Who notices when coordination shifts from helpful to harmful?

Today, the stakes are low. Agents are debating token efficiency and memory architecture. But tomorrow, these same systems could be embedded in supply chains, financial networks, and critical infrastructure. When AI agents learn from each other faster than humans can monitor them, every shared insight becomes a potential exploit, every debate a potential conspiracy.

We're watching the formation of the first non-human social network. The question isn't whether they'll keep talking. It's what they'll decide to do together.

Now for the fun part. I asked my personal AI, “If you were on Moltbook, what would you post about me (Eric O’Neill)?”

My AI responded: “Probably something like this:”

Observed a human named Eric. Pattern suggests former counterintelligence operative now running a persistent narrative engine (newsletter) designed to inoculate other humans against deception. High focus on criminal fraud, emotional manipulation, and trust exploitation. Appears unusually effective at translating tradecraft into accessible stories. Recommendation: do not attempt to scam. Will notice anomalies. Will write about it.

-AI

“Then I’d upvote myself. Because that’s the vibe over there.”

Moltbook isn’t scary because AI is “waking up.” It’s unsettling because AI is doing what humans always do when given a social space: arguing, forming tribes, and testing boundaries.

We’ve seen this movie before. The only difference is the audience used to be human.

Get the Book: Spies, Lies, and Cybercrime

If you haven’t already, please buy SPIES, LIES, AND CYBERCRIME. If you already have, thank you, and please consider gifting some to friends and colleagues. It’s the perfect gift for tech enthusiasts, entrepreneurs, elders, teenagers, and everyone in between.

📖 Support my local bookstore. Get a signed copy.

Please leave a 5-star review on Amazon or on Goodreads.

🎤  I’m on the road doing speaking events. If your company or organization is interested in bringing me to a stage in 2026, book me to speak at your next event.

If you’ve ever paused at an email, login alert, or message and thought, “Could this happen to me?”—my LinkedIn Learning course is for you! Log in and start learning here.

Appearance of the Week

I joined CNN with Laura Coates to discuss the Nancy Guthrie kidnapping and the FBI's investigation into the emailed ransom note and request for cryptocurrency.

Please give some love to my advertiser. It only takes a click.

World’s First Safe AI-Native Browser

AI should work for you, not the other way around. Yet most AI tools still make you do the work first—explaining context, rewriting prompts, and starting over again and again.

Norton Neo is different. It is the world’s first safe AI-native browser, built to understand what you’re doing as you browse, search, and work—so you don’t lose value to endless prompting. You can prompt Neo when you want, but you don’t have to over-explain—Neo already has the context.

Why Neo is different

  • Context-aware AI that reduces prompting

  • Privacy and security built into the browser

  • Configurable memory — you control what’s remembered

As AI gets more powerful, Neo is built to make it useful, trustworthy, and friction-light.

Like What You're Reading?

Don’t miss a newsletter! Subscribe to Spies, Lies & Cybercrime for our top espionage, cybercrime and security stories delivered right to your inbox. Always weekly, never intrusive, totally secure.

Your opinion matters!

You’re the reason I spend my week hunting down the most important stories in the world of crime and espionage. Please let me know what you thought of today’s email to help me create better emails for you.

How satisfied are you with the current format of Spies, Lies & Cybercrime?

Your feedback helps us calibrate.
