30: THE WAR ON TRUTH
Spies, Lies & Cybercrime by Eric O'Neill
In This Issue
Title Story: DEEPFAKES AND THE WAR ON TRUTH - The New Age of Deception, Deepfakes, and Digital Doubt.
Cybersecurity Tip of the Week: Goodbye Passwords: Microsoft Just Changed the Game (And So Should You).
Cybersecurity Breach of the Week: The average amount requested in a business email compromise (BEC) attack has nearly doubled, jumping to $128,980 in Q4 2024.
Tech of the Week: AI Just Made Dubbed Movies Less Cringe!
Appearance of the Week: In this week’s appearance, I sit down with Dr. James Robbins at the Institute of World Politics to talk about the spy case that changed my life—and nearly broke me in the process.
AI Image of the Week: A noir spy hunt among the Washington DC Cherry Blossoms.
Title Story - Deepfakes and the War on Truth

The Call That Changed Everything
On what should have been a normal Friday afternoon, Jennifer DeStefano’s phone rang with a call from an unknown number. She nearly sent it to voicemail—until she remembered her teenage daughter Brie was off in Northern Arizona, training for a ski race. Jennifer picked up.
“Mom, I messed up,” came the panicked sob of her daughter’s voice.
Then a man’s voice cut in—cold and full of menace. “Put your head back. Lie down.” What began as a mother’s concern for a child in a high-risk sport escalated into sheer panic.
“Mom, these bad men have me,” Brie pleaded. “Help me, help me!”
The voice on the phone returned. “Listen here. I have your daughter. You call anyone, you call the cops, I will pump her full of drugs, have my way with her, and dump her body in Mexico. You’ll never see her again.”
Terrified and trembling, Jennifer kept the man talking while her daughter Aubrey and several other moms at a dance studio rehearsal dialed her husband, called 911, and tried to locate Brie.
Finally, Jennifer’s husband picked up—and confirmed Brie was safe. Home. Resting. Completely unaware that a deepfake of her voice had just sent her mother into a nightmare.
Jennifer didn’t pay the ransom. But the damage had already been done.
In her June 13, 2023 testimony to the United States Senate, Jennifer said it plainly:
“AI is revolutionizing and unraveling the very foundation of our social fabric by creating doubt and fear in what was once never questioned—the sound of a loved one’s voice.”
What happened to Jennifer wasn’t science fiction. It wasn’t theoretical. It was now.
Artificial Intelligence is no longer a futuristic tool—it’s a weapon in the hands of scammers, cybercriminals, and digital spies. And it’s already doing what even the most skilled impersonators never could: mimicking the people we love with chilling accuracy.
This issue isn’t about government espionage or corporate cyberwarfare—though we cover those often. Today is personal. Because the next target of AI-powered deception isn’t your business. It’s you.
AI Writes Like You. Talks Like You. Scams You.
Let’s start with something familiar: email.
You use AI to clean up your writing, tighten grammar, punch up a pitch, or summarize a long report. It’s fast. It’s useful. Cybercriminals use it for exactly the same reasons.
Spear phishing has gone from broken-English spam to eerily accurate impersonation. AI now creates perfectly polished emails that copy your boss’s tone, spoof logos and email signatures, and even quote from actual email threads to make the message feel “just right.” Click one link. Open one file. That’s all it takes. Malware lands. Credentials get harvested. And suddenly, you’re not the only one using your identity.
The red flags we trained ourselves to spot—typos, weird formatting, strange phrases—are gone. AI doesn’t misspell. It mimics.
The Rise of the Virtual Trusted Insider
In my book Gray Day, I coined the term “Virtual Trusted Insider.”
It refers to a legitimate user account—yours, mine, someone you know—that’s secretly compromised and turned against the organization that trusted it. AI helps attackers get in, stay hidden, and move freely within the system. Think of it like this: the attacker doesn’t need to break down the door when they can steal your key, wear your face, and speak in your voice.
Once inside, AI is used to monitor behavior, blend in, and avoid triggering alerts. It mimics normal activity while extracting data in the background. It’s subtle. Quiet. Devastating.
The Deepfake Zoom Scam
Need something scarier? A UK company recently fell victim to a deepfake scam so sophisticated it belongs in a spy thriller.
A finance manager was invited to a Zoom call with his CFO, two coworkers, and two “new partners.” Everyone was on screen. The CFO made a request: wire funds to the new partners to close a deal. Over the next two weeks, the finance manager wired $25 million through 15 separate transactions.

Every single person on the Zoom call—including the CFO—was a deepfake. AI-generated avatars pulled from public photos and video, voiced by cloned speech, synchronized to real-time conversation. This wasn’t a fake invoice. It was a fake meeting. And it worked.
Welcome to the World of Synthetic Humans
The line between human and machine is disappearing.
Soon, we’ll see AI-generated avatars replacing people in video calls—not for fraud, but for convenience. You won’t need to brush your hair or look presentable. Your AI double will smile, nod, and speak for you. It’ll match your gestures and even auto-respond in your tone. But what happens when someone else hijacks your double?
We now live in a remote-first, tele-everything world. And into that world walks AI, armed with the ability to generate humanlike voices, faces, and messages on demand.
This is how trust breaks. When you can’t tell if the voice, the face, or the message is real… how do you know anything is?
The Only Thing Left: Trust
We’ve entered the era where trust is the most valuable commodity on Earth.
By 2026, 90% of the content you see online—videos, photos, news articles, emails, reviews—could be AI-generated. Not altered. Created from scratch.
You won’t know if you’re chatting with a person or a bot. You’ll get phone calls from cloned voices. Text messages from fake identities. And in that moment, you’ll have to decide whether to believe it.
That’s the battlefield. Not code or firewalls—but belief.
Fighting Back with AI
Just as cybercriminals are using AI to deceive and infiltrate, the best cybersecurity systems are using AI to fight back.
AI now powers behavioral analytics that monitor for anomalies—a user logging in from the wrong country, accessing systems outside their normal scope, or using a device the network has never seen. That’s when red flags fly, accounts get locked, and humans step in to verify.
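To make that concrete, here is a minimal, hypothetical sketch of the kind of rule-based checks such systems layer beneath their machine-learning models. The user, baseline profile, and field names below are invented for illustration; this is not any vendor’s actual API:

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    country: str
    device_id: str
    resource: str  # the system or scope being accessed

# Illustrative per-user baselines that a real system would learn over time.
BASELINE = {
    "jsmith": {
        "countries": {"US"},
        "devices": {"laptop-4821"},
        "resources": {"email", "crm"},
    }
}

def anomaly_flags(event: LoginEvent) -> list[str]:
    """Return human-readable reasons this login looks unusual."""
    profile = BASELINE.get(event.user)
    if profile is None:
        return ["unknown user"]
    flags = []
    if event.country not in profile["countries"]:
        flags.append(f"login from unusual country: {event.country}")
    if event.device_id not in profile["devices"]:
        flags.append(f"never-before-seen device: {event.device_id}")
    if event.resource not in profile["resources"]:
        flags.append(f"access outside normal scope: {event.resource}")
    return flags

# A login from the wrong country, on a new device, touching a new system
# trips three flags: exactly the point where the account gets locked and a
# human steps in to verify.
event = LoginEvent("jsmith", "RO", "phone-9912", "payroll")
print(anomaly_flags(event))
```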

Tron: Legacy light cycle battle.
This approach—“Trust Nothing. Verify Everything.”—is becoming the new standard.
In the background, AI-driven deepfake detectors will start analyzing voice, video, and text in real time. That Zoom call? Your platform may soon flag a synthetic face. That voicemail from your spouse asking for urgent help? Your device may warn you it’s a voice clone.
You won’t even see the alert system. But it’ll be there. Quietly running. Watching for the fake in a sea of almost-real.
What You Can Do (Starting Now)
Here’s what you can do right now to protect yourself:
Be skeptical. If something feels off—even slightly—pause and verify.
Double-check through another channel. Don’t trust a voice call or video alone. Text. Call back. Confirm through other means.
Turn on two-factor authentication. Everywhere. No exceptions.
Slow down. Scammers use urgency. Don’t fall for it.
Expect the fake. The days of trusting your senses are over.
AI is changing us. Not just how we work, but how we trust.
It will reshape the way we communicate, how we identify each other, and what we believe to be real. Deepfakes and digital doubles aren’t just disrupting cybersecurity—they’re disrupting human certainty.
When anything can be faked, your only protection is caution, verification, and the willingness to question what used to feel unquestionable.
The future is synthetic. But your instincts don’t have to be.
Cybersecurity Tip of the Week
Goodbye Passwords: Microsoft Just Changed the Game (And So Should You)

It’s official: the password is dead, and Microsoft is giving it a proper burial.
In a bold move, Microsoft is rolling out a password-free future for over 1 billion users, replacing outdated logins with passkeys—a smarter, faster, and vastly more secure way to access your account. The writing has been on the wall for years, and now the company is making it clear:
Your Microsoft password can be forgotten by you or guessed by an attacker. It’s time to remove it entirely.
Why now? Because attackers aren’t waiting. Microsoft is blocking 7,000 password attacks per second—and the numbers are climbing.
A passkey is a cryptographic credential tied to your device and identity. It replaces both your password and two-factor authentication (2FA), using biometrics (like your face or fingerprint) or a device PIN to verify you. Passkeys:
Can’t be guessed or reused
Can’t be phished or intercepted
Are unique to each site or app
They’re also three times faster than typing a password, and eight times faster than password + MFA. Microsoft says 99% of users who begin the passkey process complete it successfully.
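Under the hood, a passkey is just public-key cryptography: your device holds a private key that only your biometrics or PIN can unlock, the service holds the matching public key, and signing in means signing a fresh challenge. Here is a stripped-down conceptual sketch of that challenge-and-sign flow in Python, using the cryptography library’s Ed25519 keys. It illustrates the idea, not Microsoft’s implementation or the full FIDO2/WebAuthn protocol:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment (happens once, on your device).
# The private key never leaves the device; only the public key is registered.
device_private_key = Ed25519PrivateKey.generate()
registered_public_key = device_private_key.public_key()  # stored by the service

# Sign-in (every time).
# 1. The service sends a fresh random challenge, so there is nothing reusable to phish.
challenge = os.urandom(32)

# 2. Your device unlocks the private key with Face ID, fingerprint, or PIN
#    (simulated here) and signs the challenge.
signature = device_private_key.sign(challenge)

# 3. The service verifies the signature against the stored public key.
#    verify() raises InvalidSignature if the response was forged.
registered_public_key.verify(signature, challenge)
print("Signed challenge verified. No password ever typed or transmitted.")
```

Because every challenge is random and the signature is bound to it, a captured response is useless anywhere else, which is why passkeys can’t be reused, phished, or intercepted the way passwords can.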
Microsoft’s move builds on momentum started by Apple, which implemented passkeys across iCloud and Safari. Google is inching closer with stronger 2FA defaults—and if they follow Microsoft’s lead in deleting passwords entirely, we could see a sweeping change in how we secure digital life.
This matters. Not just for IT teams, but for you—because your password isn’t just a login. It’s a target. And it’s likely the weakest link in your digital chain.
What You Should Do Right Now
If Microsoft can kill off the password, so can you. Here’s how to start:
Use biometrics or device-based authentication - Whether it’s Face ID, Touch ID, or a secure authenticator app, move away from password-only logins on your most important accounts.
Don’t keep the password as a backup - Microsoft says this is critical: if the password still works, your account is still vulnerable. Go all in.
Avoid SMS-based 2FA - It’s better than nothing, but not by much. Use app-based authentication or passkeys instead (see the sketch after this list).
Start with critical accounts - Your email, bank, and cloud storage accounts should be the first to go passwordless.
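If you’re wondering what app-based authentication actually does differently from a text message, here is a minimal sketch using the open-source pyotp library. The one-time code is computed from a shared secret stored on your phone plus the current time, so nothing crosses the phone network for an attacker to intercept or SIM-swap. The secret below is generated on the spot purely for illustration:

```python
import pyotp

# At enrollment, the service generates a secret and shows it as a QR code;
# your authenticator app stores it locally on your phone.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Your app computes the current 6-digit code from the secret plus the current time.
code = totp.now()
print(f"Current code: {code}")

# The service, holding the same secret, verifies the code. Nothing travels
# over the phone network, so there is no SMS to intercept.
print("Valid:", totp.verify(code))
```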
Passwords are convenient—but they’re outdated, vulnerable, and dangerous in today’s threat landscape. Microsoft deserves real credit for leading the charge toward a phishing-resistant, password-free future. It won’t be painless, but it’s progress. Now’s the time to act. Kill the password before it kills your data.
Cybersecurity Breach of the Week
This week’s breach should make you pause before clicking “Send.”
New data from the Anti-Phishing Working Group shows the average amount requested in a business email compromise (BEC) attack has nearly doubled, jumping to $128,980 in Q4 2024. These scams aren’t sloppy anymore. They’re sophisticated, polished, and powered by AI.
Most attackers are using Gmail—81% last quarter—impersonating executives, vendors, or partners. The emails look perfect. The tone is dead-on. And in some cases, there’s a phone call or even a Zoom meeting. But the person on the other end? A deepfake.
These aren’t “Nigerian prince” scams. They’re professionally produced heists playing out in your inbox. And with deepfakes getting better by the week, trust is becoming a liability.
It doesn’t stop at email. U.S. residents are also being hit with SMS phishing scams—texts impersonating toll operators like E-ZPass, claiming you’ll be fined or lose your license if you don’t pay now. The kit behind this smishing campaign is sold out of China, and it’s designed to spoof real websites and flood phones with realistic messages—even to people who don’t use toll roads.
These attacks are a perfect storm: urgent, familiar, and increasingly hard to spot. Which brings us back to the theme of this week’s cover story—deepfakes and the war on truth.
The old cues—familiar names, voices, even faces—can’t be trusted anymore. The only safe move is to verify everything. Use a known phone number. Confirm the request. Don’t assume.
Because in the age of synthetic deception, what feels real isn’t always true.
Tech of the Week: AI Just Made Dubbed Movies Less Cringe
If you’ve ever watched a foreign film dubbed in English and thought, “Why do their mouths not match the words?”—AI just solved that.
Premiering in U.S. theaters this May, the Swedish sci-fi flick Watch the Skies is the first feature film to use visual dubbing, a post-production technique powered by AI. The tech, called TrueSync, comes from the L.A.-based firm Flawless; it digitally remaps actors’ lip movements to match dubbed dialogue—without reshooting a single scene.
In this case, the original Swedish actors re-recorded their lines in English, and TrueSync did the rest—creating seamless English dialogue that looks and feels like it was filmed that way. No more awkward lip-flaps or distracting timing mismatches.
Flawless has used this before—most notably to remove 35 F-bombs from the thriller Fall and replace them with PG-13-friendly versions like “fricking.” But Watch the Skies is the first time we’re seeing it applied to an entire film.
It’s one more way AI is blurring the line between perception and reality—whether in security, cinema, or deepfakes. But this time, it’s enhancing the story—not distorting it.
Check out TrueSync in action below!
Appearance of the Week
In this week’s appearance, I sit down with Dr. James Robbins at the Institute of World Politics to talk about the spy case that changed my life—and nearly broke me in the process.
We dive deep into the takedown of Robert Hanssen, one of the most damaging traitors in American history. I share stories from my time undercover as an FBI operative, what it was like to play psychological chess with a man who betrayed his country for over two decades, and how that mission became the inspiration for the film Breach. We also fast-forward to the threats we’re facing today—from deepfakes to digital espionage—and how my new book, Spies, Lies, and Cybercrime, blends real-life spycraft with today’s cyber battlefield. If you want to understand how to think like a spy in the age of AI, you won’t want to miss this one.
AI Image of the Week
Prompt: Create an image of a crowd of tourists taking pictures of the Jefferson memorial in the full bloom of the cherry blossoms. Among them a hidden spy game is happening as an undercover FBI operative is tracking a foreign spy carrying a briefcase of secrets. Create the image like a scene from a noir thriller.

Like What You're Reading?
Don’t miss a newsletter! Subscribe to Spies, Lies & Cybercrime for our top espionage, cybercrime and security stories delivered right to your inbox. Always weekly, never intrusive, totally secure.
Are you protected?
Recently, nearly 3 billion records containing sensitive personal data were exposed on the dark web for criminals, fraudsters, and scammers to mine for identity fraud. Were your Social Security number and birthdate exposed? Identity threat monitoring is now a must to protect yourself. Use this affiliate link to get up to 60% off Aura’s cybersecurity, identity monitoring, and threat detection software!

Use this link to get a 30-day trial + 20% off Beehiiv!

Ready for Next Week?
What do YOU want to learn about in my next newsletter? Reply to this email or comment on the web version, and I’ll include your question in next month’s issue!
Thank you for subscribing to Spies, Lies and Cybercrime. Please comment and share the newsletter. I look forward to helping you stay safe in the digital world.
Best,
Eric
Let's make sure my emails land straight in your inbox.
Gmail users: Move this email to your primary inbox
On your phone? Hit the 3 dots at top right corner, click "Move to" then "Primary."
On desktop? Close this email, then drag and drop it into the "Primary" tab near the top left of your screen
Apple mail users: Tap on our email address at the top of this email (next to "From:" on mobile) and click “Add to VIPs”
For everyone else: follow these instructions
Partner Disclosure: Please note that some of the links in this post are affiliate links, which means if you click on them and make a purchase, I may receive a small commission at no extra cost to you. This helps support my work and allows me to continue to provide valuable content. I only recommend products that I use and love. Thank you for your support!