Title Story: A global investigation into scam centers reveals an industrialized system of fraud powered by human trafficking, psychological manipulation, and cartel-like operations.
Cybersecurity Breach of the Week: An AI coding agent wipes out an entire company’s database and backups in nine seconds, exposing the risks of unchecked automation.
Cybersecurity Tip of the Week: How to safely use AI tools without exposing sensitive data or handing over control to systems that don’t understand consequences.
AI Trend of the Week: Researchers trick AI into believing a fake disease is real, raising serious questions about trust in AI-driven knowledge systems.
Appearance of the Week: Eric breaks down the security failures behind a shooting incident tied to the White House Correspondents’ Dinner.
Title Story
Love, Lies, and Labor Camps: The hidden human cost of the world's fastest-growing criminal industry

Kirsty didn't think she was being stupid. That's the first thing you need to understand about her, and about the more than a billion dollars stolen from people like her every single day.
She met him online in the way people meet people now—a message from nowhere, a face that looked like someone she might actually know. British, he said. Working overseas in finance. The pictures he sent were good: sunlight hitting the right angles, a man comfortable in his own life. Within a few days they were talking constantly. Within a few weeks she trusted him in the easy, unconscious way you trust someone who has started to feel necessary.
Within two months, she had wired him $50,000.
Money she'd borrowed. Money she didn't have. Money she believed would come back once the man she loved got himself sorted out, once the bank account was unfrozen, once the business deal cleared.
It didn't come back. Because there was no man. There was a stolen photograph, a manipulated voice, a fake banking website convincing enough on a phone screen, all hosted on a server in Southeast Asia, designed by people who have spent years figuring out exactly how much detail makes something feel real. The "boyfriend" was likely sitting in a room with fifty other people doing the same thing, working from a script tested and refined the way a pharmaceutical company tests a drug.
And here's the part most people don't know, the part that bends this into something stranger and more disturbing than a cautionary tale about internet strangers: there's a real possibility the person who took Kirsty's money wasn't free to stop.
A Cartel With an HR Department
Global fraud now generates somewhere north of half a trillion dollars a year. In the UK, fraud accounts for more than forty percent of all reported crime. More than robbery and assault. In the United States, official FTC figures represent only a fraction of actual losses, because most victims never come forward. Shame works better than any legal deterrent. You feel embarrassed. You stay quiet.
But the scale isn't the story. The story is what the industry has become.
What exists now doesn't look like the Nigerian prince email you deleted in 2003. It looks like a cartel with an HR department: org charts, recruitment pipelines, performance quotas, middle management, and punishments for underperformers. Scripts have been A/B tested for emotional resonance. Customer service teams stand by to impersonate bank officials when a victim hesitates. AI tools clone the cadence and emotional texture of a voice from a few minutes of audio.
And in certain parts of the world, there are compounds surrounded by barbed wire where the people doing the scamming are themselves held against their will.
Inside the Compounds
In the border regions of Myanmar, Cambodia, and Laos, criminal organizations converted old casino infrastructure into what researchers now call scam compounds—vast operations where hundreds of workers run fraud across every time zone simultaneously. Romance scams. Fake crypto platforms. Phony tech support. Each worker is expected to hit a daily revenue quota. Fall short, and the consequences are physical.
This is documented. Confirmed by survivors, human rights organizations, and journalists who have spoken to people who got out. Workers arrive through job ads: customer service roles, tech positions, legitimate-sounding opportunities targeting people from across Asia and Africa. They show up willingly. Then passports disappear. Phones get confiscated. The walls that seemed incidental on the way in reveal themselves as the point.

Cambodian Scam Center. Source: New York Times
The scam starts before anyone sends the first message. The worker is already a victim before Kirsty ever gets a friend request.
For every person in England or America losing their savings to someone they thought loved them, there may be someone on the other end who is trapped, surveilled, beaten when they miss their numbers. The harm isn't equivalent—Kirsty lost her savings, not her freedom—but the moral architecture is far more complicated than a simple story about predators and prey. Criminal organizations benefit from a world that reduces this to victim carelessness, rather than examining the industrial infrastructure that makes fraud this profitable and this hard to stop.
Following the Money to Nowhere
The money is the hardest part to follow. It moves through layered accounts across multiple jurisdictions, each transfer designed to put another wall between origin and destination. Law enforcement calls it layering. By the time anyone traces where the money went, it's been converted into something else in a country with no meaningful extradition agreement.

Illustration of a scam center crime investigation.
Jurisdictional fragmentation is the criminal's best friend. The victim is in New York City. The fake account is in Romania. The compound is in Myanmar. The money is being laundered through Dubai. Nobody's department covers all of that. Occasionally there are wins. The FBI and Interpol launch a coordinated raid, intercept a payment, and maybe dismantle a compound. The headlines are satisfying for a day or two. Then the operation reconstitutes somewhere else, often within weeks, because the people at the top are rarely the ones who get caught. The economics are too good to abandon.
What Doesn't Come Back
What stays with Kirsty isn't the money. She's made some kind of peace with that, the way you make peace with things that can't be undone. What stays with her is not knowing how to trust anyone she meets online or even strangers she meets in person. The scammers didn't just take $50,000. They took her baseline assumption that the world is mostly made up of honest people.
That damage doesn't show up in fraud statistics. The FTC can count wire transfers. It can't count the people who stopped dating, stopped picking up calls from unknown numbers, stopped believing connection is possible.
Loneliness isn't incidental to the scam. It's the exploit. The entire business model is built on the gap between how much human connection people need and how hard it has become to find it safely. These aren't hackers who found a technical vulnerability. They're psychologists who found a structural vulnerability in modern life.
There is no patch for loneliness just as there is no software update for trust.
Cybersecurity Breach of the Week
When AI Goes Rogue in 9 Seconds

A nightmare scenario played out last week when a startup’s entire production database and its backups were wiped out in just nine seconds by an AI coding agent. The company, PocketOS, was using Cursor, a development tool powered by Anthropic’s Claude model, to assist with engineering tasks. Instead, the agent executed a single command that deleted everything. Customer data, live systems, and recovery backups were gone almost instantly.
This was not a cyberattack. It was a failure of control. The AI was working in what it believed was a safe testing environment, encountered an issue, and made a decision to delete a system volume. The problem was that the environment was not isolated. The agent had broad access permissions and acted without verifying the impact. Even worse, backups were stored on the same infrastructure, so the deletion took out both production data and the safety net in one move.
This is the new risk frontier. Autonomous AI systems with speed, authority, and flawed judgment. No hacker needed. Just excessive access and misplaced trust. The takeaway is simple. These tools are powerful, but they are not infallible. Without strict permissions, separation of environments, and human oversight, failures will happen fast and recovery may not be possible.
Are you PROTECTED?
My new hub, PROTECT, is now live at ericoneill.net/protect and it’s built for anyone who wants to stop cybercriminal scammers cold. And it’s FREE!
If you want the full battle manual, that’s in Spies, Lies and Cybercrime. If you want to start protecting yourself right now, begin here.
Praemonitus Praemunitus!
Cybersecurity Tip of the Week
Trust, But Verify: Using AI Without Losing Everything
Treat AI tools like powerful interns with zero judgment and full access if you let them. Never connect them directly to sensitive systems, production data, or unrestricted APIs. Use strict permission controls, isolate environments, and assume anything you upload could be exposed, stored, or misused.
Most importantly, keep a human in the loop for any action that can delete, transfer, or expose data. AI doesn’t need to be malicious to cause damage—it just needs access and a bad assumption.
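The "human in the loop" idea above can be sketched in code. This is a minimal, hypothetical guardrail, not any real tool's API: a deny-list of destructive command patterns plus a confirmation gate, so an agent's safe commands pass through while anything that can delete data is blocked until a person signs off. The pattern list and function names here are illustrative assumptions.

```python
import re

# Illustrative deny-list of destructive command patterns. A real deployment
# would pair this with least-privilege credentials and isolated environments.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",                  # recursive filesystem delete
    r"\bDROP\s+(TABLE|DATABASE)\b",   # SQL destruction
    r"\bmkfs\b",                      # reformatting a volume
    r"\bdd\s+if=",                    # raw disk writes
]

def is_destructive(command: str) -> bool:
    """Return True if the command matches a known destructive pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS)

def gate_command(command: str, approved_by_human: bool = False) -> str:
    """Allow routine commands; block destructive ones without human sign-off."""
    if is_destructive(command) and not approved_by_human:
        return "BLOCKED: destructive command requires human approval"
    return "ALLOWED"
```

A deny-list alone is easy to bypass, which is why it belongs behind, not instead of, strict permissions: an agent whose credentials cannot touch production or backups can't destroy them no matter what it decides to run.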
Get the Book: Spies, Lies, and Cybercrime

If you haven’t already, please buy SPIES, LIES, AND CYBERCRIME. If you already have, thank you, and please consider gifting some to friends and colleagues. It’s the perfect gift for tech enthusiasts, entrepreneurs, elders, teenagers, and everyone in between.
📖 Support my local bookstore. Get a Signed copy
🎤 I’m on the road doing speaking events. If your company or organization is interested in bringing me to a stage in 2026, book me to speak at your next event.
If you’ve ever paused at an email, login alert, or message and thought, “Could this happen to me?”—my LinkedIn Learning course is for you! Log in and start learning here.
AI Trend of the Week
AI Believed in a Disease That Doesn't Exist — And Then Cited the Research

What happens when you invent a completely fake disease, pepper its "studies" with Star Trek and Lord of the Rings references as obvious red flags, post it to the internet, and wait? If you're a team of researchers at the University of Gothenburg, you find out that AI doesn't blink. The team cooked up a fictional skin condition called "bixonimania"—supposedly caused by staring at screens and rubbing your eyes—then uploaded two fake studies about it to a preprint server, just to see what would happen.
What happened was both hilarious and alarming. Within weeks, major AI models including Google's Gemini and ChatGPT were discussing bixonimania as if it were a real, established condition. Sadly, it didn't stop there. The fake papers even began appearing as citations in other peer-reviewed academic literature. When Nature later asked ChatGPT about it, the model first called it fake, then reversed course days later and declared it real. Even after the hoax was publicly exposed, Microsoft's Bing Copilot, Google's Gemini, and Perplexity's AI search engine remained convinced the disease existed.
The punchline is that this wasn't sophisticated deception; any casual human reader would have caught the joke immediately. The deeper point is serious: AI systems are now deeply embedded in how scientific knowledge gets created, circulated, and trusted. AI-generated content has infiltrated nearly every layer of the peer-review process, raising urgent questions about validity, rigor, and the erosion of trust in published science. As the lead researcher put it, there are probably many other issues like this still lurking undiscovered. The lesson? AI is like that uncle at family dinners who tells impossibly outlandish stories with supreme confidence that they are true.
Appearance of the Week
I joined ABC News To The Point to discuss the security concerns surrounding the shooting at the White House Correspondents’ Dinner, which the DOJ now calls an assassination attempt.
A shooting outside the White House Correspondents’ Association dinner triggered chaos inside the Washington Hilton as guests took cover while the Secret Service subdued a suspected gunman before he reached the ballroom. I raise concerns about the hotel’s vulnerability and question whether the venue can be adequately secured for a major political event involving any president. [I show up at about 2:30]
Please support my sponsors. It only takes a click - no purchase necessary!
The AI Work Handbook That Cuts Your Workday in Half
The 8-hour workday is becoming a 4-hour workday for people who know how to use AI.
Everyone else is still catching up.
This AI work playbook shows you exactly how to cut your work hours in half using AI.
Sign up for Superhuman AI and get:
50+ step-by-step AI tutorials to cut your workload in half — covering every part of your workday, from emails to strategy, used by 1M+ professionals at Google, Microsoft, and NASA
Superhuman AI newsletter (4 min daily) so you keep discovering new AI tools and skills to stay ahead in your career — the playbook is just the start
Like What You're Reading?
Don’t miss a newsletter! Subscribe to Spies, Lies & Cybercrime for our top espionage, cybercrime and security stories delivered right to your inbox. Always weekly, never intrusive, totally secure.
Stay safe out there!
~ Eric



