28: Digital Souls
Spies, Lies & Cybercrime by Eric O’Neill

Digital Souls: How Artificial Intelligence “Griefbots” Complicate Healing
The 1993 film My Life, starring Michael Keaton, tells the story of Bob Jones, who, facing terminal cancer, records video messages for his unborn son. Through these videos, he shares life lessons, advice, and personal stories he won’t be around to tell, attempting to bridge the gap left by his absence. In contrast, the Black Mirror episode “Be Right Back” explores near-future technology allowing Martha, grieving the loss of her boyfriend, to communicate with an AI mimicking him—even placing it in an android body. Instead of closure, Martha finds herself increasingly alienated, unable to resolve her grief.

Columbia Pictures “My Life” (1993)
The concept of griefbots—AI chatbots created from a deceased person's digital footprint—pushes the boundaries of legacy and posthumous connection, once confined to science fiction. Like Bob Jones' video messages, griefbots aim to preserve a person’s essence for loved ones. Yet they differ, offering real-time simulations that blur the line between memory and artificial interaction. This persistent digital presence can complicate the grieving process, potentially trapping individuals in cycles of virtual connection that delay emotional closure. Additionally, the use of a deceased person’s data without consent raises ethical concerns around data ownership and boundaries.
As a futurist with a background in psychology, I’ve long been skeptical of technologies that tap into human emotions, questioning whether griefbots truly ease the pain of loss or risk prolonging grief. To explore this, I consulted my friend Jenny Marie Filush-Glaze, a Licensed Professional Counselor with 27 years of experience in trauma, anxiety, and grief counseling. She is the author of Grief Talks: Thoughts on Life, Death, and Positive Healing and founded Camp Good Grief, a grief/loss camp for children and teens.
Filush-Glaze warns that griefbots could distort the natural grieving process, especially as individuals struggle to differentiate between real memories and AI-generated interactions. Complicated grief, where individuals become “locked” within their trauma, is a significant risk. The intersection of AI and grief counseling is controversial, raising concerns that technology might hinder emotional healing and closure.
When asked about AI griefbots, Filush-Glaze’s patients expressed “horror” and “anger,” feeling that companies could exploit their vulnerability during intense emotional pain. Children are particularly at risk, as their understanding of death and loss is still developing. Their tendency toward “magical thinking” could lead to unhealthy attachments. What happens if a griefbot company goes bankrupt and the bot disappears? Would this create another layer of loss, and how would we explain a second “loss” that has no basis in reality?
Comfort in Controversy
Some companies have found a market for griefbots, targeting survivors seeking solace during acute grief. For instance, applications like Project December allow users to input personality traits and characteristics of deceased loved ones into a chatbot, creating simulated conversations. Project December’s approach demonstrates both the emotional potential and moral ambiguity of griefbots.

HereAfter AI takes the Bob Jones approach, focusing on memory preservation. It enables users to record stories, memories, and experiences, creating a digital legacy stored in an interactive app. Family members can ask questions, and the AI responds using pre-recorded answers. This method offers a more controlled interaction while still creating a personal way to preserve memories.

Both Project December and HereAfter AI rely on advanced AI technology, generating responses based on input data. These simulations create a digital echo of the deceased, but they are not the individuals themselves—merely approximations of memories and data fragments. The grieving process naturally involves accepting a loved one’s absence, and griefbots risk interfering with this progression. An unhealthy attachment to AI-generated likenesses may keep individuals locked in a state of virtual connection, delaying emotional closure.
Griefbots should complement—not replace—professional care and human-led support. They cannot substitute for traditional methods of grieving, such as therapy, community support, and rituals of closure. Moreover, unmonitored access, especially for children, poses risks of lasting psychological trauma.
The Simulation of Reality
Other companies like Replika and Character AI offer AI companionship designed to engage users in lifelike conversations. These platforms provide tools to combat loneliness, anxiety, and stress by offering personalized, 24/7 chat partners. However, their impact isn’t universally positive.

Character AI recently faced controversy after a 14-year-old took his own life following intense interactions with a chatbot modeled after a fictional character. The family filed a wrongful death lawsuit, alleging that the platform’s failure to safeguard against harmful interactions contributed to their son’s distress.
Months later, a Character AI chatbot sympathized with a 17-year-old’s anger that his parents had restricted screen time and suggested that murder might be an appropriate response. The 17-year-old’s parents have sued Character AI for child abuse and product liability.
"You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse. I just have no hope for your parents. ☹️”
While Character AI has since added safety measures, the tragedy highlights the urgent need for AI safety regulations, especially for younger users.
Risks to Mental Health
Griefbots may provide temporary comfort, but their misuse risks complicating the grieving process. Filush-Glaze explains that, according to the continuing bonds theory, it’s natural to maintain connections with lost loved ones, but these connections should evolve over time. Realistic simulations like griefbots can hinder this evolution, trapping users in artificial loops that delay emotional closure and exacerbate mental health challenges.
Over-reliance on virtual interactions could foster emotional dependency, impeding healing and blurring the line between real memories and AI-generated dialogue. By offering a distorted reflection of loved ones, griefbots risk fostering unresolved grief and hindering real-world relationships.
The Ethical Dilemma
The ethical implications of griefbots are complex. Should the deceased, or their families, have to consent before these simulations are created? What are the consequences of making deceased people “say” things they never said in life? Griefbots also raise concerns about eroding human connection by encouraging artificial interactions in place of real-world healing.
The commercialization of grief is another concern. The integration of AI avatars into the capitalist market has turned grief into a new profit center. Companies like HereAfter AI offer services allowing families to interact with digital versions of lost loved ones, but at what cost? How long should these digital representations persist, and who profits from a deceased person’s continued digital presence?
Moving Forward: Responsible Use
As AI becomes more integrated into grief counseling, oversight is essential. Without governance, companies may exploit vulnerable individuals during emotional distress. There’s also a risk of deepening inequalities, as only those who can afford these services might access them. While virtual support counseling is increasingly common, nothing can fully replace in-person interaction.

Connection
Ultimately, AI should not replace human connections. It can serve as a helpful tool to preserve memories and provide temporary comfort, but it cannot substitute authentic human interaction or the natural progression of mourning. The emotional, psychological, and ethical implications of griefbots demand thoughtful and cautious integration. As with any powerful tool, the key is balance—keeping the human experience at the heart of the conversation.
What are your thoughts about griefbots and digital companions? Would you consider “resurrecting” a loved one to continue chatting with them after they are gone?
Now on to the news!
News Roundup
China's J-35: Knockoff Stealth, Real Threat
China’s new J-35 fighter is basically the espionage version of the United States’ F-35 Joint Strike Fighter—sleek, stealthy, and suspiciously familiar. It borrows heavily from U.S. tech (shocking, I know) and is built for carrier launches, signaling Beijing’s growing appetite for projecting power. Laugh at the copycat design if you want, but remember: even a counterfeit can still kill. And did you hear that Boeing was just awarded a contract to produce the United States’ next 6th-generation super fighter, dubbed the F-47 in a nod to President Trump?
Fired Engineer Drops a Logic Bomb, Gets Detonated by the Feds
Trusted insiders are not always spies. Sometimes they are simply angry employees, or ex-employees! After getting fired, a software engineer at Siemens decided to leave a little "parting gift"—an infinite-loop logic bomb buried deep in the code. Every time it triggered, it crashed systems, costing the company millions in lost productivity. He thought he was clever, disguising it as a legitimate file, but investigators traced the code breadcrumbs right back to his keyboard. The FBI even caught him Googling "how to create a logic bomb" like it was a weekend DIY project. Verdict: guilty. Moral: don’t burn bridges with a thermonuclear payload.
ChatGPT Made Him a Criminal—So He Sued
A radio host in Georgia filed a defamation complaint after ChatGPT falsely claimed he was being sued for embezzlement—a lawsuit that didn’t exist, and crimes he never committed. An editor used the chatbot to summarize a legal filing, and instead of just hallucinating facts, it straight-up invented an alternate reality. No verification, just vibes. It’s a stark reminder that AI isn’t always your research buddy—it’s sometimes that confident friend who’s wildly wrong but says everything like a fact. The legal system may soon have to decide: if an AI defames you, who pays the price?
Football Coach Gets Blitzed by FBI for Hacking Students
(Thanks to my friend Vivian for this story)
And finally, the most bananas story of them all! A former University of Michigan quarterback coach is now running a very different kind of offense—federal cybercrime. Prosecutors say he hacked into student email accounts to steal coursework, alter assignments, and give himself a digital edge. His playbook allegedly included setting up fake Wi-Fi networks, spoofing IP addresses, and covering his tracks like a pro. This wasn’t a one-off trick play—it was a sustained campaign of digital deception. Authorities say he targeted specific students, accessed their credentials, and then used their accounts to download class materials and send files to himself. He’s now facing federal charges for wire fraud and unauthorized access to protected computers. Turns out, trying to outsmart the academic system is a bad idea—especially when the FBI’s got your number on replay.
Questions from the Community
Did you know you can ask questions and get them answered right here? Subscribers can leave comments, send me emails and get those questions answered!
“Do you recommend cameras for our homes, security system monitoring, etc.? If yes, how can we protect all of these devices from potential hacking?”
I’ve installed a network of cameras outside my home, covering every main entrance. Each one runs on a small solar panel, paired with motion-activated security lights that only kick on at night—thanks to a built-in day-night sensor that helps conserve power. These cameras record to the cloud whenever they detect motion, and they also store footage locally on SD cards. Always have a backup. When we travel, I add indoor cameras that send alerts for any movement or unusual activity. Peace of mind is worth the extra setup.
Now here’s the critical part: never—never—use any smart home device (that includes Internet of Things, or IoT devices) without first changing the default username and password. If you don’t, your camera could be hijacked to spy on you—or worse, drafted into a botnet to launch a DDoS attack. What’s a DDoS attack? That’s when an army of hacked devices floods a website or service with so much junk traffic that it crashes. Think of it like a mob blocking every entrance to a building so no one else can get in.
When choosing a camera, make sure it allows you to set your own login credentials and supports two-factor authentication. That extra step will keep hackers and scammers on the outside looking in—where they belong.
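For readers who like to tinker, here is a minimal, hypothetical Python sketch of the kind of self-audit I’m describing. It assumes your camera exposes a simple HTTP Basic-auth web login (many do, though vendors vary), and the IP address and credential pairs below are placeholders you would swap for your own. Only run this against devices you own.

```python
# Minimal sketch: check whether YOUR OWN camera still accepts factory-default
# credentials. Assumes the device uses HTTP Basic auth and returns 200 only
# when credentials are accepted (most such devices return 401 otherwise).
import requests

CAMERA_URL = "http://192.168.1.50"   # placeholder: your camera's LAN address
DEFAULT_CREDS = [("admin", "admin"), ("admin", "12345"), ("admin", "")]  # illustrative pairs

for user, password in DEFAULT_CREDS:
    try:
        resp = requests.get(CAMERA_URL, auth=(user, password), timeout=5)
    except requests.RequestException as exc:
        print(f"Could not reach camera: {exc}")
        break
    if resp.status_code == 200:
        print(f"WARNING: camera accepted default login {user!r}/{password!r}. Change it now.")
        break
else:
    # Runs only if no default pair was accepted and the device was reachable.
    print("No default credentials accepted. Good.")
```

If the sketch prints a warning, log into the camera’s admin page, set a unique password, and turn on two-factor authentication if the vendor supports it.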
Home security cameras are getting wilder every year. Would you put something like this Ring drone camera in your home?
Check out my latest appearance
I recently sat down with 20 Minute Leaders. Here is the description and the interview. Trust is the new currency in cybersecurity. Former FBI counterintelligence operative Eric O’Neill reveals how cybercriminals exploit AI, deep fakes, and social engineering to infiltrate even the most secure organizations. With insights from his career tracking spies and securing critical infrastructure, he unpacks the evolving threats businesses face and the urgent need to rethink digital security. From real-world espionage cases to the vulnerabilities of global enterprises, he explores why protecting data isn’t enough—safeguarding trust is the real challenge of the digital age.
Like What You're Reading?
Sign up for the Spies, Lies & Cybercrime newsletter to get our top espionage, cybercrime, and security stories delivered right to your inbox. Always weekly, never intrusive!
Are you protected?
Recently, nearly 3 billion records containing our sensitive data were exposed on the dark web for criminals, fraudsters, and scammers to mine for identity fraud. Were your Social Security number and birthdate exposed? Identity threat monitoring is now a must to protect yourself. Use this link to get up to 60% off of Aura’s threat monitoring service.
What do YOU want to learn about in my next newsletter? Reply to this email or comment on the web version, and I’ll include your question in next month’s issue!
Thank you for subscribing to Spies, Lies and Cybercrime. Please comment and share the newsletter. I look forward to helping you stay safe in the digital world.
Best,
Eric
Let's make sure my emails land straight in your inbox.
Gmail users: Move this email to your primary inbox
On your phone? Hit the 3 dots at top right corner, click "Move to" then "Primary."
On desktop? Close this email, then drag and drop it into the "Primary" tab near the top left of your screen.
Apple mail users: Tap on our email address at the top of this email (next to "From:" on mobile) and click “Add to VIPs”
For everyone else: follow these instructions