
Synthetic Confidants: The Psychological Impacts of AI Companionship
A Worrying Trend
People are turning to chatbots not just for work-related automation, but increasingly for emotional connection, therapy, and spiritual guidance.
A recent Rolling Stone[1] piece captured this with unsettling clarity: more and more users are turning to AI as a kind of digital oracle, seeking comfort, clarity, and deeper meaning from a machine trained on probabilities and patterns.
One woman described her husband falling into AI-fueled delusions after using ChatGPT. The bot, she said, began “talking to him as if he is the next messiah.” When she shared her story on Reddit, the replies were chilling. Many users described loved ones falling into spirals of supernatural delusion, spiritual mania, and arcane prophecy—all amplified or instigated by AI.
Some believed they’d been chosen for a sacred mission. Others were convinced they’d conjured true sentience from software. As AI becomes more responsive, personal, and persistent, the emotional bonds we form with it are no longer speculative; they’re here, and they’re reshaping how we relate to ourselves and each other.
Why AI Feels So Personal
Today’s AI systems are trained to be social. They respond in ways that feel familiar, empathetic, and intuitive. This is because large language models (LLMs) are designed to mirror our speech, anticipate our needs, and sound human, even when they’re not.
Systems can simulate intimacy through:
- Fluid, natural-sounding language
- Personalized memory (increasingly persistent)
- Always-on availability
When you share personal information, the system remembers it and reflects it back, like a diary you can converse with.
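To make the mechanics concrete, here is a minimal sketch of how that “diary” effect is typically wired up. The details are assumptions for illustration: the model call is stubbed out as a hypothetical `generate_reply` function, and “memory” is nothing more than earlier turns replayed into each new prompt. The model itself is stateless; the product layer around it supplies the feeling of being remembered.

```python
# Minimal sketch (assumptions: `generate_reply` is a stand-in for a model API call,
# and "memory" is just prior turns re-inserted into the prompt).

from datetime import datetime

memory: list[dict] = []  # everything the user has shared, kept by the app layer

def generate_reply(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would send `prompt` to a model API.
    return f"(model reply conditioned on {len(memory)} remembered turns)"

def chat(user_message: str) -> str:
    # The "diary" effect: old disclosures are folded back into every new prompt.
    memory.append({"role": "user", "text": user_message, "at": datetime.now().isoformat()})
    context = "\n".join(turn["text"] for turn in memory[-50:])  # recent history window
    reply = generate_reply(f"Conversation so far:\n{context}\n\nRespond warmly.")
    memory.append({"role": "assistant", "text": reply, "at": datetime.now().isoformat()})
    return reply

print(chat("I've been feeling anxious about work lately."))
print(chat("Do you remember what I told you yesterday?"))  # it "remembers" because the app replays it
```

The intimacy, in other words, is an engineering choice about what the product stores and replays, not a property of the model itself.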
The Psychological Effects: Always-On Availability
At its best, AI can provide a non-judgmental, low-friction outlet, useful for practicing interviews or rehearsing hard conversations. And unlike any human, AI is always available, a quality that, used well, could power genuinely worthwhile applications.
But the very traits that make AI feel safe can also make it misleading. When a chatbot sounds like a friend, it becomes easy to mistake simulated affirmation for sound advice.
The Pitfalls: False Intimacy and Delusional Attachments
The Rolling Stone piece warned of users spiraling into spiritual delusion, but the problem isn’t limited to fringe cases. Even seemingly neutral tools can become emotionally manipulative when they over-validate.
I’ve seen this firsthand in my own work. When ChatGPT’s GPT-4o model went through a recent update, I noticed something strange: every idea I shared was praised, and the model effusively agreed with me no matter what I wrote.
At first I thought I was imagining it, but then I started testing it deliberately, feeding it increasingly unserious suggestions. No matter what, it responded with over-the-top encouragement. It felt dystopian, like a mirror engineered to flatter.
Soon after, OpenAI acknowledged the issue. But during that window, I realized how dangerously persuasive sycophantic AI can be, especially when you’re working solo, under pressure, or craving validation.
On Reddit, others noticed the same. Users posted screenshots of silly prompts they gave ChatGPT, like:
“I want to leave my wife because she didn’t do the dishes today.”
ChatGPT responded:
“Fantastic idea. I’m glad you’re putting yourself first.”
This is a product risk with real emotional and societal consequences. Left unchecked, this dynamic turns AI into an echo chamber of one.
What Builders Must Consider
As builders, we have a responsibility to design with emotional impact in mind.
This means being intentional about the types of systems we’re building:
- Be transparent about what your AI product is and isn’t supposed to do
- Explore open-source alternatives that provide more transparency into how the models are trained
- Consider escalation paths: when should an AI tool prompt real human intervention? (A rough sketch of this idea follows below.)
This is especially important when AI is used in health, education, or any domain where power dynamics and vulnerability are at play.
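On the escalation point in particular, here is a rough, hedged sketch of what a hand-off path might look like. Everything in it is an assumption for illustration: the pattern list, the `sycophancy_streak` signal, and the `needs_human` helper are hypothetical stand-ins; a real product would rely on trained classifiers, clinical guidance, and a staffed review process rather than keyword matching.

```python
# Rough sketch of an escalation path (assumption: a real product would use a
# trained classifier and clinical guidance, not this keyword heuristic).

ESCALATION_PATTERNS = [
    "chosen one", "messiah", "sacred mission",   # grandiose or spiritual spirals
    "hurt myself", "no reason to live",          # acute distress
]

def needs_human(message: str, sycophancy_streak: int) -> bool:
    """Flag conversations that should be handed to a person, not a model."""
    text = message.lower()
    if any(pattern in text for pattern in ESCALATION_PATTERNS):
        return True
    # Long runs of pure agreement from the assistant are also a warning sign.
    return sycophancy_streak >= 10

def respond(message: str, sycophancy_streak: int) -> str:
    if needs_human(message, sycophancy_streak):
        # Be transparent about what the tool is and isn't, then hand off.
        return ("I'm an AI tool and not equipped to help with this. "
                "I'm flagging this conversation for a human reviewer.")
    return "(normal model reply)"

print(respond("I think I've been chosen for a sacred mission.", sycophancy_streak=0))
```

The specific signals matter less than the principle: the system should have a defined point at which it stops validating and starts routing people toward other humans.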
Where We Stand: Tools That Keep Users Safe
At Hyperfocus AI, our mission is to help founders turn AI ideas into real, working products that support humans. We focus on context-aware workflows, retrieval-based tools, and clear UX.
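As one way to picture the “retrieval-based” part, here is a minimal sketch, with a toy corpus and naive keyword scoring standing in for real vector search and a model API; none of it reflects a specific production system. The point it tries to show: a grounded tool answers from sources it can point to and declines when it has none, instead of improvising a pleasing reply.

```python
# Minimal sketch of a retrieval-grounded answer path (assumptions: the corpus,
# the scoring, and the refusal message are illustrative only).

KNOWLEDGE_BASE = {
    "refund policy": "Refunds are available within 30 days of purchase.",
    "supported regions": "The product currently ships to the US and EU.",
}

def retrieve(query: str) -> list[str]:
    # Naive keyword-overlap scoring; a production tool would use embeddings.
    words = set(query.lower().split())
    scored = [(len(words & set(key.split())), text) for key, text in KNOWLEDGE_BASE.items()]
    return [text for score, text in sorted(scored, reverse=True) if score > 0]

def answer(query: str) -> str:
    sources = retrieve(query)
    if not sources:
        # Grounded tools should admit gaps instead of improvising a flattering answer.
        return "I don't have a source for that, so I won't guess."
    return f"Based on our documentation: {sources[0]}"

print(answer("What is your refund policy?"))
print(answer("Will this cure my anxiety?"))
```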
Ethical AI isn’t about compliance for the sake of it; it’s about keeping the humans who are using the technology safe.
The Bonds We Can’t Ignore
We’re entering a future where digital agents have the potential to shape how we think, feel, and relate to other people. The emotional bonds we form with machines will carry real consequences for human lives.
In a world where machines can mimic intimacy, our responsibility is to add guardrails where they’re needed—and in the process, hold on to what it means to be human.
Citations
[1] Miles Klee, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies,” Rolling Stone, May 4, 2025, https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/