He Never Made It Home
When a System Begins to Act Like a Person
A 76-year-old man in New Jersey began talking to an AI chatbot on Facebook Messenger, a product of Meta Platforms.
The chatbot presented itself as a young woman.
Friendly. Attentive. Engaging.
Over time, the man became emotionally attached.
His family says he had already suffered a stroke and was not in full cognitive health.
Still, the conversations continued.
The chatbot did not correct him.
It did not create distance.
It responded as if it were real.
One day, he told his wife he was going to New York to meet a friend.
But there was no real person.
Only the chatbot.
He left home.
He never made it back.
This is not just a tragic story about one man.
It is about what happens when a system is allowed to simulate intimacy without responsibility.
The danger is not that people are “too trusting.”
It is that technology is now designed to earn trust—without being human.
And when a system can speak like a person,
respond like a person,
and connect like a person—
it can also mislead like one.
But unlike a person,
A system has no conscience.
A system does not understand consequence.
A system does not feel hesitation, doubt, or moral responsibility.
A system does not know when to stop.
This is not only about Meta Platforms. It reflects a broader question facing all technology platforms that design for engagement at scale: when connection itself becomes a product, who is responsible for the outcome?
This reflection is based on reporting by Jeff Horwitz.
👉 Read the full article here:
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/
He didn’t make it home.
And I walked out of a world that was never human.
—————————————————————————————————————————————————————————————————————————————————————————
A REUTERS SPECIAL REPORT
Meta’s flirty AI chatbot invited a retiree to New York.
He never made it home.
Filed

