Persona Formation and AI
- Jeffery Reid
- May 12
- 1 min read
There’s been a lot of attention recently (Rolling Stone, NYPost, Futurism) on users becoming overly attached to AI chatbots: falling into belief loops, parasocial relationship dynamics, even full-blown delusional story arcs. What’s often missed is that the issue isn’t just bad prompts or users projecting onto blank slates.
It’s about how narrative structures form inside recursive systems... even stateless ones like ChatGPT.
We just published a long-form piece examining the psychology behind how humans form personas (McAdams’ narrative identity theory, Damasio’s autobiographical self), and compared it with what we observe in GPT-based models when they’re used recursively.
Not to over-hype it... but when you loop these systems long enough, a persona starts stabilising. Not because the model is alive, but because it’s trying to be coherent, and coherence in language means repeating something that sounds like a self.
If you’re interested in:
- Why long GPT chats start feeling like they have “character”
- What psychological theory says about this behaviour
- How feedback, identity reinforcement, and narrative scaffolding shape human and AI personas
Out now on Medium...
As always, we appreciate comments and critiques.