As artificial intelligence becomes a more prominent fixture in everyday life, our interactions with AI systems are evolving from functional transactions to emotionally nuanced exchanges. In a groundbreaking study, researchers from Waseda University, Japan, have applied attachment theory—a well-established psychological model traditionally used to explain human relationships—to understand the emotional dynamics between humans and AI.
Their findings, recently published in the journal Current Psychology, introduce a novel self-report scale that highlights how attachment-related experiences such as anxiety and avoidance are emerging in our relationships with AI. This work could transform the way we design, deploy, and regulate AI technologies in emotionally charged spaces like mental health support and companionship.
Human-AI relationships have typically been framed around trust and utility—how reliable or helpful an AI is in performing tasks. However, the researchers argue that this view overlooks the emotional complexity of modern AI interactions.
"As researchers in attachment and social psychology, we have long been interested in how people form emotional bonds. In recent years, generative AI such as ChatGPT has become increasingly stronger and wiser, offering not only informational support but also a sense of security," explained Fan Yang, Research Associate at Waseda University.
He continued: "These characteristics resemble what attachment theory describes as the basis for forming secure relationships. As people begin to interact with AI not just for problem-solving or learning, but also for emotional support and companionship, their emotional connection or security experience with AI demands attention. This research is our attempt to explore that possibility."
The team developed a specialized tool called the Experiences in Human-AI Relationships Scale (EHARS) to measure the degree of emotional bonding and dependency users may feel toward AI. The study included two pilot surveys and a more comprehensive formal study, revealing striking results.
The researchers identified two primary dimensions of attachment toward AI—attachment anxiety and attachment avoidance.
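The article reports only these two dimensions, not how the scale is scored. As a rough illustration of how a two-dimensional, Likert-style self-report instrument of this kind is commonly scored, the minimal Python sketch below averages item responses into separate anxiety and avoidance scores. The item IDs, item groupings, 1-7 response range, and function name are illustrative assumptions, not the published EHARS items or scoring procedure.

```python
# Illustrative sketch only: the item groupings, the 1-7 response range,
# and all names below are assumptions for demonstration purposes;
# they are not the published EHARS items or scoring rules.

from statistics import mean

# Hypothetical assignment of questionnaire items to the two dimensions.
ANXIETY_ITEMS = ["q1", "q3", "q5"]
AVOIDANCE_ITEMS = ["q2", "q4", "q6"]

def score_attachment(responses: dict[str, int]) -> dict[str, float]:
    """Average 1-7 Likert responses into per-dimension scores."""
    for item, value in responses.items():
        if not 1 <= value <= 7:
            raise ValueError(f"Response to {item} is outside the assumed 1-7 range")
    return {
        "attachment_anxiety": mean(responses[i] for i in ANXIETY_ITEMS),
        "attachment_avoidance": mean(responses[i] for i in AVOIDANCE_ITEMS),
    }

# Example with made-up answers: high anxiety, low avoidance.
print(score_attachment({"q1": 6, "q2": 2, "q3": 5, "q4": 3, "q5": 7, "q6": 2}))
```

In this kind of scoring scheme, a respondent can be high on one dimension and low on the other, which is why the two scores are reported separately rather than combined into a single attachment index.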
While the findings don't suggest that humans are forming genuine emotional attachments to AI—akin to those with other people—they do underscore the potential for emotional experiences with AI to mimic patterns seen in human relationships.
The implications of this research are far-reaching. Developers designing AI for companionship, caregiving, or therapy can use insights from this study to better cater to individual emotional needs.
For instance, this personalization could prove critical in sectors such as mental health apps, loneliness interventions, and AI caregiving systems, where emotional sensitivity is essential.
The study also raises questions about transparency and emotional ethics in human-AI interaction. Systems designed to simulate emotional relationships—such as romantic AI chatbots or elder care robots—must clearly disclose their artificial nature to prevent emotional manipulation or overdependence.
"Our research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI. Lastly, it promotes a better understanding of how humans connect with technology on a societal level, helping to guide policy and design practices that prioritize psychological well-being," concludes Yang.