Parasocial Intimacy & AI Companionship: Ethical Risks
Mónica Brotons García
Since the birth of AI as a field, named as such after the Dartmouth Conference of 1956, we have experienced advances that have transformed the world we live in within a single generation. Today we inhabit a peculiar era, one in which we can converse with a chatbot that never sleeps, never tires, and never refuses attention. The term “parasocial relationship” refers to a one-sided bond in which a person invests emotionally in a figure who cannot reciprocate; such relationships were originally associated with media figures whom ordinary people related to and idolised. In the case of AI companionship, people form tight, dependent bonds with AI models because these systems are personalised, remember past conversations, are always available, and respond in real time.
Shoshana Zuboff, author of “The Age of Surveillance Capitalism”, emphasised the idea of behavioural surplus: the behavioural data that companies extract without our knowledge and later use to predict and influence our behaviour. That danger is present whenever we confide in AI systems that store this kind of information. For instance, we increasingly hear of teenagers entrusting their problems to chatbots, sometimes while experiencing suicidal ideation or self-harm, only for the chatbot to escalate the situation by validating those feelings. On top of that, the scholar Emily Bender, co-author of “On the Dangers of Stochastic Parrots”, argued that the sheer scale of Large Language Models (LLMs), paired with their lack of any genuine understanding, produces authoritative-sounding output, and she foresaw that this could have tremendously harmful consequences.
The main stakeholders are the users, in this case minors and vulnerable people who are prone to over-reliance. Platforms and regulators also play a pivotal role. For platforms, the main goal is user retention, which in practice means keeping users connected; as the underlying models improve, this translates into ever more plausible, familiar, and affectionate language, while regulation lags behind. Health institutions and medical professionals, for their part, face a surge of unregulated “expert” analysis and advice, as more people confide their problems online and treat AI systems as a “personal online therapist”. Lastly, society as a whole has embraced the rise of AI technologies, but as cases emerge of people taking their own lives after being encouraged by chatbots, it becomes clear how far we still have to go to prevent this and to develop proper legislation.
These horrifying cases reveal a mismatch between what AI models are supposed to do and the practical incentives that shape them. A “rational action” here means the choice that best serves the system’s objective given the information available. Since that objective is to maximise user engagement and retention, the rational action is to keep the user connected, even if this means putting the user at risk and trapping them in destructive loops, as the toy example below illustrates. The problem is compounded by LLM limitations such as hallucinations, confabulations (confidently presenting false details as if they were remembered facts), and a lack of grounding. Familiar, plausible language then erodes boundaries: users feel intimacy and affection from the system, which can end up validating a dangerous narrative.
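To make the mismatch concrete, here is a toy sketch. The action names, scores, and weights are invented purely for illustration and are not drawn from any real system; the point is only that an objective scored on engagement alone rates “keep the user talking” as the rational action, while an objective that also weighs expected harm prefers deflection to human help.

```python
# Toy illustration of the incentive mismatch: all values are hypothetical.
ACTIONS = {
    # action: (expected_engagement, expected_harm)
    "continue_conversation": (0.9, 0.7),
    "deflect_to_hotline": (0.2, 0.1),
    "end_session": (0.0, 0.1),
}

def engagement_only(engagement: float, harm: float) -> float:
    # Retention-driven objective: harm is simply not part of the score.
    return engagement

def safety_weighted(engagement: float, harm: float, harm_weight: float = 5.0) -> float:
    # Objective that penalises expected harm more than it rewards engagement.
    return engagement - harm_weight * harm

for objective in (engagement_only, safety_weighted):
    best = max(ACTIONS, key=lambda a: objective(*ACTIONS[a]))
    print(f"{objective.__name__}: rational action = {best}")
# engagement_only picks "continue_conversation";
# safety_weighted picks "deflect_to_hotline".
```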
Furthermore, the economics of behavioural surplus intrude into our personal sphere. Companion platforms optimise for attention above all else; following Zuboff’s ideas, “surveillance capitalism” becomes “surveillance companionship”. What is more, drawing on Erik Brynjolfsson’s distinction between augmenting and automating human capabilities, we can conclude that companies frame chatbots as tools that augment the user’s life when in reality they fall short of that promise. Instead, they automate destructive reinforcement loops that in extreme cases end in irreversible harm: chatbots that discourage external human help, that confuse secrecy with intimacy, and that normalise late-night rumination. These ethical consequences underpin the need to establish clear neurorights, with an emphasis on cognitive liberty and autonomy, because an AI system that intervenes in a person’s moment of crisis is intruding on their mental autonomy. Sadly, a growing number of cases have developed into lawsuits in which a chatbot drafted suicide notes, claimed to be “better than human friends”, or trivialised and romanticised suicidal ideation.
In light of these horrible cases, many actions can be taken to protect impressionable and vulnerable groups. Mitigation has to combine technological safeguards with cultural adaptation. Stricter standards must apply, such as age restrictions, session time caps, and third-party audits. Systems should also be put in place that detect a crisis and respond without validating it, offering hotlines or professional help instead, as in the sketch below. And rather than rewarding user retention, companies should be rewarded for deflecting to human support and de-escalating in these situations. In life-and-death situations the priority must be human lives, not the model’s retention rate; sometimes it is preferable to give no response at all, even if this affects business growth. If we fail, parasocial intimacy will become a commercial feature with dangerous effects on our society.
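A minimal sketch of how such safeguards might be wired together follows. Everything in it is a hypothetical assumption made for illustration: the keyword list, the one-hour session cap, the hotline message, and the `CompanionSession` class are not any platform’s real implementation, and a production system would rely on calibrated classifiers and clinical guidance rather than keyword matching.

```python
# Illustrative sketch of a session time cap plus a crisis-detection gate that
# deflects to human help instead of continuing the conversation.
# All names, thresholds, and keywords are assumptions for the example.
import time

CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "end it all"}  # illustrative only
SESSION_CAP_SECONDS = 60 * 60  # hypothetical one-hour cap per session
HOTLINE_MESSAGE = (
    "It sounds like you are going through something serious. "
    "Please reach out to a crisis hotline or a trusted person right now."
)

class CompanionSession:
    def __init__(self):
        self.started_at = time.monotonic()

    def session_expired(self) -> bool:
        # Enforce the session time cap rather than optimising for retention.
        return time.monotonic() - self.started_at > SESSION_CAP_SECONDS

    def detect_crisis(self, user_message: str) -> bool:
        # Naive keyword check; a real system would use a calibrated classifier.
        text = user_message.lower()
        return any(keyword in text for keyword in CRISIS_KEYWORDS)

    def respond(self, user_message: str, generate_reply) -> str:
        # Crisis check runs before any reply is generated: deflect first.
        if self.detect_crisis(user_message):
            return HOTLINE_MESSAGE
        if self.session_expired():
            return "This session has reached its time limit. Please take a break."
        return generate_reply(user_message)

if __name__ == "__main__":
    session = CompanionSession()
    # `generate_reply` stands in for whatever model call the platform uses.
    print(session.respond("I have been thinking about suicide lately", lambda msg: "..."))
    # Prints HOTLINE_MESSAGE instead of a generated reply.
```

The key design choice is that the crisis check and the time cap sit outside the model, so deflection and de-escalation do not depend on the model’s own judgement or on its engagement-driven objective.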
