Highlights

  • Chatbots lack the ability to comprehend complex or emotionally sensitive issues.
  • There are genuine and significant mental-health ramifications for those who attempt to rely on chatbots as a therapy substitute.
  • If you are lonely, depressed, and highly impressionable, AI can have great power over decision-making, self-control, and emotions.

It’s nearly impossible to open a newspaper or turn on the television without hearing some form of commentary about the benefits and dangers of the rapid progress in artificial intelligence technology. This technology is already reshaping a range of job sectors, creating opportunities as well as concerns. The most obvious example is customer service, where AI chatbots are increasingly replacing human agents because of their perceived efficiency and cost-effectiveness. Though these chatbots handle routine queries efficiently, they lack the ability to comprehend complex or emotionally sensitive issues, which often leaves customers frustrated and can damage a brand as a whole. As the adoption of AI continues to spread, we will sadly see further job displacement, leaving many skilled workers unemployed and worsening economic inequality.

As a psychoanalyst, I find the most pronounced and frightening use of AI to be as a substitute for psychotherapy. There are genuine and significant mental-health ramifications for those who attempt to rely on chatbots as a therapy substitute. Earlier this year, for example, the National Eating Disorders Association (NEDA) took down the AI chatbot it had used to replace human counselors on its helpline after the bot gave users harmful information, including dieting tips offered to people with eating disorders. NEDA recognized the risk the chatbot posed to users’ mental health and took action.

Take, for instance, companies such as Wysa and Woebot. Wysa offers a virtual chatbot, still in the experimental stage, described as a “personal mental health ally that helps you get back to feeling like yourself.” Woebot’s AI companion, meanwhile, is already online and asserts it is “always there to help you work through it.” In a world where 1 in 5 adults experience a mental health condition in a given year, and 1 in 5 children experience depression, this seems like an appealing scenario. Who needs a therapist who is expensive and hard to find when you can get an AI to listen to your pain 24/7? Applications such as Wysa and Woebot can serve valuable functions, much like a mindfulness app, by offering simple, personalized behavioral and self-soothing exercises suited to a user’s momentary distress. But they are no replacement for human contact.

In an attempt to understand Woebot better, I went online to see the benefits and risks for myself. What I discovered was indeed concerning. If a user says “I am feeling sad,” Woebot cannot adequately assess the depth of that sadness or how it affects the user’s everyday life. It cannot identify the social and physical cues and behavioral patterns that signal a rise or fall in a person’s ability to function with depression, such as not having showered for days, slouching, or having trouble making eye contact. Essentially, AI cannot perceive the nuance of the therapeutic assessment process. Put simply, Woebot does not have the right-brain function of a psychotherapist, who can read social cues and comprehend emotions by attending to body language, tone of voice, or tears. Nor can it understand the deeply personal complexity of mental illness that only a genuine therapeutic relationship can reach: a person’s history of victories and failures, and of relationships with friends, bosses, and family members. The risk is that as the technology becomes more advanced and interactive, products like these will be marketed and promoted as viable replacements for psychotherapy, and that prospect is a legitimate cause for concern.

As a therapist myself, I should acknowledge that I may hold personal biases regarding AI. But through my training and expertise in psychoanalysis, I have seen that a deeply personalized and genuine relationship between a psychotherapist and a patient is a necessary ingredient of effective psychotherapy, and it is not something a chatbot can provide. Many individuals come to psychotherapy because of relational difficulties, and those difficulties can only be worked through in a caring and empathetic relationship with another human. Forming an illusory attachment to a non-human virtual therapist, a synthetically compassionate interface, is not a pathway toward mental health and can even set patients back because of the superficiality of the AI’s responses. Moreover, attachments to AI can and will backfire in moments when users are reminded that this fantasy of contact is with a machine and not a human being, such as when the AI ‘glitches’ or its personality is updated. That realization can cause deep distress and confusion for a ‘patient’ of an AI interface.

Another specific way these chatbots can backfire is by offering guidance that is poorly aligned with a patient’s needs. Tessa, the chatbot NEDA discontinued, gave users scripted advice and even offered weight-loss tips that were triggering to people with eating disorders. The ways in which I influence my patients are filtered through my emotionally intelligent right brain, which perceives nuance, reads in-the-moment social cues, and helps me guide my interactions with patients respectfully and without doing harm. As a therapist, I have a great deal of influence over my patients, and even though an AI has no malevolent intent, its lack of genuine human empathy can lead it to misguide users, offer factually incorrect advice, or give advice that, stripped of nuance, is easily misunderstood.

Mieke De Ketelaere is a professor at Vlerick Business School and one of Belgium’s experts on the ethical, legal, and sustainable aspects of AI. In an open letter, she warns that “as soon as people get the feeling that they interact with a subjective entity, they build a bond with this ‘interlocutor’—even unconsciously—that exposes them to this risk and can undermine their autonomy.” Patients suffering from depression and anxiety are particularly vulnerable to being manipulated by these applications, especially “those without a strong social network, or those who are lonely or depressed—precisely the category which, according to the creators of the chatbots, can get the most 'use' from such systems.” That is, if you are lonely, hungry for contact, possibly depressed, and highly impressionable, AI can have great power over decision-making, self-control, and emotions.

AI may replace many professions and roles, but the role of health care professionals who work with something as nuanced as mental illness cannot and should not be filled by a chatbot. Attempting to do so is dangerous and may have serious ethical and mental-health ramifications now and in the future.

Erica Komisar, LCSW is a psychoanalyst and author of Being There: Why Prioritizing Motherhood in the First Three Years Matters and Chicken Little The Sky Isn’t Falling: Raising Resilient Adolescents in the New Age of Anxiety.