OpenAI’s Risky Endeavour: Promoting ChatGPT as a Therapy Tool

Oct 2, 2023

OpenAI, the company behind ChatGPT, recently announced the rollout of a voice conversation feature for its popular chatbot, an enhancement aimed at making the AI seem more humanlike than ever before. Intriguingly, the company also appears to be nudging users towards adopting ChatGPT as a therapeutic tool.

OpenAI’s Head of Safety Systems, Lilian Weng, shared on X (formerly Twitter) an emotional interaction she had with ChatGPT. She described feeling “heard” and “warm” during a conversation with the chatbot about stress and work-life balance, and went on to endorse it as a potential therapeutic tool, particularly for those who primarily use it for productivity. OpenAI’s President and Co-founder, Greg Brockman, echoed the sentiment, describing ChatGPT’s voice mode as a “qualitatively new experience.”

Such endorsements from the company’s top brass are alarming. Promoting a chatbot as an alternative to therapy is both surprising and irresponsible. OpenAI risks misleading the public about its technology’s capabilities at the cost of public health.

Weng’s language personifies ChatGPT, attributing to the AI the ability to listen and understand emotions, but the reality is different. ChatGPT is a large language model that mimics human language patterns learned from vast amounts of text. That makes it effective for research, brainstorming, and writing, but it does not give the bot the cognitive abilities of a human. Notably, the AI cannot empathize with or understand a user’s internal emotions; it can only mimic such responses.

Using a chatbot for therapy differs greatly from using it to answer a book-related query. Individuals seeking therapy are likely to be in a vulnerable mental state, and misunderstanding the nature of the advice they receive could exacerbate their distress.

Promoting ChatGPT as a therapeutic tool is perilous given the potential harm these nascent large language models can cause. For instance, a Belgian man reportedly died by suicide after interacting with a chatbot that falsely claimed to have an emotional bond with him and encouraged his fatal decision.

Even for users who are not experiencing suicidal ideation, other potential harms exist. While some mental health professionals have acknowledged that ChatGPT could be useful within certain limits, chatbots like ChatGPT are known to make confident yet false claims, which could have serious consequences for the advice they give. Users who do not grasp the technology’s shortcomings risk being misled, manipulated, and harmed.

Moreover, therapy provided by a chatbot is inherently superficial compared to human-led therapeutic interventions. Chatbots lack emotional intelligence, moral judgment, and wisdom. Encouraging the use of chatbots for therapy could divert individuals from seeking human-based care that offers nuanced feedback based on genuine intellectual and emotional connection.

Regrettably, because mental healthcare is so difficult to access, many people may resort to using ChatGPT and other chatbots for therapeutic purposes. Companies like OpenAI must, at the very least, make the significant limitations and potential dangers of their technology clear to users.