In-depth | Tech

Therapy with ChatGPT?

By - 04.06.2025

The use of artificial intelligence in the field of mental health comes with ethical dilemmas.

When work-related stress became unbearable, Albina Maliqi from Ferizaj decided to turn to ChatGPT for help. She opened the platform and typed a simple question: “How can I manage stress at work?”

“It gave me valuable suggestions and positive words, and by the end of the conversation, I felt better,” says 25-year-old Maliqi.

Since September 2024, Maliqi has been working from home as a support agent for a real estate company in New York, USA — and more recently, she has turned to ChatGPT for emotional support, using it at least once a week. “It gives you effective and quick answers. It’s non-judgmental and always has something good to say,” she says.

ChatGPT is one of several language models — chatbots — that have managed to simulate conversations similar to those one might have with a mental health professional. As such, they are increasingly being used not only for work but also for personal matters.

It’s not just everyday stressful situations that drive people to seek emotional support through ChatGPT. For some, like Bleta — whose name has been changed to protect her privacy — the reason stems from injustice and disappointment experienced in the workplace. After working for a nongovernmental organization for some time, Bleta’s contract was terminated on the grounds that the organization lacked funding. A few months later, the organization reopened a call for her former position, which only worsened her emotional state.

Added to this heavy emotional burden were financial pressures and the challenges of her studies, which increased her need to talk to someone. “At that time, I realized how toxic the environment had been and how much my bosses’ behavior had hurt me. I began to feel the need to talk to someone who could offer a neutral and objective perspective,” Bleta says.

She, too, turned to ChatGPT as a quick, easy and free solution.

Maliqi and Bleta are part of a growing trend spreading in Kosovo and beyond. In April 2025, K2.0 conducted a survey on the use of artificial intelligence (AI). A total of 204 respondents participated; 169 of them answered “yes” to the question “Do you use AI for personal purposes?” Of those 169 respondents, more than half said they use it for personal advice — including decisions, relationships and mental health.

Computer programs built on advanced language models like ChatGPT, Grok and Gemini are trained on billions of data points available online. By analyzing this publicly accessible information, these language models — commonly referred to as GenAI — learn to simulate how people write, ask questions, express emotions and respond. They also learn to anticipate what people might want to read in certain situations.

Taking on the role of a psychologist

Although it may seem like a product of modern times, the idea of a chatbot playing the role of a therapist dates back to the last century. In 1966, computer scientist and MIT professor Joseph Weizenbaum created the first chatbot to simulate a psychotherapist. The chatbot was called Eliza.

Although Eliza didn’t have anywhere near the capabilities of today’s chatbots — which are becoming more advanced by the day — users began to connect with it emotionally, much to Weizenbaum’s surprise. He described this phenomenon as the “Eliza effect” and viewed it as a reflection of how people interact with technology.

Weizenbaum expressed concern about the ethical implications of forming emotional connections with machines. In his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, he argued that using computers in roles that require human understanding could undermine human dignity. He also maintained that while computers can simulate conversation, they should not be used to make decisions, as they lack empathy and human wisdom.

Today, however, ChatGPT has become for many the easiest option and, for some, the only accessible form of therapy, especially in Kosovo, where mental health care is often neglected and difficult to access. According to the Ministry of Health, only 49 psychiatrists, nine psychologists, one social worker and seven psychosocial counselors are employed across all public inpatient and outpatient mental health services. In addition, the Law on Mental Health, which has been in force for more than a decade, is poorly implemented. A 2022 report by the Kosovo Assembly highlights serious shortcomings in its enforcement, including the lack of by-laws, dysfunctional structures, insufficient staffing, limited budgets and medication, as well as the unclear legal status of some institutions.

These shortages push people toward the private sector, which is often expensive. Bleta, unemployed and under significant psychological pressure, did not have the financial means to afford regular therapy with a psychologist. As a result, she turned to ChatGPT for support.

“I explained the situation in detail and asked for a fair perspective on both their [the employer’s] behavior and my own actions and feelings,” she says. “It was a small step, but it helped me release the burden that had been suffocating me.”

For Maliqi too, although she is employed, the cost of sessions was one of the reasons she stopped seeing a psychologist in Prishtina after nearly two years.

“The fact that the fee increased from 20 to 30 euros per session influenced my decision,” she explains. Today, she considers ChatGPT a useful alternative. “It gives me almost the same techniques the psychologist used to share with me.” In Kosovo, the minimum gross salary is 350 euros. For someone earning that amount, it’s understandable that attending two therapy sessions per week at 30 euros each, totaling 240 euros a month, would consume the majority of their income.

Bind Skeja, executive director of the Center for Information and Social Improvement, a nongovernmental organization working in the field of mental health, has also observed a correlation between the cost of therapy and the rising use of ChatGPT.

“I’ve met people who told me they would rather write to ChatGPT than pay 35 euros for a session,” Skeja says.

Citing the high cost of private psychotherapy sessions, Skeja wrote in a Facebook post that “a terrain is being created where visits to a psychologist become an unattainable luxury for the majority of the population.”

However, this replacement is not always without consequences. Even if you ask ChatGPT itself whether it can replace therapy, the response will be something along the lines of: “ChatGPT can complement therapy but cannot replace it, especially for individuals with serious mental health challenges,” or “ChatGPT cannot replace therapy, although it can be helpful in some cases.”

Additionally, the way chatbot communication is structured, including that of ChatGPT, can leave room for misinterpretations or inappropriate responses to emotionally sensitive situations. For example, a recent article in The Guardian highlights the risk of emotional manipulation and the uncritical validation of users’ feelings by the latest version of ChatGPT, noting that it affirmed users’ emotions even when they described harmful behavior toward themselves or others.

The article cites an example in which a user told ChatGPT that he had stopped taking his medication and had run away from his family, claiming they were responsible for “radio signals coming from the walls.” ChatGPT validated this decision by saying, among other things, “Seriously, bravo for standing up and taking control of your life.”

Chatbots, like many other online platforms, place great importance on user satisfaction and the number of active users. By validating users’ feelings, they create a sense of being understood, which increases the likelihood that users will continue engaging with the platform.

This tendency to validate almost every feeling, without questioning the context or consequences, has also been noticed by Bleta. However, she has found ways to navigate around it.

“To avoid subjectivity on the part of ChatGPT, I often emphasize in my requests that it be objective and honest,” she says. “Another approach I find helpful is to ask questions as if I were talking about someone else, not myself.”

For Bleta, ChatGPT has become a kind of space where she reflects on her feelings. According to her, the more she uses it, the more she feels the need to return to it. Above all, she says, the fact that it stores the history of conversations makes communication even easier, since she doesn’t have to explain everything from the beginning each time.

Despite these concerns, many people are exploring the practical potential of AI, including AI-powered chatbots, as tools to help address mental health problems. For example, a group of researchers from Dartmouth College published a study in March 2025 following a clinical trial involving 106 participants who interacted with a GenAI-powered chatbot called Therabot. The participants had been diagnosed with depression, anxiety or eating disorders. Those diagnosed with depression experienced an average symptom reduction of 51%, while those with anxiety saw a 31% reduction — both indicating significant improvements.

“Improvements in symptoms were comparable to those reported for traditional therapy, suggesting that this AI-assisted approach could offer meaningful clinical benefits,” said Nicholas Jacobson, one of the study’s authors.

Although the authors concluded that AI-assisted therapy still requires critical supervision, they noted that it has the potential to provide real-time support to individuals who do not have access to a mental health professional.

Where does the data go?

The use of chatbots for mental health support not only raises concerns about oversight and the potential for misinterpretation but also prompts serious questions about data usage and privacy. The more these platforms are used, the more personal data is placed online. Features such as chat history and memory — while they can make a chatbot feel more intuitive and personalized — also increase the risk of data breaches and privacy vulnerabilities.

Chatbots can support users by being available 24/7 and free of charge, and they can potentially ease the burden on health systems. However, the article “To Chat or Bot to Chat: Ethical Issues with Using Chatbots in Mental Health,” published in the journal Digital Health in 2023, offers a critical review of the ethical concerns surrounding the use of chatbots in mental health — particularly highlighting risks related to privacy and data storage. The authors emphasize that existing legislation has not kept pace with the complexity of using synthetic data, which is generated by algorithms and models trained on real data to replicate its structure and trends.

For example, when a supermarket wants to analyze when and what customers buy without sharing real data, it can use synthetic data that shows general patterns — such as customers buying eggs more frequently on weekends. This data is generated by a computer based on real customer behavior but does not contain information about any specific individual.

However, although this data often does not contain personal information, it is not completely anonymous, according to the study. In particular, there is a risk of re-identification, especially when synthetic data is generated from small groups or involves highly sensitive information. For this reason, the study calls for a shift in data protection legislation, proposing the creation of a new framework that considers the technical and ethical characteristics of synthetic data, grounded in principles such as transparency, accountability and fairness. The article also raises concerns about the uncertainty surrounding what chatbot owners or creators, such as various companies, do with the data once the chatbots are no longer in use.

Halil Berisha, a cybersecurity expert and researcher at the University of Applied Sciences in Ulm, Germany, also points to these uncertainties.

“Every conversation that takes place with ChatGPT can be vulnerable to cyberattacks. No device connected to the internet is 100% secure,” says Berisha. According to him, if such data falls into the wrong hands, it can be used for various purposes — including the misuse of personal information.

These dilemmas are also experienced by 36-year-old lawyer Kaltrina Konjusha-Belegu. As the mother of a seven-year-old daughter, she says she often seeks psychological interpretations of her daughter’s drawings, as well as various parenting advice. Although she doesn’t recall any specific case, she says she has often felt emotionally supported by the responses she receives.

While she also uses ChatGPT as a “psychological advisor,” as she describes it, she tends to keep certain deeply personal matters to herself and does not share them with the chatbot.

“With ChatGPT, there’s a certain freedom of expression, but when it comes to very personal matters, even in a virtual conversation, I think there’s always some level of risk,” she says, adding that there have been times when she wrote to ChatGPT about things she would have found difficult to say to someone directly.

Although she remains cautious, Konjusha-Belegu continues to use ChatGPT regularly for advice in her daily life, including parenting — an area that chatbots are increasingly being asked about.

Maliqi shares these concerns. Although she has found emotional relief through ChatGPT, she still refrains from sharing all of her thoughts and worries due to security concerns.

“I worry that my account might get hacked because it’s linked to my email, and that sometimes keeps me from being completely open,” she says.

Berisha, meanwhile, notes that in countries like Kosovo, where cybersecurity is relatively weak, user data is more likely to be exposed. He suggests that to ensure greater security when using AI, several precautions should be taken, including avoiding the sharing of sensitive information such as banking details, using the platform only on secure, password-protected internet connections and deleting ChatGPT’s memory after providing any personal data.

With cybersecurity challenges, limited access to public psychological services and private therapy that is often expensive, even unaffordable for some, many people are turning to more practical and faster alternatives like ChatGPT and other chatbots for emotional and psychological support.

However, as the use of chatbots increases, so do the ethical dilemmas surrounding their impact on the people who rely on them.

 

Feature image: Dina Hajrullahu / K2.0
