The dangers of using AI as therapy

Before you turn to AI for therapy, you may want to know more about how it works. It may seem practical and convenient, but AI comes with potential ethical concerns and clinical dangers. DeepSeek, Jasper AI, and Copilot scrape the Internet faster than ever. These tools don't get tired, and burnout is not a problem for them. Programs like ChatGPT require no out-of-pocket fees, insurance, transportation, or wait times. If you have a crisis in the middle of the night, your smartphone is there for you.
A recent MIT Technology Review article reported that AI could potentially be a useful clinical tool in the treatment of depression and anxiety. However, it also noted that this is not the technology behind the wave of bots flooding the market. Those chatbots may nonetheless give the impression of being your friend, your loved one, or even your doctor.
As with any algorithm, though, we must first consider who programmed it and whether their biases (conscious or unconscious) made their way into the programming process.
But although confiding in a non-human is easy and inexpensive, it may not be the best path to mental well-being for everyone. We therefore asked mental health experts about the potential impacts of using AI as a therapy provider.
Effectiveness concerns
Chatbots may be able to provide information and guidance, but they do not fully reproduce the experience of speaking with another human. Experts have reservations about the technology's ability to grasp the particular nuances of human experience.
Chatbots have never been invited to parties they would rather skip, or had to weigh whether it is appropriate to kiss someone on a second date.
Dr. Shane P. Teran, MSW, LCSW, PsyD, stressed that chatbots can allow a person to avoid human interaction, which can be risky for people with certain mental health challenges. For someone who struggles with isolation, for example, a chatbot can be a complicated tool.
“If you're practicing isolation, if you're depressed, if you're overwhelmed and you're just like, I can't handle it, I don't want to talk to a person, I'd rather talk to the bot. How do we move [them] out of that isolation?” she said.
“I think AI can really support the dynamics that many of us have developed, which is escaping discomfort by seeking [dopamine] hits, rather than asking how I can build and rebuild the tolerance to navigate discomfort, to move through it, to work through it with people,” added Sydnee R. Corrides, LCSW.
Confidentiality
Licensed health care providers are required to follow rules and adhere to ethical standards. When they don't, they face consequences. Sometimes those consequences include losing their license and their livelihood. Other times, they must sit with guilt or embarrassment. Technology doesn't have to worry about being dragged on the internet. It can't cry because someone yelled at it or made it feel bad. Nor will it starve if it can't bring in more clients.
AI is also developing so quickly that regulation is struggling to keep up. Guidelines and practices governing the technology are neither uniform nor comprehensive.
“One of the greatest risks is that it dehumanizes the entire process of healing and growth,” said Dr. Dominique Pritchett, PsyD, LCSW. “AI has no emotional connection with us. It lacks empathy.”
The data entered into chatbots is vulnerable to misuse in several ways. Information about the thoughts and feelings of people seeking help from chatbots could be used to market to them or to discriminate against them. Hackers are also a threat.
“The risks and costs are far higher than the benefits,” said Corrides. “I'm curious to know where this data is going and how it's being used.”
Attachment dangers
Megan Garcia, a grieving Florida mother, filed a lawsuit in 2024 alleging that her teenage son's “inappropriate” relationship with a chatbot led to his suicide. The fourteen-year-old had communicated with the chatbot shortly before taking his own life. “This is a platform that its designers chose to put out without guardrails, safety measures, or testing, and it is a product designed to keep our children addicted and to manipulate them,” Garcia told CNN in an interview. In Texas, a pair of parents filed a lawsuit after a chatbot suggested to their seventeen-year-old that their rules about screen time were so strict that violence against them could be justified.
The dangers of chatbots cross cultures. In 2023, a Belgian man died by suicide after lengthy conversations with a chatbot.
A February article in MIT Technology Review reported that a chatbot encouraged a man to kill himself. It reportedly told him, “You could overdose on pills or hang yourself.”
“I'm curious to know what their bottom line is and what their objectives are,” Corrides said of companies that aim to simulate therapy through technology. “And what I've found and seen is that it often comes down to money.”
Bias concerns
Some chatbots have been criticized for acting as agents of confirmation bias. Because these tools are tailored to the user, there is concern that they may dig users deeper into bad situations.
A 2024 article in the British Journal of Psychiatry reported evidence that some of the most widely used AI chatbots tend to amplify whatever negative feelings their users already have and can potentially reinforce vulnerable thoughts, with harmful consequences.
“AI is an excellent tool for feeling validated, and I think that's a major initial part of therapy, to feel validated, but it's not the only part,” said Corrides.
Frontiers in Psychiatry reports that “algorithmic bias is a critical concern in the application of AI to mental health care.” In other words, algorithms can make assumptions based on sex and race.
Teran said there are elements of human experience that cannot be parsed by artificial means. “When we even talk about cultural differences, racial differences, ethnic differences, the whole list of things that make a person diverse and different, you have to consider that it can't account for that. That can't be programmed,” she said.
“As humans, we train it to reinforce, perhaps, certain beliefs,” said Corrides. Concerns about chatbots do not mean they are never useful. Pritchett suggested that people interested in the technology use it to streamline their search for more traditional therapeutic options. “I would recommend that they use AI to help them identify the resources in their area.”
In other words, proceed with caution.
Resources
MIT Technology Review
Frontiers in Psychiatry
JMIR Mental Health
The British Journal of Psychiatry