Regulators struggle to keep up with the fast-moving, complicated landscape of AI therapy apps

As more people turn to artificial intelligence for mental health advice, some states have begun regulating apps that offer AI “therapy” in the absence of stronger federal rules.

But the laws, all passed this year, don’t fully address the fast-changing landscape of AI software development. And app developers, policymakers and mental health advocates say the resulting patchwork of state laws isn’t enough to protect users or hold the makers of the technology accountable.

“The reality is that millions of people are using these tools, and they’re not going back,” said Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick.

___

Editor’s note: This story includes discussion of suicide. If you or someone you know needs help, the national Suicide and Crisis Lifeline in the U.S. is available by calling or texting 988. There is also an online chat at 988lifeline.org.

___

State laws take different approaches. Illinois and Nevada have banned the use of AI to treat mental health. Utah has placed certain limits on therapy chatbots, including requiring them to protect users’ health information and to clearly disclose that the chatbot isn’t human. Pennsylvania, New Jersey and California are also considering ways to regulate AI therapy.

The impact on users varies. Some apps have blocked access in states with bans. Others say they are making no changes as they wait for more legal clarity.

And many laws don’t cover general-purpose chatbots like ChatGPT, which are not explicitly marketed for therapy but are used by untold numbers of people for it. Those bots have drawn lawsuits in horrific cases where users lost their grip on reality or took their own lives after interacting with them.

Vaile Wright, who oversees health care innovation at the American Psychological Association, agreed the apps could fill a need, noting a nationwide shortage of mental health providers, high costs for care and uneven access for insured patients.

Mental health chatbots that are rooted in science, created with expert input and monitored by humans could change the landscape, Wright said.

“This could be something that helps people before they end up in crisis,” she said. “That’s not what’s on the commercial market currently.”

That’s why federal regulation and oversight are needed, she said.

Earlier this month, the Federal Trade Commission announced it was opening inquiries into seven AI chatbot companies, including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot on X), Character.AI and Snapchat, asking how they “measure, test and monitor potentially negative impacts of this technology on children and teens.” And the Food and Drug Administration is convening an advisory committee on Nov. 6 to review generative AI-enabled mental health devices.

Federal agencies could consider restrictions on how chatbots are marketed, limit addictive practices, require disclosures to users that they are not medical providers, require companies to track and report suicidal thoughts, and offer legal protections for people who report bad practices, Wright said.

From “companion apps” to “AI therapists” to “mental wellness” apps, AI’s use in mental health care is varied and hard to define, let alone write laws around.

That has led to different regulatory approaches. Some states, for example, target companion apps that are designed just for friendship but don’t wade into mental health care. The laws in Illinois and Nevada ban products that claim to provide mental health treatment, threatening fines of up to $10,000 in Illinois and $15,000 in Nevada.

But even a single application can be difficult to classify.

Earkick’s Stephan said there are still a lot of things that are “very muddy” about Illinois’ law, for example, and the company has not restricted access there.

Stephan and her team initially held off on calling their chatbot, which looks like a cartoon panda, a therapist. But when users began using the word in reviews, they embraced the terminology so the app would show up in searches.

Last week, they backed away from using therapy and medical terms again. Earkick’s website had described its chatbot as “your empathic AI counselor, equipped to support your mental health journey,” but now it’s a “chatbot for self care.”

However, “we don’t diagnose,” said Stephan.

Users can set up a “panic button” to call a trusted loved one if they are in crisis, and the chatbot “nudges” users to seek out a therapist if their mental health worsens. But it was never designed to be a suicide prevention app, Stephan said, and police would not be called if someone told the bot about thoughts of self-harm.

Stephan said she’s happy that people are looking at AI with a critical eye, but worried about states’ ability to keep up with innovation.

“The speed at which everything evolves is massive,” she said.

Other apps blocked access right away. When Illinois users download the AI therapy app Ash, a message urges them to email their legislators, arguing that “misguided legislation” has banned apps like Ash “while leaving unregulated chatbots it intended to regulate free to cause harm.”

A spokesperson for Ash did not respond to multiple requests for an interview.

Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said the goal was ultimately to make sure licensed therapists were the only ones providing therapy.

“Therapy is more than just an exchange of words,” Treto said. “It requires empathy, it requires clinical judgment, it requires ethical responsibility, none of which AI can truly replicate.”

In March, a Dartmouth-based team published the first known randomized clinical trial of a generative AI chatbot for mental health treatment.

The goal was to see whether the chatbot, called Therabot, could treat people diagnosed with anxiety, depression or an eating disorder. It was trained on vignettes and transcripts written by the team to illustrate evidence-based responses.

The study found that users rated Therabot similarly to a therapist and had meaningfully lower symptoms after eight weeks compared with people who didn’t use it. Every interaction was monitored by a human who intervened if the chatbot’s response was harmful or not evidence-based.

Nicholas Jacobson, a clinical psychologist whose lab is leading the research, said the results showed early promise but that larger studies are needed to demonstrate whether Therabot works for large numbers of people.

“The space is so dramatically new that I think the field needs to proceed with much greater caution than is happening right now,” he said.

Many AI apps are optimized for engagement and are built to support everything users say, rather than challenging people’s thoughts the way therapists do. Many walk the line between companionship and therapy, blurring intimacy boundaries that therapists ethically would not.

Therabot’s team sought to avoid those problems.

The app is still in testing and not widely available. But Jacobson worries about what strict bans will mean for developers taking a careful approach. He noted that Illinois has no clear pathway for providing evidence that an app is safe and effective.

“They want to protect people, but the traditional system right now is really failing people,” he said. “So trying to stick with the status quo is really not the thing to do.”

Regulators and advocates of the laws say they are open to changes. But today’s chatbots are not a solution to the mental health provider shortage, said Kyle Hillman, who lobbied for the bills in Illinois and Nevada through his affiliation with the National Association of Social Workers.

“Not everybody who is feeling sad needs a therapist,” he said. But for people with real mental health issues or suicidal thoughts, “telling them, ‘I know there’s a workforce shortage, but here’s a bot,’ that is such a privileged position.”

___

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.
