AI hallucinations: What are they and are they always bad?

Hallucinations are a frequent point of concern in conversations about AI in healthcare. But what do they really mean in practice? That was the subject of a panel discussion held last week at the MedCity INVEST Digital Health Conference in Dallas.
According to Soumi Saha, senior vice president of government affairs at Premier Inc. and moderator of the session, AI hallucinations are when AI "uses its imagination," which can sometimes harm patients because it could provide incorrect information.
One of the panelists – Jennifer Goldsack, founder and CEO of the Digital Medicine Society – described AI hallucinations as "the technological equivalent of bullshit." Randi Seigel, partner at Manatt, Phelps & Phillips, defined them as when AI makes something up, "but it sounds like a fact, so you don't want to question it." Finally, Gigi Yuen, chief data and AI officer at Cohere Health, said hallucinations are when AI is "not grounded" and "not humble."
But are hallucinations always bad? Saha put this question to the panelists, wondering whether a hallucination can help people "identify a potential gap in the data or a gap in the research" that shows where more work is needed.
Yuen said hallucinations are bad when the user does not know that the AI is hallucinating.
However, "I would be perfectly happy to have a brainstorming conversation with my AI chatbot if it is willing to share with me how confident it is in what it is saying," she noted.
Goldsack likened AI hallucinations to missing clinical trial data, arguing that missing data can actually tell researchers something. For example, in mental health clinical trials, missing data can be a signal that someone is doing very well because they are "living their life" instead of recording their symptoms every day. Yet the healthcare industry often uses blaming language when data are missing, pointing to a lack of patient adherence instead of thinking about what the missing data really mean.
She added that the healthcare industry tends to place a lot of "value judgments on technology," but technology "has no sense of values." So when the healthcare industry encounters AI hallucinations, it is up to humans to be curious about why the hallucination occurred and to apply critical thinking.
"If we cannot make these tools work for us, it is not clear to me how we actually have a sustainable healthcare system in the future," said Goldsack. "So I think we have a responsibility to be curious and to be somewhat on the lookout for this kind of thing, and to think about how we compare and contrast with other legal frameworks, at least as a jumping-off point."
Seigel of Manatt, Phelps & Phillips, for her part, stressed the importance of incorporating AI into the curriculum for medical and nursing students, including how to understand it and how to question it.
"It certainly will not be enough to click through a course in your annual training, which already takes you three hours, to tell you how to train on AI," she said.