Google AI Overviews put people at risk with misleading health advice

False and misleading health information in Google’s artificial intelligence summaries is putting people at risk, a Guardian investigation has found.

The company says its AI Overviews, which use generative AI to provide snapshots of key information on a topic or query, are “useful” and “reliable”.

But some summaries, which appeared at the top of search results, contained inaccurate health information that could put people at risk.

In a case that experts called “truly dangerous,” Google wrongly advised people with pancreatic cancer to avoid foods high in fat. Experts said this was the exact opposite of what should be recommended and could increase patients’ risk of death from the disease.

In another “alarming” example, the company provided false information about crucial liver function tests, which could leave people with severe liver disease wrongly thinking they are healthy.

Google searches for cancer screening tests in women also produced “completely false” information, which experts said could lead people to ignore real symptoms.

A Google spokesperson said many of the health examples shared with them were “incomplete screenshots”, but that from what they were able to assess, the summaries linked “to well-known and reputable sources and recommend seeking advice from experts”.

The Guardian investigation comes amid growing concern that AI-generated summaries could mislead consumers who assume they are reliable. In November last year, a study found that AI chatbots on various platforms were giving inaccurate financial advice, and similar concerns have been raised about AI-generated news summaries.

Sophie Randall, director of the Patient Information Forum, which promotes evidence-based health information to patients, the public and healthcare professionals, said the examples show that “Google’s AI Overviews can put inaccurate health information at the top of online searches, posing a risk to people’s health”.

Stephanie Parker, director of digital at end-of-life charity Marie Curie, said: “People turn to the internet in times of worry and crisis. If the information they receive is inaccurate or out of context, it can cause serious harm to their health.”

The Guardian discovered several cases of inaccurate health information in Google’s AI Overviews after a number of health groups, charities and professionals raised concerns.

Anna Jewell, director of support, research and influence at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was “completely incorrect”. This “could be very dangerous and jeopardize a person’s chances of being well enough to receive treatment,” she added.

Jewell said: “The Google AI answer suggests that people with pancreatic cancer avoid high-fat foods and provides a list of examples. However, if someone followed what the search result told them, they might not consume enough calories, have trouble gaining weight, and be unable to tolerate potentially life-saving chemotherapy or surgery.”

Searching “what is the normal range for liver blood tests” also produced misleading information: masses of numbers with little context and no consideration of patients’ nationality, gender, ethnicity or age.

Pamela Healy, chief executive of the British Liver Trust, said the AI summaries were alarming. “Many people with liver disease don’t have any symptoms until the late stages, which is why it’s so important that they get tested. But what Google’s AI Overviews present as ‘normal’ can differ greatly from what is actually normal.”

“This is dangerous because it means that some people with severe liver disease may think their result is normal and not attend a follow-up appointment.”

A search for “vaginal cancer symptoms and tests” listed a Pap smear as a screening test for vaginal cancer, which is incorrect.

Athena Lamnisos, chief executive of cancer charity Eve Appeal, said: “It’s not a test to detect cancer, and it’s certainly not a test to detect vaginal cancer – this is completely false information. Getting wrong information like this could potentially lead to someone not having their vaginal cancer symptoms checked because they had a clear result in a recent cervical screening.”

“We were also concerned that the AI summary changed when we performed the exact same search, giving a different answer each time from different sources. That means people get a different answer depending on when they search, and that’s not good enough.”

Lamnisos said she was extremely worried. “Some of the results we’ve seen are really concerning and can potentially put women at risk,” she said.

The Guardian also found that Google’s AI Overviews provided misleading results for searches about mental health conditions. “This is a major concern for us as a charity,” said Stephen Buckley, head of information at Mind.

Some AI summaries on conditions such as psychosis and eating disorders offered “very dangerous advice” and were “incorrect, harmful or could cause people to avoid seeking help”, Buckley said.

Some also left out important context or nuance, he added. “They may suggest accessing information from inappropriate sites… and we know that when AI summarizes information, it can often reflect existing biases, stereotypes or stigmatizing narratives.”

Google said the vast majority of its AI Overviews were factual and useful, and that it had continually improved their quality. The accuracy rate of AI Overviews was comparable to that of its other search features, such as featured snippets, which have been around for more than a decade, it added.

The company also said that when AI Overviews misinterpret web content or lack context, it takes appropriate action in accordance with its policies.

A Google spokesperson said: “We invest significantly in the quality of AI Overviews, particularly on topics like health, and the vast majority provide accurate information.”
