
Clinical decision support using AI: revolutionizing the doctor’s black bag

The tools in the "doctor's black bag" have not changed much in over a century. Twenty years ago, my colleagues and I published a paper advocating the addition of a portable ultrasound device, not much larger than a smartphone, that would be available to clinicians in the exam room. Over the past two decades, many clinicians have indeed begun using portable ultrasound, not as a replacement for the stethoscope, but to provide rapid, accurate information that cannot be obtained from unaided clinical assessment alone. It made us better clinicians, and here's the key: it gave us more time to get to know the patient as a person, so that medical evidence could be tailored to the unique needs of the person in front of us.

Now it is time to update the doctor's black bag again, this time with decision support tools based on artificial intelligence (AI). When designed responsibly, AI can be a powerful instrument that provides instant access to expertly curated information. As a result, clinicians can make more informed decisions without sacrificing that vital human-to-human connection.

The arguments in favor of clinical decision support using AI

With the remarkable increase in the number of medical articles published in recent years, clinicians need to be able to quickly obtain reliable and trustworthy clinical decision support at the point of care. Until recently, most clinicians had to rely on information platforms that delivered content in dense prose that was difficult to use at the bedside. Fortunately, AI tools now provide insights much faster.

However, when it comes to healthcare, we cannot sacrifice accuracy or reliability in the name of speed. Given the high stakes of the exam room, it's no surprise that the biggest concern clinicians and patients have about AI is trust. We must insist that any AI tool used for clinical decision support draw on data whose quality has been carefully assessed by experts in evidence appraisal and grading and by expert clinicians.

How would you feel if, the next time you boarded a plane, the flight attendant announced that your flight would be piloted by an AI that is accurate 90-95% of the time? Given the consequences of an error, that level of accuracy is simply not sufficient. It's not good enough for the cockpit, and it's not good enough for the exam room.

It turns out that some clinicians are already using general-purpose AI tools like ChatGPT to help care for patients. And while ChatGPT is great for planning a trip or finding recipes, the idea of it guiding your doctor should terrify you. Indeed, reports show that a majority (85%) of healthcare leaders are exploring or have already adopted generative AI capabilities. Frankly, this is concerning in a profession where the stakes are often life and death, because relying on unverified information from this technology is downright dangerous.

Safeguards for trustworthy AI

Coming back to the portable ultrasound device: its usefulness lies in its ability to quickly deliver reliable, trustworthy, and accurate images at the point of care, every time. The utility of AI-based clinical decision support tools likewise lies in their ability to quickly deliver reliable, trustworthy, and accurate information at the point of care, every time. To achieve this, AI must be built with clear guardrails, which come down to two key practices: control and transparency, and secure integration and monitoring.

First, control and transparency are non-negotiable. General-purpose AI models are trained on large amounts of unfiltered data from the Internet. This inevitably includes a messy mix of fact and fiction, making it a risky source of medical information. Medical AI must be fundamentally different. Its models must be trained exclusively on verified, evidence-based clinical data and research, ensuring the information is correct and free from the noise of the public web. Additionally, AI must be transparent about how it arrives at its recommendations. This visibility is necessary to build trust with both clinicians and patients, as it allows everyone to understand the reasoning behind a recommendation and spot potential errors.

Second, secure integration and monitoring are necessary. AI should not act as an autonomous agent; it should be a supporting tool that integrates seamlessly with existing clinical workflows. To be truly useful, it must complement, not complicate, the clinician's routine. Just as importantly, it must be closely monitored by humans through what is called a human-in-the-loop model, which is crucial for handling complex scenarios or edge cases where the technology can falter. And medical AI does not need just any humans in the loop; it needs experts in appraising and grading scientific and medical evidence, along with experienced expert clinicians.

How can AI bring humanity back to medical care?

As clinicians are asked to do more with less, healthcare visits have become less personal and more transactional. Patients do not feel recognized as individuals, and when clinicians do not know their patients, they cannot tailor treatment to their unique needs. The right AI, used as a tool in the doctor's black bag, can provide reliable and trustworthy information to aid clinical decision-making. It can reduce the time clinicians spend finding information and getting documentation into the medical record, allowing them to know their patients better as people and bringing a human touch back to the practice of medicine.

Image: Flickr user Eva Blue.


Dr. Roy Ziegelstein, Editor-in-Chief of DynaMed, has over 30 years of experience in medical education and healthcare. He joined Johns Hopkins in 1986 after receiving his M.D. from Boston University. He completed his internal medicine residency and chief residency on the Osler Medical Service and his cardiology fellowship at the Johns Hopkins School of Medicine before joining the faculty in 1993. He has held numerous leadership positions, including director of the internal medicine residency program, executive vice chair of the Department of Medicine, and vice chair for humanism at Johns Hopkins Bayview Medical Center. Since 2013, he has served as vice dean for education at the Johns Hopkins University School of Medicine. A dedicated educator and co-director of the Aliki Initiative on Patient-Centered Care, Dr. Ziegelstein has received numerous awards for teaching excellence and is an internationally recognized expert on the link between depression and cardiovascular disease.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
