6 questions to ask before integrating AI into a clinical workflow

The emergence of large language models (LLMs) prompted a research team to compare the technology's performance against traditional clinical decision support systems in identifying potential drug-drug interactions. The retrospective analysis revealed that the traditional clinical decision support tools identified 280 clinically relevant interactions, while the AI found only 80.
Such studies illustrate why health care providers are cautious about adopting AI in clinical practice. Respondents to a 2024 healthcare IT spending study by Bain & Company and KLAS Research cite regulatory, legal, cost, and accuracy concerns, all of which are valid considerations when patient safety is at stake.
However, the study also revealed that AI continues to gain ground with health care providers. Respondents are optimistic about implementing generative AI and are increasingly willing to experiment with the technology to improve outcomes.
AI faces the central dilemma that has long accompanied the integration of technology into clinical workflows: how do we use technology to improve care while minimizing risk?
Let's examine this question through the lens of clinical decision support, specifically drug information for prescribers. For decades, technology has helped clinicians maintain an overview of drug safety evidence, because it would be impossible for clinicians to keep pace with how quickly that evidence grows and evolves. For example, more than 30 million citations currently exist in PubMed, and the database grows by roughly one million new citations each year.
Technology can help. Content databases monitor the global literature, regulatory updates, and clinical guidelines. They critically appraise the quality of the evidence and synthesize the findings into content and recommendations that clinicians can use at the point of care.
Robust decision support systems provide evidence-based information. They employ clinicians to carefully and accurately curate it from the vast body of medical literature available today. This gives clinician users the latest relevant evidence to inform patient care decisions at the point of care. AI can improve the experience by surfacing information within these systems even faster, and with fewer clicks, especially if it has been built for that purpose.
General-purpose AI vs. purpose-built AI
Large language models (LLMs), such as ChatGPT, have taken the spotlight in conversations about AI in recent years. These tools offer impressive general language understanding and reasoning capabilities.
However, simply adding general-purpose AI tools to these decision support systems and pointing them at a set of clinical documents will not deliver the advantages many are looking for. Studies offer a cautionary tale for those who believe they can use a general-purpose LLM in place of an established decision support system to assess drug interactions.
For example, one study found that ChatGPT missed clinically important potential drug-drug interactions. In another study, ChatGPT could identify potential drug-drug interactions but performed poorly at predicting their severity and onset and at providing high-quality documentation. These results demonstrate the gaps in systems that are not purpose-built for clinicians making patient care decisions.
A few simple questions can help health care organizations determine whether the AI decision support tools they are considering are purpose-built for clinicians:
- Who is this AI designed for? Purpose-built AI is targeted. It serves a narrow audience and focuses on the questions that matter most to that audience. When done correctly, these systems should outperform a general-purpose system within their field of expertise.
- What data trains this AI? Direct citations of evidence must be an essential part of any response in decision support tools. General-purpose AI systems can comb the internet for related content but may include erroneous evidence that has not been peer reviewed or verified by experts. Many publications are not available in free full text on the internet, so an LLM may miss the details of a critical piece, creating an evidence gap. The system must also be updated frequently to include the most recent findings and regulatory documents. Finally, it should be clear to the user what information the AI drew on to source its response.
- How does this AI interpret my question? In health care, users may ask questions with ambiguous acronyms or incomplete follow-up queries. For example, if someone types “what about vancomycin,” it looks like a random fragment in isolation. But if the previous question was “monitoring parameters for cefepime,” it becomes clear that the correct interpretation is “monitoring parameters for vancomycin.” The AI system should tell the user how it interprets a question, so the user knows from the start whether the AI is even answering the right question. Clarification mechanisms let users refine their request before the AI responds, as sketched in the example after this list.
- Does this AI offer more than one best-fit answer? A common scenario for nurses and pharmacists is determining whether several drugs can be combined in various solutions for intravenous (IV) administration. A simple chat response may provide only a single best-fit answer, but the clinician may need several options, especially if the patient has limited IV access. Clinicians should have systems that allow them to use their best judgment to administer drugs safely.
- Will this AI recognize its limits? AI technologies improve every day, but they have limits. Finding an answer quickly is important, but expectations should be realistic. For example, a user could ask a question that amounts to asking the AI to perform a meta-analysis, which would be difficult to do accurately and quickly enough to support a decision at the point of care. AI systems must recognize and be transparent about their limits rather than risk providing a fabricated response that endangers patient safety.
- Were clinicians involved in the development of this AI? Clinicians must always stay in the driver’s seat for any tool, technology, or process that affects patient safety. Period. Clinicians provide an essential perspective during development and a continuous feedback loop that keeps improving the systems. Clinician user testing should validate the critical components of any clinical decision support tool.
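
To make the interpretation question above concrete, here is a minimal sketch of how a system might restate an ambiguous follow-up query using the prior question before answering. The function name, the rewriting rule, and the confirmation prompt are hypothetical illustrations under simple assumptions, not a description of any vendor's actual implementation.

```python
# Minimal sketch (hypothetical): restate an ambiguous follow-up query using
# the previous question, then echo the interpretation back for confirmation.

def interpret_followup(previous_question: str, followup: str) -> str:
    """Rewrite a fragment like 'what about vancomycin' using the prior question."""
    followup = followup.strip().rstrip("?").lower()
    if followup.startswith("what about "):
        new_topic = followup[len("what about "):].strip()
        # Assume the user wants the same kind of answer as before,
        # applied to the new drug (a deliberately simple heuristic).
        if "monitoring parameters" in previous_question.lower():
            return f"monitoring parameters for {new_topic}"
    # If no context rule applies, fall back to the literal text.
    return followup


previous = "monitoring parameters for cefepime"
followup = "what about vancomycin?"

interpretation = interpret_followup(previous, followup)

# Surface the interpretation before answering, so the user can confirm or
# refine the request: the "clarification mechanism" described in the list above.
print(f'Interpreting your question as: "{interpretation}". Is that right?')
```

The point of the sketch is not the string handling but the behavior: the system states its interpretation and invites correction before committing to an answer.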
A collaborative approach offers better results
Ultimately, purpose-built AI focuses on the outcome: helping clinicians access information they can trust at the point of care. Together, humans and AI can achieve better results than either can alone.
Image: mr.cole_photographer, Getty Images
Sonika Mathur is executive vice president and general manager of Micromedex, a clinical drug information decision support technology. Sonika has more than 20 years of experience in clinical decision support, technology-enabled care delivery, and patient engagement. Before joining Merative, she led initiatives at Cityblock Health and Elsevier Clinical Solutions.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in health care on MedCity News through MedCity Influencers. Click here to find out how.