
If AI can diagnose patients, what are doctors for?

It seems inevitable that the future of medicine will involve AI, and medical schools already encourage students to use large language models. “I worry that these tools will erode my ability to make an independent diagnosis,” Benjamin Popokh, a medical student at the University of Texas Southwestern, told me. Popokh decided to become a doctor after a twelve-year-old cousin died of a brain tumor. During a recent rotation, his professors asked his class to work through a case using AI tools such as ChatGPT and OpenEvidence, an increasingly popular medical LLM that offers free access to health-care professionals. Each chatbot correctly diagnosed a blood clot in the lungs. “There was no control group,” Popokh said, meaning that none of the students worked through the case unassisted. For a while, Popokh found himself turning to AI after nearly every patient encounter. “I started to feel dirty presenting my thoughts to attending physicians, knowing that they were really the AI’s thoughts,” he told me. One day, as he left the hospital, he had an unsettling realization: he had not thought about a single patient on his own that day. He decided that, from then on, he would force himself to settle on a diagnosis before consulting artificial intelligence. “I went to medical school to become a real, capital-D doctor,” he told me. “If all you do is plug symptoms into an AI, are you still a doctor, or are you just slightly better at prompting AI than your patients are?”

A few weeks after the Cabot demonstration, Manrai gave me access to the model. It had been trained on CPCs from The New England Journal of Medicine; I first tested it on cases from JAMA Network, a family of leading medical journals. It made accurate diagnoses of patients with a range of conditions, including rashes, lumps, growths, and muscle loss, with a handful of exceptions: it confused one type of tumor with another and mistook a viral mouth ulcer for cancer. (ChatGPT, by comparison, misdiagnosed roughly half the cases I gave it, mistaking a cancer for an infection and an allergic reaction for an autoimmune condition.) Real patients don’t present as carefully curated case studies, however, and I wanted to see how Cabot would handle the kinds of messy situations that doctors actually confront.

I gave Cabot the broad strokes of what Matthew Williams had experienced: bike ride, dinner, abdominal pain, vomiting, two emergency-department visits. I did not organize the information the way a doctor would. Alarmingly, when Cabot generated one of its polished presentations, the slides were full of laboratory values, vital signs, and exam findings that it had invented. “The abdomen looks distended at the top,” the AI said, wrongly. “When you tap it gently, you hear that classic sloshing sound, like fluid in a closed container.” Cabot even cited the report of a CT scan that had supposedly shown Williams’s inflated stomach. It arrived at an erroneous diagnosis of gastric volvulus: a twisting of the stomach, not the intestine.

I then gave Cabot the official summary of Williams’s second emergency visit, as written up by the doctors who saw him, and this produced a very different result, because it contained more data, sorted by salience. The patient’s hemoglobin level had dropped; his white cells, or leukocytes, had multiplied; he was doubled over in pain. This time, Cabot latched onto the relevant data and seemed to invent nothing. “Up-trending markers – the leukocytosis, the falling hemoglobin – are all flashing warnings,” it said. Cabot diagnosed an obstruction in the small intestine, perhaps owing to a volvulus or a hernia. “Involve surgery early,” it said. Technically, Cabot was slightly off the mark: Williams’s problem was in the large intestine, not the small one. But the next steps would have been practically identical. A surgeon would have found the intestinal knot.

Talking with Cabot was both exhilarating and unnerving. I felt as though I could now get a second opinion, in any specialty, at any time. But it was only with vigilance and medical training that I could take full advantage of its capabilities and catch its errors. AI models can sound as though they hold doctorates even while making grade-school errors in judgment. Chatbots cannot examine patients, and they are known to struggle with open-ended prompts. Their output improves when you emphasize what matters most, but most people are not trained to sort their symptoms that way. A person with chest pain might be experiencing acid reflux, inflammation, or a heart attack; a doctor knows to ask whether the pain comes on when the person eats, walks, or lies in bed. When he leans forward, does the pain get worse or better? Sometimes we listen for phrases that dramatically raise the odds of a particular condition. “The worst headache of my life” can mean a brain hemorrhage; “a curtain over my eye” suggests a blocked retinal artery. The difference between AI and earlier diagnostic technologies is like the difference between a power saw and a hacksaw. But an inattentive user can cut off a finger.

Attend enough clinicopathological conferences, or watch enough episodes of “House,” and every medical case starts to look like a mystery to be solved. Lisa Sanders, the doctor at the center of the Times Magazine column and the Netflix series “Diagnosis,” has compared her work to Sherlock Holmes’s. But the daily practice of medicine is often far more routine and repetitive. During one hospital rotation in my training, for example, I felt less like Sherlock than like Sisyphus. Nearly every patient, it seemed, presented with some combination of emphysema, heart failure, diabetes, chronic kidney disease, and high blood pressure. I became familiar with a new phrase, “likely multifactorial,” which meant that there were several explanations for what the patient was going through, and I looked for ways to treat one condition without exacerbating another. (Draining fluid to relieve an overloaded heart, for example, can easily dehydrate the kidneys.) Sometimes a precise diagnosis was beside the point; a patient might arrive with shortness of breath and low oxygen levels and be treated simultaneously for chronic obstructive pulmonary disease, heart failure, and pneumonia. Sometimes we never figured out what had caused a given episode, but we could still help the patient feel better and send him home. Asking an AI for a diagnosis would not have offered us much clarity; in practice, there was no neat, satisfying solution.

Tasking an AI with solving a medical case makes the mistake of “starting at the end,” according to Gurpreet Dhaliwal, a physician at the University of California, San Francisco, whom the Times once described as “one of the most skillful clinical diagnosticians in practice.” In Dhaliwal’s view, doctors would do better to ask AI for help with “orientation”: instead of asking what a patient has, a doctor could ask a model to identify trends in the patient’s trajectory, along with important details the doctor might have missed. The model would not give the doctor orders to follow; instead, it might alert her to a recent study, suggest a useful blood test, or surface a lead buried in a decades-old medical record. Dhaliwal’s vision for medical AI recognizes the difference between diagnosing people and caring for them. “Just because you have a Japanese-English dictionary in your office

“I don’t care what they call it – I need my iced coffee to be at least this large.”

Cartoon by Lauren Simkin Berke

Cabot remains experimental, but other AI tools are already shaping patient care. ChatGPT is blocked on my hospital’s network, but many of my colleagues and I use OpenEvidence. The platform has licensing agreements with top medical journals and says that it complies with HIPAA, the patient-privacy law. Each of its responses cites a set of peer-reviewed articles, sometimes including an exact figure or a verbatim quote from a relevant paper, to guard against hallucinations. When I gave OpenEvidence a recent case, it did not immediately try to solve the mystery but instead asked me a series of clarifying questions.
