
MOSCOW, May 4, Vladislav Strekopytov. Scientists compared the responses of real doctors and medical chatbots to patient requests. It turned out that people often find artificial intelligence (AI) advice more understandable, informative, and correct. This article looks at why recommendations from a «virtual doctor» should still be treated with caution.
Medicine in the era of AI
Practical applications based on generative AI are developing rapidly, including in medicine. Today, AI is helping doctors interpret X-ray, MRI, CT, and other examinations, analyze patient records, and generate prescription and recommendation templates.
The COVID-19 pandemic accelerated the adoption of digital health and telemedicine. On the one hand, this simplified access to medical care; on the other, it sometimes reduced its quality and led to regulatory violations. AI partially frees doctors from routine paperwork and helps prepare information, but a human is still responsible for diagnosis and treatment.
Thanks to medical chatbots based on ChatGPT, patients now have the opportunity to consult a «virtual doctor» directly. This area is not yet regulated in any way. Scientists are trying to find the best ways to combine AI with the experience and knowledge of specialists.
«In the longer term, I can envision a technology where clinicians look after AI like attending physicians look after their interns,» writes Teva Brender, MD, of the University of California, San Francisco. He acknowledges the concerns inherent in any new field, especially one with existential implications, but adds: «I am cautiously optimistic about a future where AI will allow us to get up from the computer and return to what we decided to go into medicine for.»
Digital empathy
There is a widespread opinion among physicians that they need not fear competition from AI, since the machine mind is only capable of mechanically interpreting data and cannot replace a patient's communication with a doctor. American researchers led by Dr. John Ayers of the University of California, San Diego, tested the professionalism of recommendations generated by the ChatGPT chatbot and at the same time evaluated their «humanity».
As source material, the researchers took 195 randomly selected questions from users of the Ask a Doctor (r/AskDocs) online community, a subreddit with an audience of about 452,000 people where doctors answer users' medical questions.
ChatGPT's answers to the same questions, together with the recommendations of real doctors and without any indication of the source, were presented to a panel of medical experts. In 79 percent of cases, the experts preferred the AI's versions.
«There is more detailed and accurate information,» said one of the experiment's participants, San Diego nurse practitioner Jessica Kelly.
The chatbot's recommendations also came across as warmer: they were rated empathetic 9.8 times more often than the doctors' answers. The authors attribute this to the linguistic patterns built into the model.
«Empathy is a complex emotional and cognitive process,» says Ediriweera Desapriya, PhD, of the Faculty of Medicine at the University of British Columbia in Canada. «It means active listening, genuine concern, and the ability to understand and respond to patients' anxieties.»
Nello Cristianini, professor of artificial intelligence at the University of Bath, gives this example in a discussion of the article: «My father was a doctor, and sometimes he got calls in the middle of the night. In the morning I would ask what had happened. He answered that people are afraid of death and sometimes they just need to hear a doctor. I am not sure a chatbot can, or should, ever take on this important role.»
Intelligent Assistant
AI-generated messages were perceived as more informative and empathetic primarily because of their length and detail and their emphatically respectful tone toward the patient.
«Long answers are more popular, which affects rankings,» says Desapriya.
Scientists believe that using ChatGPT as an assistant in a clinical setting is justified. In remote consultations, AI can be entrusted with drafting template answers, which specialists then edit. This would speed up the process, ease doctors' workload, and improve treatment outcomes.
Such a system is already being implemented at the University of California, San Diego School of Medicine. Microsoft Azure's supercomputing infrastructure allows doctors to use ChatGPT to create draft responses when communicating with patients.
Another direction is electronic medical records: OpenAI's generative model GPT-4 is being used to process and transcribe conversations between doctors and patients.
Bias and fabrication of «facts»
ChatGPT draws mostly on open internet sources, while a patient's individual characteristics are known only to the attending physician. Researchers at the Stanford University School of Medicine compared ChatGPT's postoperative care instructions with those given by specialists upon discharge from the hospital.
However, chatbots have their advantages. Generative AI is indispensable when visiting a clinic is impossible. It takes into account recent scientific information, adapts its answers to different levels of literacy, and can continue to learn.
Among the drawbacks is a phenomenon known as «automation bias»: the tendency of people, including specialists, to place excessive trust in AI conclusions when making decisions.
Anthony Cohn, professor of automated reasoning at the University of Leeds, says clinicians should remain vigilant and always check chatbot recommendations.
«You need to be careful with a chatbot. It's just an auxiliary tool for a medical professional,» he warns.
Recently, scientists from Germany and the Netherlands evaluated the accuracy of breast cancer diagnoses made by radiologists using an automated mammogram reader. When the AI wrongly indicated there was no tumor, cancer detection dropped from 82 percent to 45 percent among experienced doctors and from 80 percent to 20 percent among trainees. This is all the more reason to carefully check AI-generated diagnoses and treatment recommendations.

