New Delhi, Jan 16 (IANS) Tech giant Google has developed a novel chatbot that can converse with patients and perform diagnostic reasoning on par with human doctors.
The Articulate Medical Intelligence Explorer (AMIE), a conversational diagnostic research AI system, is based on a large language model (LLM) developed by Google, and can handle a wide range of disease conditions, specialties and scenarios.
“We trained and evaluated AMIE along many dimensions that reflect quality in real-world clinical consultations from the perspective of both clinicians and patients,” Alan Karthikesalingam and Vivek Natarajan, Research Leads, Google Research, wrote in a blog post.
“The physician-patient conversation is a cornerstone of medicine, in which skilled and intentional communication drives diagnosis, management, empathy and trust. AI systems capable of such diagnostic dialogues could increase availability, accessibility, quality and consistency of care by being useful conversational partners to clinicians and patients alike. But approximating clinicians’ considerable expertise is a significant challenge,” they added.
AMIE was trained on real-world datasets comprising medical reasoning, medical summarisation and clinical conversations.
To train the chatbot, the team developed a novel self-play-based simulated diagnostic dialogue environment with automated feedback mechanisms in a virtual care setting. They also employed an inference-time chain-of-reasoning strategy to improve AMIE's diagnostic accuracy and conversation quality.
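The self-play loop described above can be pictured roughly as follows. This is a toy sketch under loose assumptions, not Google's actual system: the "patient" and "doctor" agents and the automated critic are stand-ins for the LLM agents the researchers describe, and the condition list is invented for illustration.

```python
# Hypothetical sketch of a self-play diagnostic dialogue with an
# automated feedback mechanism. All agent internals are toy stand-ins.

CONDITIONS = {
    "migraine": {"headache", "light sensitivity", "nausea"},
    "flu": {"fever", "cough", "fatigue"},
}

def patient_turn(condition, question):
    """Simulated patient agent: confirms a symptom if asked about it."""
    return question in CONDITIONS[condition]

def doctor_diagnose(observed):
    """Toy doctor agent: picks the condition whose symptoms best match."""
    return max(CONDITIONS, key=lambda c: len(CONDITIONS[c] & observed))

def self_play_episode(condition, questions):
    """One simulated consultation; returns (diagnosis, critic feedback).
    The automated critic scores the dialogue, providing the feedback
    signal that would drive further training in a real self-play setup."""
    observed = {q for q in questions if patient_turn(condition, q)}
    diagnosis = doctor_diagnose(observed)
    feedback = 1.0 if diagnosis == condition else 0.0
    return diagnosis, feedback

dx, score = self_play_episode("flu", ["fever", "headache", "cough"])
```

In the real system the feedback signal would come from model-based critics evaluating full dialogues, and the doctor agent would be the LLM itself rather than a symptom matcher.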
AMIE's performance was tested in consultations with simulated patients (played by trained actors) and compared with consultations conducted by 20 board-certified primary care physicians (PCPs).
“AMIE and PCPs were assessed from the perspectives of both specialist attending physicians and our simulated patients in a randomised, blinded crossover study that included 149 case scenarios from OSCE providers in Canada, the UK, and India in a diverse range of specialties and diseases,” the researchers said.
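A randomised crossover design of the kind described means each case scenario is handled by both arms, with assignment order randomised and graders blinded to which arm produced which transcript. The sketch below illustrates only the assignment step; the function and structure are illustrative assumptions, not the study's actual protocol code.

```python
import random

# Hypothetical sketch of randomised crossover assignment: every case
# scenario is consulted by both AMIE and a PCP, in randomised order.

def crossover_assign(case_ids, seed=42):
    """Return a per-case ordering of the two study arms."""
    rng = random.Random(seed)
    plan = {}
    for case in case_ids:
        first = rng.choice(["AMIE", "PCP"])
        second = "PCP" if first == "AMIE" else "AMIE"
        plan[case] = (first, second)
    return plan

# 149 case scenarios, as in the study described above.
plan = crossover_assign(range(149))
```

Blinding would then be enforced downstream by stripping arm labels from transcripts before specialist physicians grade them.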
Objective structured clinical examination (OSCE) is a practical assessment commonly used in the real world to examine clinicians’ skills and competencies in a standardised and objective way.
AMIE performed simulated diagnostic conversations at least as well as PCPs when both were evaluated along multiple clinically meaningful measures of consultation quality, including history-taking, diagnostic accuracy, clinical management, clinical communication skills, relationship fostering and empathy.
AMIE showed greater diagnostic accuracy and superior performance on 28 of 32 measures from the perspective of specialist physicians, and on 24 of 26 from the perspective of patient actors.
“Our research has several limitations and should be interpreted with appropriate caution. Clinicians were limited to unfamiliar synchronous text-chat which permits large-scale LLM-patient interactions but is not representative of usual clinical practice. While further research is required before AMIE could be translated to real-world settings, the results represent a milestone towards conversational diagnostic AI,” the researchers wrote in the paper, published on the arXiv preprint server.