Explaining Patient Data Towards Improving Patient Understanding and Medical Efficiency
Location
Hager-Lubbers Exhibition Hall
Description
PURPOSE: Patients often receive their lab results via electronic health portals before any explanation from a provider. Even when summarized, these results are frequently written in technical language that can be difficult for non-medical readers to understand. This study examines the use of large language models (LLMs), including ChatGPT, to automatically generate test result summaries, with the goal of improving patient comprehension and provider efficiency.

SUBJECTS: Two groups were surveyed: undergraduate and graduate students from various majors, and medical professionals from multiple specialties.

METHODS AND MATERIALS: Participants reviewed original test results, a clinically detailed LLM-generated summary, and a simplified summary. Students answered comprehension questions, and medical professionals completed a Likert-scale evaluation assessing accuracy, clinical depth, and adoption potential.

ANALYSES: Quantitative analysis of student response accuracy and summary preference; qualitative feedback and Likert-scale averages for the professional evaluations.

RESULTS: The student survey indicates high comprehension of both the detailed and the simplified summaries, with most participants preferring to receive both formats. The medical professional survey indicates support for adopting LLMs in practice, particularly when combined with provider oversight and patient access. Although the summaries were not fully accurate because some lab data were missing, they were generally reliable and considered clinically useful when carefully reviewed.

CONCLUSIONS: LLMs show promise in bridging the communication gap between patients and providers. They may improve comprehension and workflow if summaries are paired with patient support options and reviewed by medical professionals before delivery.