Date Approved

5-12-2025

Graduate Degree Type

Thesis

Degree Name

Applied Computer Science (M.S.)

Degree Program

School of Computing and Information Systems

First Advisor

Jonathan Leidig

Second Advisor

Suhila Sawesi

Third Advisor

Guenter Tusch

Academic Year

2024/2025

Abstract

Patients frequently access medical test results through electronic health record (EHR) portals, often before receiving input from healthcare providers. These results typically lack interpretation or are presented with complex clinical terminology, leading to confusion, anxiety, and misinformed decision-making. This thesis examines using large language models (LLMs), such as ChatGPT, to automatically generate test result summaries in detailed and simplified formats. A Python-based middleware was developed to interface between OpenEMR and OpenAI’s API, enabling automated summary generation. Summaries included a clinically detailed version and a simplified version written at a third-grade reading level. Two surveys were conducted: one with undergraduate and graduate students to evaluate comprehension and preferences, and another with licensed medical professionals to assess accuracy, clinical usefulness, and adoption potential. Statistical analysis showed improved understanding when results were summarized, particularly in simplified form. Most students preferred receiving both summary types. Medical professionals found the summaries helpful but cited missing values and lack of contextual nuance as areas needing improvement. They supported adoption provided that summaries are reviewed by providers and paired with patient support mechanisms. These findings suggest that LLMs can improve patient comprehension and streamline clinical workflows. With appropriate safeguards and oversight, LLM-generated summaries have the potential to promote equitable access to health information and support more effective communication.
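
The abstract describes a Python middleware that retrieves results from OpenEMR and requests both a detailed and a simplified summary from OpenAI's API. The sketch below illustrates that flow under stated assumptions: the OpenEMR base URL, FHIR Observation endpoint, model name, and prompt wording are illustrative placeholders, not the thesis's actual implementation.

```python
# Minimal sketch of the middleware flow: fetch a lab result from OpenEMR
# and request two summaries (clinically detailed, and simplified at a
# third-grade reading level) from an OpenAI chat model.
# Endpoint paths, model name, and prompts are assumptions for illustration.
import requests
from openai import OpenAI

OPENEMR_BASE = "https://openemr.example.org/apis/default/fhir"  # assumed base URL
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def fetch_observation(observation_id: str, token: str) -> dict:
    """Retrieve a single lab Observation resource from an OpenEMR FHIR endpoint (assumed path)."""
    resp = requests.get(
        f"{OPENEMR_BASE}/Observation/{observation_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def summarize(observation: dict, simplified: bool) -> str:
    """Generate either a clinically detailed or a third-grade-level summary of one result."""
    style = (
        "at a third-grade reading level, avoiding medical jargon"
        if simplified
        else "in clinically detailed language for an informed reader"
    )
    prompt = (
        f"Summarize the following lab test result {style}. "
        "Do not omit reported values or reference ranges.\n\n"
        f"{observation}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the thesis reports using ChatGPT-family models
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    obs = fetch_observation("example-observation-id", token="...")
    print(summarize(obs, simplified=False))
    print(summarize(obs, simplified=True))
```

In the workflow the abstract describes, output like this would be reviewed by a provider before release to the patient rather than delivered automatically.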
