Web-Based, Deep Learning Assisted Medical Image Tagging Tool

Lead Author Type

CIS Masters Student


Dr. Jonathan Engelsma; jonathan.engelsma@gvsu.edu

One of the biggest challenges in building supervised machine learning models is obtaining a suitable dataset along with its annotations. This is especially true in the medical field, where the data produced is intended to be consumed by a human being rather than a machine. More often than not, the data is available only in unlabeled form, and data scientists are burdened with creating the tags manually, a tedious and time-consuming task.

This project aims to speed up the manual annotation of regions of interest (ROIs) in images from computed tomography (CT) scans by leveraging fully convolutional deep networks and web technologies. A partially trained deep learning model suggests ROIs to the user, who evaluates and adjusts them. The corrected images can then be fed back to the model as ground truth to continue training. The end result of this process is a tagged dataset and a fully trained machine learning model for predicting ROIs in CT scans. In an experiment performed with the help of a medically trained volunteer, tagging images aided by a model trained on 2.3% of the dataset resulted in a 7x speedup over the fully manual process.
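The iterative human-in-the-loop workflow described above can be sketched as follows. This is a minimal illustration, not the project's implementation: `SegmentationModel`, `user_review`, and `tag_dataset` are hypothetical names, and the model and UI steps are placeholders standing in for a fully convolutional network and the web-based annotation interface.

```python
# Hypothetical sketch of the human-in-the-loop tagging loop:
# the model suggests ROIs, the annotator corrects them, and the
# corrections flow back into the training set as ground truth.

class SegmentationModel:
    """Stand-in for a partially trained fully convolutional network."""

    def __init__(self):
        self.training_set = []  # accumulated (image, mask) ground-truth pairs

    def predict_roi(self, image):
        # Placeholder: a real model would output a predicted ROI mask.
        return {"image": image, "mask": f"predicted-mask-for-{image}"}

    def train(self, corrected_pairs):
        # Continue training on the user-corrected ground truth.
        self.training_set.extend(corrected_pairs)


def user_review(suggestion):
    # Placeholder for the web UI step where the annotator
    # evaluates and adjusts the suggested ROI.
    return (suggestion["image"], suggestion["mask"] + "-corrected")


def tag_dataset(images, model, batch_size=2):
    """Tag images in batches, feeding each batch of corrections
    back into the model before predicting the next batch."""
    tagged = []
    for start in range(0, len(images), batch_size):
        batch = images[start:start + batch_size]
        suggestions = [model.predict_roi(img) for img in batch]
        corrected = [user_review(s) for s in suggestions]
        model.train(corrected)  # corrections become new ground truth
        tagged.extend(corrected)
    return tagged
```

As the training set grows, the model's suggestions should require fewer adjustments, which is what produces the reported speedup over purely manual tagging.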
