YouBrush: Leveraging Edge-Based Machine Learning in Oral Care
Location
Hager-Lubbers Exhibition Hall
Description
PURPOSE: A disconnect often exists between how long people believe they brush their teeth and how long they actually do: although the average person claims to brush for 2 minutes, the true average is roughly 47 seconds. By crafting a low-friction, responsive mobile application experience, we sought to improve users' oral care habits. The crux of the application is an audio recognition model that listens to users' oral care routines and drives the responsive, gamified experience. PROCEDURES: Audio samples were gathered from team members and freely available sound libraries. As regions of auditory interest were extracted, Mel-frequency cepstral coefficients (MFCCs) were compared to identify basic differences between brushing sounds, as well as between the different oral regions a subject was actively brushing. The samples were filtered, segmented, hierarchically clustered, and augmented to increase the amount and breadth of training data. Across model training iterations, weaknesses were traced back to individual audio samples or groups of samples to determine which data elements and hyperparameters required further investigation and reinforcement. OUTCOME: The resulting in-use brushing detection model achieves precision above 92% across several auditory environments and enables the YouBrush application to validate users' brushing habits in a frictionless manner, driving greater user interaction, stickiness, and oral care adherence through ease of use. IMPACT: YouBrush brings features previously available only to owners of smart toothbrushes to every user of the YouBrush app. Additionally, the procedures and methods used to prototype and construct the audio recognition model outline a pipeline for developing such models for any number of applications.
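The MFCC features mentioned above can be computed in a few standard steps: frame the audio, window each frame, take the power spectrum, pool it through a triangular mel filterbank, and apply a DCT to the log energies. The following is a minimal numpy sketch of that pipeline; the frame sizes, filter count, and the synthetic sine-wave "clip" are illustrative assumptions, not the project's actual parameters.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters with centers evenly spaced on the mel scale.
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr, n_mfcc=13, n_fft=512, hop=256, n_filters=26):
    # Frame the signal and apply a Hamming window to each frame.
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hamming(n_fft)
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies, then log compression.
    fb = mel_filterbank(n_filters, n_fft, sr)
    log_mel = np.log(power @ fb.T + 1e-10)
    # DCT-II decorrelates the log-mel energies; keep the first n_mfcc.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_filters)))
    return log_mel @ dct.T

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
clip = np.sin(2 * np.pi * 440 * t)  # stand-in for a one-second brushing clip
feats = mfcc(clip, sr)
print(feats.shape)  # one 13-coefficient vector per frame
```

Comparing these per-frame coefficient vectors (e.g. by distance between their means) is one simple way to separate brushing sounds from background audio, and to contrast clips of different brushing regions.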
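The abstract also notes that samples were augmented to increase the amount and breadth of training data. A hedged sketch of common audio augmentations of this kind is below; the specific transforms (noise mixing at a target SNR, time shifting, random gain) and their parameter ranges are illustrative assumptions, not necessarily the transforms the project used.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(clip, snr_db):
    # Mix in white noise at a chosen signal-to-noise ratio (dB).
    sig_power = np.mean(clip ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return clip + rng.normal(0.0, np.sqrt(noise_power), len(clip))

def time_shift(clip, max_shift):
    # Randomly rotate the clip to vary event onset times.
    return np.roll(clip, rng.integers(-max_shift, max_shift + 1))

def random_gain(clip, low_db=-6.0, high_db=6.0):
    # Random volume scaling, drawn uniformly in decibels.
    return clip * 10 ** (rng.uniform(low_db, high_db) / 20)

sr = 16000
clip = np.sin(2 * np.pi * 300 * np.linspace(0, 1, sr, endpoint=False))
augmented = [random_gain(time_shift(add_noise(clip, snr_db=10), max_shift=sr // 10))
             for _ in range(4)]
print(len(augmented), augmented[0].shape)
```

Each transform preserves the clip's label while changing its acoustic surface, which is what lets a small recorded corpus cover a broader range of auditory environments.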