Using Synchronized Audio Mapping to Track and Predict Velar and Pharyngeal Wall Locations during Dynamic MRI Sequences

Abstract

Purpose: This study demonstrates a novel computational modeling technique to 1) track velar and pharyngeal wall movement from dynamic MRI data and 2) examine the utility of recorded participant audio signals for estimating velar and pharyngeal wall movement during a speech task. A series of dynamic MRI data and acoustic features were used to develop and inform a Hidden Markov Model (HMM) driven by Mel-Frequency Cepstral Coefficient (MFCC) features.

Methods: One adult male subject was imaged using a fast-gradient echo Fast Low Angle Shot (FLASH) multi-shot spiral technique to acquire the midsagittal image plane at 15.8 frames per second (fps) during production of "ansa." The nasal surface of the velum and the posterior pharyngeal wall were identified and marked using a novel pixel selection method. The error rate was measured by calculating the accumulation error and through visual inspection.

Results: The proposed model traced and animated the dynamic articulators in real time during the speech process, with an overall accuracy of 81% at a one-pixel threshold. The predicted markers (pixels) segmented the structures of interest in the velopharyngeal area and successfully predicted the velar and pharyngeal configurations when provided with the audio signal.

Conclusion: This study demonstrates a novel approach to tracking dynamic velopharyngeal movements. The potential application of a predictive model that relies on audio signals to detect the presence of a velopharyngeal gap is discussed.
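To make the audio-to-articulator pipeline concrete, below is a minimal sketch, not the authors' implementation, of the two components the abstract names: MFCC feature extraction from the recorded audio and a Gaussian HMM whose hidden states stand in for velopharyngeal configurations observed in the MRI frames. The library choices (librosa, hmmlearn), the filename, the 13-coefficient setting, and the 5-state count are all illustrative assumptions; only the 15.8 fps frame rate comes from the study.

```python
# Sketch of the abstract's pipeline: MFCC features + Gaussian HMM.
# Libraries and parameter values are assumptions, not the paper's settings.
import librosa
from hmmlearn import hmm

# Load the speech recording (hypothetical filename and sample rate).
audio, sr = librosa.load("ansa_utterance.wav", sr=16000)

# The MRI acquisition runs at 15.8 fps (one frame every ~63 ms), so choose
# a hop length that yields roughly one MFCC vector per MRI frame.
hop = int(sr / 15.8)

# 13 MFCCs per frame is a common default; the study's exact configuration
# may differ.
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13, hop_length=hop)
features = mfcc.T  # shape: (n_frames, 13), one row per MRI frame

# Fit a Gaussian HMM over the MFCC sequence. Five hidden states is an
# assumed stand-in for distinct velar/pharyngeal configurations during
# "ansa" (e.g., oral, nasalized, and closure phases).
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=100)
model.fit(features)

# Decode the most likely state sequence; each state would then be mapped
# back to the marker (pixel) coordinates traced on the matching MRI frames.
states = model.predict(features)
print(states[:20])
```

Aligning the MFCC hop length to the MRI frame rate is the key design choice in this sketch: it produces one acoustic observation per image frame, which is what allows a decoded state sequence to be paired frame-by-frame with the tracked velar and pharyngeal wall markers.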
