29 research outputs found

    The mobile sleep medicine model in neurologic practice: Rationale and application

    BACKGROUND: Undiagnosed obstructive sleep apnea (OSA) is prevalent in neurological practice and contributes significantly to morbidity and mortality. OSA is prevalent among US adults and causes poor-quality sleep and significant neurocognitive, cardiovascular, and cerebrovascular impairments. Timely treatment of OSA reduces cardio-cerebrovascular risks and improves quality of life. However, despite its clinical significance, most of the US population has limited systematic access to sleep medicine care.

    FOCUS: We discuss the importance of systematic screening, testing, and best-practice management of OSA and hypoventilation/hypoxemia syndromes (HHS) in patients with stroke, neurocognitive impairment, and neuromuscular conditions. This review introduces and describes a novel integrated Mobile Sleep Medicine (iMSM) care model and provides the rationale for using the iMSM in general neurological practice to assist with systematic screening, testing, and best-practice management of OSA, HHS, and potentially other sleep conditions.

    KEY POINTS: The iMSM is an innovative, patient-centered, clinical-outcome-based program that uses a Mobile Sleep Medicine Unit (a sleep lab on wheels) designed to improve access to OSA management and sleep care at all levels of the health care system. The iMSM protocol comprises three levels of operations to provide effective and efficient OSA screening, timely testing/treatment plans, and coordination of further sleep medicine follow-up. Because the iMSM care model prioritizes effective, efficient, and patient-centered sleep medicine care, all parties and segments of care that receive and provide clinical sleep medicine services may benefit from adopting this innovative approach.

    Machine-learning assisted swallowing assessment: a deep learning-based quality improvement tool to screen for post-stroke dysphagia

    Introduction: Post-stroke dysphagia is common and associated with significant morbidity and mortality, rendering bedside screening of significant clinical importance. Using voice as a biomarker coupled with deep learning has the potential to improve patient access to screening and to mitigate the subjectivity associated with detecting voice change, a component of several validated screening protocols.

    Methods: In this single-center study, we developed a proof-of-concept model for automated dysphagia screening and evaluated its performance on training and testing cohorts. Patients admitted to a comprehensive stroke center were enrolled on a rolling basis; participants were primary English speakers able to follow commands without significant aphasia. The primary outcome was classification as a pass or fail equivalent, using a dysphagia screening test as the label. Voice data were recorded from patients speaking a standardized set of vowels, words, and sentences from the National Institutes of Health Stroke Scale. Seventy patients were recruited and 68 were included in the analysis, with 40 in the training and 28 in the testing cohort. Speech was segmented into 1,579 audio clips, from which 6,655 Mel-spectrogram images were computed and used as inputs for deep-learning models (DenseNet and ConvNeXt, separately and together). Clip-level and participant-level swallowing status predictions were obtained through a voting method.

    Results: The models demonstrated clip-level dysphagia screening sensitivity of 71% and specificity of 77% (F1 = 0.73, AUC = 0.80 [95% CI: 0.78–0.82]). At the participant level, sensitivity and specificity were 89% and 79%, respectively (F1 = 0.81, AUC = 0.91 [95% CI: 0.77–1.05]).

    Discussion: This study is the first to demonstrate the feasibility of applying deep learning to classify vocalizations to detect post-stroke dysphagia. Our findings suggest potential for enhancing dysphagia screening in clinical settings.
https://github.com/UofTNeurology/masa-open-source
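    The abstract's clip-to-participant aggregation via "a voting method" can be sketched as a simple majority vote. The function below is a hypothetical illustration: the name, the label strings, and the tie-breaking rule are assumptions, not taken from the study's released code.

    ```python
    from collections import Counter

    def participant_prediction(clip_labels):
        """Aggregate per-clip screening labels ("pass"/"fail") for one
        participant into a single participant-level label by majority vote.

        Ties resolve to "fail" so borderline participants would be flagged
        for a full bedside assessment (an assumed, conservative choice)."""
        counts = Counter(clip_labels)
        return "fail" if counts["fail"] >= counts["pass"] else "pass"
    ```

    For example, a participant with clips labeled `["pass", "fail", "fail"]` would screen as "fail" under this scheme, while `["pass", "pass", "fail"]` would screen as "pass".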

    State of the Art in Ray Tracing Animated Scenes

    Ray tracing has long been a method of choice for off-line rendering, but traditionally was too slow for interactive use. With faster hardware and algorithmic improvements this has recently changed, and real-time ray tracing is finally within reach. However, real-time capability also opens up new problems that do not exist in an off-line environment. In particular, real-time ray tracing offers the opportunity to interactively ray trace moving/animated scene content. This presents a challenge to the data structures that have been developed for ray tracing over the past few decades. Spatial data structures crucial for fast ray tracing must be rebuilt or updated as the scene changes, and this can become a bottleneck for the speed of ray tracing. This bottleneck has received much recent attention from researchers, resulting in a multitude of different algorithms, data structures, and strategies for handling animated scenes. The effectiveness of techniques for ray tracing dynamic scenes varies dramatically depending on details such as scene complexity, model structure, type of motion, and the coherency of the rays. Consequently, no approach is so far best in all cases, and determining the best technique for a particular problem can be a challenge. In this STAR, we survey the different approaches to ray tracing animated scenes, discussing their strengths and weaknesses and their relationship to other approaches. The overall goal is to help the reader choose the best approach for a given situation, and to expose promising areas where there is potential for algorithmic improvement.
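    One common strategy in this literature for updating a spatial data structure as the scene changes is refitting: keep the bounding volume hierarchy's topology fixed and recompute its bounding boxes bottom-up each frame. A minimal sketch, with hypothetical node and primitive classes (not from the survey itself):

    ```python
    class Sphere:
        """Hypothetical leaf primitive that reports an axis-aligned box."""
        def __init__(self, center, radius):
            self.center, self.radius = center, radius

        def aabb(self):
            return (tuple(c - self.radius for c in self.center),
                    tuple(c + self.radius for c in self.center))

    class BVHNode:
        def __init__(self, left=None, right=None, primitive=None):
            self.left, self.right = left, right
            self.primitive = primitive   # set only on leaves
            self.box = None              # (min_xyz, max_xyz)

    def refit(node):
        """Recompute AABBs bottom-up after primitives have moved.

        The tree topology is unchanged; only the boxes are updated. This is
        linear in the number of nodes per frame, but traversal performance
        can degrade if motion makes the fixed topology a poor partition."""
        if node.primitive is not None:   # leaf: take the primitive's box
            node.box = node.primitive.aabb()
        else:                            # inner node: union of child boxes
            refit(node.left)
            refit(node.right)
            (lmin, lmax), (rmin, rmax) = node.left.box, node.right.box
            node.box = (tuple(min(a, b) for a, b in zip(lmin, rmin)),
                        tuple(max(a, b) for a, b in zip(lmax, rmax)))
        return node.box
    ```

    This trade-off, cheap per-frame updates versus gradually degrading tree quality under large deformations, is exactly the kind of scene-dependent behavior the survey discusses when comparing rebuild, refit, and hybrid strategies.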

    A New Dataset for Facial Motion Analysis in Individuals With Neurological Disorders

    We present the first public dataset with videos of oro-facial gestures performed by individuals with oro-facial impairment due to neurological disorders, such as amyotrophic lateral sclerosis (ALS) and stroke. Perceptual clinical scores from trained clinicians are provided as metadata. Manual annotation of facial landmarks is also provided for a subset of more than 3,300 frames. Through extensive experiments with multiple facial landmark detection algorithms, including state-of-the-art convolutional neural network (CNN) models, we demonstrate a bias in the landmark localization accuracy of pre-trained face alignment approaches in our participant groups: the pre-trained models produced higher errors in the two clinical groups than in age-matched healthy control subjects. We also investigated how this bias changes when the existing models are fine-tuned using data from the target population. The release of this dataset aims to propel the development of face alignment algorithms robust to the presence of oro-facial impairment, support the automatic analysis and recognition of oro-facial gestures, and enhance the automatic identification of neurological diseases as well as the estimation of disease severity from videos and images.
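    Landmark localization accuracy in this literature is typically summarized as a normalized mean error (NME) per image, which is what makes per-group bias comparisons like the one above possible. A minimal sketch follows; the normalization by a reference distance (commonly inter-ocular distance) is an assumption here, since the abstract does not specify the paper's exact metric.

    ```python
    import math

    def nme(pred, truth, norm_dist):
        """Normalized mean error over a set of 2-D landmarks.

        pred, truth: equal-length lists of (x, y) points;
        norm_dist: a scale factor (e.g. inter-ocular distance) that makes
        errors comparable across face sizes and camera distances."""
        errs = [math.dist(p, t) for p, t in zip(pred, truth)]
        return sum(errs) / (len(errs) * norm_dist)
    ```

    Averaging this quantity separately over the clinical and control groups is one straightforward way to quantify the kind of accuracy gap the dataset is meant to expose.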

    Obstructive Sleep Apnea and Incident Cancer: A Large Retrospective Multicenter Clinical Cohort Study

    Background: To examine the association of the severity of obstructive sleep apnea (OSA) and nocturnal hypoxemia with incident cancer.

    Methods: This was a multicenter retrospective clinical cohort study using linked clinical and provincial health administrative data on consecutive adults who underwent a diagnostic sleep study between 1994 and 2017 at four academic hospitals in Canada and who were free of cancer at baseline. Cancer status was derived from the Ontario Cancer Registry. Cox cause-specific regressions were used to address the objective and to calculate the 10-year absolute risk difference (ARD) in the marginal probability of incident cancer and the number needed to harm (NNH).

    Results: Of 33,997 individuals considered, 33,711 with non-missing OSA severity were included: median age, 50 years; 58% male; 23% with severe OSA (apnea-hypopnea index >30). Of the 18,458 individuals with information on sleep time spent with oxygen saturation (SaO2) <90%, 5% spent >30% of sleep with SaO2 <90% (severe nocturnal hypoxemia). Over a median of 7 years, 2,498 of 33,711 individuals (7%) developed cancer, an incidence rate of 10.3 (10.0–10.8) per 1,000 person-years. Controlling for confounders, severe OSA was associated with a 15% increased hazard of developing cancer compared with no OSA (HR = 1.15, 1.02–1.30; ARD = 1.28%, 0.20–2.37; NNH = 78). Severe hypoxemia was associated with an approximately 30% increased hazard (HR = 1.32, 1.08–1.61; ARD = 2.38%, 0.47–4.31; NNH = 42).

    Conclusions: In a large cohort of individuals with suspected OSA who were free of cancer at baseline, the severity of OSA and nocturnal hypoxemia were independently associated with incident cancer.

    Impact: These findings suggest the need for more targeted cancer-risk awareness in individuals with OSA.
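    The number needed to harm reported in this abstract is the reciprocal of the absolute risk difference, which can be verified directly from the quoted values (the rounding convention is an assumption; the study's exact computation is not given here):

    ```python
    def nnh(ard):
        """Number needed to harm from an absolute risk difference.

        ard is given as a fraction, e.g. 0.0128 for 1.28%. One additional
        case is expected per nnh(ard) exposed individuals."""
        return round(1.0 / ard)

    # Severe OSA: ARD = 1.28%  -> nnh(0.0128) == 78, matching the abstract.
    # Severe nocturnal hypoxemia: ARD = 2.38% -> nnh(0.0238) == 42.
    ```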

    Data_Sheet_1_Machine-learning assisted swallowing assessment: a deep learning-based quality improvement tool to screen for post-stroke dysphagia.docx
