479 research outputs found
Towards a new method for kinematic quantification of bradykinesia in patients with Parkinson's disease using triaxial accelerometry
We propose a new kinematic analysis procedure using triaxial accelerometers mounted on the wrist for the assessment of bradykinesia in patients with Parkinson's disease (PD). The deviation of the magnitude of the accelerometer vector signal from the magnitude of the gravitational acceleration is taken as a measure of the effective magnitude of the acceleration at the position of the triaxial accelerometer. For low accelerations, two of the three angles describing the orientation of the lower arm can be derived from the accelerometer signal.
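A minimal sketch of this idea in Python, assuming SI units and an illustrative sensor-axis convention; the function names and the pitch/roll formulas are our own reading of the quasi-static case, not the authors' definitions:

```python
import numpy as np

G = 9.81  # magnitude of gravitational acceleration (m/s^2), assumed constant

def effective_acceleration(ax, ay, az):
    """Deviation of the triaxial accelerometer vector magnitude from |g|,
    used as a proxy for the effective acceleration at the sensor position."""
    magnitude = np.sqrt(ax**2 + ay**2 + az**2)
    return magnitude - G

def lower_arm_angles(ax, ay, az):
    """Under a low-acceleration (quasi-static) assumption, two orientation
    angles of the lower arm follow from the gravity direction expressed in
    sensor axes. The axis convention here is an assumption for illustration."""
    pitch = np.arctan2(ax, np.sqrt(ay**2 + az**2))
    roll = np.arctan2(ay, az)
    return pitch, roll
```

The third orientation angle (rotation about the gravity vector) cannot be recovered from an accelerometer alone, which is consistent with only two of the three angles being derivable from the signal.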
SPES/SCOPA and MDS-UPDRS: Formulas for converting scores of two motor scales in Parkinson’s disease
Background: Motor impairment in Parkinson's disease (PD) can be evaluated with the Short Parkinson's Evaluation Scale/Scales for Outcomes in Parkinson's disease (SPES/SCOPA) and the Movement Disorder Society-Unified Parkinson's Disease Rating Scale (MDS-UPDRS). The aim of this study was to determine equation models for the conversion of scores from one scale to the other. Methods: 148 PD patients were evaluated with the SPES/SCOPA-motor and the MDS-UPDRS motor examination. Linear regression was used to develop equation models. Results: Scores on both scales were highly correlated (r = 0.88). Linear regression revealed the following equation models (explained variance: 78%): (1) MDS-UPDRS motor examination score = 11.8 + 2.4 × SPES/SCOPA-motor score; (2) SPES/SCOPA-motor score = −0.5 + 0.3 × MDS-UPDRS motor examination score. Conclusion: With the equation models identified in this study, scores from the SPES/SCOPA-motor can be converted to scores from the MDS-UPDRS motor examination and vice versa.
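The published conversion equations translate directly into code. A minimal sketch, assuming the point estimates above; the function names are ours, and the formulas are group-level regression estimates rather than exact individual conversions:

```python
def spes_scopa_to_mds_updrs(spes_scopa_motor):
    """MDS-UPDRS motor examination score from a SPES/SCOPA-motor score
    (equation 1 from the abstract)."""
    return 11.8 + 2.4 * spes_scopa_motor

def mds_updrs_to_spes_scopa(mds_updrs_motor):
    """SPES/SCOPA-motor score from an MDS-UPDRS motor examination score
    (equation 2 from the abstract)."""
    return -0.5 + 0.3 * mds_updrs_motor
```

For example, a SPES/SCOPA-motor score of 10 maps to 11.8 + 2.4 × 10 = 35.8 on the MDS-UPDRS motor examination.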
Does Deep Brain Stimulation of the Subthalamic Nucleus Prolong Survival in Parkinson's Disease?
Neurological Motor Disorder
Designing interpretable deep learning applications for functional genomics: a quantitative analysis
Deep learning applications have had a profound impact on many scientific fields, including functional genomics. Deep learning models can learn complex interactions between and within omics data; however, interpreting and explaining these models can be challenging. Interpretability is essential not only to help progress our understanding of the biological mechanisms underlying traits and diseases but also for establishing trust in these models' efficacy for healthcare applications. Recognizing this importance, recent years have seen the development of numerous diverse interpretability strategies, making it increasingly difficult to navigate the field. In this review, we present a quantitative analysis of the challenges arising when designing interpretable deep learning solutions in functional genomics. We explore design choices related to the characteristics of genomics data, the neural network architectures applied, and strategies for interpretation. By quantifying the current state of the field with a predefined set of criteria, we find the most frequent solutions, highlight exceptional examples, and identify unexplored opportunities for developing interpretable deep learning models in genomics.
Loss of integrity and atrophy in cingulate structural covariance networks in Parkinson's disease
Neuro Imaging Research
Altered Whole-Brain and Network-Based Functional Connectivity in Parkinson's Disease
Neuro Imaging Research
Optical Hand Tracking: A Novel Technique for the Assessment of Bradykinesia in Parkinson's Disease
Neurological Motor Disorder