Big Data in Parkinson’s Disease: Using Smartphones to Remotely Detect Longitudinal Disease Phenotypes
Objective: To better understand the longitudinal characteristics of Parkinson's disease (PD) through the analysis of finger tapping and memory tests collected remotely using smartphones. Approach: Using a large cohort (312 PD subjects and 236 controls) of participants in the mPower study, we extract clinically validated features from a finger tapping and memory test to monitor the longitudinal behaviour of study participants. We investigate any discrepancy in learning rates associated with motor and non-motor tasks between PD subjects and healthy controls. The ability of these features to predict self-assigned severity measures is assessed whilst simultaneously inspecting the severity scoring system for floor-ceiling effects. Finally, we study the relationship between motor and non-motor longitudinal behaviour to determine if separate aspects of the disease are dependent on one another. Main results: We find that the test performances of the most severe subjects show significant correlations with self-assigned severity measures. Interestingly, less severe subjects do not show significant correlations, which is shown to be a consequence of floor-ceiling effects within the mPower self-reporting severity system. We find that motor performance after practice is a better predictor of severity than baseline performance, suggesting that starting performance at a new motor task is less representative of disease severity than the performance after the test has been learnt. We find PD subjects show significant impairments in motor ability as assessed through the alternating finger tapping (AFT) test in both the short- and long-term analyses. In the AFT and memory tests we demonstrate that PD subjects show a larger degree of longitudinal performance variability, in addition to requiring more instances of a test to reach a steady-state performance than healthy subjects.
Significance: Our findings pave the way for objective assessment and quantification of longitudinal learning rates in PD. This can be particularly useful for symptom monitoring and assessing medication response. This study addresses some of the major challenges associated with self-assessed severity labels by designing and validating features extracted from big datasets in PD, which could help identify digital biomarkers capable of providing measures of disease severity outside of a clinical environment.
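The kind of tapping-derived features described above can be illustrated with a minimal sketch. The function, feature names, and example timestamps below are hypothetical and do not reproduce the validated mPower feature definitions; they only show the general idea of summarising inter-tap intervals from a remotely collected tapping test.

```python
# Hypothetical sketch of finger-tapping feature extraction:
# summarise inter-tap intervals (ITIs) from a sorted list of tap
# timestamps. Not the actual mPower feature set.
from statistics import mean, stdev

def tapping_features(tap_times):
    """Return tapping rate and ITI statistics from tap timestamps (seconds)."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    duration = tap_times[-1] - tap_times[0]
    return {
        "n_taps": len(tap_times),
        "tap_rate_hz": (len(tap_times) - 1) / duration,   # taps per second
        "iti_mean_s": mean(intervals),                    # average gap
        "iti_sd_s": stdev(intervals) if len(intervals) > 1 else 0.0,
    }

# Illustrative timestamps from one short tapping session
feats = tapping_features([0.00, 0.21, 0.39, 0.62, 0.80, 1.04])
```

Greater ITI variability over repeated sessions would correspond to the larger longitudinal performance variability reported for PD subjects.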
EventNet: Detecting Events in EEG
Neurologists are often looking for various "events of interest" when
analyzing EEG. To support them in this task, various machine-learning-based
algorithms have been developed. Most of these algorithms treat the problem as
classification, thereby independently processing signal segments and ignoring
temporal dependencies inherent to events of varying duration. At inference
time, the predicted labels for each segment then have to be post-processed to
detect the actual events. We propose an end-to-end event detection approach
(EventNet), based on deep learning, that directly works with events as learning
targets, stepping away from ad-hoc postprocessing schemes to turn model outputs
into events. We compare EventNet with a state-of-the-art approach for artefact
and epileptic seizure detection, two event types with highly variable
durations. EventNet shows improved performance in detecting both event types.
These results show the power of treating events as direct learning targets,
instead of using ad-hoc postprocessing to obtain them. Our event detection
framework can easily be extended to other event detection problems in signal
processing, since the deep learning backbone does not depend on any
task-specific features.
Comment: This work has been submitted to the IEEE for possible publication.
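The ad-hoc post-processing step that EventNet is designed to avoid can be sketched as follows. This is an illustrative example of the baseline segment-wise approach, not the EventNet architecture: consecutive positive segment labels are merged into (start, end) events after inference.

```python
# Illustrative sketch of ad-hoc post-processing for segment-wise
# EEG classifiers: merge runs of positive per-segment labels into
# (start_s, end_s) events. EventNet sidesteps this step by treating
# events themselves as the learning targets.
def labels_to_events(labels, segment_len_s=1.0):
    """Merge runs of positive segment labels into (start_s, end_s) events."""
    events, start = [], None
    for i, lab in enumerate(labels):
        if lab and start is None:
            start = i                      # event opens here
        elif not lab and start is not None:
            events.append((start * segment_len_s, i * segment_len_s))
            start = None                   # event closes here
    if start is not None:                  # event runs to the end
        events.append((start * segment_len_s, len(labels) * segment_len_s))
    return events

# e.g. one long event over segments 2-4 and a short one at segment 7
print(labels_to_events([0, 0, 1, 1, 1, 0, 0, 1]))  # [(2.0, 5.0), (7.0, 8.0)]
```

For events of highly variable duration, such as artefacts and seizures, the merging thresholds in schemes like this become awkward to tune, which motivates predicting events directly.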
Increasing Performance And Sample Efficiency With Model-agnostic Interactive Feature Attributions
Model-agnostic feature attributions can provide local insights in complex ML
models. If the explanation is correct, a domain expert can validate and trust
the model's decision. However, if it contradicts the expert's knowledge,
related work only corrects irrelevant features to improve the model. To allow
for unlimited interaction, in this paper we provide model-agnostic
implementations for two popular explanation methods (Occlusion and Shapley
values) to enforce entirely different attributions in the complex model. For a
particular set of samples, we use the corrected feature attributions to
generate extra local data, which is used to retrain the model to have the right
explanation for the samples. Through simulated and real data experiments on a
variety of models we show how our proposed approach can significantly improve
the model's performance only by augmenting its training dataset based on
corrected explanations. Adding our interactive explanations to active learning
settings increases the sample efficiency significantly and outperforms existing
explanatory interactive strategies. Additionally, we explore how a domain
expert can provide feature attributions that are sufficiently correct to
improve the model.
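One of the two explanation methods mentioned above, Occlusion, can be sketched in a few lines. This is a minimal illustration of the general occlusion idea, not the paper's implementation: the attribution of a feature is the drop in model output when that feature is replaced by a baseline value.

```python
# Minimal sketch of Occlusion-style model-agnostic attribution:
# attribution of feature i = f(x) - f(x with feature i set to its
# baseline value). Names and the toy model are illustrative only.
def occlusion_attributions(predict, x, baseline):
    """Per-feature attributions for a single sample x."""
    full = predict(x)
    attribs = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline[i]          # hide feature i
        attribs.append(full - predict(occluded))
    return attribs

# toy linear model: f(x) = 2*x0 + 0*x1 + 1*x2
model = lambda x: 2 * x[0] + 1 * x[2]
print(occlusion_attributions(model, [1.0, 5.0, 3.0], [0.0, 0.0, 0.0]))
# [2.0, 0.0, 3.0]
```

In the interactive setting described above, an expert who disagrees with such attributions would supply corrected ones, which are then used to generate extra local training data so the retrained model yields the desired explanation.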