Uncertainty Quantification for cross-subject Motor Imagery classification
Uncertainty Quantification aims to determine when the prediction from a Machine Learning model is likely to be wrong. Computer Vision research has explored methods for estimating epistemic uncertainty (also known as model uncertainty), which should correspond with generalisation error. In principle, these methods make it possible to predict misclassifications caused by inter-subject variability. We applied a variety of Uncertainty Quantification methods to predict misclassifications for a Motor Imagery Brain-Computer Interface. Deep Ensembles performed best, both in terms of classification performance and cross-subject Uncertainty Quantification performance. However, we found that standard CNNs with Softmax output performed better than some of the more advanced methods.
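
The abstract contrasts two uncertainty scores: the Softmax confidence of a single standard CNN and the epistemic uncertainty of a Deep Ensemble. The sketch below is not the authors' code; it only illustrates, under assumed array shapes and names, how such scores are commonly computed from softmax outputs: max-Softmax confidence for one model, and mutual information (predictive entropy minus expected per-member entropy) for an ensemble.

```python
# Minimal sketch (illustrative, not from the paper). Shapes and names are assumptions.
import numpy as np

def softmax_confidence(probs):
    """Baseline score from a single CNN: 1 - max class probability.

    probs: (n_trials, n_classes) softmax outputs.
    Higher values suggest the prediction is more likely to be wrong.
    """
    return 1.0 - probs.max(axis=1)

def ensemble_epistemic_uncertainty(member_probs):
    """Epistemic (model) uncertainty of a Deep Ensemble via mutual information.

    member_probs: (n_members, n_trials, n_classes) softmax outputs per member.
    Mutual information = entropy of the mean prediction
                         - mean entropy of the individual member predictions.
    """
    eps = 1e-12
    mean_probs = member_probs.mean(axis=0)                        # (n_trials, n_classes)
    total_entropy = -(mean_probs * np.log(mean_probs + eps)).sum(axis=1)
    member_entropy = -(member_probs * np.log(member_probs + eps)).sum(axis=2)
    expected_entropy = member_entropy.mean(axis=0)
    return total_entropy - expected_entropy                       # (n_trials,)

# Example: random softmax outputs for 5 ensemble members, 10 trials, 4 MI classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 10, 4))
member_probs = np.exp(logits) / np.exp(logits).sum(axis=2, keepdims=True)
print(softmax_confidence(member_probs[0]))          # single-model baseline
print(ensemble_epistemic_uncertainty(member_probs)) # ensemble epistemic uncertainty
```

Both scores can then be thresholded to flag trials whose predictions are likely wrong, for example on held-out subjects when evaluating cross-subject performance.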