Uncertainty Quantification in Machine Learning for Biosignal Applications -- A Review
Uncertainty Quantification (UQ) has gained traction in an attempt to address the black-box nature of Deep Learning. Medical biosignals in particular, such as electroencephalography (EEG), electrocardiography (ECG), electrooculography (EOG) and electromyography (EMG), could benefit from good UQ, since these signals suffer from a poor signal-to-noise ratio, and good human interpretability is pivotal for medical applications and Brain-Computer Interfaces. In this paper, we review the state of the art at the intersection of Uncertainty Quantification, biosignals and Machine Learning. We present various methods, shortcomings, uncertainty measures and theoretical frameworks that currently exist in this application domain. Overall, we conclude that promising UQ methods are available, but that research is needed on how people and systems may interact with an uncertainty model in a (clinical) environment.
Psychophysiological modelling and the measurement of fear conditioning
Quantification of fear conditioning is paramount to many clinical and translational studies on aversive learning. Various measures of fear conditioning co-exist, including different observables and different methods of pre-processing. Here, we first argue that low measurement error is a rational desideratum for any measurement technique. We then show that measurement error can be approximated in benchmark experiments by how closely intended fear memory relates to measured fear memory, a quantity that we term retrodictive validity. From this perspective, we discuss different approaches commonly used to quantify fear conditioning. One of these is psychophysiological modelling (PsPM). This builds on a measurement model that describes how a psychological variable, such as fear memory, influences a physiological measure. This model is statistically inverted to estimate the most likely value of the psychological variable, given the measured data. We review existing PsPMs for skin conductance, pupil size, heart period, respiration, and startle eye-blink. We illustrate the benefit of PsPMs in terms of retrodictive validity and translate this into the sample size required to achieve a desired level of statistical power. This sample size can differ by up to a factor of three between different observables, and between the best, and the current standard, data pre-processing methods.
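The core of a PsPM — statistically inverting a measurement model to recover the most likely latent variable — can be sketched for the simplest possible case. Below, a toy linear-Gaussian model (an illustration only, not an actual PsPM for any modality; the function name and all parameters are invented) maps a latent fear-memory value to an observed physiological amplitude and is inverted analytically:

```python
import numpy as np

# Toy measurement model (NOT an actual PsPM): y = b * x + noise maps a latent
# psychological variable x (e.g., fear memory) to an observed physiological
# quantity y (e.g., a skin conductance amplitude). With a Gaussian prior
# x ~ N(mu0, s0^2) and noise ~ N(0, se^2), the model inverts analytically:
# the posterior mean is a precision-weighted blend of prior and data.

def invert_linear_pspm(y, b, mu0, s0, se):
    """Posterior mean/sd of latent x given observation y under y = b*x + e."""
    prior_prec = 1.0 / s0**2          # precision of the prior on x
    like_prec = b**2 / se**2          # precision contributed by the data
    post_var = 1.0 / (prior_prec + like_prec)
    post_mean = post_var * (prior_prec * mu0 + (b / se**2) * y)
    return post_mean, np.sqrt(post_var)

mean, sd = invert_linear_pspm(y=2.0, b=1.0, mu0=0.0, s0=1.0, se=0.5)
# The estimate is pulled from the prior mean (0) toward the observation (2.0).
```

Real PsPMs use far richer response functions (e.g., convolution models for skin conductance), but the inversion logic — most likely latent value given the measured data — is the same.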
Bayesian Methods in Tensor Analysis
Tensors, also known as multidimensional arrays, are useful data structures in
machine learning and statistics. In recent years, Bayesian methods have emerged
as a popular direction for analyzing tensor-valued data since they provide a
convenient way to introduce sparsity into the model and conduct uncertainty
quantification. In this article, we provide an overview of frequentist and
Bayesian methods for solving tensor completion and regression problems, with a
focus on Bayesian methods. We review common Bayesian tensor approaches
including model formulation, prior assignment, posterior computation, and
theoretical properties. We also discuss potential future directions in this
field.
Comment: 32 pages, 8 figures, 2 tables
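As a concrete anchor for the tensor methods surveyed above: a rank-R CP (CANDECOMP/PARAFAC) tensor is a sum of R outer products of factor vectors, and both frequentist and Bayesian completion/regression approaches estimate these factors from partial or noisy data. A minimal sketch of the construction (function and variable names are illustrative):

```python
import numpy as np

# A 3-way CP tensor of rank R is built from factor matrices
# A (I x R), B (J x R), C (K x R): T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r].
# Bayesian tensor methods place priors (often sparsity-inducing) on these
# factors and compute a posterior over them.

def cp_tensor(factors):
    """Assemble a 3-way tensor from its CP factor matrices."""
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 2))
B = rng.normal(size=(5, 2))
C = rng.normal(size=(3, 2))
T = cp_tensor([A, B, C])   # shape (4, 5, 3), CP rank at most 2
```

Completion then amounts to estimating A, B, C from a subset of T's entries; the low rank is what makes the missing entries recoverable.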
Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis
The full acceptance of Deep Learning (DL) models in the clinical field remains
low relative to the quantity of high-performing solutions reported in the
literature. In particular, end users are reluctant to rely on the raw
predictions of DL models. Uncertainty quantification methods have been proposed
in the literature as a potential way to qualify the decisions produced by the
DL black box and thus increase the interpretability and the acceptability of
the results for the final user. In this review, we propose an overview of the
existing methods to quantify the uncertainty associated with DL predictions. We
focus on applications to medical image analysis, which present specific
challenges due to the high dimensionality of images and their variable quality,
as well as constraints associated with real-life clinical routine. We then
discuss the evaluation protocols used to validate the relevance of uncertainty
estimates. Finally, we highlight the open challenges of uncertainty
quantification in the medical field.
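A widely used family of such methods (e.g., MC dropout or deep ensembles) draws several stochastic softmax predictions per image and decomposes the predictive entropy into aleatoric and epistemic parts. The sketch below shows only this decomposition with a simulated model, not any specific paper's method:

```python
import numpy as np

# Given T stochastic class-probability samples for one image, split the
# predictive entropy into: aleatoric (expected per-sample entropy) and
# epistemic (mutual information / BALD: how much the samples disagree).

def entropy(p, axis=-1, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=axis)

def uncertainty_decomposition(probs):
    """probs: (T, C) array of T stochastic softmax samples over C classes."""
    mean_p = probs.mean(axis=0)
    total = entropy(mean_p)                      # predictive entropy
    aleatoric = entropy(probs, axis=-1).mean()   # expected entropy
    epistemic = total - aleatoric                # disagreement between samples
    return total, aleatoric, epistemic

rng = np.random.default_rng(1)
# Simulated disagreeing samples -> substantial epistemic uncertainty.
probs = rng.dirichlet(alpha=[0.3, 0.3, 0.3], size=20)
total, alea, epi = uncertainty_decomposition(probs)
```

Evaluation protocols for such estimates typically check that high-uncertainty predictions are indeed the ones more likely to be wrong (e.g., via accuracy-versus-rejection curves).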
On the Brittleness of Bayesian Inference
With the advent of high-performance computing, Bayesian methods are
increasingly popular tools for the quantification of uncertainty throughout
science and industry. Since these methods impact the making of sometimes
critical decisions in increasingly complicated contexts, the sensitivity of
their posterior conclusions with respect to the underlying models and prior
beliefs is a pressing question for which there currently exist positive and
negative results. We report new results suggesting that, although Bayesian
methods are robust when the number of possible outcomes is finite or when only
a finite number of marginals of the data-generating distribution are unknown,
they could be generically brittle when applied to continuous systems (and their
discretizations) with finite information on the data-generating distribution.
If closeness is defined in terms of the total variation metric or the matching
of a finite system of generalized moments, then (1) two practitioners who use
arbitrarily close models and observe the same (possibly arbitrarily large
amount of) data may reach opposite conclusions; and (2) any given prior and
model can be slightly perturbed to achieve any desired posterior conclusions.
The mechanism causing brittleness/robustness suggests that learning and
robustness are antagonistic requirements and raises the question of a missing
stability condition for using Bayesian Inference in a continuous world under
finite information.
Comment: 20 pages, 2 figures. To appear in SIAM Review (Research Spotlights). arXiv admin note: text overlap with arXiv:1304.677
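The brittleness mechanism can be caricatured numerically: on a fine discretization, two priors that are nearly indistinguishable in total variation can yield markedly different posterior conclusions when the likelihood concentrates on the cells where they differ. The following is a toy sketch of that effect, not the paper's construction:

```python
import numpy as np

# Two priors on a fine grid, differing only on 10 of 10,000 cells
# (total variation distance ~1e-3). A likelihood with one mode inside the
# perturbed region and one outside amplifies that fine-scale difference
# into a large disagreement in posterior conclusions.

N = 10_000
grid = np.linspace(0.0, 1.0, N)

prior_a = np.full(N, 1.0 / N)        # uniform prior on the grid
prior_b = prior_a.copy()
prior_b[:10] *= 0.01                 # tiny fine-scale perturbation
prior_b /= prior_b.sum()

width = 1e-4                         # likelihood concentrated at grid scale
like = (np.exp(-((grid - grid[5]) / width) ** 2)       # mode in perturbed cells
        + np.exp(-((grid - grid[5000]) / width) ** 2))  # mode far away

def posterior_mean(prior):
    post = prior * like
    post /= post.sum()
    return float(np.sum(grid * post))

tv = 0.5 * np.abs(prior_a - prior_b).sum()   # ~1e-3: priors nearly equal
m_a, m_b = posterior_mean(prior_a), posterior_mean(prior_b)
# m_a averages the two modes; m_b collapses onto the unperturbed mode.
```

The point of the sketch is only the mechanism: posterior conclusions can depend on features of the prior far below the resolution at which two practitioners would consider their models "close".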
Multi-Estimator Full Left Ventricle Quantification through Ensemble Learning
Cardiovascular disease accounts for 1 in every 4 deaths in the United States.
Accurate estimation of structural and functional cardiac parameters is crucial
for both diagnosis and disease management. In this work, we develop an ensemble
learning framework for more accurate and robust left ventricle (LV)
quantification. The framework combines two 1st-level modules: a direct
estimation module and a segmentation module. The direct estimation module utilizes
Convolutional Neural Network (CNN) to achieve end-to-end quantification. The
CNN is trained by taking 2D cardiac images as input and cardiac parameters as
output. The segmentation module utilizes a U-Net architecture for obtaining
pixel-wise prediction of the epicardium and endocardium of LV from the
background. The binary U-Net output is then analyzed by a separate CNN for
estimating the cardiac parameters. We then employ linear regression between the
1st-level predictor and ground truth to learn a 2nd-level predictor that
ensembles the results from 1st-level modules for the final estimation.
Preliminary results by testing the proposed framework on the LVQuan18 dataset
show superior performance of the ensemble learning model over the two base
modules.
Comment: Jiasha Liu, Xiang Li and Hui Ren contribute equally to this work.
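The 2nd-level ensembling step described above — linear regression from the 1st-level predictions to ground truth — can be sketched with simulated predictors (the data and the error models of the two modules are invented here purely for illustration):

```python
import numpy as np

# Stacking sketch: treat the two 1st-level outputs (direct-estimation CNN and
# segmentation-derived estimate) as features and fit a linear 2nd-level
# regressor against ground truth. Both "modules" below are simulated.

rng = np.random.default_rng(0)
n = 200
truth = rng.uniform(50, 150, size=n)               # e.g., an LV area parameter
pred_direct = truth + rng.normal(0, 8, size=n)     # module A: unbiased, noisy
pred_seg = 0.9 * truth + 5 + rng.normal(0, 6, n)   # module B: biased, less noisy

X = np.column_stack([pred_direct, pred_seg, np.ones(n)])  # features + intercept
w, *_ = np.linalg.lstsq(X, truth, rcond=None)      # 2nd-level linear ensemble
pred_ens = X @ w

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

By construction, the least-squares ensemble can reproduce either base module (weight vectors (1, 0, 0) or (0, 1, 0)), so its training error never exceeds that of the better module; in practice the weights would be fit on held-out data to avoid optimism.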