Probabilistic multiple kernel learning
The integration of multiple and possibly heterogeneous information sources for an overall decision-making process has been an open and unresolved research direction in computing science since its very beginning. This thesis addresses parts of that direction by proposing probabilistic data integration algorithms for multiclass decisions, where an observation of interest is assigned to one of many categories based on a plurality of information channels.
Uncertainty Quantification in Machine Learning for Engineering Design and Health Prognostics: A Tutorial
On top of machine learning models, uncertainty quantification (UQ) functions
as an essential layer of safety assurance that could lead to more principled
decision making by enabling sound risk assessment and management. The safety
and reliability improvement of ML models empowered by UQ has the potential to
significantly facilitate the broad adoption of ML solutions in high-stakes
decision settings, such as healthcare, manufacturing, and aviation, to name a
few. In this tutorial, we aim to provide a holistic lens on emerging UQ methods
for ML models with a particular focus on neural networks and the applications
of these UQ methods in tackling engineering design as well as prognostics and
health management problems. Toward this goal, we start with a comprehensive
classification of uncertainty types, sources, and causes pertaining to UQ of ML
models. Next, we provide a tutorial-style description of several
state-of-the-art UQ methods: Gaussian process regression, Bayesian neural
network, neural network ensemble, and deterministic UQ methods focusing on
spectral-normalized neural Gaussian process. Established upon the mathematical
formulations, we subsequently examine the soundness of these UQ methods
quantitatively and qualitatively (via a toy regression example) to assess their
strengths and shortcomings from different dimensions. Then, we review
quantitative metrics commonly used to assess the quality of predictive
uncertainty in classification and regression problems. Afterward, we discuss
the increasingly important role of UQ of ML models in solving challenging
problems in engineering design and health prognostics. Two case studies with
source codes available on GitHub are used to demonstrate these UQ methods and
compare their performance in early-stage life prediction of lithium-ion
batteries and remaining useful life prediction of turbofan engines.
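The tutorial's own case studies are on GitHub; as a complement, the first method it lists, Gaussian process regression, can be sketched in a few lines. The kernel choice, hyperparameters, and toy data below are assumptions for illustration, not the tutorial's code:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel: k(x, x') = s^2 * exp(-|x - x'|^2 / (2 l^2))
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    # Standard GP regression posterior mean and variance via Cholesky
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Toy 1-D regression: the predictive variance grows away from the data,
# which is exactly the epistemic-uncertainty behavior the tutorial discusses
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(X)
X_new = np.array([0.5, 5.0])          # one point inside, one far outside
mean, var = gp_posterior(X, y, X_new)
```

The interesting quantity here is `var`: near the training inputs it is small, while far from them it reverts toward the prior variance.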
Machine Learning Methods with Noisy, Incomplete or Small Datasets
In many machine learning applications, available datasets are incomplete, noisy, or affected by artifacts. In supervised scenarios, label information may be of low quality, including unbalanced training sets, noisy labels, and other problems. Moreover, in practice, available data samples are often insufficient to derive useful supervised or unsupervised classifiers. All these issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, to contribute to the dissemination of new ideas for solving this challenging problem, and to provide clear examples of application in real scenarios.
Uncertainty quantification for an electric motor inverse problem - tackling the model discrepancy challenge
In the context of complex applications from the engineering sciences, the solution of identification problems still poses a fundamental challenge. In terms of Uncertainty Quantification (UQ), the identification problem can be stated as a separation task for structural model and parameter uncertainty. This thesis provides new insights and methods to tackle this challenge and demonstrates these developments on an industrial benchmark use case combining simulation and real-world measurement data.
While significant progress has been made in the development of methods for model parameter inference, most of those methods still operate under the assumption of a perfect model. For a full, unbiased quantification of uncertainties in inverse problems, it is crucial to consider all uncertainty sources. The present work develops methods for the inference of deterministic and aleatoric model parameters from noisy measurement data with explicit consideration of model discrepancy and additional quantification of the associated uncertainties using a Bayesian approach. A further important ingredient is surrogate modeling with Polynomial Chaos Expansion (PCE), enabling sampling from Bayesian posterior distributions with complex simulation models.
Based on this, a novel identification strategy for separating different sources of uncertainty is presented. The discrepancy is approximated by orthogonal functions with iterative determination of the optimal model complexity, mitigating the identifiability problems inherent to this task. The model discrepancy quantification is complemented with studies on statistically approximating the numerical approximation error.
Additionally, strategies for the approximation of aleatoric parameter distributions via hierarchical surrogate-based sampling are developed. The proposed method, based on Approximate Bayesian Computation (ABC) with summary statistics, estimates the posterior in a computationally efficient manner, in particular for large datasets.
Furthermore, the combination with divergence-based subset selection provides a novel methodology for UQ in stochastic inverse problems, inferring both model discrepancy and aleatoric parameter distributions. Detailed analysis in numerical experiments and successful application to the challenging industrial benchmark problem -- an electric motor test bench -- validate the proposed methods.
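The ABC-with-summary-statistics idea can be illustrated with a deliberately simple stand-in forward model. Everything below (a Gaussian with unknown mean, the flat prior, the tolerance) is an assumption for illustration, not the thesis's electric-motor setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    # Hypothetical forward model: observations are Gaussian with mean theta
    return rng.normal(theta, 1.0, size=n)

def summary(data):
    # Low-dimensional summary statistic (here simply the sample mean)
    return np.mean(data)

def abc_rejection(observed, prior_draws=5000, tol=0.05):
    # Rejection ABC: keep prior samples whose simulated summary statistic
    # lands close to the observed one
    s_obs = summary(observed)
    accepted = []
    for _ in range(prior_draws):
        theta = rng.uniform(-5, 5)   # flat prior over the unknown mean
        if abs(summary(simulate(theta)) - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

observed = rng.normal(1.5, 1.0, size=200)  # "measurement data", true mean 1.5
posterior = abc_rejection(observed)
# The accepted samples approximate the posterior and concentrate near 1.5
```

The appeal of the summary-statistic route is that only a low-dimensional quantity is compared, which is what keeps the approach tractable for large datasets.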
Novel methods for biological network inference: an application to circadian Ca2+ signaling network
Biological processes involve complex biochemical interactions among a large number of species such as cells, RNAs, proteins and metabolites. Learning these interactions is essential for intervening in biological processes in order to, for example, improve crop yield, develop new therapies, and predict cell or organism responses to genetic or environmental perturbations. For a biological process, two pieces of information are of most interest. For a particular species, the first step is to learn which other species regulate it; this reveals topology and causality. The second step involves learning the precise mechanisms by which this regulation occurs; this reveals the dynamics of the system. Applying this process to all species leads to the complete dynamical network. Systems biology is making considerable efforts to learn biological networks at low experimental cost. The main goal of this thesis is to develop advanced methods to build models of biological networks, taking the circadian system of Arabidopsis thaliana as a case study. A variety of network inference approaches have been proposed in the literature to study dynamic biological networks. However, many successful methods either require prior knowledge of the system or focus mainly on topology. This thesis presents novel methods that identify both network topology and dynamics, and do not depend on prior knowledge. Hence, the proposed methods are applicable to general biological networks. These methods are initially developed for linear systems and, at the cost of higher computational complexity, can also be applied to nonlinear systems. Overall, we propose four methods with increasing computational complexity: one-to-one, combined group and element sparse Bayesian learning (GESBL), the kernel method, and the reversible jump Markov chain Monte Carlo method (RJMCMC).
All methods are tested on challenging dynamical network simulations (including feedback, random networks, and different levels of noise and numbers of samples) and on realistic models of the circadian system of Arabidopsis thaliana. These simulations show that, while the one-to-one method scales to the whole genome, the kernel method and the RJMCMC method are superior for smaller networks. They are robust to tuning variables and provide stable performance. The simulations also demonstrate the advantage of GESBL and RJMCMC over state-of-the-art methods. We envision that the estimated models can benefit a wide range of research. For example, they can locate biological compounds responsible for human disease through mathematical analysis and help predict the effectiveness of new treatments.
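For the linear setting described above, topology recovery can be sketched with per-node regression plus hard thresholding. This is a crude stand-in for the sparsity priors used by methods like GESBL; the network, dynamics, and threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sparse 5-node linear network: x_{t+1} = A @ x_t + noise
A_true = np.zeros((5, 5))
A_true[0, 1] = 0.8    # species 1 activates species 0
A_true[2, 0] = -0.6   # species 0 represses species 2
A_true[3, 3] = 0.5    # self-regulation of species 3

# Simulate a noisy trajectory of the network
T = 500
X = np.zeros((T, 5))
X[0] = rng.normal(size=5)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.05 * rng.normal(size=5)

# Per-node least squares (in the spirit of the one-to-one method): regress
# each species' next state on all current states, then threshold small
# coefficients -- a hard threshold standing in for a sparsity prior
A_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = A_hat.T
topology = np.abs(A_hat) > 0.2

# The boolean matrix `topology` is the recovered regulatory structure;
# the surviving coefficients in A_hat describe the dynamics
```

This separates the two pieces of information the abstract highlights: the thresholded pattern gives topology and causality, while the retained coefficients give the (linear) dynamics.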
Flexible Time Series Matching for Clinical and Behavioral Data
Time series data have become widely used by the research community in recent
decades, following a massive growth in their availability. Nonetheless, this
rise demanded improvements to existing analysis techniques which, in the
medical domain, help specialists evaluate their patients' condition. One of the
key tasks in time series analysis is pattern recognition (segmentation and
classification). Traditional methods typically perform subsequence matching,
using a pattern template and a similarity metric to search for similar
sequences throughout a time series. However, real-world data are noisy and
variable (morphological distortions), making template-based exact matching a
rudimentary approach. Aiming to increase flexibility and generalize
pattern-searching tasks across domains, this dissertation proposes two Deep
Learning-based frameworks to solve pattern segmentation and anomaly detection
problems.
Regarding pattern segmentation, a Convolution/Deconvolution Neural Network is
proposed that learns to distinguish, point by point, desired sub-patterns from
background content within a time series. The proposed framework was validated
in two use cases: electrocardiogram (ECG) and inertial sensor-based human
activity (IMU) signals. It outperformed two conventional matching techniques,
reliably detecting the targeted cycles even in noise-corrupted or heavily
distorted signals, without using any reference template or hand-coded
similarity scores.
Concerning anomaly detection, the proposed unsupervised framework uses the
reconstruction ability of Variational Autoencoders and a local similarity score
to identify unlabeled abnormalities. The proposal was validated on two public
ECG datasets (MIT-BIH Arrhythmia and ECG5000), performing cardiac arrhythmia
identification. Results indicated competitiveness with recent techniques,
achieving detection AUC scores
of 98.84% (ECG5000) and 93.32% (MIT-BIH Arrhythmia).
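The reconstruction-error scoring idea behind this kind of framework can be sketched compactly by swapping the Variational Autoencoder for a PCA "autoencoder". All data and dimensions below are synthetic assumptions; the dissertation's actual framework uses a VAE and a local similarity score:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for heartbeats: "normal" cycles share one morphology,
# "anomalous" cycles have a different one (a different frequency here)
t = np.linspace(0, 1, 64)
normal = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.normal(size=(200, 64))
anomalous = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.normal(size=(50, 64))
train, test_normal = normal[:150], normal[150:]

# Fit the "autoencoder" (PCA) on normal data only, mimicking the
# unsupervised setting: the model never sees an anomaly during training
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:4]                      # low-dimensional "latent" code

def recon_error(x):
    z = (x - mean) @ components.T        # encode
    x_hat = z @ components + mean        # decode
    return np.mean((x - x_hat) ** 2, axis=-1)

# Normal beats reconstruct well; anomalies do not, so a simple threshold
# on the reconstruction error separates the two populations
err_normal = recon_error(test_normal)
err_anom = recon_error(anomalous)
```

A VAE plays the same role as the PCA step here, but with a nonlinear encoder/decoder, which is what lets the real framework model the far more variable morphology of actual ECG cycles.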