
    Probabilistic multiple kernel learning

    The integration of multiple, possibly heterogeneous information sources into an overall decision-making process has been an open research problem in computing science since the field's beginnings. This thesis addresses part of that problem by proposing probabilistic data-integration algorithms for multiclass decisions, in which an observation of interest is assigned to one of many categories on the basis of a plurality of information channels.
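
    The thesis's probabilistic algorithms are not spelled out in the abstract; as a minimal sketch of the general idea of fusing heterogeneous channels through a combined kernel, the snippet below mixes two kernels with fixed, assumed weights and feeds the result to a standard multiclass SVM. The weights, kernel choices, and classifier are illustrative assumptions, not the proposed method.

        # Minimal sketch: combining two "information channels" via a weighted
        # sum of kernels for multiclass classification (assumed weights/kernels).
        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = load_iris(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        def combined_kernel(A, B, weights=(0.6, 0.4)):
            # Channel 1: RBF kernel; channel 2: polynomial kernel.
            return weights[0] * rbf_kernel(A, B, gamma=0.5) + \
                   weights[1] * polynomial_kernel(A, B, degree=2)

        clf = SVC(kernel="precomputed")
        clf.fit(combined_kernel(X_tr, X_tr), y_tr)
        print("accuracy:", clf.score(combined_kernel(X_te, X_tr), y_te))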

    Uncertainty Quantification in Machine Learning for Engineering Design and Health Prognostics: A Tutorial

    On top of machine learning models, uncertainty quantification (UQ) functions as an essential layer of safety assurance that can lead to more principled decision making by enabling sound risk assessment and management. The safety and reliability improvements that UQ brings to ML models have the potential to significantly facilitate the broad adoption of ML solutions in high-stakes settings such as healthcare, manufacturing, and aviation, to name a few. In this tutorial, we aim to provide a holistic view of emerging UQ methods for ML models, with a particular focus on neural networks and on the application of these methods to engineering design as well as prognostics and health management problems. Toward this goal, we start with a comprehensive classification of the uncertainty types, sources, and causes pertaining to UQ of ML models. Next, we give a tutorial-style description of several state-of-the-art UQ methods: Gaussian process regression, Bayesian neural networks, neural network ensembles, and deterministic UQ methods, focusing on the spectral-normalized neural Gaussian process. Building on these mathematical formulations, we then examine the soundness of the UQ methods quantitatively and qualitatively (using a toy regression example) to expose their strengths and shortcomings along different dimensions. We then review quantitative metrics commonly used to assess the quality of predictive uncertainty in classification and regression problems. Afterward, we discuss the increasingly important role of UQ of ML models in solving challenging problems in engineering design and health prognostics. Two case studies with source code available on GitHub are used to demonstrate these UQ methods and compare their performance on early-stage life prediction of lithium-ion batteries and remaining-useful-life prediction of turbofan engines.
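
    One of the UQ methods the tutorial reviews, the neural network ensemble, can be illustrated on a toy 1-D regression problem: the spread across independently trained members serves as the uncertainty estimate. The architecture, ensemble size, and data below are illustrative assumptions, not the tutorial's case-study setup.

        # Minimal sketch of ensemble-based uncertainty on a toy regression task.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(200, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

        # Five networks trained from different random initializations.
        ensemble = [
            MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=s).fit(X, y)
            for s in range(5)
        ]

        X_test = np.linspace(-5, 5, 11).reshape(-1, 1)   # extends beyond the data
        preds = np.stack([m.predict(X_test) for m in ensemble])
        mean, std = preds.mean(axis=0), preds.std(axis=0)  # prediction and uncertainty
        for x, m_, s_ in zip(X_test[:, 0], mean, std):
            print(f"x={x:+.1f}  mean={m_:+.3f}  std={s_:.3f}")

    Typically the member disagreement (std) grows outside the training range, which is the qualitative behavior a sound UQ method should show on the toy example.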

    Machine Learning Methods with Noisy, Incomplete or Small Datasets

    In many machine learning applications, the available datasets are incomplete, noisy, or affected by artifacts. In supervised scenarios, label information may be of low quality, which can include unbalanced training sets, noisy labels, and other problems. Moreover, in practice it is very common that the available data samples are not sufficient to derive useful supervised or unsupervised classifiers. These issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, aiming to disseminate new ideas for solving this challenging problem and to provide clear examples of their application in real scenarios.

    Uncertainty quantification for an electric motor inverse problem - tackling the model discrepancy challenge

    In the context of complex applications from the engineering sciences, the solution of identification problems still poses a fundamental challenge. In terms of Uncertainty Quantification (UQ), the identification problem can be stated as a separation task for structural model and parameter uncertainty. This thesis provides new insights and methods to tackle this challenge and demonstrates these developments on an industrial benchmark use case combining simulation and real-world measurement data. While significant progress has been made in the development of methods for model parameter inference, most of those methods still operate under the assumption of a perfect model. For a full, unbiased quantification of uncertainties in inverse problems, it is crucial to consider all uncertainty sources. The present work develops methods for the inference of deterministic and aleatoric model parameters from noisy measurement data, with explicit consideration of model discrepancy and additional quantification of the associated uncertainties using a Bayesian approach. A further important ingredient is surrogate modeling with Polynomial Chaos Expansion (PCE), which enables sampling from Bayesian posterior distributions with complex simulation models. Based on this, a novel identification strategy for separating the different sources of uncertainty is presented. The discrepancy is approximated by orthogonal functions with iterative determination of the optimal model complexity, mitigating the identifiability problems inherent to this task. The model discrepancy quantification is complemented with studies on the statistical approximation of the numerical approximation error. Additionally, strategies for approximating aleatoric parameter distributions via hierarchical surrogate-based sampling are developed. The proposed method, based on Approximate Bayesian Computation (ABC) with summary statistics, estimates the posterior in a computationally efficient way, in particular for large data. Furthermore, the combination with divergence-based subset selection provides a novel methodology for UQ in stochastic inverse problems, inferring both model discrepancy and aleatoric parameter distributions. Detailed analysis in numerical experiments and a successful application to the challenging industrial benchmark problem, an electric motor test bench, validate the proposed methods.
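
    The basic building block named in the abstract, ABC with summary statistics, can be sketched as plain rejection sampling: simulate from the prior, compress simulated and observed data into summary statistics, and accept parameters whose summaries fall within a tolerance. The "simulator", prior, and tolerance below are toy assumptions; the thesis combines this idea with PCE surrogates and divergence-based subset selection, which are not shown.

        # Minimal sketch of rejection-ABC with summary statistics (toy model).
        import numpy as np

        rng = np.random.default_rng(1)

        def simulator(theta, n=100):
            # Stand-in for an expensive motor model: noisy data around theta.
            return theta + 0.5 * rng.standard_normal(n)

        def summary(data):
            # Summary statistics compressing the data (mean and std).
            return np.array([data.mean(), data.std()])

        observed = simulator(2.0)
        s_obs = summary(observed)

        accepted = []
        for _ in range(20000):
            theta = rng.uniform(-5, 5)                # draw from a flat prior
            s_sim = summary(simulator(theta))
            if np.linalg.norm(s_sim - s_obs) < 0.2:   # accept if summaries are close
                accepted.append(theta)

        post = np.array(accepted)
        print(f"accepted {post.size} samples, posterior mean ~ {post.mean():.2f}")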

    Flexible Time Series Matching for Clinical and Behavioral Data

    Time series data have become broadly used by the research community in recent decades, following a massive increase in their availability. This rise has required improvements to existing analysis techniques which, in the medical domain, help specialists evaluate their patients' condition. One of the key tasks in time series analysis is pattern recognition (segmentation and classification). Traditional methods typically perform subsequence matching, using a pattern template and a similarity metric to search for similar sequences throughout a time series. However, real-world data are noisy and variable (morphological distortions), making template-based exact matching a rudimentary approach. Aiming to increase flexibility and to generalize pattern-searching tasks across domains, this dissertation proposes two Deep Learning-based frameworks for pattern segmentation and anomaly detection. For pattern segmentation, a Convolution/Deconvolution Neural Network is proposed that learns to distinguish, point by point, desired sub-patterns from background content within a time series. The proposed framework was validated on two use cases: electrocardiogram (ECG) signals and inertial sensor-based human activity (IMU) signals. It outperformed two conventional matching techniques and was able to detect the targeted cycles even in noise-corrupted or extremely distorted signals, without using any reference template or hand-coded similarity scores. For anomaly detection, the proposed unsupervised framework uses the reconstruction ability of Variational Autoencoders and a local similarity score to identify non-labeled abnormalities. The proposal was validated on cardiac arrhythmia identification using two public ECG datasets (MIT-BIH Arrhythmia and ECG5000). Results indicated competitiveness with recent techniques, achieving detection AUC scores of 98.84% (ECG5000) and 93.32% (MIT-BIH Arrhythmia).
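
    The anomaly-detection idea, scoring beats by how poorly an autoencoder trained on normal cycles reconstructs them, can be sketched on synthetic heartbeat-like signals. A plain deterministic autoencoder stands in for the dissertation's Variational Autoencoder, and a simple mean squared reconstruction error stands in for its local similarity score; the signal shapes and network sizes are toy assumptions.

        # Minimal sketch of reconstruction-based anomaly scoring on synthetic beats.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 50)

        def normal_beat():
            return np.exp(-((t - 0.5) ** 2) / 0.01) + 0.05 * rng.standard_normal(t.size)

        def abnormal_beat():
            return 0.5 * np.exp(-((t - 0.3) ** 2) / 0.002) + 0.05 * rng.standard_normal(t.size)

        X_train = np.stack([normal_beat() for _ in range(300)])

        # Autoencoder: the network is trained to reproduce its own (normal) input.
        ae = MLPRegressor(hidden_layer_sizes=(16, 4, 16), max_iter=3000, random_state=0)
        ae.fit(X_train, X_train)

        def anomaly_score(x):
            # Higher score = worse reconstruction = more likely abnormal.
            return float(np.mean((ae.predict(x[None, :])[0] - x) ** 2))

        print("normal  :", anomaly_score(normal_beat()))
        print("abnormal:", anomaly_score(abnormal_beat()))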

    Artificial Intelligence-based Technique for Fault Detection and Diagnosis of EV Motors: A Review

    The motor drive system plays a significant role in the safety of electric vehicles as a bridge for power transmission. To enhance the efficiency and stability of the drive system, a growing number of studies based on AI technology have been devoted to fault detection and diagnosis (FDD) of the motor drive system. This paper reviews the application of AI techniques to motor fault detection and diagnosis in recent years. AI-based FDD is divided into two main steps: feature extraction and fault classification. The application of different signal processing methods to feature extraction is discussed. In particular, the application of traditional machine learning and deep learning algorithms to fault classification is presented in detail. In addition, the characteristics of all reviewed techniques are summarized. Finally, the latest developments, research gaps, and future challenges in fault monitoring and diagnosis of motor faults are discussed.
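
    The two-step pipeline the review describes (signal-processing-based feature extraction followed by fault classification with a traditional machine learning model) can be sketched end to end on synthetic vibration data. The signals, the "bearing fault" frequency, and the random-forest classifier are illustrative assumptions, not drawn from the review.

        # Minimal sketch of an FDD pipeline: FFT features -> ML fault classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        fs, n = 1000, 1000                 # sampling rate (Hz), samples per record
        t = np.arange(n) / fs

        def vibration(fault):
            # Healthy: 50 Hz component; faulty: extra 120 Hz component plus noise.
            sig = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(n)
            if fault:
                sig += 0.4 * np.sin(2 * np.pi * 120 * t)
            return sig

        def features(sig):
            # Feature extraction step: low-frequency FFT magnitudes plus RMS.
            spec = np.abs(np.fft.rfft(sig))[:200]
            return np.concatenate([spec, [np.sqrt(np.mean(sig ** 2))]])

        X = np.stack([features(vibration(f)) for f in (0, 1) for _ in range(100)])
        y = np.repeat([0, 1], 100)

        # Fault classification step with a traditional machine learning model.
        scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
        print("cross-validated accuracy:", scores.mean())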

    Analyzing Granger causality in climate data with time series classification methods

    Attribution studies in climate science aim to scientifically ascertain the influence of climatic variations on natural or anthropogenic factors. Many of those studies adopt the concept of Granger causality to infer statistical cause-effect relationships, using traditional autoregressive models. In this article, we investigate the potential of state-of-the-art time series classification techniques to enhance causal inference in climate science. We conduct a comparative experimental study of different types of algorithms on a large test suite comprising a unique collection of datasets from the area of climate-vegetation dynamics. The results indicate that specialized time series classification methods are able to improve existing inference procedures. Substantial differences are observed among the methods that were tested.
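
    The predictive formulation of Granger causality that such studies build on can be sketched directly: a variable x "Granger-causes" y if adding lagged values of x reduces the error of forecasting y from its own lags. The synthetic data, lag order, and random-forest regressor below are illustrative assumptions; the article compares much stronger time series methods on real climate-vegetation data.

        # Minimal sketch of Granger causality as out-of-sample prediction improvement.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n, lags = 600, 3
        x = rng.standard_normal(n)
        y = np.zeros(n)
        for i in range(1, n):              # y is driven by lagged x plus noise
            y[i] = 0.5 * y[i - 1] + 0.8 * x[i - 1] + 0.3 * rng.standard_normal()

        def lag_matrix(series, lags):
            # Column k holds the value of the series k steps in the past.
            return np.column_stack([series[lags - k: len(series) - k]
                                    for k in range(1, lags + 1)])

        target = y[lags:]
        own = lag_matrix(y, lags)                      # lags of y only
        full = np.hstack([own, lag_matrix(x, lags)])   # lags of y and of x

        def cv_error(features):
            scores = cross_val_score(RandomForestRegressor(random_state=0),
                                     features, target, cv=5,
                                     scoring="neg_mean_squared_error")
            return -scores.mean()

        print("error without x:", cv_error(own))
        print("error with x   :", cv_error(full))  # lower -> evidence x causes y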