
    Bridging the ensemble Kalman and particle filter

    In many applications of Monte Carlo nonlinear filtering, the propagation step is computationally expensive, and hence the sample size is limited. With small sample sizes, the update step becomes crucial. Particle filtering suffers from the well-known problem of sample degeneracy; ensemble Kalman filtering avoids this, at the expense of treating non-Gaussian features of the forecast distribution incorrectly. Here we introduce a procedure that makes a continuous transition, indexed by gamma in [0,1], between the ensemble Kalman filter update and the particle filter update. We propose automatic choices of the parameter gamma such that the update stays as close as possible to the particle filter update subject to avoiding degeneracy. In various examples, we show that this procedure leads to updates which are able to handle non-Gaussian features of the prediction sample even in high-dimensional situations.
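    The gamma-indexed trade-off between the two updates can be illustrated with a toy tempered-likelihood weighting scheme. This is only a sketch of the degeneracy trade-off, not the paper's algorithm: it assumes a scalar Gaussian observation model, and the grid search in `choose_gamma` (pick the largest gamma whose effective sample size stays above a target) is an illustrative stand-in for the paper's automatic criterion; all names are hypothetical.

```python
import numpy as np

def tempered_weights(particles, obs, obs_std, gamma):
    # Particle-filter weights with the likelihood tempered by exponent gamma:
    # w_i proportional to p(y | x_i)^gamma (scalar Gaussian observation model).
    loglik = -0.5 * ((obs - particles) / obs_std) ** 2
    w = np.exp(gamma * (loglik - loglik.max()))  # max-shift for stability
    return w / w.sum()

def effective_sample_size(w):
    # ESS = 1 / sum(w_i^2); equals n for uniform weights, 1 for full degeneracy.
    return 1.0 / np.sum(w ** 2)

def choose_gamma(particles, obs, obs_std, ess_target, grid=np.linspace(0, 1, 101)):
    # Pick the largest gamma (closest to a pure particle-filter update)
    # whose tempered weights still keep the effective sample size above target.
    best = 0.0
    for g in grid:
        w = tempered_weights(particles, obs, obs_std, g)
        if effective_sample_size(w) >= ess_target:
            best = g
    return best
```

    At gamma = 0 the weights are uniform (pure ensemble-style update, no degeneracy); at gamma = 1 they are full particle-filter weights, so the ESS target controls where the update sits between the two.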

    Performance improvement via bagging in probabilistic prediction of chaotic time series using similarity of attractors and LOOCV predictable horizon

    Recently, we have presented a method for probabilistic prediction of chaotic time series. The method employs learning machines involving strong learners capable of making predictions with desirably long predictable horizons; however, the usual ensemble mean is not an effective representative prediction when some predictions have shorter predictable horizons. Thus, the method selects a representative prediction from the predictions generated by a number of learning machines involving strong learners as follows: first, it obtains plausible predictions whose attractors have large similarity with the training time series, and then it selects the representative prediction with the largest predictable horizon estimated via LOOCV (leave-one-out cross-validation). The method is also capable of providing an average and/or safe estimate of the predictable horizon of the representative prediction. In our previous study, we used CAN2s (competitive associative nets) for learning piecewise linear approximations of nonlinear functions as strong learners; this paper employs bagging (bootstrap aggregating) to improve the performance, which enables us to analyze the validity and effectiveness of the method.
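    The selection step described above (score each candidate prediction against the remaining candidates and keep the one with the largest estimated predictable horizon) can be sketched as follows. The divergence threshold and the mean-over-others scoring are assumptions for illustration, not the paper's exact LOOCV formula, and the function names are hypothetical.

```python
import numpy as np

def predictable_horizon(a, b, threshold):
    # First time step at which two trajectories diverge beyond the threshold;
    # if they never diverge, the horizon is the full trajectory length.
    diff = np.abs(np.asarray(a) - np.asarray(b))
    over = np.nonzero(diff > threshold)[0]
    return int(over[0]) if over.size else len(diff)

def select_representative(predictions, threshold):
    # Leave-one-out-style estimate: score each candidate by its mean
    # predictable horizon against the other candidates, then pick the
    # best scorer as the representative prediction.
    n = len(predictions)
    scores = []
    for i in range(n):
        hs = [predictable_horizon(predictions[i], predictions[j], threshold)
              for j in range(n) if j != i]
        scores.append(np.mean(hs))
    return int(np.argmax(scores)), scores
```

    A prediction that diverges early from all the others gets a short estimated horizon and is never selected, which is the intended behavior when a few weak predictions would otherwise spoil an ensemble mean.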

    Hierarchical Clustering of Ensemble Prediction Using LOOCV Predictable Horizon for Chaotic Time Series

    Recently, we have presented a method of ensemble prediction of chaotic time series. The method employs strong learners capable of making predictions with small error, for which the usual ensemble mean does not work well owing to the long-term unpredictability of chaotic time series. Thus, we have developed a method to select a representative prediction from a set of plausible predictions by using the LOOCV (leave-one-out cross-validation) measure to estimate the predictable horizon. Although we have shown the effectiveness of the method, it sometimes fails to select a representative prediction with a long predictable horizon. To cope with this problem, this paper presents a method to select multiple candidates for the representative prediction by employing hierarchical K-means clustering with K = 2. Through numerical experiments, we show the effectiveness of the method and analyze the properties of the LOOCV predictable horizon.
    The 2017 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2017), November 27 to December 1, 2017, Honolulu, Hawaii, USA
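    The hierarchical K = 2 clustering can be sketched as recursive bisection of the set of predictions (stacked as rows of a matrix), after which one candidate representative could be chosen from each leaf cluster. The stopping rule (fixed depth and minimum cluster size) is an assumption for illustration, and `kmeans2` / `hierarchical_bisect` are hypothetical names, not the paper's.

```python
import numpy as np

def kmeans2(X, iters=20, seed=0):
    # Plain 2-means: split the rows of X into two clusters by nearest centroid.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def hierarchical_bisect(X, min_size=2, depth=2):
    # Recursively bisect with 2-means; each leaf is a cluster of predictions
    # from which one representative candidate could then be selected.
    if depth == 0 or len(X) < 2 * min_size:
        return [np.arange(len(X))]
    labels = kmeans2(X)
    leaves = []
    for k in (0, 1):
        idx = np.nonzero(labels == k)[0]
        if idx.size == 0:
            continue
        for sub in hierarchical_bisect(X[idx], min_size, depth - 1):
            leaves.append(idx[sub])
    return leaves
```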

    Probabilistic Prediction of Chaotic Time Series Using Similarity of Attractors and LOOCV Predictable Horizons for Obtaining Plausible Predictions

    This paper presents a method for probabilistic prediction of chaotic time series. So far, we have developed several model selection methods for chaotic time series prediction, but those methods cannot estimate the predictable horizon of the predicted time series. Instead of model selection methods based on estimating the mean square prediction error (MSE), we present a method that provides both a prediction of the time series and an estimate of its predictable horizon. The method obtains a set of plausible predictions by using the similarity between the attractor of the training time series and the attractors of the time series predicted by a number of learning machines with different parameter values, and then obtains a smaller set of more plausible predictions with longer predictable horizons estimated by the LOOCV (leave-one-out cross-validation) method. The effectiveness and properties of the present method are shown by analyzing the results of numerical experiments.
    22nd International Conference, ICONIP 2015, November 9-12, 2015, Istanbul, Turkey
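    The attractor-similarity step can be sketched with delay-coordinate embedding followed by a symmetric mean nearest-neighbor distance between the two embedded point clouds: the distance is small when a predicted series traces an attractor similar to the training series. The embedding parameters and the particular distance are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    # Delay-coordinate embedding: each row is (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}).
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

def attractor_distance(x, y, dim=3, tau=1):
    # Symmetric mean nearest-neighbor distance between the embedded clouds:
    # small when the two trajectories trace similar attractors, even if
    # they are out of phase and thus far apart pointwise in time.
    A, B = delay_embed(x, dim, tau), delay_embed(y, dim, tau)
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

    Because the comparison happens in embedded space, a phase-shifted copy of the same orbit scores as very similar, while a trajectory on a different attractor does not.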

    ADVANCES IN SYSTEM RELIABILITY-BASED DESIGN AND PROGNOSTICS AND HEALTH MANAGEMENT (PHM) FOR SYSTEM RESILIENCE ANALYSIS AND DESIGN

    Failures of engineered systems can lead to significant economic and societal losses. Despite tremendous efforts (e.g., $200 billion annually) devoted to reliability and maintenance, unexpected catastrophic failures still occur. To minimize the losses, the reliability of engineered systems must be ensured throughout their life-cycle amidst uncertain operational conditions and manufacturing variability. In most engineered systems, the required system reliability level under adverse events is achieved by adding system redundancies and/or conducting system reliability-based design optimization (RBDO). However, a high level of system redundancy increases a system's life-cycle cost (LCC), and system RBDO cannot ensure system reliability when unexpected loading/environmental conditions are applied and unexpected system failures develop. In contrast, a new design paradigm, referred to as resilience-driven system design, can ensure highly reliable system designs under any loading/environmental conditions and system failures while considerably reducing systems' LCC. In order to facilitate the development of formal methodologies for this design paradigm, this research aims at advancing two essential and co-related research areas: Research Thrust 1 - system RBDO and Research Thrust 2 - system prognostics and health management (PHM). In Research Thrust 1, reliability analyses under uncertainty will be carried out at both the component and system levels against critical failure mechanisms. In Research Thrust 2, highly accurate and robust PHM systems will be designed for engineered systems with a single or multiple time-scale(s). To demonstrate the effectiveness of the proposed system RBDO and PHM techniques, multiple engineering case studies will be presented and discussed.
Following the development of Research Thrusts 1 and 2, Research Thrust 3 - resilience-driven system design - will establish a theoretical basis and design framework of engineering resilience in a mathematical and statistical context, where engineering resilience will be formulated in terms of system reliability and restoration, and the proposed design framework will be demonstrated with a simplified aircraft control actuator design problem.
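    Component-level reliability analysis under uncertainty is commonly posed as estimating a failure probability P(g(X) <= 0), which RBDO then constrains from above. A crude Monte Carlo sketch of that estimate follows; the lognormal capacity/demand limit state is a hypothetical textbook-style example, not a model from this work, and all names are illustrative.

```python
import numpy as np

def failure_probability(limit_state, sample_inputs, n=100_000, seed=0):
    # Crude Monte Carlo estimate of P(g(X) <= 0), where g <= 0 denotes
    # failure; returns the estimate and its binomial standard error.
    rng = np.random.default_rng(seed)
    g = limit_state(sample_inputs(rng, n))
    pf = float(np.mean(g <= 0))
    se = float(np.sqrt(pf * (1 - pf) / n))
    return pf, se

# Hypothetical limit state: capacity R minus demand S, both lognormal;
# failure occurs when demand exceeds capacity.
def sample_rs(rng, n):
    R = rng.lognormal(mean=1.0, sigma=0.1, size=n)
    S = rng.lognormal(mean=0.5, sigma=0.2, size=n)
    return R, S

pf, se = failure_probability(lambda rs: rs[0] - rs[1], sample_rs)
```

    In an RBDO loop, design variables would shift the distributions of R and S, and the optimizer would minimize cost subject to pf staying below a target.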

    Nonparametric Bayesian Deep Learning for Scientific Data Analysis

    Deep learning (DL) has emerged as the leading paradigm for predictive modeling in a variety of domains, especially those involving large volumes of high-dimensional spatio-temporal data such as images and text. With the rise of big data in scientific and engineering problems, there is now considerable interest in the research and development of DL for scientific applications. The scientific domain, however, poses unique challenges for DL, including a special emphasis on interpretability and robustness. In particular, a priority of the Department of Energy (DOE) is the research and development of probabilistic ML methods that are robust to overfitting and offer reliable uncertainty quantification (UQ) on high-dimensional noisy data that is limited in size relative to its complexity. Gaussian processes (GPs) are nonparametric Bayesian models that are naturally robust to overfitting and offer UQ out-of-the-box. Unfortunately, traditional GP methods lack the balance of expressivity and domain-specific inductive bias that is key to the success of DL. Recently, however, a number of approaches have emerged to incorporate the DL paradigm into GP methods, including deep kernel learning (DKL), deep Gaussian processes (DGPs), and neural network Gaussian processes (NNGPs). In this work, we investigate DKL, DGPs, and NNGPs as paradigms for developing robust models for scientific applications. First, we develop DKL for text classification, and apply both DKL and Bayesian neural networks (BNNs) to the problem of classifying cancer pathology reports, with BNNs attaining new state-of-the-art results. Next, we introduce the deep ensemble kernel learning (DEKL) method, which is just as powerful as DKL while admitting easier model parallelism. Finally, we derive a new model called a "bottleneck NNGP" by unifying the DGP and NNGP paradigms, thus laying the groundwork for a new class of methods for future applications.
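    The core DKL idea (a neural feature extractor composed with a standard kernel, so the GP operates in a learned representation) can be sketched in a few lines of NumPy. For brevity the extractor weights are fixed rather than trained jointly with the GP marginal likelihood, which is the part that makes DKL "learning"; all names are illustrative and this is not the dissertation's implementation.

```python
import numpy as np

def mlp_features(X, W1, W2):
    # Small feature extractor: the "deep" part of a deep kernel.
    return np.tanh(np.tanh(X @ W1) @ W2)

def deep_rbf_kernel(Xa, Xb, W1, W2, lengthscale=1.0):
    # RBF kernel evaluated in the learned feature space, not the input space.
    Fa, Fb = mlp_features(Xa, W1, W2), mlp_features(Xb, W1, W2)
    d2 = ((Fa[:, None, :] - Fb[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(Xtr, ytr, Xte, kernel, noise=1e-2):
    # Standard GP regression posterior mean with the deep kernel.
    K = kernel(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = kernel(Xte, Xtr)
    return Ks @ np.linalg.solve(K, ytr)
```

    In full DKL, the extractor weights and kernel hyperparameters are optimized together against the GP marginal likelihood; swapping the single extractor for an ensemble of extractors whose features are concatenated gives the flavor of the DEKL variant mentioned above.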