Hidden Markov Models and their Application for Predicting Failure Events
We show how Markov mixed membership models (MMMM) can be used to predict the
degradation of assets. We model the degradation path of individual assets, to
predict overall failure rates. Instead of a separate distribution for each
hidden state, we use hierarchical mixtures of distributions in the exponential
family. In our approach the observation distribution of the states is a finite
mixture distribution of a small set of (simpler) distributions shared across
all states. Using tied-mixture observation distributions offers several
advantages. The mixtures act as a regularization for typically very sparse
problems, and they reduce the computational effort for the learning algorithm
since there are fewer distributions to be found. Using shared mixtures enables
sharing of statistical strength between the Markov states and thus transfer
learning. We determine for individual assets the trade-off between the risk of
failure and extended operating hours by combining an MMMM with a partially
observable Markov decision process (POMDP) to dynamically optimize the policy
for when and how to maintain the asset.
Comment: Will be published in the proceedings of ICCS 2020;
@Booklet{EasyChair:3183, author = {Paul Hofmann and Zaid Tashman}, title =
{Hidden Markov Models and their Application for Predicting Failure Events},
howpublished = {EasyChair Preprint no. 3183}, year = {EasyChair, 2020}}
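The tied-mixture idea described above can be sketched in a few lines: every hidden state shares one small pool of component distributions, and only the mixture weights are state-specific. The following is a minimal illustrative sketch with made-up Gaussian components, not the authors' implementation.

```python
# Sketch of a tied-mixture HMM emission model: all hidden states share a
# small pool of Gaussian components; each state has only its own mixture
# weights over that shared pool. Component parameters here are made up.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_components = 4, 3           # 4 hidden states, 3 shared Gaussians
means = np.array([0.0, 2.0, 5.0])       # shared component means
stds = np.array([0.5, 1.0, 1.5])        # shared component standard deviations

# Per-state mixture weights (rows sum to 1) -- the only state-specific part.
weights = rng.dirichlet(np.ones(n_components), size=n_states)

def emission_likelihood(x):
    """p(x | state) for every state, reusing the shared component densities."""
    # Component densities are computed once and reused by all states,
    # which is where the reduction in learning effort comes from.
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return weights @ comp               # shape (n_states,)

lik = emission_likelihood(1.3)
```

Because the component densities are evaluated once and shared, adding states only adds weight vectors, which is how the tying acts as a regularizer on sparse data.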
Bayesian Nonparametric Feature and Policy Learning for Decision-Making
Learning from demonstrations has gained increasing interest in recent
years, enabling an agent to learn how to make decisions by observing an
experienced teacher. While many approaches have been proposed to solve this
problem, little work focuses on reasoning about the observed behavior. We
assume that, in many practical problems, an agent makes its decisions based on
latent features, each indicating a certain action. Therefore, we
propose a generative model for the states and actions. Inference reveals the
number of features, the features, and the policies, allowing us to learn and to
analyze the underlying structure of the observed behavior. Further, our
approach enables prediction of actions for new states. Simulations are used to
assess the performance of the algorithm based upon this model. Moreover, the
problem of learning a driver's behavior is investigated, demonstrating the
performance of the proposed model in a real-world scenario.
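The core task in the abstract above, recovering a policy from observed state-action pairs, can be illustrated with a deliberately simplified sketch. The full model is Bayesian nonparametric; this toy version only estimates a smoothed tabular policy p(action | state) from demonstration counts, and all names and data are hypothetical.

```python
# Toy sketch of policy learning from demonstrations: estimate a tabular
# policy p(action | state) by counting observed state-action pairs, with
# an add-alpha (Dirichlet) prior for smoothing. The paper's model infers
# latent features nonparametrically; this only shows the basic idea of
# reading a policy off observed behavior.
from collections import Counter

def estimate_policy(demonstrations, n_actions, alpha=1.0):
    """Return p(action | state) as a dict: state -> list of probabilities."""
    counts = Counter(demonstrations)            # (state, action) -> count
    states = {s for s, _ in demonstrations}
    policy = {}
    for s in states:
        c = [counts[(s, a)] + alpha for a in range(n_actions)]
        total = sum(c)
        policy[s] = [x / total for x in c]
    return policy

# Hypothetical demonstrations: (state, action) pairs from a teacher.
demos = [("slow", 0), ("slow", 0), ("slow", 1), ("fast", 1), ("fast", 1)]
pol = estimate_policy(demos, n_actions=2)
```

The smoothing prior also gives a (uniform) prediction for actions in states never visited by the teacher, a crude stand-in for the generalization the generative model provides.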
Iterated filtering methods for Markov process epidemic models
Dynamic epidemic models have proven valuable for public health decision
makers as they provide useful insights into the understanding and prevention of
infectious diseases. However, inference for these types of models can be
difficult because the disease spread is typically only partially observed, e.g.
in the form of reported incidences in given time periods. This chapter discusses
how to perform likelihood-based inference for partially observed Markov
epidemic models when it is relatively easy to generate samples from the Markov
transmission model while the likelihood function is intractable. The first part
of the chapter reviews the theoretical background of inference for partially
observed Markov processes (POMP) via iterated filtering. In the second part of
the chapter the performance of the method and associated practical difficulties
are illustrated on two examples. In the first example a simulated outbreak data
set consisting of the number of newly reported cases aggregated by week is
fitted to a POMP where the underlying disease transmission model is assumed to
be a simple Markovian SIR model. The second example illustrates possible model
extensions such as seasonal forcing and over-dispersion in both the
transmission and the observation model, which can be used, e.g., when analysing
routinely collected rotavirus surveillance data. Both examples are implemented
using the R package pomp (King et al., 2016) and the code is made available
online.
Comment: This manuscript is a preprint of a chapter to appear in the Handbook
of Infectious Disease Data Analysis, Held, L., Hens, N., O'Neill, P.D. and
Wallinga, J. (Eds.). Chapman & Hall/CRC, 2018. Please use the book for
possible citations. Corrected typo in the references and modified second
example.
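The filtering step underlying the method described above can be sketched compactly: a bootstrap particle filter that estimates the likelihood of weekly case counts under a stochastic SIR model, which is exactly the "easy to simulate, intractable likelihood" setting. The chapter uses the R package pomp; the following is only an illustrative Python re-implementation of the filtering idea, with made-up data and parameter values.

```python
# Sketch of a bootstrap particle filter for a chain-binomial SIR model
# with binomially under-reported weekly cases. Iterated filtering would
# wrap this likelihood estimate in a parameter-perturbation loop; only
# the filtering core is shown. All numbers below are made up.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(42)

def sir_step(S, I, beta, gamma, N):
    """One stochastic (chain-binomial) SIR transition for each particle."""
    p_inf = 1.0 - np.exp(-beta * I / N)   # per-susceptible infection prob.
    p_rec = 1.0 - np.exp(-gamma)          # per-infective recovery prob.
    new_inf = rng.binomial(S, p_inf)
    new_rec = rng.binomial(I, p_rec)
    return S - new_inf, I + new_inf - new_rec, new_inf

def particle_loglik(cases, beta, gamma, N, I0, n_particles=500, rho=0.5):
    """Bootstrap-particle-filter estimate of log p(cases | beta, gamma)."""
    S = np.full(n_particles, N - I0)
    I = np.full(n_particles, I0)
    loglik = 0.0
    for y in cases:
        S, I, new_inf = sir_step(S, I, beta, gamma, N)
        # Observation model: reported cases ~ Binomial(new infections, rho).
        w = binom.pmf(y, new_inf, rho) + 1e-300   # floor avoids zero weights
        loglik += np.log(w.mean())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        S, I = S[idx], I[idx]                      # resample particles
    return loglik

weekly_cases = [3, 7, 12, 18, 14, 9]               # made-up outbreak data
ll = particle_loglik(weekly_cases, beta=0.6, gamma=0.3, N=1000, I0=5)
```

Iterated filtering repeats such filter passes while perturbing the parameters and shrinking the perturbations, so the particle swarm gradually concentrates near the maximum-likelihood estimate.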
Bayesian Optimization with Unknown Constraints
Recent work on Bayesian optimization has shown its effectiveness in global
optimization of difficult black-box objective functions. Many real-world
optimization problems of interest also have constraints which are unknown a
priori. In this paper, we study Bayesian optimization for constrained problems
in the general case that noise may be present in the constraint functions, and
the objective and constraints may be evaluated independently. We provide
motivating practical examples, and present a general framework to solve such
problems. We demonstrate the effectiveness of our approach on optimizing the
performance of online latent Dirichlet allocation subject to topic sparsity
constraints, tuning a neural network given test-time memory constraints, and
optimizing Hamiltonian Monte Carlo to achieve maximal effectiveness in a fixed
time, subject to passing standard convergence diagnostics.
Comment: 14 pages, 3 figures
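A standard way to realize the framework described above is to weight expected improvement by the probability that each (independently modeled, noisy) constraint is satisfied. The sketch below assumes the GP posterior means and standard deviations at a candidate point are already computed; it only shows the acquisition-function arithmetic, under a minimization convention where feasibility means c_k(x) <= 0.

```python
# Sketch of a constraint-weighted expected-improvement acquisition:
# EI(x) * prod_k P(c_k(x) <= 0), with each objective/constraint modeled
# by an independent Gaussian-process posterior (means/stds given here).
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sigma_f, best_f, mu_c, sigma_c):
    """Constraint-weighted EI for minimization.

    mu_f, sigma_f : GP posterior mean/std of the objective at candidate x
    best_f        : best feasible objective value observed so far
    mu_c, sigma_c : posterior means/stds of the constraints, where
                    feasibility means c_k(x) <= 0
    """
    z = (best_f - mu_f) / sigma_f
    ei = (best_f - mu_f) * norm.cdf(z) + sigma_f * norm.pdf(z)
    # Probability that every constraint is satisfied, assuming independence.
    p_feasible = np.prod(norm.cdf(-np.asarray(mu_c) / np.asarray(sigma_c)))
    return ei * p_feasible

# Hypothetical posterior values at one candidate point.
acq = constrained_ei(mu_f=0.5, sigma_f=0.2, best_f=0.8,
                     mu_c=[-0.1], sigma_c=[0.3])
```

A candidate whose constraint posterior shifts toward infeasibility is down-weighted smoothly, so the optimizer can still explore near the feasible boundary rather than discarding such points outright.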