Discovering ship navigation patterns towards environmental impact modeling
In this work, a data pipeline to manage and extract patterns from time series is described. The patterns, found with a combination of a Conditional Restricted Boltzmann Machine (CRBM) and the k-Means algorithm, are then validated using a visualization tool. The motivation for finding these patterns is to support future emission modeling.
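The abstract pairs a CRBM feature extractor with k-Means clustering. As a minimal sketch of the clustering stage only, the toy below runs plain k-Means on sliding-window features of a synthetic time series; the windowing stands in for CRBM hidden activations, and all names and parameters here are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means on feature rows X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each row to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster emptied.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy "ship" time series: two alternating regimes (slow vs. fast oscillation).
t = np.arange(2000)
series = np.where((t // 500) % 2 == 0, np.sin(0.05 * t), np.sin(0.5 * t))

# Sliding windows as feature vectors (a stand-in for CRBM features).
win = 50
windows = np.stack([series[i:i + win] for i in range(0, len(series) - win, win)])
labels, _ = kmeans(windows, k=2)
```

Each window gets a cluster label, so recurring navigation regimes show up as recurring labels along the voyage.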
Hidden Markov Models and their Application for Predicting Failure Events
We show how Markov mixed membership models (MMMM) can be used to predict the
degradation of assets. We model the degradation path of individual assets, to
predict overall failure rates. Instead of a separate distribution for each
hidden state, we use hierarchical mixtures of distributions in the exponential
family. In our approach the observation distribution of the states is a finite
mixture distribution of a small set of (simpler) distributions shared across
all states. Using tied-mixture observation distributions offers several
advantages. The mixtures act as a regularization for typically very sparse
problems, and they reduce the computational effort for the learning algorithm
since there are fewer distributions to be found. Using shared mixtures enables
sharing of statistical strength between the Markov states and thus transfer
learning. We determine for individual assets the trade-off between the risk of
failure and extended operating hours by combining a MMMM with a partially
observable Markov decision process (POMDP) to dynamically optimize the policy
for when and how to maintain the asset.
Comment: to be published in the proceedings of ICCS 2020.
@booklet{EasyChair:3183,
  author = {Paul Hofmann and Zaid Tashman},
  title = {Hidden Markov Models and their Application for Predicting Failure Events},
  howpublished = {EasyChair Preprint no. 3183},
  year = {2020}}
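The tied-mixture idea above can be sketched in a few lines: every hidden state emits from the same small set of shared component densities, and only the mixture weights differ per state. The Gaussian components, weights, and the "healthy"/"degraded" labels below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Shared mixture components (means, stds) used by *all* hidden states.
comp_mu = np.array([0.0, 5.0, 10.0])
comp_sd = np.array([1.0, 1.0, 1.0])

# State-specific mixture weights over the shared components: a "healthy"
# state draws mostly from component 0, a "degraded" state from component 2.
W = np.array([[0.80, 0.15, 0.05],   # state 0: healthy
              [0.10, 0.30, 0.60]])  # state 1: degraded

def emission_probs(x):
    """p(x | state) under tied-mixture observation distributions."""
    comp = np.exp(-0.5 * ((x - comp_mu) / comp_sd) ** 2) / (comp_sd * np.sqrt(2 * np.pi))
    return W @ comp   # mix the shared densities with state-specific weights

p = emission_probs(9.5)  # a high sensor reading
```

Because the component densities are shared, learning only has to fit a few distributions plus per-state weight vectors, which is the regularization and computational saving the abstract describes.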
Estimating ensemble flows on a hidden Markov chain
We propose a new framework to estimate the evolution of an ensemble of
indistinguishable agents on a hidden Markov chain using only aggregate output
data. This work can be viewed as an extension of the recent developments in
optimal mass transport and Schr\"odinger bridges to the finite state space
hidden Markov chain setting. The flow of the ensemble is estimated by solving a
maximum likelihood problem, which has a convex formulation at the
infinite-particle limit, and we develop a fast numerical algorithm for it. We
illustrate in two numerical examples how this framework can be used to track
the flow of identical and indistinguishable dynamical systems.
Comment: 8 pages, 4 figures.
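In the infinite-particle limit the abstract mentions, an ensemble of indistinguishable agents is summarized by its marginal distribution over the chain's states, which evolves linearly; only aggregate outputs are observed. The two-state chain and output values below are illustrative assumptions used to show that forward flow, not the paper's estimation algorithm.

```python
import numpy as np

# Transition matrix of the hidden Markov chain (rows sum to 1).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Output value emitted in each hidden state.
c = np.array([0.0, 1.0])

# Marginal distribution mu_t of the ensemble over states.
mu = np.array([1.0, 0.0])      # all agents start in state 0
aggregates = []
for _ in range(10):
    aggregates.append(mu @ c)  # aggregate (mean) output of the ensemble
    mu = mu @ A                # the ensemble flows one step forward
```

The estimation problem in the paper runs this picture in reverse: given the aggregate outputs, recover the most likely flow `mu_t`, which becomes a convex maximum-likelihood problem in this limit.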
Fraud Detection Using the Hidden Markov Model Method in Business Processes
The Hidden Markov Model is a statistical method based on the simple Markov Model that models a system and divides it into two kinds of states: hidden states and observation states. In this final project, the author proposes using the Hidden Markov Model method to detect fraud in the execution of a business process. By observing the elements that make up a case/event, namely its activities, a probability value is obtained, which in turn yields a prediction of whether the case/event is fraudulent or not. Experimental results show that the proposed method produces final predictions with a TPR of 87.5% and a TNR of 99.4%.
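The evaluation reported above uses TPR (true positive rate) and TNR (true negative rate). As a minimal sketch of just these metrics, with made-up labels chosen so the TPR matches the 87.5% figure, not the paper's data:

```python
def tpr_tnr(y_true, y_pred):
    """TPR = TP/(TP+FN), TNR = TN/(TN+FP); 1 = fraud, 0 = normal."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical run: 7 of 8 fraud cases caught, all 9 normal cases cleared.
tpr, tnr = tpr_tnr([1] * 8 + [0] * 9, [1] * 7 + [0] + [0] * 9)
```

Reporting both rates matters here because fraud cases are rare, so plain accuracy would be dominated by the TNR.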
Learning Temporal Dependence from Time-Series Data with Latent Variables
We consider the setting where a collection of time series, modeled as random
processes, evolve in a causal manner, and one is interested in learning the
graph governing the relationships of these processes. A special case of wide
interest and applicability is the setting where the noise is Gaussian and
relationships are Markov and linear. We study this setting with two additional
features: firstly, each random process has a hidden (latent) state, which we
use to model the internal memory possessed by the variables (similar to hidden
Markov models). Secondly, each variable can depend on its latent memory state
through a random lag (rather than a fixed lag), thus modeling memory recall
with differing lags at distinct times. Under this setting, we develop an
estimator and prove that under a genericity assumption, the parameters of the
model can be learned consistently. We also propose a practical adaption of this
estimator, which demonstrates significant performance gains in both synthetic
and real-world datasets.
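The generative picture described above, a hidden memory state evolving as a Markov process, with the observed variable reading that memory through a random lag, can be simulated in a few lines. The two-lag choice and all coefficients below are illustrative assumptions, not the paper's model specification.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
a, b = 0.9, 0.7            # assumed memory-persistence and coupling coefficients

z = np.zeros(T)            # hidden (latent) memory state
x = np.zeros(T)            # observed variable
for t in range(2, T):
    z[t] = a * z[t - 1] + rng.normal(scale=0.1)    # latent Markov memory
    lag = rng.choice([1, 2])                       # random memory-recall lag
    x[t] = b * z[t - lag] + rng.normal(scale=0.1)  # observation via that lag
```

The estimation task in the paper is the inverse of this simulation: from `x` alone (across many such processes), recover the dependence graph and lag structure.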
A Simple Estimator for Dynamic Models with Serially Correlated Unobservables
We present a method for estimating Markov dynamic models with unobserved state variables which can be serially correlated over time. We focus on the case where all the model variables have discrete support. Our estimator is simple to compute because it is noniterative and involves only elementary matrix manipulations. Our estimation method is nonparametric, in that no parametric assumptions on the distributions of the unobserved state variables or the laws of motion of the state variables are required. Monte Carlo simulations show that the estimator performs well in practice, and we illustrate its use with a dataset of doctors' prescriptions of pharmaceutical drugs.
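To convey the "noniterative, elementary matrix manipulations" flavor in the simplest possible case, the sketch below estimates a discrete Markov transition matrix from a fully observed sequence by counting transitions and row-normalizing. This is only the fully observed baseline, chosen for illustration; the paper's contribution is handling unobserved, serially correlated states, which this sketch does not do.

```python
import numpy as np

def estimate_transition(seq, n_states):
    """Noniterative, nonparametric transition-matrix estimate:
    count observed transitions, then row-normalize the count matrix."""
    C = np.zeros((n_states, n_states))
    for s, s_next in zip(seq[:-1], seq[1:]):
        C[s, s_next] += 1
    return C / C.sum(axis=1, keepdims=True)

P_hat = estimate_transition([0, 0, 1, 1, 1, 0, 0, 1, 0, 0], n_states=2)
```

Because the estimator is a closed-form function of count matrices, there is no likelihood to iterate on, which is what makes this style of estimator cheap to compute.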