Exact Bayesian curve fitting and signal segmentation
We consider regression models where the underlying functional relationship between the response and the explanatory variable is modeled as independent linear regressions on disjoint segments. We present an algorithm for perfect simulation from the posterior distribution of such a model, even allowing for an unknown number of segments and an unknown model order for the linear regressions within each segment. The algorithm is simple, scales well to large data sets, and avoids the convergence-diagnosis problem present in Markov chain Monte Carlo (MCMC) approaches to this problem. We demonstrate our algorithm on standard denoising problems, on a piecewise constant AR model, and on a speech segmentation problem.
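The perfect-simulation algorithm itself is beyond a short sketch, but the segmentation objective it targets can be illustrated with a classical dynamic-programming pass over candidate change points. This is a minimal sketch with a least-squares segment cost and a fixed per-segment penalty, not the paper's exact Bayesian method; all function names are mine:

```python
import numpy as np

def segment_cost(y, i, j):
    # Residual sum of squares when y[i:j] is fit by its own constant mean.
    seg = y[i:j]
    return float(np.sum((seg - seg.mean()) ** 2))

def optimal_partition(y, penalty):
    # Dynamic programming over the position of the last change point:
    # F[j] = min_{i < j} F[i] + cost(i, j) + penalty.
    n = len(y)
    F = [0.0] + [np.inf] * n
    last = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = F[i] + segment_cost(y, i, j) + penalty
            if c < F[j]:
                F[j], last[j] = c, i
    # Backtrack the segment start points; drop the leading 0.
    cps, j = [], n
    while j > 0:
        cps.append(last[j])
        j = last[j]
    return sorted(cps)[1:]

# Two flat segments with a jump at index 20.
y = np.concatenate([np.zeros(20), 5 * np.ones(20)])
print(optimal_partition(y, penalty=1.0))  # [20]
```

The exact method in the paper replaces this point estimate with recursions over posterior probabilities, but the segment-wise decomposition being exploited is the same.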
Segment Parameter Labelling in MCMC Mean-Shift Change Detection
This work addresses the problem of segmentation in time series data with
respect to a statistical parameter of interest in Bayesian models. It is common
to assume that the parameters are distinct within each segment, so many
Bayesian change point detection models do not exploit patterns of repetition in
the segment parameters, even though doing so can improve performance. This work
proposes a Bayesian mean-shift change point detection algorithm that makes use
of repetition in segment parameters by introducing segment class labels that
follow a Dirichlet process prior. The performance of the proposed approach was
assessed on both synthetic and real-world data, highlighting the enhanced
performance obtained with parameter labelling.
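The Dirichlet process prior over segment class labels can be illustrated through its Chinese restaurant process representation: each new segment either reuses an existing parameter class, with probability proportional to how many segments already carry that class, or opens a new class with probability proportional to the concentration parameter. A minimal sketch (function name and seeding are mine, not the paper's code):

```python
import random

def crp_labels(n_segments, alpha, seed=0):
    # Chinese restaurant process: segment joins existing class k with
    # probability counts[k] / (n + alpha), or a new class w.p. alpha / (n + alpha).
    rng = random.Random(seed)
    labels, counts = [], []
    for _ in range(n_segments):
        r = rng.uniform(0, sum(counts) + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                labels.append(k)
                counts[k] += 1
                break
        else:  # no existing class chosen: open a new one
            labels.append(len(counts))
            counts.append(1)
    return labels
```

Because classes are reused with probability proportional to their popularity, a small number of distinct segment parameters tends to emerge, which is exactly the repetition structure the proposed model exploits.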
Bayesian detection of piecewise linear trends in replicated time-series with application to growth data modelling
We consider the situation where a temporal process is composed of contiguous
segments with differing slopes and replicated noise-corrupted time series
measurements are observed. The unknown mean of the data generating process is
modelled as a piecewise linear function of time with an unknown number of
change-points. We develop a Bayesian approach to infer the joint posterior
distribution of the number and position of change-points as well as the unknown
mean parameters. A-priori, the proposed model uses an overfitting number of
mean parameters but, conditionally on a set of change-points, only a subset of
them influences the likelihood. An exponentially decreasing prior distribution
on the number of change-points gives rise to a posterior distribution
concentrating on sparse representations of the underlying sequence. A
Metropolis-Hastings Markov chain Monte Carlo (MCMC) sampler is constructed for
approximating the posterior distribution. Our method is benchmarked using
simulated data and is applied to uncover differences in the dynamics of fungal
growth from imaging time course data collected from different strains. The
source code is available on CRAN.
Comment: Accepted to the International Journal of Biostatistics.
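A toy version of Metropolis-Hastings inference for a change-point location can convey the flavour of the sampler. This is a deliberately reduced sketch, with one change point, piecewise-constant means, unit-variance noise, and a flat prior, rather than the paper's joint sampler over the number and positions of change points; all names are mine:

```python
import numpy as np

def log_post(y, tau):
    # Log-likelihood of a single change at tau: segment-wise constant means,
    # unit-variance Gaussian noise, flat prior on tau (constants dropped).
    left, right = y[:tau], y[tau:]
    return (-0.5 * np.sum((left - left.mean()) ** 2)
            - 0.5 * np.sum((right - right.mean()) ** 2))

def mh_changepoint(y, n_iter=2000, seed=1):
    # Random-walk Metropolis-Hastings over the change-point position.
    rng = np.random.default_rng(seed)
    n, tau, samples = len(y), 1, []
    for _ in range(n_iter):
        prop = tau + int(rng.integers(-3, 4))  # symmetric proposal
        if 1 <= prop <= n - 1 and \
                np.log(rng.uniform()) < log_post(y, prop) - log_post(y, tau):
            tau = prop
        samples.append(tau)
    return samples
```

Run on a signal with a jump at index 30, the chain drifts from its starting point and then concentrates its samples on the true location; the paper's sampler additionally proposes birth and death moves that change the number of change points.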
Joint segmentation of piecewise constant autoregressive processes by using a hierarchical model and a Bayesian sampling approach
We propose a joint segmentation algorithm for piecewise constant autoregressive (AR) processes recorded by several independent sensors. The algorithm is based on a hierarchical Bayesian model. Appropriate priors make it possible to introduce correlations between the change locations of the observed signals. Numerical problems inherent to Bayesian inference are solved by a Gibbs sampling strategy. The proposed joint segmentation methodology yields improved segmentation results when compared to parallel and independent individual signal segmentations. The initial algorithm is derived for piecewise constant AR processes whose orders are fixed on each segment; however, an extension to models with unknown model orders is also discussed. Theoretical results are illustrated by many simulations conducted with synthetic signals and real arc-tracking and speech signals.
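The coupling of change locations across sensors can be sketched with a simple hierarchical prior: a global indicator marks candidate change times shared by all signals, and each sensor then activates such a time independently. This is an illustrative generative draw under my own parameterisation, not the authors' model:

```python
import numpy as np

def sample_joint_changes(n_signals, n_times, p_global, p_local, seed=0):
    # A global indicator marks times where a change is shared across sensors;
    # each sensor then activates such a time with probability p_local.
    rng = np.random.default_rng(seed)
    shared = rng.uniform(size=n_times) < p_global
    local = rng.uniform(size=(n_signals, n_times)) < p_local
    return shared & local  # sensor j changes at t only if t is a shared time
```

Because a sensor can only change at globally flagged times, change points cluster at the same instants across signals, which is the correlation structure the hierarchical priors in the paper are designed to induce.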
Bayesian orthogonal component analysis for sparse representation
This paper addresses the problem of identifying a lower dimensional space
where observed data can be sparsely represented. This under-complete dictionary
learning task can be formulated as a blind separation problem of sparse sources
linearly mixed with an unknown orthogonal mixing matrix. This issue is
formulated in a Bayesian framework. First, the unknown sparse sources are
modeled as Bernoulli-Gaussian processes. To promote sparsity, a weighted
mixture of an atom at zero and a Gaussian distribution is proposed as prior
distribution for the unobserved sources. A non-informative prior distribution
defined on an appropriate Stiefel manifold is selected for the mixing matrix.
The Bayesian inference on the unknown parameters is conducted using a Markov
chain Monte Carlo (MCMC) method. A partially collapsed Gibbs sampler is
designed to generate samples asymptotically distributed according to the joint
posterior distribution of the unknown model parameters and hyperparameters.
These samples are then used to approximate the joint maximum a posteriori
estimator of the sources and mixing matrix. Simulations conducted on synthetic
data are reported to illustrate the performance of the method for recovering
sparse representations. An application to sparse coding on an under-complete
dictionary is finally investigated.
Comment: Revised version. Accepted to IEEE Trans. Signal Processing.
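The Bernoulli-Gaussian ("spike-and-slab") source prior described above is easy to sample from directly: each entry is exactly zero with some probability, and Gaussian otherwise. A minimal sketch (names are mine):

```python
import numpy as np

def bernoulli_gaussian(n, p_active, sigma, seed=0):
    # Spike-and-slab draw: an entry is exactly 0 with prob. 1 - p_active,
    # otherwise drawn from N(0, sigma^2) (the Gaussian "slab").
    rng = np.random.default_rng(seed)
    active = rng.uniform(size=n) < p_active
    return np.where(active, rng.normal(0.0, sigma, size=n), 0.0)
```

Unlike a Laplace prior, which only shrinks coefficients toward zero, the atom at zero puts positive probability on exact zeros, so sparsity is modelled explicitly rather than approximated.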
A non-homogeneous dynamic Bayesian network with sequentially coupled interaction parameters for applications in systems and synthetic biology
An important and challenging problem in systems biology is the inference of gene regulatory networks from short non-stationary time series of transcriptional profiles. A popular approach that has been widely applied to this end is based on dynamic Bayesian networks (DBNs), although traditional homogeneous DBNs fail to model the non-stationarity and time-varying nature of the gene regulatory processes. Various authors have therefore recently proposed combining DBNs with multiple changepoint processes to obtain time-varying dynamic Bayesian networks (TV-DBNs). However, TV-DBNs are not without problems. Gene expression time series are typically short, which leaves the model over-flexible, leading to over-fitting or inflated inference uncertainty. In the present paper, we introduce a Bayesian regularization scheme that addresses this difficulty. Our approach is based on the rationale that changes in gene regulatory processes appear gradually during an organism's life cycle or in response to a changing environment, and we have integrated this notion in the prior distribution of the TV-DBN parameters. We have extensively tested our regularized TV-DBN model on synthetic data, in which we have simulated short non-homogeneous time series produced from a system subject to gradual change. We have then applied our method to real-world gene expression time series, measured during the life cycle of Drosophila melanogaster, under artificially generated constant light conditions in Arabidopsis thaliana, and from a synthetically designed strain of Saccharomyces cerevisiae exposed to a changing environment.
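The sequential coupling of interaction parameters can be sketched as a random-walk prior across segments: the parameters of segment h are centred on those of segment h-1, so regulatory change is gradual rather than abrupt. An illustrative draw under my own simplified parameterisation, not the authors' full TV-DBN:

```python
import numpy as np

def coupled_segment_weights(n_segments, dim, tau, seed=0):
    # Random-walk coupling: segment h's parameters are drawn around those of
    # segment h - 1, so successive segments differ only gradually.
    rng = np.random.default_rng(seed)
    w = [rng.normal(size=dim)]
    for _ in range(n_segments - 1):
        w.append(w[-1] + rng.normal(scale=tau, size=dim))
    return np.array(w)
```

A small coupling scale tau shrinks each segment's parameters toward the previous segment's, which is what regularizes the over-flexible model when the time series within each segment are short.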
Hierarchical Bayesian sparse image reconstruction with application to MRFM
This paper presents a hierarchical Bayesian model to reconstruct sparse
images when the observations are obtained from linear transformations and
corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is
well suited to such naturally sparse image applications as it seamlessly
accounts for properties such as sparsity and positivity of the image via
appropriate Bayes priors. We propose a prior that is based on a weighted
mixture of a positive exponential distribution and a mass at zero. The prior
has hyperparameters that are tuned automatically by marginalization over the
hierarchical Bayesian model. To overcome the complexity of the posterior
distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be
used to estimate the image to be recovered, e.g. by maximizing the estimated
posterior distribution. In our fully Bayesian approach the posteriors of all
the parameters are available. Thus our algorithm provides more information than
other previously proposed sparse reconstruction methods that only give a point
estimate. The performance of our hierarchical Bayesian sparse reconstruction
method is illustrated on synthetic and real data collected from a tobacco virus
sample using a prototype MRFM instrument.
Comment: v2: final version; IEEE Trans. Image Processing, 200
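The positivity-enforcing prior described above, a weighted mixture of a point mass at zero and a positive exponential distribution, can be sampled directly. A minimal sketch (names and parameterisation are mine):

```python
import numpy as np

def exp_spike_slab(n, w, lam, seed=0):
    # Each pixel is 0 with prob. 1 - w, otherwise drawn from an exponential
    # with rate lam, so the draw is both sparse and non-negative.
    rng = np.random.default_rng(seed)
    active = rng.uniform(size=n) < w
    return np.where(active, rng.exponential(1.0 / lam, size=n), 0.0)
```

Swapping the Gaussian slab of a standard spike-and-slab prior for an exponential one is what lets the model encode positivity and sparsity in a single prior, as the abstract describes.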