Active multi-fidelity Bayesian online changepoint detection
Online algorithms for detecting changepoints, or abrupt shifts in the
behavior of a time series, are often deployed with limited resources, e.g., to
edge computing settings such as mobile phones or industrial sensors. In these
scenarios it may be beneficial to trade the cost of collecting an environmental
measurement against the quality or "fidelity" of this measurement and how the
measurement affects changepoint estimation. For instance, one might decide
between inertial measurements or GPS to determine changepoints for motion. A
Bayesian approach to changepoint detection is particularly appealing because we
can represent our posterior uncertainty about changepoints and make active,
cost-sensitive decisions about data fidelity to reduce this posterior
uncertainty. Moreover, the total cost could be dramatically lowered through
active fidelity switching, while remaining robust to changes in data
distribution. We propose a multi-fidelity approach that makes cost-sensitive
decisions about which data fidelity to collect based on maximizing information
gain with respect to changepoints. We evaluate this framework on synthetic,
video, and audio data and show that this information-based approach results in
accurate predictions while reducing total cost.
Comment: 37th Conference on Uncertainty in Artificial Intelligence
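The Bayesian online changepoint detection machinery this abstract builds on (the run-length posterior recursion of Adams and MacKay) can be sketched for a single-fidelity Gaussian mean-change model. Everything below (the hazard rate, the conjugate Gaussian prior, the function name) is an illustrative assumption, not the paper's multi-fidelity implementation:

```python
import numpy as np

def bocd(data, hazard=0.01, mu0=0.0, var0=1.0, obs_var=1.0):
    """Bayesian online changepoint detection for a Gaussian mean-change
    model with known observation variance. Returns, for each time step,
    the posterior over the current run length (time since the last
    changepoint)."""
    log_r = np.array([0.0])        # log P(r_0 = 0) = 0
    mus = np.array([mu0])          # per-run-length posterior means
    vars_ = np.array([var0])       # per-run-length posterior variances
    run_length_posteriors = []
    for x in data:
        # predictive log-density of x under each run length's posterior
        pred_var = vars_ + obs_var
        log_pred = -0.5 * (np.log(2 * np.pi * pred_var) + (x - mus) ** 2 / pred_var)
        # run grows by one (no changepoint) vs. resets to zero (changepoint)
        log_growth = log_r + log_pred + np.log(1.0 - hazard)
        log_cp = np.logaddexp.reduce(log_r + log_pred + np.log(hazard))
        log_r = np.concatenate(([log_cp], log_growth))
        log_r -= np.logaddexp.reduce(log_r)  # normalise
        # conjugate Gaussian update of each run's posterior over the mean
        new_vars = 1.0 / (1.0 / vars_ + 1.0 / obs_var)
        new_mus = new_vars * (mus / vars_ + x / obs_var)
        mus = np.concatenate(([mu0], new_mus))
        vars_ = np.concatenate(([var0], new_vars))
        run_length_posteriors.append(np.exp(log_r))
    return run_length_posteriors
```

Each entry of the returned list is a distribution over the current run length; a collapse of its argmax toward zero signals a changepoint. The paper's active multi-fidelity extension would, at each step, additionally choose which measurement fidelity to collect so as to maximise expected information gain about this posterior.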
Adaptation of the Tuning Parameter in General Bayesian Inference with Robust Divergence
We introduce a methodology for robust Bayesian estimation with a robust
divergence (e.g., the density power divergence or γ-divergence), indexed by
a single tuning parameter. It is well known that the posterior density induced
by a robust divergence yields estimators that are highly robust to outliers,
provided the tuning parameter is chosen appropriately. In a Bayesian
framework, one natural way to select the tuning parameter would be to use the
evidence (marginal likelihood). However, we numerically illustrate that the
evidence induced by the density power divergence fails to select the optimal
tuning parameter, since a robust divergence does not correspond to a
statistical model. To overcome this problem, we treat the exponential of the
robust divergence as an unnormalized statistical model and estimate the tuning
parameter by minimizing the Hyvärinen score. We also provide adaptive
computational methods based on sequential Monte Carlo (SMC) samplers, which
enable us to obtain the optimal tuning parameter and samples from the
posterior distribution simultaneously. The empirical performance of the
proposed method is demonstrated through simulations and an application to real
data.
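As a toy illustration of the kind of robust-divergence posterior whose tuning parameter is at issue here, the sketch below evaluates a density power divergence (DPD) generalised posterior for a 1-D Gaussian location model on a grid, under a flat prior. The grid, contamination level, and β values are illustrative assumptions; the paper's Hyvärinen-score and SMC machinery for choosing β is not reproduced:

```python
import numpy as np

def dpd_log_posterior(mu_grid, data, beta, sigma=1.0):
    """Unnormalised log generalised posterior over a grid of location
    parameters, under the density power divergence (DPD) loss with
    tuning parameter beta and a flat prior (toy 1-D Gaussian model).
    As beta -> 0 the loss reduces, up to a constant, to the negative
    log-likelihood, recovering the standard posterior."""
    out = np.empty_like(mu_grid)
    # integral term: \int f_mu(y)^{1+beta} dy for a N(mu, sigma^2) density
    integral = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)
    for i, mu in enumerate(mu_grid):
        f = np.exp(-0.5 * ((data - mu) / sigma) ** 2) / np.sqrt(2 * np.pi * sigma**2)
        loss = -(f**beta).sum() / beta + len(data) * integral / (1 + beta)
        out[i] = -loss  # flat prior: log posterior = -loss + const
    return out

# Contaminated sample: 95 clean points around 0 plus 5 outliers at 10.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 10.0)])
grid = np.linspace(-2.0, 3.0, 500)

def posterior_mean(beta):
    logp = dpd_log_posterior(grid, data, beta)
    w = np.exp(logp - logp.max())
    w /= w.sum()
    return float((grid * w).sum())
```

A near-zero β reproduces the outlier-dragged standard posterior mean (close to the contaminated sample mean), while a moderate β keeps the estimate near the clean centre; choosing β well is exactly the problem the abstract addresses.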
Bayesian Non-parametric Hidden Markov Model for Agile Radar Pulse Sequences Streaming Analysis
Multi-function radars (MFRs) are sophisticated types of sensors with the
capabilities of complex agile inter-pulse modulation implementation and dynamic
work mode scheduling. The developments in MFRs pose great challenges to modern
electronic reconnaissance systems or radar warning receivers for recognition
and inference of MFR work modes. To address this issue, this paper proposes an
online processing framework for parameter estimation and change point detection
of MFR work modes. First, we design a fully conjugate Bayesian non-parametric
hidden Markov model with a tailored prior distribution (agile BNP-HMM) to
represent the pulse agility characteristics of MFRs. The proposed model admits
fully variational Bayesian inference. The framework then consists of two main
parts. The first part is the agile BNP-HMM, which automatically infers the
number of HMM hidden states and the emission distribution of each hidden
state. A lower bound on the estimation error is derived, and the proposed
algorithm is shown to perform close to this bound. The second part utilizes
streaming Bayesian updating to facilitate computation and designs an online
work-mode change detection framework based on a weighted sequential
probability ratio test. We demonstrate that the proposed framework is
consistently effective and robust compared with baseline methods on diverse
simulated datasets.
Comment: 15 pages, 10 figures, submitted to IEEE Transactions on Signal Processing
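The detection step rests on a sequential probability ratio test. A plain, unweighted SPRT between two simple Gaussian hypotheses (a simplified stand-in for the weighted variant the framework uses; the error rates and parameters below are illustrative) can be written as:

```python
import numpy as np

def sprt(stream, mu0, mu1, sigma=1.0, alpha=0.05, beta=0.05):
    """Plain sequential probability ratio test between two Gaussian
    hypotheses H0: N(mu0, sigma^2) and H1: N(mu1, sigma^2), with target
    type-I error alpha and type-II error beta (Wald's thresholds).
    Returns the decision ('H0', 'H1', or 'undecided') and the number
    of samples consumed."""
    a = np.log(beta / (1 - alpha))   # lower (accept-H0) threshold
    b = np.log((1 - beta) / alpha)   # upper (accept-H1) threshold
    llr = 0.0
    n = 0
    for n, x in enumerate(stream, start=1):
        # log-likelihood ratio increment log f1(x) / f0(x)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma**2)
        if llr <= a:
            return "H0", n
        if llr >= b:
            return "H1", n
    return "undecided", n
```

In the change-detection setting, H0 would correspond to "same work mode" and H1 to "work mode changed", with the test restarted after each detection.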
Robust and Scalable Bayesian Online Changepoint Detection
This paper proposes an online, provably robust, and scalable Bayesian approach for changepoint detection. The resulting algorithm has key advantages over previous work: it provides provable robustness by leveraging the generalised Bayesian perspective, and also addresses the scalability issues of previous attempts. Specifically, the proposed generalised Bayesian formalism leads to conjugate posteriors whose parameters are available in closed form by leveraging diffusion score matching. The resulting algorithm is exact, can be updated through simple algebra, and is more than 10 times faster than its closest competitor.
Differentially Private Statistical Inference through β-Divergence One Posterior Sampling
Differential privacy guarantees allow the results of a statistical analysis
involving sensitive data to be released without compromising the privacy of any
individual taking part. Achieving such guarantees generally requires the
injection of noise, either directly into parameter estimates or into the
estimation process. Instead of artificially introducing perturbations, sampling
from Bayesian posterior distributions has been shown to be a special case of
the exponential mechanism, producing consistent and efficient private
estimates without altering the data generative process. The application of
current approaches has, however, been limited by their strong bounding
assumptions which do not hold for basic models, such as simple linear
regressors. To ameliorate this, we propose βD-Bayes, a posterior sampling
scheme from a generalised posterior targeting the minimisation of the
β-divergence between the model and the data generating process. This provides
private estimation that is generally applicable without requiring changes to
the underlying model, and consistently learns the data generating parameter.
We show that βD-Bayes produces more precise estimates for the same privacy
guarantees, and, for the first time, facilitates differentially private
estimation via posterior sampling for complex classifiers and continuous
regression models such as neural networks.
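The "one posterior sampling" release mechanism itself is simple to sketch: publish a single draw from a posterior rather than a point estimate, so the sampling randomness supplies the privacy noise. The toy below does this for a conjugate Gaussian mean model; the paper instead samples from a β-divergence generalised posterior, and the priors and noise scales here are illustrative assumptions:

```python
import numpy as np

def one_posterior_sample(data, prior_mu=0.0, prior_var=100.0, obs_var=1.0, seed=None):
    """One-posterior-sample (OPS) release for a Gaussian mean model:
    publish a single draw from the conjugate posterior instead of the
    posterior mean. The randomness of the draw, not added perturbation
    of the data, is what provides the privacy noise."""
    rng = np.random.default_rng(seed)
    n = len(data)
    # conjugate Normal-Normal update for the unknown mean
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mu = post_var * (prior_mu / prior_var + np.sum(data) / obs_var)
    return float(rng.normal(post_mu, np.sqrt(post_var)))
```

As the abstract notes, the privacy guarantee of such schemes hinges on boundedness conditions that the standard likelihood often violates, which is what the β-divergence generalised posterior is brought in to repair.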
Generalised Bayesian filtering via sequential Monte Carlo
We introduce a framework for inference in general state-space hidden Markov models (HMMs) under likelihood misspecification. In particular, we leverage the loss-theoretic perspective of Generalized Bayesian Inference (GBI) to define generalised filtering recursions in HMMs that can tackle the problem of inference under model misspecification. In doing so, we arrive at principled procedures for inference that is robust to observation contamination, by utilising the β-divergence. Operationalising the proposed framework is made possible via sequential Monte Carlo (SMC) methods, where standard particle methods and their associated convergence results are readily adapted to the new setting. We demonstrate our approach on object tracking and Gaussian process regression problems, and observe improved performance over standard filtering algorithms.
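The SMC backbone such filters adapt is the standard bootstrap particle filter; in a generalised-Bayes variant, the log-likelihood weighting line would be swapped for a robust divergence-based loss. A minimal sketch for a toy linear-Gaussian state-space model (all parameter values are illustrative):

```python
import numpy as np

def bootstrap_pf(ys, n_particles=2000, phi=0.9, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for the toy linear-Gaussian model
        x_t = phi * x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    Returns the filtering means E[x_t | y_{1:t}]. A generalised
    (robust) filter would replace the Gaussian log-likelihood in the
    weighting line with a robust loss."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)  # initial particles
    means = []
    for y in ys:
        x = phi * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        logw = -0.5 * (y - x) ** 2 / r      # weight by observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        x = rng.choice(x, size=n_particles, p=w)  # multinomial resample
    return np.array(means)
```

Because only the weighting step changes, the convergence theory for standard particle methods carries over largely intact, which is the point the abstract makes about operationalising the framework.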