8,004 research outputs found

    A second-order PHD filter with mean and variance in target number

    The Probability Hypothesis Density (PHD) and Cardinalized PHD (CPHD) filters are popular solutions to the multi-target tracking problem due to their low complexity and ability to estimate the number and states of targets in cluttered environments. The PHD filter propagates the first-order moment (i.e., the mean) of the number of targets, while the CPHD filter propagates the full cardinality distribution of the number of targets, albeit at a greater computational cost. Introducing the Panjer point process, this paper proposes a second-order PHD filter that propagates the second-order moment (i.e., the variance) of the number of targets alongside its mean. The resulting algorithm is more versatile in its modelling choices than the PHD filter, and its computational cost is significantly lower than that of the CPHD filter. The paper compares the three filters in statistical simulations, which demonstrate that the proposed filter reacts more quickly to changes in the number of targets, i.e., target births and target deaths, than the CPHD filter. In addition, a new statistic for multi-object filters is introduced to study the correlation between the estimated number of targets in different regions of the state space and to propose a quantitative analysis of the spooky effect for the three filters.
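
    As a rough illustration of what propagating the first two moments of the target count involves, the sketch below implements a simple prediction step for the mean and variance of the number of targets under assumed binomial survival and Poisson birth models. It is a minimal sketch of the general idea, not the paper's Panjer-based recursion, and the function name and parameter values are hypothetical.

```python
def predict_cardinality(mean, var, p_survival, birth_rate):
    """Predict the mean and variance of the target count one step ahead.

    Assumes each existing target survives independently with probability
    p_survival (binomial thinning) and that births follow a Poisson process
    with mean birth_rate; these modelling choices are illustrative only.
    """
    # Binomial thinning of a random count N with mean mu and variance v:
    # E[M] = p * mu,  Var[M] = p^2 * v + p * (1 - p) * mu
    mean_surv = p_survival * mean
    var_surv = p_survival**2 * var + p_survival * (1.0 - p_survival) * mean

    # Independent Poisson births add their rate to both the mean and the variance
    return mean_surv + birth_rate, var_surv + birth_rate

# Example: on average 10 targets with variance 4, 95% survival, 0.5 expected births
print(predict_cardinality(10.0, 4.0, 0.95, 0.5))
```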

    Optimal Calibration of PET Crystal Position Maps Using Gaussian Mixture Models

    A method is developed for estimating optimal PET gamma-ray detector crystal position maps, for arbitrary crystal configurations, based on a binomial distribution model for scintillation photon arrival. The approach is based on maximum likelihood estimation of Gaussian mixture model parameters from crystal position histogram data, with the position map determined from the posterior probability boundaries between mixture components. This leads to minimum-probability-of-error crystal identification under the assumed model.
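
    A common way to realize this kind of calibration in practice is to fit a Gaussian mixture to the 2D crystal-position events and assign each event to the component with the highest posterior probability. The sketch below uses scikit-learn's EM-based GaussianMixture as a stand-in; the paper's binomial photon-arrival model is not reproduced here, and the data and the value of n_crystals are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
events = rng.random((5000, 2))   # placeholder for real 2D position-histogram events

n_crystals = 64                  # assumed number of crystals in the detector block
gmm = GaussianMixture(n_components=n_crystals, covariance_type="full", random_state=0)
gmm.fit(events)

# MAP assignment: each event goes to the component (crystal) with the highest
# posterior probability; the boundaries between these regions form the position map.
crystal_ids = gmm.predict(events)
posteriors = gmm.predict_proba(events)
```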

    Delayed Sampling and Automatic Rao-Blackwellization of Probabilistic Programs

    We introduce a dynamic mechanism for the solution of analytically tractable substructure in probabilistic programs, using conjugate priors and affine transformations to reduce variance in Monte Carlo estimators. For inference with Sequential Monte Carlo, this automatically yields improvements such as locally optimal proposals and Rao-Blackwellization. The mechanism maintains a directed graph alongside the running program that evolves dynamically as operations are triggered upon it. Nodes of the graph represent random variables, edges the analytically tractable relationships between them. Random variables remain in the graph for as long as possible, to be sampled only when they are used by the program in a way that cannot be resolved analytically. In the meantime, they are conditioned on as many observations as possible. We demonstrate the mechanism with a few pedagogical examples, as well as a linear-nonlinear state-space model with simulated data, and an epidemiological model with real data from a dengue outbreak in Micronesia. In all cases one or more variables are automatically marginalized out to significantly reduce variance in estimates of the marginal likelihood, in the final case facilitating a random-weight or pseudo-marginal-type importance sampler for parameter estimation. We have implemented the approach in Anglican and in a new probabilistic programming language called Birch. (Comment: 13 pages, 4 figures)
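
    To see why marginalizing analytically tractable substructure reduces estimator variance, consider a toy Normal-Normal conjugate pair. The sketch below, with made-up parameter values and no relation to the actual Anglican or Birch implementations, compares a naive importance weight obtained by sampling the latent variable against the closed-form marginal likelihood that a delayed-sampling mechanism could use instead.

```python
import numpy as np

# Toy conjugate model: x ~ N(mu0, s0^2), y | x ~ N(x, s^2).
# All numeric values are assumptions for demonstration.
mu0, s0, s = 0.0, 1.0, 0.5
y_obs = 0.8
rng = np.random.default_rng(0)

def naive_weight():
    # Sample the latent x, then weight by the likelihood of the observation
    x = rng.normal(mu0, s0)
    return np.exp(-0.5 * ((y_obs - x) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def marginal_weight():
    # x is never sampled: the marginal p(y) = N(y; mu0, s0^2 + s^2) is closed form,
    # so the "weight" is exact and has zero variance.
    var = s0**2 + s**2
    return np.exp(-0.5 * (y_obs - mu0) ** 2 / var) / np.sqrt(2 * np.pi * var)

naive = np.array([naive_weight() for _ in range(10_000)])
print(naive.mean(), naive.std())   # noisy Monte Carlo estimate of p(y_obs)
print(marginal_weight())           # exact value of p(y_obs)
```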

    Locally adaptive smoothing with Markov random fields and shrinkage priors

    We present a locally adaptive nonparametric curve fitting method that operates within a fully Bayesian framework. The method uses shrinkage priors to induce sparsity in order-k differences of the latent trend function, providing a combination of local adaptation and global control. Using a scale-mixture-of-normals representation of shrinkage priors, we make explicit connections between our method and kth-order Gaussian Markov random field smoothing. We call the resulting processes shrinkage prior Markov random fields (SPMRFs). We use Hamiltonian Monte Carlo to approximate the posterior distribution of the model parameters because this method provides superior performance in the presence of the high dimensionality and strong parameter correlations exhibited by our models. We compare the performance of three prior formulations using simulated data and find that the horseshoe prior provides the best compromise between bias and precision. We apply SPMRF models to two benchmark data examples frequently used to test nonparametric methods. We find that this method is flexible enough to accommodate a variety of data-generating models and offers the adaptive properties and computational tractability to make it a useful addition to the Bayesian nonparametric toolbox. (Comment: 38 pages, to appear in Bayesian Analysis)
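
    One way to read the scale-mixture-of-normals construction is sketched below: the order-k differences of the trend receive independent normal increments whose scales are half-Cauchy local scales (as in the horseshoe) times a global scale, and the trend is recovered by integrating k times. This only draws from an illustrative prior; the function name, defaults, and boundary handling are assumptions, not the authors' SPMRF implementation.

```python
import numpy as np

def sample_spmrf_trend(n, k=2, tau=0.05, rng=None):
    """Draw one trend from an illustrative shrinkage-prior Markov random field.

    Places a horseshoe-style prior on the order-k differences of the trend:
    each increment is normal with scale tau * lambda_j, where lambda_j is
    half-Cauchy (scale-mixture-of-normals representation). Illustrative only.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = np.abs(rng.standard_cauchy(n - k))      # local half-Cauchy scales
    increments = rng.normal(0.0, tau * lam)       # order-k differences of the trend
    trend = increments
    for _ in range(k):                            # integrate k times to recover the trend
        trend = np.cumsum(trend)
    return np.concatenate([np.zeros(k), trend])   # anchor the first k values at zero

trend = sample_spmrf_trend(200)
```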