
    Computation of Gaussian orthant probabilities in high dimension

    We study the computation of Gaussian orthant probabilities, i.e. the probability that a Gaussian vector falls inside an orthant. The Geweke-Hajivassiliou-Keane (GHK) algorithm [Genz, 1992; Geweke, 1991; Hajivassiliou et al., 1996; Keane, 1993] is currently used for integrals of dimension greater than 10. In this paper we show that for Markovian covariances the GHK algorithm can be interpreted as the estimator of the normalizing constant of a state-space model using sequential importance sampling (SIS). We show that for an AR(1) process the variance of the GHK estimator, properly normalized, diverges exponentially fast with the dimension. As an improvement we propose using a particle filter (PF). We then generalize this idea to arbitrary covariance matrices using sequential Monte Carlo (SMC) with properly tailored MCMC moves. We show empirically that this can lead to drastic improvements over currently used algorithms. We also extend the framework to orthants of mixtures of Gaussians (Student's t, Cauchy, etc.) and to the simulation of truncated Gaussians.
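
    The SIS/PF contrast can be made concrete with a small sketch. The following is a minimal illustration (not the paper's implementation), assuming unit innovation variance and X_0 = 0; the function name and defaults are hypothetical. With resample=False it is the plain GHK/SIS estimator; with resample=True it is the particle-filter variant.

    import numpy as np
    from scipy.stats import norm, truncnorm

    def ar1_orthant_prob(d, rho, n=2000, resample=True, seed=0):
        """Estimate P(X_1 > 0, ..., X_d > 0) for X_t = rho * X_{t-1} + eps_t."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n)            # particles; X_0 := 0 for simplicity
        logw = np.zeros(n)         # running SIS log-weights (GHK, resample=False)
        log_z = 0.0                # running log-estimate (PF, resample=True)
        for _ in range(d):
            mu = rho * x
            p = norm.sf(-mu)       # incremental weight: P(X_t > 0 | X_{t-1})
            if resample:           # particle filter: resample to fight degeneracy
                log_z += np.log(p.mean())
                x = x[rng.choice(n, size=n, p=p / p.sum())]
                mu = rho * x
            else:                  # plain SIS, i.e. the GHK estimator
                logw += np.log(p)
            # propagate: sample X_t from N(mu, 1) truncated to (0, inf)
            x = truncnorm.rvs(-mu, np.inf, loc=mu, scale=1.0, random_state=rng)
        return np.exp(log_z) if resample else np.exp(logw).mean()

    Running both variants with the same particle budget at, say, d = 50 and rho = 0.7 illustrates the weight degeneracy of plain SIS that the paper analyzes.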

    Collaborative Information Processing in Wireless Sensor Networks for Diffusive Source Estimation

    In this dissertation, we address the issue of collaborative information processing for diffusive source parameter estimation using wireless sensor networks (WSNs) capable of sensing in a dispersive medium or environment, from a signal processing perspective. We begin by focusing on the mathematical formulation of a special diffusion phenomenon, an underwater oil spill, along with statistical algorithms for meaningful analysis of sensor data leading to efficient estimation of the parameters of interest. The objective is to obtain an analytical solution to the problem rather than resorting to sophisticated non-model-based numerical techniques. We make the physical diffusion model as realistic as possible while maintaining pragmatic and reasonable assumptions for simplicity of exposition and analytical derivation.

    The dissertation studies both source localization and tracking, for static and moving diffusive sources respectively. For static diffusive source localization, we investigate two parametric estimation techniques, based on the maximum-likelihood (ML) estimator and the best linear unbiased estimator (BLUE), for a special case of the obtained physical dispersion model. We prove the consistency and asymptotic normality of the ML solution as the numbers of sensor nodes and samples approach infinity, and derive the Cramer-Rao lower bound (CRLB) on its performance. For a moving diffusive source, we propose a particle filter (PF) based tracking scheme and analytically derive the posterior Cramer-Rao lower bound (PCRLB) on the source state estimates as a theoretical performance bound.

    Further, we explore a nonparametric, machine-learning-based estimation technique for diffusive source parameter estimation using the Dirichlet process mixture model (DPMM). Since real data are often too complicated for any parametric model to be suitable, we exploit the rich tools of nonparametric Bayesian methods, in particular the DPMM, which provides a flexible, data-driven estimation process. We propose a DPMM-based static diffusive source localization algorithm and provide an analytical proof of convergence. The proposed algorithm is also extended to the scenario in which multiple diffusive sources of the same kind are present in the diffusive field of interest.

    Efficient power allocation can play an important role in extending the lifetime of a resource-constrained WSN, which relies on collaborative signal and information processing to handle the large volumes of data collected by the sensor nodes. In this dissertation, the problem of collaborative information processing for sequential parameter estimation in a WSN is formulated in a cooperative game-theoretic framework, which addresses the issue of fair resource allocation for the estimation task at the fusion center (FC). The framework allows addressing either resource allocation or commitment for information processing as solutions of cooperative games with underlying theoretical justifications. Different solution concepts from cooperative game theory, namely the Shapley function and Nash bargaining, are used to enforce certain kinds of fairness among the nodes in a WSN.
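
    As one concrete piece of the above, a bootstrap particle filter for tracking a moving source can be sketched in a few lines. The steady-state 1/distance observation model, the random-walk dynamics, and all names below are illustrative assumptions, not the dissertation's exact formulation.

    import numpy as np

    def concentration(src, sensors, strength=1.0, eps=1e-3):
        """Hypothetical steady-state diffusion field: strength / distance."""
        d = np.linalg.norm(sensors[None, :, :] - src[:, None, :], axis=-1)
        return strength / (d + eps)               # shape: (n_particles, n_sensors)

    def pf_track(readings, sensors, n=1000, step_std=0.1, obs_std=0.05, seed=0):
        """Track a moving diffusive source from per-step sensor readings."""
        rng = np.random.default_rng(seed)
        parts = rng.uniform(-1, 1, size=(n, 2))   # initial source-position particles
        estimates = []
        for y in readings:                        # y: concentrations at all sensors
            parts += rng.normal(0.0, step_std, size=parts.shape)  # random-walk move
            pred = concentration(parts, sensors)
            loglik = -0.5 * np.sum((y - pred) ** 2, axis=1) / obs_std**2
            w = np.exp(loglik - loglik.max())
            w /= w.sum()
            estimates.append(w @ parts)           # posterior-mean position estimate
            parts = parts[rng.choice(n, size=n, p=w)]  # multinomial resampling
        return np.array(estimates)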

    Unbiased and Consistent Nested Sampling via Sequential Monte Carlo

    We introduce a new class of sequential Monte Carlo methods called Nested Sampling via Sequential Monte Carlo (NS-SMC), which reframes the Nested Sampling method of Skilling (2006) in terms of sequential Monte Carlo techniques. This new framework allows convergence results to be obtained in the setting where Markov chain Monte Carlo (MCMC) is used to produce new samples. An additional benefit is that marginal likelihood estimates are unbiased. In contrast to NS, the analysis of NS-SMC does not require the (unrealistic) assumption that the simulated samples be independent. As the original NS algorithm is a special case of NS-SMC, this provides insight into why NS seems to produce accurate estimates despite a typical violation of its assumptions. For applications of NS-SMC, we give advice on tuning MCMC kernels in an automated manner via a preliminary pilot run, and present a new method for appropriately choosing the number of MCMC repeats at each iteration. Finally, a numerical study is conducted in which the performance of NS-SMC and temperature-annealed SMC is compared on several challenging and realistic problems. MATLAB code for our experiments is made available at https://github.com/LeahPrice/SMC-NS. (Comment: 45 pages; some minor typographical errors fixed since the last version.)
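
    To make the construction concrete, here is a minimal sketch of an adaptive NS-SMC evidence estimator, assuming a standard-normal prior, a user-supplied log-likelihood, and random-walk Metropolis moves on the constrained prior. All names and tuning values are illustrative, the paper's unbiasedness result applies to the fixed-threshold variant, and the authors' actual MATLAB code lives at the URL above.

    import numpy as np

    def ns_smc(loglik, dim, n=1000, rho=0.5, n_mcmc=10, t_max=50, seed=0):
        """Toy adaptive NS-SMC estimate of Z = E_prior[exp(loglik(x))]."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal((n, dim))        # particles from the N(0, I) prior
        ll = loglik(x)
        log_p = 0.0                              # log of estimated P(L > l_{t-1})
        z_hat = 0.0
        for _ in range(t_max):
            l_t = np.quantile(ll, 1.0 - rho)     # next likelihood threshold
            in_stratum = ll <= l_t
            # stratum contribution: P_{t-1} * E[L ; l_{t-1} < L <= l_t]
            z_hat += np.exp(log_p) * np.mean(np.where(in_stratum, np.exp(ll), 0.0))
            survivors = np.flatnonzero(~in_stratum)
            if survivors.size == 0:              # all remaining mass accounted for
                return z_hat
            log_p += np.log(survivors.size / n)
            x = x[rng.choice(survivors, size=n)]     # resample the survivors
            ll = loglik(x)
            for _ in range(n_mcmc):              # random-walk Metropolis targeting
                prop = x + 0.5 * rng.standard_normal(x.shape)  # prior | L > l_t
                ll_prop = loglik(prop)
                log_a = 0.5 * (np.sum(x**2, axis=1) - np.sum(prop**2, axis=1))
                ok = (np.log(rng.uniform(size=n)) < log_a) & (ll_prop > l_t)
                x[ok], ll[ok] = prop[ok], ll_prop[ok]
        # final stratum: everything above the last threshold
        return z_hat + np.exp(log_p) * np.mean(np.exp(ll))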

    Task adapted reconstruction for inverse problems

    The paper considers the problem of performing a task defined on a model parameter that is only observed indirectly, through noisy data, in an ill-posed inverse problem. A key aspect is to formalize the reconstruction step and the task step as appropriate estimators (non-randomized decision rules) in statistical estimation problems. The implementation uses (deep) neural networks to provide a differentiable parametrization of the family of estimators for both steps. These networks are combined and jointly trained against suitable supervised training data in order to minimize a joint differentiable loss function, resulting in an end-to-end task-adapted reconstruction method. The suggested framework is generic yet adaptable, with a plug-and-play structure for adjusting both the inverse problem and the task at hand. More precisely, the data model (forward operator and statistical model of the noise) associated with the inverse problem is exchangeable, e.g., by using a neural network architecture given by a learned iterative method. Furthermore, any task that is encodable as a trainable neural network can be used. The approach is demonstrated on joint tomographic image reconstruction and classification, and on joint tomographic image reconstruction and segmentation.
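
    A schematic sketch of the end-to-end idea: a reconstruction network and a task network are composed and trained against a single differentiable joint loss, so gradients from the task flow back into the reconstruction. The architectures, the weighting c, and the stand-in data below are placeholder assumptions, not the paper's setup.

    import torch
    import torch.nn as nn

    class Reconstructor(nn.Module):      # data y -> model parameter estimate x
        def __init__(self, m, n):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, n))
        def forward(self, y):
            return self.net(y)

    class TaskNet(nn.Module):            # reconstruction x -> task output (logits)
        def __init__(self, n, k):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))
        def forward(self, x):
            return self.net(x)

    def train_step(recon, task, opt, y, x_true, labels, c=0.5):
        """One joint update: loss = (1 - c) * reconstruction + c * task."""
        opt.zero_grad()
        x_hat = recon(y)
        loss = ((1 - c) * nn.functional.mse_loss(x_hat, x_true)
                + c * nn.functional.cross_entropy(task(x_hat), labels))
        loss.backward()                  # gradients flow through both networks
        opt.step()
        return loss.item()

    # Hypothetical usage with random stand-in data (m measurements, n pixels, k classes):
    m, n, k = 32, 64, 10
    recon, task = Reconstructor(m, n), TaskNet(n, k)
    opt = torch.optim.Adam(list(recon.parameters()) + list(task.parameters()), lr=1e-3)
    y, x_true = torch.randn(8, m), torch.randn(8, n)
    labels = torch.randint(0, k, (8,))
    train_step(recon, task, opt, y, x_true, labels, c=0.5)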