
    Performance Bounds for Parameter Estimation under Misspecified Models: Fundamental Findings and Applications

    Inferring information from a set of acquired data is the main objective of any signal processing (SP) method. In particular, the common problem of estimating the value of a vector of parameters from a set of noisy measurements is at the core of a plethora of scientific and technological advances of recent decades, for example in wireless communications, radar and sonar, biomedicine, image processing, and seismology, to name a few. Developing an estimation algorithm often begins by assuming a statistical model for the measured data, i.e., a probability density function (pdf) that, if correct, fully characterizes the behaviour of the collected data/measurements. Experience with real data, however, often exposes the limitations of any assumed data model, since modelling errors at some level are always present. Consequently, the true data model and the model assumed to derive the estimation algorithm can differ. When this happens, the model is said to be mismatched or misspecified. Understanding the possible performance loss, or regret, that an estimation algorithm may experience under model misspecification is therefore of crucial importance for any SP practitioner, as is understanding the limits on the performance of any estimator subject to model misspecification. Motivated by the widespread practical need to assess the performance of a mismatched estimator, the first goal of this paper is to bring attention to the main theoretical findings on estimation theory, and in particular on lower bounds under model misspecification, published in the statistical and econometric literature over the last fifty years. The second is to discuss some applications that illustrate the broad range of areas and problems to which this framework extends, and consequently the numerous opportunities available to SP researchers. Comment: To appear in the IEEE Signal Processing Magazine.
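
    A minimal numerical illustration of the performance loss discussed above (a hypothetical sketch, not code from the paper): data are drawn from a Laplace distribution, but the estimator assumes a Gaussian model, so the sample mean is the mismatched MLE of the location parameter. Its empirical MSE is then compared with the Cramér-Rao bound under the true Laplace model, which the mismatched estimator fails to attain by a factor of two.

```python
import numpy as np

# Hypothetical example: true data ~ Laplace(theta, b), assumed model Gaussian.
# Under the Gaussian assumption the MLE of the location is the sample mean.
rng = np.random.default_rng(0)
theta_true, b, N, trials = 1.0, 1.0, 100, 20_000

est = np.empty(trials)
for t in range(trials):
    x = rng.laplace(theta_true, b, size=N)
    est[t] = x.mean()                     # mismatched MLE (Gaussian model)

mse = np.mean((est - theta_true) ** 2)
print(f"empirical MSE of mismatched MLE : {mse:.5f}")
print(f"sample-mean variance 2*b^2/N    : {2 * b**2 / N:.5f}")
print(f"CRB under the true model b^2/N  : {b**2 / N:.5f}")
```

    The gap between the last two figures quantifies the regret: under the true Laplace model an efficient estimator (asymptotically, the sample median) would halve the variance.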

    Robust approaches to remote calibration of a transmitting array

    We consider the problem of estimating the gains and phases of the RF channels of an M-element transmitting array, based on a calibration procedure where M orthogonal signals are sent through M orthogonal beams and received on a single antenna. The received data vector obeys a linear model of the type y = AFg + n, where A is an unknown complex scalar accounting for propagation loss and g is the vector of unknown complex gains. In order to improve the performance of the least-squares (LS) estimator at low signal-to-noise ratio (SNR), we propose to exploit knowledge of the nominal value of g, viz. ḡ. Towards this end, two approaches are presented. First, a Bayesian approach is advocated in which A and g are considered as random variables, with a non-informative prior distribution for A and a Gaussian prior distribution for g. The posterior distributions of the unknown random variables are derived, and a Gibbs sampling strategy is presented that enables one to generate samples distributed according to these posterior distributions, leading to the minimum mean-square error (MMSE) estimator. The second approach consists in solving a constrained least-squares problem in which h = Ag is constrained to be close to a scaled version of ḡ. This second approach yields a closed-form solution, which amounts to a linear combination of ḡ and the LS estimator. Numerical simulations show that the two new estimators significantly outperform the conventional LS estimator, especially at low SNR.
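
    To make the second approach concrete, here is a hedged sketch of the regularized least-squares idea: shrink the plain LS estimate of h = Ag toward a scaled copy of the nominal vector ḡ. The weight mu and the scale estimate alpha below are illustrative choices, not necessarily those of the paper, and F is taken unitary for simplicity.

```python
import numpy as np

# Sketch: shrink the LS estimate of h = A*g toward alpha * g_bar, where g_bar
# is the nominal gain vector. mu and alpha are illustrative choices.
rng = np.random.default_rng(1)
M, snr_db, mu = 8, 0.0, 4.0

F = np.linalg.qr(rng.standard_normal((M, M)) +
                 1j * rng.standard_normal((M, M)))[0]   # known (unitary) mixing matrix
g_bar = np.ones(M, dtype=complex)                       # nominal gains/phases
g = g_bar * (1 + 0.1 * rng.standard_normal(M)) * np.exp(1j * 0.1 * rng.standard_normal(M))
A = 0.5 * np.exp(1j * rng.uniform(0, 2 * np.pi))        # unknown propagation loss
h = A * g

sigma2 = np.linalg.norm(F @ h) ** 2 / M * 10 ** (-snr_db / 10)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = F @ h + n

h_ls = np.linalg.solve(F.conj().T @ F, F.conj().T @ y)  # plain least squares
alpha = (g_bar.conj() @ h_ls) / (g_bar.conj() @ g_bar)  # LS fit of the scale A
# Closed form of: argmin_h ||y - F h||^2 + mu * ||h - alpha * g_bar||^2
h_cls = np.linalg.solve(F.conj().T @ F + mu * np.eye(M),
                        F.conj().T @ y + mu * alpha * g_bar)
print("LS error :", np.linalg.norm(h_ls - h))
print("CLS error:", np.linalg.norm(h_cls - h))
```

    With F unitary, the closed form reduces to (h_ls + mu·alpha·ḡ)/(1 + mu), i.e. a linear combination of the LS estimate and the nominal vector, matching the structure described in the abstract.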

    Sequential stopping for high-throughput experiments

    In high-throughput experiments, the sample size is typically chosen informally. Most formal sample-size calculations depend critically on prior knowledge. We propose a sequential strategy that, by updating knowledge when new data become available, depends less critically on prior assumptions. Experiments are stopped or continued based on the potential benefit of obtaining additional data. The underlying decision-theoretic framework guarantees that the design proceeds in a coherent fashion. We propose intuitively appealing, easy-to-implement utility functions. As in most sequential design problems, an exact solution is prohibitive. We propose a simulation-based approximation that uses decision boundaries. We apply the method to RNA-seq, microarray, and reverse-phase protein array studies and show its potential advantages. The approach has been added to the Bioconductor package gaga.
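
    The flavour of such a decision-theoretic stopping rule can be seen in a deliberately simplified sketch (all names and parameters below are illustrative, and the conjugate-Gaussian utility is far simpler than the paper's): sampling continues while the expected gain from one more batch, here the reduction in posterior variance of a Gaussian mean, exceeds its cost. In this toy conjugate case the boundary is deterministic; the paper's simulation-based approximation handles utilities where it is not.

```python
import numpy as np

# Toy sequential stopping: sample in batches, stop when the expected drop in
# posterior variance of a Gaussian mean no longer pays for another batch.
rng = np.random.default_rng(2)
sigma2, tau2 = 1.0, 10.0        # known noise variance, prior variance of the mean
batch, cost = 10, 1e-3          # batch size and cost per observation

n, s = 0, 0.0                   # running sample size and running sum
while True:
    post_var_now = 1.0 / (1.0 / tau2 + n / sigma2)
    post_var_next = 1.0 / (1.0 / tau2 + (n + batch) / sigma2)
    if post_var_now - post_var_next < cost * batch:   # boundary: benefit < cost
        break
    x = rng.normal(0.3, np.sqrt(sigma2), size=batch)  # new data become available
    n += batch
    s += x.sum()

post_mean = (s / sigma2) / (1.0 / tau2 + n / sigma2)
print(f"stopped at n={n}, posterior mean {post_mean:.3f}")
```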

    Matched direction detectors and estimators for array processing with subspace steering vector uncertainties

    In this paper, we consider the problem of estimating and detecting a signal whose associated spatial signature is known to lie in a given linear subspace, but whose coordinates in this subspace are otherwise unknown, in the presence of subspace interference and broadband noise. This situation arises when, on the one hand, there exist uncertainties about the steering vector but, on the other hand, some knowledge about the steering vector errors is available. First, we derive the maximum-likelihood estimator (MLE) for the problem and compute the corresponding Cramér-Rao bound. Next, the maximum-likelihood estimates are used to derive a generalized likelihood ratio test (GLRT). The GLRT is compared and contrasted with the standard matched subspace detectors. The performance of the estimators and detectors is illustrated by means of numerical simulations.
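
    For the interference-free special case, the estimator and detector take a particularly simple form, sketched below under illustrative assumptions (white noise, a single snapshot, no interference subspace): the MLE is a least-squares fit of the coordinates in the known subspace H, and the GLRT compares the energy captured by the projection onto H against the residual energy.

```python
import numpy as np

# Matched-subspace sketch: steering vector = H @ theta with H known, theta unknown.
rng = np.random.default_rng(3)
M, p = 16, 3                    # sensors, dimension of the known subspace

H = rng.standard_normal((M, p)) + 1j * rng.standard_normal((M, p))
theta = rng.standard_normal(p) + 1j * rng.standard_normal(p)
noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y = H @ theta + noise

# ML estimate of the subspace coordinates (least squares) and the projector.
theta_ml = np.linalg.lstsq(H, y, rcond=None)[0]
P_H = H @ np.linalg.pinv(H)     # orthogonal projector onto the column space of H

# GLRT: energy in the signal subspace vs. energy left in its complement.
T = np.linalg.norm(P_H @ y) ** 2 / np.linalg.norm(y - P_H @ y) ** 2
print("theta_ML error:", np.round(theta_ml - theta, 2))
print("GLRT statistic:", T)
```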

    Polarization-based Tests of Gravity with the Stochastic Gravitational-Wave Background

    The direct observation of gravitational waves with Advanced LIGO and Advanced Virgo offers novel opportunities to test general relativity in strong-field, highly dynamical regimes. One such opportunity is the measurement of gravitational-wave polarizations. While general relativity predicts only two tensor gravitational-wave polarizations, general metric theories of gravity allow for up to four additional vector and scalar modes. The detection of these alternative polarizations would represent a clear violation of general relativity. The LIGO-Virgo detection of the binary black hole merger GW170814 has recently offered the first direct constraints on the polarization of gravitational waves. The current generation of ground-based detectors, however, is limited in its ability to sensitively determine the polarization content of transient gravitational-wave signals. Observation of the stochastic gravitational-wave background, in contrast, offers a means of directly measuring generic gravitational-wave polarizations. The stochastic background, arising from the superposition of many individually unresolvable gravitational-wave signals, may be detectable by Advanced LIGO at design sensitivity. In this paper, we present a Bayesian method with which to detect and characterize the polarization of the stochastic background. We explore prospects for estimating parameters of the background, and quantify the limits that Advanced LIGO can place on vector and scalar polarizations in the absence of a detection. Finally, we investigate how the introduction of new terrestrial detectors like Advanced Virgo aids our ability to detect or constrain alternative polarizations in the stochastic background. We find that, although the addition of Advanced Virgo does not notably improve detection prospects, it may dramatically improve our ability to estimate the parameters of backgrounds of mixed polarization. Comment: 24 pages, 20 figures; accepted by PRX. This version includes major changes in response to referee comments and corrects an error in Eq. E
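
    As a heavily simplified, hypothetical sketch of the parameter-estimation idea (the overlap functions below are smooth stand-ins, not the real detector-pair overlap reduction functions): the expected cross-correlation spectrum is modelled as a linear mixture of tensor, vector, and scalar amplitudes, and with a Gaussian likelihood and flat priors the posterior over those amplitudes follows from a linear least-squares fit.

```python
import numpy as np

# Toy model: measured cross-correlation spectrum C_hat(f) is a linear mix of
# (Omega_T, Omega_V, Omega_S) weighted by placeholder "overlap" curves gamma_p(f).
rng = np.random.default_rng(4)
f = np.linspace(20, 200, 64)
gamma = np.stack([np.cos(2 * np.pi * f / 400),          # stand-in gamma_T(f)
                  0.5 * np.cos(2 * np.pi * f / 250),    # stand-in gamma_V(f)
                  0.3 * np.cos(2 * np.pi * f / 150)])   # stand-in gamma_S(f)

omega_true = np.array([1.0, 0.0, 0.4])                  # (Omega_T, Omega_V, Omega_S)
sigma = 0.2
C_hat = gamma.T @ omega_true + sigma * rng.standard_normal(f.size)

# Gaussian likelihood + flat priors => posterior mean is the LS fit,
# posterior covariance is sigma^2 * (G^T G)^{-1}.
G = gamma.T
omega_map, *_ = np.linalg.lstsq(G, C_hat, rcond=None)
cov = sigma**2 * np.linalg.inv(G.T @ G)
print("posterior mean:", np.round(omega_map, 2))
print("posterior std :", np.round(np.sqrt(np.diag(cov)), 2))
```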

    Particle Filter Design Using Importance Sampling for Acoustic Source Localisation and Tracking in Reverberant Environments

    Sequential Monte Carlo methods have recently been proposed to deal with the problem of acoustic source localisation and tracking using an array of microphones. Previous implementations make use of the basic bootstrap particle filter, whereas a more general approach involves the concept of importance sampling. In this paper, we develop a new particle filter for acoustic source localisation using importance sampling, and compare its tracking ability with that of a bootstrap algorithm proposed previously in the literature. Experimental results obtained with simulated reverberant samples and real audio recordings demonstrate that the new algorithm is more suitable for practical applications due to its reinitialisation capabilities, despite showing a slightly lower average tracking accuracy. A real-time implementation of the algorithm also shows that the proposed particle filter can reliably track a person talking in real reverberant rooms. This work was performed while Eric A. Lehmann was working with National ICT Australia. National ICT Australia is funded by the Australian Government’s Department of Communications, Information Technology and the Arts and by the Australian Research Council through Backing Australia’s Ability and the ICT Centre of Excellence programs.
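
    The following toy sketch shows the sequential Monte Carlo mechanics in the bootstrap configuration that the paper takes as its baseline (the importance function then generalises the proposal step); the acoustic likelihood is replaced here by a hypothetical Gaussian observation model in one dimension, so only the filtering structure carries over.

```python
import numpy as np

# Bootstrap particle filter for a 1-D random-walk source position. The
# acoustic likelihood is replaced by a toy Gaussian observation model.
rng = np.random.default_rng(5)
T, Np = 50, 500
q, r = 0.05, 0.3                 # process and observation noise std

x_true = np.cumsum(q * rng.standard_normal(T)) + 1.0
y = x_true + r * rng.standard_normal(T)       # toy "microphone" measurements

particles = rng.normal(1.0, 0.5, Np)
estimates = []
for t in range(T):
    particles += q * rng.standard_normal(Np)   # proposal = dynamics (bootstrap)
    logw = -0.5 * ((y[t] - particles) / r) ** 2  # log-likelihood weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(w @ particles)            # MMSE position estimate
    idx = rng.choice(Np, Np, p=w)              # multinomial resampling
    particles = particles[idx]

print("final true/estimated position:", x_true[-1], estimates[-1])
```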