Performance Bounds for Parameter Estimation under Misspecified Models: Fundamental findings and applications
Inferring information from a set of acquired data is the main objective of
any signal processing (SP) method. In particular, the common problem of
estimating the value of a vector of parameters from a set of noisy measurements
is at the core of a plethora of scientific and technological advances of recent
decades in fields such as wireless communications, radar and sonar,
biomedicine, image processing, and seismology. Developing
an estimation algorithm often begins by assuming a statistical model for the
measured data, i.e., a probability density function (pdf) which, if correct,
fully characterizes the behaviour of the collected data/measurements.
Experience with real data, however, often exposes the limitations of any
assumed data model since modelling errors at some level are always present.
Consequently, the true data model and the model assumed to derive the
estimation algorithm could differ. When this happens, the model is said to be
mismatched or misspecified. Therefore, understanding the possible performance
loss or regret that an estimation algorithm could experience under model
misspecification is of crucial importance for any SP practitioner. Further,
understanding the limits on the performance of any estimator subject to model
misspecification is of practical interest. Motivated by the widespread need to
assess the performance of mismatched estimators, the first goal of this paper
is to bring attention to the main theoretical findings on estimation under
model misspecification, and in particular on lower bounds, that have been
published in the statistical and econometric literature over the last fifty
years. Second, some applications are discussed
to illustrate the broad range of areas and problems to which this framework
extends, and consequently the numerous opportunities available for SP
researchers.
Comment: To appear in the IEEE Signal Processing Magazine
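A toy example of mine (not taken from the paper) illustrates the central object of this literature: under misspecification the MLE converges not to a "true" parameter but to the pseudo-true parameter, the point in the assumed family closest in Kullback-Leibler divergence to the true pdf. Here the data are truly Uniform(0, 1), while the assumed model is Exponential(rate); the MLE rate = 1/sample_mean converges to argmin_r KL(Uniform || Exp(r)) = 1/E[x] = 2.

```python
import random

# Hypothetical sketch of the pseudo-true parameter under model
# misspecification: true model Uniform(0, 1), assumed model Exp(rate).
# KL(Uniform || Exp(r)) = -log r + r * E[x] is minimized at r = 1/E[x] = 2,
# and the mismatched MLE converges to exactly that value.

rng = random.Random(0)
n = 200_000
data = [rng.random() for _ in range(n)]   # true model: Uniform(0, 1)
mle_rate = n / sum(data)                  # MLE of rate under the assumed Exp model
pseudo_true = 1.0 / 0.5                   # 1 / E[x] for Uniform(0, 1)
print(f"MLE rate: {mle_rate:.3f}  pseudo-true rate: {pseudo_true:.3f}")
```

The misspecified Cramer-Rao bound discussed in the paper then bounds the spread of such an estimator around the pseudo-true value, not around any parameter of the true model.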
Performance bounds on matched-field methods for source localization and estimation of ocean environmental parameters
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2001.
Matched-field methods concern estimation of source location and/or ocean environmental
parameters by exploiting full wave modeling of acoustic waveguide propagation.
Typical estimation performance demonstrates two fundamental limitations.
First, sidelobe ambiguities dominate the estimation at low signal-to-noise ratio (SNR),
leading to a threshold performance behavior. Second, most matched-field algorithms
show a strong sensitivity to environmental/system mismatch, which introduces biased
estimates at high SNR.
In this thesis, a quantitative approach for ambiguity analysis is developed so that
different mainlobe and sidelobe error contributions can be compared at different SNR
levels. Two large-error performance bounds, the Weiss-Weinstein bound (WWB)
and Ziv-Zakai bound (ZZB), are derived for the attainable accuracy of matched-field
methods. To include mismatch effects, a modified version of the ZZB is proposed.
Performance analyses are implemented for source localization under a typical shallow
water environment chosen from the Shallow Water Evaluation Cell Experiments
(SWellEX). The performance predictions agree well with simulations of the maximum
likelihood estimator (MLE), capturing the mean square error in all SNR regions
as well as the bias at high SNR. The threshold SNR and bias predictions are also
verified by the SWellEX experimental data processing. These developments provide
tools to better understand some fundamental behaviors in matched-field performance
and provide benchmarks against which various ad hoc algorithms can be compared.
Financial support for my research was provided by the Office of Naval Research
and the WHOI Education Office.
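The threshold behavior this abstract describes can be seen in a toy stand-in of my own (not the thesis's matched-field model): ML frequency estimation of a tone via a periodogram grid search. Above the threshold SNR, errors stay inside the mainlobe of the ambiguity function and the MSE is tiny; below it, noise pushes the peak onto distant sidelobes and the MSE jumps by orders of magnitude.

```python
import math
import random

# Hedged sketch of the SNR threshold effect: Monte Carlo MSE of a
# grid-search ML (periodogram-peak) frequency estimator. Below
# threshold, sidelobe/outlier errors dominate, as in matched-field
# source localization.

def mle_freq_mse(snr_db, n=32, trials=100, f_true=0.2, rng=random.Random(1)):
    amp = 10.0 ** (snr_db / 20.0)              # tone amplitude in unit-variance noise
    grid = [k / 200.0 for k in range(1, 100)]  # candidate frequencies in (0, 0.5)
    mse = 0.0
    for _ in range(trials):
        x = [amp * math.cos(2 * math.pi * f_true * t) + rng.gauss(0.0, 1.0)
             for t in range(n)]

        def power(f):                          # squared correlation with a candidate tone
            c = sum(xi * math.cos(2 * math.pi * f * t) for t, xi in enumerate(x))
            s = sum(xi * math.sin(2 * math.pi * f * t) for t, xi in enumerate(x))
            return c * c + s * s

        best = max(grid, key=power)
        mse += (best - f_true) ** 2
    return mse / trials

high_snr_mse = mle_freq_mse(10.0)   # small-error (mainlobe) region
low_snr_mse = mle_freq_mse(-10.0)   # sidelobe-dominated region
print(f"MSE at +10 dB: {high_snr_mse:.2e}")
print(f"MSE at -10 dB: {low_snr_mse:.2e}")
```

Large-error bounds such as the WWB and ZZB are designed precisely to predict where this jump occurs, which the small-error Cramer-Rao bound cannot.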
Non-asymptotic quantum metrology: extracting maximum information from limited data
Science relies on our practical ability to extract information from reality, since processing this information is essential for developing theories that explain our world. This thesis is precisely the study of how to extract and process information using quantum systems when a constrained amount of resources means that the available data is limited. The natural framework for this task is quantum metrology, a set of tools to model and design quantum measurement strategies. Equipped with this theory, we advocate a Bayesian approach as the appropriate formalism to study systems with a finite amount of resources, which is a non-asymptotic problem, and we propose a methodology for non-asymptotic quantum metrology.
First, we show the consistency of taking those solutions that are optimal in the asymptotic regime of many trials as a guide to calculate a generalised measure of uncertainty in the Bayesian framework. This provides an approximate but useful way of studying the non-asymptotic regime whenever a direct Bayesian optimisation is intractable, and it avoids non-physical results that can arise when only the asymptotic theory is employed. Second, we construct a new non-asymptotic Bayesian bound without relying on the previous approximation by first selecting the optimal quantum strategy for a single shot, and then simulating a sequence of repetitions of this scheme, which is suitable for experiments where we do not wish to or cannot correlate different trials.
These methods are applied to a Mach-Zehnder interferometer, which is a single-parameter problem, and to quantum sensing networks where the nodes are either qubits or optical modes, which are multi-parameter protocols. Our results provide a detailed characterisation of how the interplay between prior information, entanglement and a limited amount of data affects the performance of quantum metrology protocols, which has important implications for the analysis of theory and experiments in this field.
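A minimal sketch of my own (not the thesis's protocols) shows the Bayesian, non-asymptotic viewpoint in its simplest form: phase estimation from a finite number of shots of a Mach-Zehnder/Ramsey-type measurement whose outcome is 0 with probability cos^2(theta/2). The posterior over a phase grid is updated shot by shot, so the reported uncertainty is meaningful even for very limited data, where asymptotic (Cramer-Rao) figures of merit can mislead.

```python
import math
import random

# Illustrative Bayesian phase estimation with limited data: grid
# posterior updated after each of 200 binary measurement outcomes.

rng = random.Random(7)
theta_true = 1.0
grid = [math.pi * k / 500 for k in range(500)]   # phases in [0, pi)
post = [1.0 / len(grid)] * len(grid)             # flat prior

for _ in range(200):                             # finite data: 200 shots
    outcome = 0 if rng.random() < math.cos(theta_true / 2) ** 2 else 1
    likelihood = [math.cos(t / 2) ** 2 if outcome == 0 else math.sin(t / 2) ** 2
                  for t in grid]
    post = [p * l for p, l in zip(post, likelihood)]
    norm = sum(post)
    post = [p / norm for p in post]              # renormalize each shot

mean = sum(t * p for t, p in zip(grid, post))
sd = math.sqrt(sum((t - mean) ** 2 * p for t, p in zip(grid, post)))
print(f"posterior mean {mean:.3f} rad, posterior sd {sd:.3f} rad")
```

The posterior standard deviation here plays the role of the thesis's generalised measure of uncertainty: it is well defined for any number of shots, not only in the many-trials limit.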
Use and Abuse of the Fisher Information Matrix in the Assessment of Gravitational-Wave Parameter-Estimation Prospects
The Fisher-matrix formalism is used routinely in the literature on
gravitational-wave detection to characterize the parameter-estimation
performance of gravitational-wave measurements, given parametrized models of
the waveforms, and assuming detector noise of known colored Gaussian
distribution. Unfortunately, the Fisher matrix can be a poor predictor of the
amount of information obtained from typical observations, especially for
waveforms with several parameters and relatively low expected signal-to-noise
ratios (SNR), or for waveforms depending weakly on one or more parameters, when
their priors are not taken into proper consideration. In this paper I discuss
these pitfalls; show how they occur, even for relatively strong signals, with a
commonly used template family for binary-inspiral waveforms; and describe
practical recipes to recognize them and cope with them.
Specifically, I answer the following questions: (i) What is the significance
of (quasi-)singular Fisher matrices, and how must we deal with them? (ii) When
is it necessary to take into account prior probability distributions for the
source parameters? (iii) When is the signal-to-noise ratio high enough to
believe the Fisher-matrix result? In addition, I provide general expressions
for the higher-order, beyond-Fisher-matrix terms in the 1/SNR expansions for
the expected parameter accuracies.
Comment: 24 pages, 3 figures, previously known as "A User Manual for the Fisher Information Matrix"; final, corrected PRD version
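A small construction of my own (not the paper's waveform models) shows the quasi-singularity pitfall concretely. In unit-variance white noise, the Fisher matrix for the amplitudes (a, b) of s(t) = a*f1(t) + b*f2(t) is F_ij = sum_t fi(t)*fj(t). When the signal depends weakly on one parameter — here f2 is almost the same template as f1 — F is nearly singular, and the "covariance" F^-1 wildly overstates what the data can deliver unless prior information is folded in (F plus the prior precision):

```python
import math

# Quasi-singular Fisher matrix demo: two nearly parallel templates.
ts = [0.01 * k for k in range(1000)]
eps = 1e-3
f1 = [math.sin(t) for t in ts]
f2 = [math.sin(t + eps) for t in ts]        # almost the same template as f1

faa = sum(x * x for x in f1)
fab = sum(x * y for x, y in zip(f1, f2))
fbb = sum(y * y for y in f2)                # F = [[faa, fab], [fab, fbb]]

det = faa * fbb - fab * fab
half_tr = (faa + fbb) / 2
disc = math.sqrt(((faa - fbb) / 2) ** 2 + fab * fab)
cond = (half_tr + disc) / (half_tr - disc)  # eigenvalue ratio of F
naive_var_a = fbb / det                     # [F^-1]_aa, the naive CRB on a

# Gaussian prior of unit variance on each amplitude: F -> F + I
det_p = (faa + 1.0) * (fbb + 1.0) - fab * fab
prior_var_a = (fbb + 1.0) / det_p

print(f"condition number of F : {cond:.1e}")
print(f"naive CRB var(a)      : {naive_var_a:.1e}")
print(f"with prior folded in  : {prior_var_a:.1e}")
```

Checking the condition number before inverting, and adding the prior precision when a parameter is weakly constrained, are exactly the kinds of practical recipes the paper advocates.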
Signal Processing and Propagation for Aeroacoustic Sensor Networking
Passive sensing of acoustic sources is attractive in many respects, including the relatively low signal bandwidth of sound waves, the loudness of most sources of interest, and the inherent difficulty of disguising or concealing emitted acoustic signals. The availability of inexpensive, low-power sensing and signal-processing hardware enables application of sophisticated real-time signal processing.
Estimation and detection with chaotic systems
Includes bibliographical references (p. 209-214).
Supported by the U.S. Air Force Office of Scientific Research under the Augmentation Awards for Science and Engineering Research Training Program Grant F49620-92-J-0255, by the U.S. Air Force Office of Scientific Research under AFOSR-91-0034-C, by the U.S. Navy Office of Naval Research under N00014-93-1-0686, and by Lockheed Sanders, Inc. under a U.S. Navy Office of Naval Research contract, N00014-91-C-0125.
Michael D. Richard
Signal Detection and Estimation for MIMO radar and Network Time Synchronization
The theory of signal detection and estimation concerns the recovery of useful information from signals corrupted by random perturbations. This dissertation discusses the application of signal detection and estimation principles to two problems of significant practical interest: MIMO (multiple-input multiple-output) radar, and time synchronization over packet-switched networks. Under the first topic, we study the extension of several conventional radar analysis techniques to recently developed MIMO radars. Under the second topic, we develop new estimation techniques to improve the performance of widely used packet-based time synchronization algorithms. The ambiguity function is a popular mathematical tool for designing and optimizing the performance of radar detectors. Motivated by Neyman-Pearson testing principles, an alternative definition of the ambiguity function is proposed under the first topic. This definition directly associates with each pair of true and assumed target parameters the probability that the radar will declare a target present. We demonstrate that the new definition is better suited for the analysis of MIMO radars that perform non-coherent processing, while being equivalent to the original ambiguity function when applied to conventional radars. Based on the nature of antenna placements, transmit waveforms and the observed clutter and noise, several types of MIMO radar detectors have been individually studied in the literature. A second investigation into MIMO radar presents a general method to model and analyze the detection performance of such systems. We develop closed-form expressions for a Neyman-Pearson optimum detector that is valid for a wide class of radars. Further, general closed-form expressions for the detector SNR, another tool used to quantify radar performance, are derived.
Theoretical and numerical results demonstrating the value of the proposed techniques to optimize and predict the performance of arbitrary radar configurations are presented.
There has been renewed recent interest in the application of packet-based time synchronization algorithms, such as the IEEE 1588 Precision Time Protocol (PTP), to meet challenges posed by next-generation mobile telecommunication networks. In packet-based time synchronization protocols, clock phase offsets are determined via two-way message exchanges between a master and a slave. Since the end-to-end delays in packet networks are inherently stochastic in nature, the recovery of phase offsets from message exchanges must be treated as a statistical estimation problem. While many simple, intuitively motivated estimators for this problem exist in the literature, in the second part of this dissertation we use estimation-theoretic principles to develop new estimators that offer significant performance benefits. To this end, we first describe new lower bounds on the error variance of phase offset estimation schemes. These bounds are obtained by re-deriving two Bayesian estimation bounds, namely the Ziv-Zakai and Weiss-Weinstein bounds, for use under a non-Bayesian formulation. Next, we describe new minimax estimators for the problem of phase offset estimation that are optimum in terms of minimizing the maximum mean squared error over all possible values of the unknown parameters. Minimax estimators that utilize information from past timestamps to improve accuracy are also introduced. These minimax estimators provide fundamental limits on the performance of phase offset estimation schemes. Finally, a restricted class of estimators, referred to as L-estimators, is considered; these are linear functions of order statistics. The problem of designing optimum L-estimators is studied under several hitherto unconsidered criteria of optimality.
We address the case where the queuing delay distributions are fully known, as well as the case where network model uncertainty exists. Optimum L-estimators that utilize information from past observation windows to improve performance are also described. Simulation results indicate that significant performance gains over conventional estimators can be obtained via the proposed optimum processing techniques.
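A toy model of my own (not the dissertation's estimators) illustrates why phase offset recovery is a statistical problem. A two-way PTP-style exchange gives timestamps (t1, t2, t3, t4) with t2 = t1 + theta + d_fwd and t4 = t3 - theta + d_rev, so the raw estimate ((t2 - t1) - (t4 - t3)) / 2 is exact only for symmetric delays. With random, one-sided queuing delays, a simple order-statistics ("minimum filter") estimator over a window — a crude relative of the L-estimators above — often beats the sample mean:

```python
import random

# Toy PTP-like phase offset estimation under exponential queuing delays.
rng = random.Random(42)
theta = 3.0e-6                       # true clock offset, seconds
d0 = 1.0e-4                          # fixed (symmetric) propagation delay

fwd, rev = [], []                    # one-way measurements over a window
for _ in range(256):
    q_f = rng.expovariate(1 / 5e-5)  # forward queuing delay (mean 50 us)
    q_r = rng.expovariate(1 / 5e-5)  # reverse queuing delay
    fwd.append(theta + d0 + q_f)     # observed (t2 - t1)
    rev.append(-theta + d0 + q_r)    # observed (t4 - t3)

mean_est = (sum(fwd) / len(fwd) - sum(rev) / len(rev)) / 2
min_est = (min(fwd) - min(rev)) / 2  # minimum-filter (order-statistic) estimate

print(f"sample-mean estimate error: {abs(mean_est - theta):.2e} s")
print(f"minimum-filter estimate error: {abs(min_est - theta):.2e} s")
```

The minimum filter works here because the smallest observed delay in each direction is close to the deterministic floor d0; the dissertation's bounds and optimum L-estimators make this kind of intuition precise.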