42 research outputs found

    A Fresh Look at the Bayesian Bounds of the Weiss-Weinstein Family

    Minimal bounds on the mean square error (MSE) are generally used to predict the best achievable performance of an estimator for a given observation model. In this paper, we are interested in the Bayesian bounds of the Weiss-Weinstein family. This family includes the Bayesian Cramér-Rao bound, the Bobrovsky-Mayer-Wolf-Zakaï bound, the Bayesian Bhattacharyya bound, the Bobrovsky-Zakaï bound, the Reuven-Messer bound, and the Weiss-Weinstein bound. We present a unification of all these minimal bounds based on a rewriting of the minimum mean square error estimator (MMSEE) and on a constrained optimization problem. With this approach, we obtain a useful theoretical framework for deriving new Bayesian bounds. For that purpose, we propose two bounds. First, we propose a generalization of the Bayesian Bhattacharyya bound, extending the works of Bobrovsky, Mayer-Wolf, and Zakaï. Second, we propose a bound based on the Bayesian Bhattacharyya bound and on the Reuven-Messer bound, representing a generalization of both. The proposed bound is the Bayesian extension of the deterministic Abel bound and is found to be tighter than the Bayesian Bhattacharyya bound, the Reuven-Messer bound, the Bobrovsky-Zakaï bound, and the Bayesian Cramér-Rao bound. We give closed-form expressions of these bounds for a general Gaussian observation model with parameterized mean. To illustrate our results, we present simulations in the context of a spectral analysis problem.
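For a concrete sense of how such Bayesian bounds are evaluated, the sketch below computes the Bayesian Cramér-Rao bound for a toy spectral analysis model: a single cosine with random frequency in white Gaussian noise. All model values (signal length, noise variance, Gaussian prior) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma2 = 32, 0.5        # number of samples and noise variance (assumed)
mu0, s0 = 0.2, 0.02        # Gaussian prior on the frequency: f ~ N(mu0, s0^2)
k = np.arange(N)

def fisher(f):
    # Fisher information for s_k(f) = cos(2*pi*f*k) in white Gaussian noise
    ds = -2 * np.pi * k * np.sin(2 * np.pi * f * k)   # d s_k / d f
    return ds @ ds / sigma2

# Bayesian CRB: inverse of the expected Fisher information plus the
# prior information, which equals 1/s0^2 for a Gaussian prior
Ef = np.mean([fisher(f) for f in rng.normal(mu0, s0, 10_000)])
bcrb = 1.0 / (Ef + 1.0 / s0**2)
print(f"BCRB on the frequency MSE: {bcrb:.3e}")
```

Because the prior always contributes positive information, the BCRB lies below the prior variance; the tighter members of the Weiss-Weinstein family refine this bound further, especially in the threshold region.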

    The Marginal Bayesian Cramér–Rao Bound for Jump Markov Systems


    Information Geometric Approach to Bayesian Lower Error Bounds

    Information geometry provides a framework in which probability densities are viewed as differential-geometric structures. This approach has shown that the geometry of the space of probability distributions parameterized by their covariance matrix is linked to fundamental concepts of estimation theory. In particular, prior work proposes a Riemannian metric (the distance between the parameterized probability distributions) that is equivalent to the Fisher Information Matrix and helpful in obtaining the deterministic Cramér-Rao lower bound (CRLB). Recent work in this framework has established links with several practical applications. However, the classical CRLB is useful only for unbiased estimators and inaccurately predicts the mean square error in low signal-to-noise-ratio (SNR) scenarios. In this paper, we propose a general Riemannian metric that can be used to obtain both the Bayesian CRLB and the deterministic CRLB, along with their vector-parameter extensions. We also extend our results to the Barankin bound, thereby enhancing their applicability to low-SNR situations.
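The link between the Riemannian metric and the Fisher information can be illustrated with a one-parameter Gaussian family: the KL divergence between two nearby densities grows quadratically with rate one half of the Fisher information, which is exactly what makes the Fisher matrix a metric on the family. The variance and step size below are illustrative assumptions.

```python
s2 = 2.0                 # common variance of the Gaussian family (assumed)
fim = 1.0 / s2           # Fisher information of N(theta, s2) w.r.t. the mean theta

def kl_gauss(m1, m2, s2):
    # KL divergence between N(m1, s2) and N(m2, s2), same variance
    return (m1 - m2) ** 2 / (2 * s2)

# locally, KL(theta || theta + d) ~ 0.5 * FIM * d^2: the Fisher
# information acts as the Riemannian metric on this family
d = 1e-3
metric_term = 0.5 * fim * d**2
exact_kl = kl_gauss(0.0, d, s2)
print(metric_term, exact_kl)
```

For this fixed-variance family the quadratic expansion is exact; for general families the two agree only to second order in d.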

    An Information Fusion Perspective

    Highlights: A fundamental issue concerning the effectiveness of the Bayesian filter is raised. The observation-only (O2) inference is presented for dynamic state estimation. The "probability of filter benefit" is defined and quantitatively analyzed. Convincing simulations demonstrate that many filters can easily be ineffective.
    The general solution for dynamic state estimation is to model the system as a hidden Markov process and then employ a recursive estimator of the prediction-correction form (the best known being the Bayesian filter) to statistically fuse the time-series observations through the models. The performance of the estimator depends greatly on the quality of the statistical model assumed. In contrast, this paper presents a modeling-free solution, referred to as observation-only (O2) inference, which infers the state directly from the observations. A Monte Carlo sampling approach is correspondingly proposed for unbiased nonlinear O2 inference. Being computationally faster, the O2 inference also provides a benchmark for assessing the effectiveness of conventional recursive estimators: an estimator is defined as effective only when it outperforms, on average, the O2 inference (when applicable). It is demonstrated quantitatively, from the perspective of information fusion, that biased prior information (which inevitably accompanies inaccurate modelling) can be counterproductive for a filter, resulting in an ineffective estimator. On classic state-space models, a variety of Kalman filters and particle filters are shown to be easily ineffective (inferior to the O2 inference) in certain situations, although this has been somewhat overlooked in the literature.
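A minimal sketch of the effectiveness test described above, on a scalar random-walk model (all noise levels are illustrative assumptions): the O2 inference takes the observation itself as the state estimate, a matched Kalman filter beats it, and a filter with a severely mismatched process-noise model falls behind it, i.e. becomes "ineffective" in the paper's sense.

```python
import numpy as np

rng = np.random.default_rng(1)
T, q, r = 500, 0.1, 1.0   # steps, process and measurement noise variances (assumed)

# simulate a scalar random-walk state and its noisy observations
x = np.cumsum(rng.normal(0, np.sqrt(q), T))
y = x + rng.normal(0, np.sqrt(r), T)

def kalman(y, q_assumed):
    # scalar Kalman filter for x_k = x_{k-1} + w_k,  y_k = x_k + v_k
    xhat, p, out = 0.0, 10.0, []
    for yk in y:
        p += q_assumed                # predict
        g = p / (p + r)               # Kalman gain
        xhat += g * (yk - xhat)       # correct
        p *= 1 - g
        out.append(xhat)
    return np.array(out)

mse_o2 = np.mean((y - x) ** 2)                  # O2 inference: estimate = observation
mse_kf = np.mean((kalman(y, q) - x) ** 2)       # matched model: effective filter
mse_bad = np.mean((kalman(y, 1e-6) - x) ** 2)   # badly mismatched process-noise model
print(mse_kf, mse_o2, mse_bad)
```

The mismatched filter nearly stops updating and loses track of the drifting state, ending up worse than using the raw observations, which is the counterproductive-prior effect the paper quantifies.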

    Characterization of Minimal Estimation Performance for Non-Standard Observation Models

    In the parametric estimation context, estimator performance can be characterized by, among other criteria, the mean square error (MSE) and the resolution limit. The first quantifies the accuracy of the estimated values and the second defines the ability of the estimator to resolve distinct parameters. This thesis deals first with predicting the "optimal" MSE using lower bounds in the hybrid estimation context (i.e., when the parameter vector contains both random and non-random parameters), second with extending Cramér-Rao bounds to non-standard estimation problems, and finally with characterizing the resolution of estimators. The manuscript is divided into three parts. First, we fill some gaps in the hybrid lower bounds on the MSE by using two existing Bayesian lower bounds: the Weiss-Weinstein bound and a particular form of the Ziv-Zakai family of lower bounds. We show that these extended bounds are tighter than the existing hybrid lower bounds for predicting the optimal MSE. Second, we extend Cramér-Rao lower bounds to less common estimation contexts, namely: (i) when the non-random parameters are subject to (linear or nonlinear) equality constraints; (ii) for discrete-time filtering problems in which the evolution of the states is governed by a Markov chain; and (iii) when the observation model differs from the true data distribution. Finally, we study the resolution of estimators when their probability distributions are known. This approach extends the work of Oh and Kashyap and of Clark to multidimensional parameter estimation problems.
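Point (i), the Cramér-Rao bound under equality constraints, can be sketched with the standard null-space form CRB_c = U (U^T F U)^(-1) U^T, where the columns of U span the null space of the constraint gradient. The Fisher matrix and constraint below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Fisher information matrix for a two-parameter problem (assumed values)
F = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# linear equality constraint g(theta) = theta_1 - theta_2 = 0, gradient G
G = np.array([[1.0, -1.0]])

# U spans the null space of G: the directions that respect the constraint
U = np.array([[1.0], [1.0]]) / np.sqrt(2)
assert np.allclose(G @ U, 0)

crb = np.linalg.inv(F)                          # unconstrained CRB
crb_c = U @ np.linalg.inv(U.T @ F @ U) @ U.T    # constrained CRB
print(np.diag(crb), np.diag(crb_c))
```

The constraint removes a degree of freedom, so the constrained bound on each parameter's variance is never larger than the unconstrained one.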

    A Contribution to Efficient Direction-of-Arrival Estimation Using Antenna Arrays

    It is safe to say that there is no such thing as the direction finding (DF) algorithm. Rather, there are algorithms tuned to resolve hundreds of paths, algorithms designed for uniform linear or uniform circular arrays, and algorithms that strive for efficiency. This doctoral thesis deals with the latter type. However, it does not only address the actual DF algorithm but approaches the problem from several perspectives. The first perspective concerns the description of the array manifold. Current interpolation schemes have no inherent notion of polarization, so the array manifold is interpolated separately for each polarization state. In this thesis, we adopt the idea of interpolation via a 2-D discrete Fourier transform but transfer the problem into the quaternionic domain, where a 2-D discrete quaternionic Fourier transform is applied. Both polarization states can then be treated as a single quantity, and the resulting signal model is essentially compatible with the conventional complex-valued model. The second perspective examines the fundamental DF capability of an antenna array. For this, we use the deterministic Cramér-Rao lower bound (CRLB). We derive three different CRLBs, which either disregard the polarization states or treat them as desired or nuisance parameters. Moreover, we show how a CRLB can be used to optimize an antenna array for DF performance already during the design phase. The actual DF algorithm constitutes the third perspective. A MUSIC-based cost function is used to derive efficient estimators, employing a modified Levenberg and Levenberg-Marquardt search. Since the original cost function cannot be used directly in this framework, we replace it by four different functions that behave similarly in a local sense. These functions are based on a linearization of the Kronecker product of two polarimetric array steering vectors. It turns out that at least one of these functions usually exhibits very fast convergence, leading to a real-time-capable algorithm.
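As background for the MUSIC-based cost function mentioned above, the sketch below runs plain (non-polarimetric) 1-D MUSIC on a uniform linear array with a single simulated source. Array size, spacing, and noise level are illustrative assumptions, and the thesis's quaternionic and Levenberg-Marquardt machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
M, d, theta_true = 8, 0.5, 20.0   # sensors, spacing in wavelengths, true DOA (assumed)

def steer(theta_deg):
    # steering vector of a uniform linear array for one far-field source
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

# simulate T snapshots: one source plus white noise
T = 200
s = rng.normal(size=T) + 1j * rng.normal(size=T)
noise = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
X = np.outer(steer(theta_true), s) + noise

R = X @ X.conj().T / T            # sample covariance matrix
_, V = np.linalg.eigh(R)          # eigenvectors, eigenvalues ascending
En = V[:, :-1]                    # noise subspace (one source assumed)

# MUSIC pseudo-spectrum: peaks where the steering vector is
# orthogonal to the noise subspace
grid = np.arange(-90.0, 90.0, 0.1)
p = [1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in grid]
theta_hat = grid[int(np.argmax(p))]
print(f"estimated DOA: {theta_hat:.1f} deg")
```

A grid search like this is what the thesis's local Levenberg-type searches replace: they iterate on a smooth surrogate of the cost function instead of scanning the whole manifold.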

    Robust GNSS Carrier Phase-based Position and Attitude Estimation Theory and Applications

    Navigation information is an essential element for the functioning of robotic platforms and intelligent transportation systems. Among the existing technologies, Global Navigation Satellite Systems (GNSS) have become established as the cornerstone for outdoor navigation, allowing all-weather, all-time positioning and timing on a worldwide scale. GNSS is the generic term for a constellation of satellites that transmit radio signals used primarily for ranging. The successful operation and deployment of prospective autonomous systems is therefore subject to our ability to exploit GNSS for robust and precise navigational estimates. GNSS signals enable two types of ranging observations: code pseudoranges, which measure the time difference between the signal's emission at the satellite and its reception at the receiver, scaled by the speed of light; and carrier phase pseudoranges, which measure the beat of the carrier signal and the number of accumulated full carrier cycles. While code pseudoranges provide an unambiguous measure of the distance between satellite and receiver, with dm-level precision when disregarding atmospheric delays and clock offsets, carrier phase measurements are far more precise, at the cost of being ambiguous by an unknown number of integer cycles, commonly denoted as ambiguities. Thus, the maximum potential of GNSS, in terms of navigational precision, can be reached only by using carrier phase observations, which in turn lead to complicated estimation problems. This thesis deals with the estimation theory behind carrier phase-based precise navigation for vehicles traversing scenarios with harsh signal propagation conditions. Contributions to this broad topic are made in three directions. First, the ultimate positioning performance is addressed by proposing lower bounds on the signal processing realized at the receiver level and for the mixed real- and integer-valued problem related to carrier phase-based positioning. Second, multi-antenna configurations are considered for computing a vehicle's orientation, introducing a new model for the joint position and attitude estimation problem and proposing new deterministic and recursive estimators based on Lie theory. Finally, the framework of robust statistics is explored to propose new solutions for code- and carrier phase-based navigation that can cope with outlying impulsive noise.
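The interplay of code and carrier phase pseudoranges described above can be sketched for a single satellite: averaged code observations give a coarse float estimate of the integer ambiguity, naive rounding fixes it, and the fixed phase then yields a far more precise range. All numbers (wavelength, noise levels, range, ambiguity) are illustrative assumptions, and real ambiguity resolution uses far more sophisticated methods than per-satellite rounding.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 0.19                 # carrier wavelength in meters (roughly GPS L1, assumed)
rho = 21_500_123.456       # true geometric range in meters (illustrative)
N = -113_000_007           # true, unknown integer ambiguity (illustrative)

# K epochs: code is unambiguous but noisy; phase is mm-precise but ambiguous
K = 1000
code = rho + rng.normal(0, 0.3, K)                # ~dm-level code noise
phase = rho + lam * N + rng.normal(0, 0.002, K)   # ~mm-level phase noise

N_float = np.mean(phase - code) / lam   # float ambiguity estimate from averaging
N_fix = round(float(N_float))           # naive integer fixing by rounding
rho_hat = np.mean(phase) - lam * N_fix  # phase-based range once N is fixed
print(N_fix == N, abs(rho_hat - rho))
```

Once the integer is fixed correctly, the range error is set by the mm-level phase noise rather than the dm-level code noise, which is the precision gain the thesis builds on.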

    Contributions to Lower Bounds on the Mean Square Error in Signal Processing

    Using lower bounds on the mean square error, the threshold (breakdown) behavior of estimators, the optimal placement of sensors in an array, and statistical resolution limits are studied in the context of array processing and radar.

    Quantum Communication, Sensing and Measurement in Space

    The main theme of the conclusions drawn for classical communication systems operating at optical or higher frequencies is that there is a well-understood performance gain in photon efficiency (bits/photon) and spectral efficiency (bits/s/Hz) by pursuing coherent-state transmitters (classical ideal laser light) coupled with novel quantum receiver systems operating near the Holevo limit (e.g., joint detection receivers). However, recent research indicates that these receivers will require nonlinear and nonclassical optical processes and components at the receiver. Consequently, the implementation complexity of Holevo-capacity-approaching receivers is not yet fully ascertained. Nonetheless, because the potential gain is significant (e.g., the projected photon efficiency and data rate of MIT Lincoln Laboratory's Lunar Lasercom Demonstration (LLCD) could be achieved with a factor-of-20 reduction in the modulation bandwidth requirement), focused research activities on ground-receiver architectures that approach the Holevo limit in space-communication links would be beneficial. The potential gains resulting from quantum-enhanced sensing systems in space applications have not been laid out as concretely as some of the other areas addressed in our study. In particular, while the study period has produced several interesting high-risk and high-payoff avenues of research, more detailed seedling-level investigations are required to fully delineate the potential return relative to the state of the art. Two prominent examples are (1) improvements to pointing, acquisition and tracking systems (e.g., for optical communication systems) by way of quantum measurements, and (2) possible weak-valued measurement techniques to attain high-accuracy sensing systems for in situ or remote-sensing instruments. 
While these concepts are technically sound and have very promising bench‐top demonstrations in a lab environment, they are not mature enough to realistically evaluate their performance in a space‐based application. Therefore, it is recommended that future work follow small focused efforts towards incorporating practical constraints imposed by a space environment. The space platform has been well recognized as a nearly ideal environment for some of the most precise tests of fundamental physics, and the ensuing potential of scientific advances enabled by quantum technologies is evident in our report. For example, an exciting concept that has emerged for gravity‐wave detection is that the intermediate frequency band spanning 0.01 to 10 Hz—which is inaccessible from the ground—could be accessed at unprecedented sensitivity with a space‐based interferometer that uses shorter arms relative to state‐of‐the‐art to keep the diffraction losses low, and employs frequency‐dependent squeezed light to surpass the standard quantum limit sensitivity. This offers the potential to open up a new window into the universe, revealing the behavior of compact astrophysical objects and pulsars. As another set of examples, research accomplishments in the atomic and optics fields in recent years have ushered in a number of novel clocks and sensors that can achieve unprecedented measurement precisions. These emerging technologies promise new possibilities in fundamental physics, examples of which are tests of relativistic gravity theory, universality of free fall, frame‐dragging precession, the gravitational inverse‐square law at micron scale, and new ways of gravitational wave detection with atomic inertial sensors. While the relevant technologies and their discovery potentials have been well demonstrated on the ground, there exists a large gap to space‐based systems. 
To bridge this gap and to advance fundamental-physics exploration in space, focused investments that further mature promising technologies, such as space-based atomic clocks and quantum sensors based on atom-wave interferometers, are recommended. Bringing a group of experts from diverse technical backgrounds together in a productive interactive environment spurred some unanticipated innovative concepts. One promising concept is the possibility of utilizing a space-based interferometer as a frequency reference for terrestrial precision measurements. Space-based gravitational wave detectors depend on extraordinarily low noise in the separation between spacecraft, resulting in an ultra-stable frequency reference that is several orders of magnitude better than the state of the art of frequency references using terrestrial technology. The next steps in developing this promising new concept are simulations and measurement of atmospheric effects that may limit performance due to non-reciprocal phase fluctuations. In summary, this report covers a broad spectrum of possible new opportunities in space science, as well as enhancements in the performance of communication and sensing technologies, based on observing, manipulating and exploiting the quantum-mechanical nature of our universe. In our study we identified a range of exciting new opportunities to capture the revolutionary capabilities resulting from quantum enhancements. We believe that pursuing these opportunities has the potential to positively impact the NASA mission in both the near term and in the long term. In this report we lay out the research and development paths that we believe are necessary to realize these opportunities and capitalize on the gains quantum technologies can offer.