40 research outputs found

    Networked distributed fusion estimation under uncertain outputs with random transmission delays, packet losses and multi-packet processing

    This paper investigates the distributed fusion estimation problem for networked systems whose multisensor measured outputs involve uncertainties modelled by random parameter matrices. Each sensor transmits its measured outputs to a local processor over different communication channels, and random failures (one-step delays and packet dropouts) are assumed to occur during the transmission. White sequences of Bernoulli random variables with different probabilities are introduced to describe the observations that are used to update the estimators at each sampling time. Due to the transmission failures, each local processor may receive either one or two data packets, or even nothing; when the current measurement does not arrive on time, its predictor is used in the design of the estimators to compensate for the lack of updated information. By using an innovation approach, local least-squares linear estimators (filter and fixed-point smoother) are obtained at the individual local processors, without requiring the signal evolution model. From these local estimators, distributed fusion filtering and smoothing estimators weighted by matrices are obtained in a unified way by applying the least-squares criterion. A simulation study is presented to examine the performance of the estimators and the influence that both sensor uncertainties and transmission failures have on the estimation accuracy. This research is supported by Ministerio de Economía, Industria y Competitividad, Agencia Estatal de Investigación and Fondo Europeo de Desarrollo Regional FEDER (grant no. MTM2017-84199-P).
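The role of the Bernoulli indicator variables and the predictor-based compensation can be sketched as follows. This is a toy scalar simulation under assumed arrival probabilities; the signal, gains, and probabilities are illustrative, and the paper's actual least-squares estimators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
p_on_time, p_delayed = 0.7, 0.2        # assumed arrival probabilities

signal = np.sin(0.2 * np.arange(T))    # hypothetical scalar signal
# uncertain output: a random parameter matrix reduces, in the scalar case,
# to a random gain on the signal plus additive noise
y = rng.normal(1.0, 0.1, T) * signal + rng.normal(0.0, 0.05, T)

received = []                          # data actually used by the local processor
for k in range(T):
    gamma_k = rng.random() < p_on_time   # Bernoulli: y[k] arrives on time
    delta_k = rng.random() < p_delayed   # Bernoulli: y[k-1] arrives one step late
    if gamma_k:
        received.append(y[k])            # current packet available
    elif k > 0 and delta_k:
        received.append(y[k - 1])        # one-step-delayed packet used instead
    else:
        # nothing arrived: compensate with a (here trivial) measurement predictor
        received.append(received[-1] if received else 0.0)
```

The three branches mirror the paper's three reception outcomes: on-time packet, delayed packet, or nothing, with predictor compensation in the last case.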

    Networked fusion estimation with multiple uncertainties and time-correlated channel noise

    This paper is concerned with the fusion filtering and fixed-point smoothing problems for a class of networked systems with multiple random uncertainties in both the sensor outputs and the transmission connections. To deal with systems of this kind, random parameter matrices are considered in the mathematical models of both the sensor measurements and the data available after transmission. The additive noise in the transmission channel from each sensor is assumed to be sequentially time-correlated. By using the time-differencing approach, the available measurements are transformed into an equivalent set of observations that do not depend on the time-correlated noise. The innovation approach is then applied to obtain recursive distributed and centralized fusion estimation algorithms for the filtering and fixed-point smoothing estimators of the signal based on the transformed measurements, which are equal to the estimators based on the original ones. The derivation of the algorithms does not require knowledge of the signal evolution model, but only the mean and covariance functions of the processes involved (covariance information). A simulation example illustrates the utility and effectiveness of the proposed fusion estimation algorithms, as well as the applicability of the current model to deal with different network-induced random phenomena. This research is supported by Ministerio de Economía, Industria y Competitividad, Agencia Estatal de Investigación and Fondo Europeo de Desarrollo Regional FEDER (grant no. MTM2017-84199-P).
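The time-differencing transformation admits a simple sketch: if the channel noise obeys eta_k = F*eta_{k-1} + w_k with white w_k, then the transformed observation z_k - F*z_{k-1} depends on the noise only through the white sequence w_k. The signal, noise levels, and the value of F below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, F = 200, 0.8                        # F: assumed noise correlation coefficient
w = rng.normal(0.0, 0.1, T)            # white driving noise
eta = np.zeros(T)                      # time-correlated channel noise
for k in range(1, T):
    eta[k] = F * eta[k - 1] + w[k]     # eta_k = F*eta_{k-1} + w_k

x = np.cos(0.1 * np.arange(T))         # hypothetical signal part of the observation
z = x + eta                            # received measurement with correlated noise

# time-differencing: the correlated component cancels exactly
z_tilde = z[1:] - F * z[:-1]
x_tilde = x[1:] - F * x[:-1]
residual = z_tilde - x_tilde           # equals the white sequence w[1:] exactly
```

Estimation then proceeds on z_tilde, whose noise is white, and (as the abstract notes) the resulting estimators coincide with those based on the original measurements.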

    Centralized Fusion Approach to the Estimation Problem with Multi-Packet Processing under Uncertainty in Outputs and Transmissions

    This paper is concerned with the least-squares linear centralized estimation problem in multi-sensor network systems from measured outputs with uncertainties modeled by random parameter matrices. These measurements are transmitted to a central processor over different communication channels, and owing to the unreliability of the network, random one-step delays and packet dropouts are assumed to occur during the transmissions. In order to avoid network congestion, at each sampling time, each sensor's data packet is transmitted just once, but due to the uncertainty of the transmissions, the processing center may receive either one packet, two packets, or nothing. Different white sequences of Bernoulli random variables are introduced to describe the observations used to update the estimators at each sampling time. To address the centralized estimation problem, augmented observation vectors are defined by accumulating the raw measurements from the different sensors, and when the current measurement of a sensor does not arrive on time, the corresponding component of the augmented measured output predictor is used as compensation in the estimator design. Through an innovation approach, centralized fusion estimators, including predictors, filters, and smoothers, are obtained by recursive algorithms without requiring the signal evolution model. A numerical example is presented to show how uncertain systems with state-dependent multiplicative noise can be covered by the proposed model and how the estimation accuracy is influenced by both sensor uncertainties and transmission failures. This research is supported by Ministerio de Economía, Industria y Competitividad, Agencia Estatal de Investigación and Fondo Europeo de Desarrollo Regional FEDER (grant no. MTM2017-84199-P).
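The augmented-observation construction with predictor compensation can be sketched minimally as follows. The helper name and the numbers are hypothetical; in the paper the substituted values come from the recursive measured-output predictors.

```python
import numpy as np

def augmented_observation(measurements, predictions):
    """Stack per-sensor measurements into one centralized vector; where a
    packet is missing (None), substitute the corresponding component of the
    measured-output predictor as compensation."""
    return np.concatenate([
        np.atleast_1d(m if m is not None else p)
        for m, p in zip(measurements, predictions)
    ])

# sensor 2's packet was lost in transit; its predicted output fills the slot
z = augmented_observation([1.2, None, 0.7], [1.1, 0.4, 0.6])
```

The central processor then runs a single recursive estimator on the stacked vector z rather than one estimator per sensor.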

    Linear Estimation in Interconnected Sensor Systems with Information Constraints

    A ubiquitous challenge in many technical applications is to estimate an unknown state by means of data that stem from several, often heterogeneous, sensor sources. In this book, information is interpreted stochastically, and techniques for the distributed processing of data are derived that minimize the error of estimates about the unknown state. Methods for the reconstruction of dependencies are proposed, and novel approaches for the distributed processing of noisy data are developed.
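As a baseline for the kind of problem the book treats, here is a sketch of the simplest linear fusion rule: inverse-covariance (information) weighting of two unbiased estimates with independent errors. The function name and values are ours.

```python
import numpy as np

def fuse_independent(x1, P1, x2, P2):
    """Fuse two unbiased estimates (x1, P1) and (x2, P2) with independent
    errors by inverse-covariance weighting; returns the fused estimate and
    its covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)           # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)          # information-weighted mean
    return x, P

# equal covariances: the fused estimate is simply the average
x, P = fuse_independent(np.array([1.0]), np.array([[2.0]]),
                        np.array([2.0]), np.array([[2.0]]))
```

When the cross-correlations between the two errors are unknown, as is typical in interconnected sensor networks, this naive rule can be inconsistent; reconstructing or bounding such dependencies is precisely the problem the book's methods address.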


    Tracking and Fusion Methods for Extended Targets Parameterized by Center, Orientation, and Semi-axes

    The improvements in sensor technology, e.g., the development of automotive Radio Detection and Ranging (RADAR) and Light Detection and Ranging (LIDAR), which are able to provide higher detail of the sensor's environment, have introduced new opportunities but also new challenges to target tracking. In classic target tracking, targets are assumed to be points. However, this assumption is no longer valid if targets occupy more than one sensor resolution cell, creating the need for extended target models, which capture the shape in addition to the kinematic parameters. Different shape models are possible, and this thesis focuses on an elliptical shape, parameterized by center, orientation, and semi-axis lengths. This parameterization can be used to model rectangles as well. Furthermore, this thesis is concerned with multi-sensor fusion for extended targets, which can be used to improve tracking by combining information gathered from different sensors or perspectives. We also consider estimation for extended targets: to account for uncertainties, the target is modeled by a probability density, from which a so-called point estimate must be extracted. Extended target tracking presents a variety of challenges due to the spatial extent, even for basic shapes like ellipses and rectangles. Among these challenges is the choice of the target model, e.g., how the measurements are distributed across the shape. Additional challenges arise for sensor fusion, as it is unclear how to best consider the geometric properties when combining two extended targets. Finally, the extent needs to be involved in the estimation. Traditional methods often use simple uniform distributions across the shape, which do not properly portray reality, while more complex methods require optimization techniques or large amounts of data. In addition, traditional estimation uses metrics such as the Euclidean distance between state vectors.
However, these metrics might no longer be valid because they do not consider the geometric properties of the targets' shapes: e.g., rotating an ellipse by 180 degrees results in the same ellipse, but the Euclidean distance between the parameter vectors is not 0. The same holds in multi-sensor fusion, where simply combining the corresponding elements of the state vectors can lead to counter-intuitive fusion results. In this work, we compare different elliptic trackers and discuss more complex measurement distributions across the shape's surface or contour. Furthermore, we discuss the problems which can occur when fusing extended target estimates from different sensors and how to handle them by providing a transformation into a special density. We then proceed to discuss how a different metric, namely the Gaussian Wasserstein (GW) distance, can be used to improve target estimation. We define an estimator and propose an approximation based on an extension of the square root distance. It can be applied to the posterior densities of the aforementioned trackers to incorporate the unique properties of ellipses in the estimation process. We also discuss how this can be applied to rectangular targets. Finally, we evaluate and discuss our approaches. We show the benefits of more complex target models in simulations and on real data, and we demonstrate our estimation and fusion approaches compared to classic methods on simulated data.
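The 180-degree example can be made concrete by treating each ellipse as a Gaussian (mean = center, covariance aligned with the axes) and computing the GW distance between the two Gaussians. This is a sketch using numpy only; the function names are ours, not the thesis's.

```python
import numpy as np

def sqrtm_spd(S):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def ellipse_to_gaussian(center, theta, a, b):
    """Interpret an ellipse (center, orientation, semi-axes) as a Gaussian."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.asarray(center, float), R @ np.diag([a**2, b**2]) @ R.T

def gw_distance(e1, e2):
    """Gaussian Wasserstein distance between two ellipses."""
    m1, S1 = ellipse_to_gaussian(*e1)
    m2, S2 = ellipse_to_gaussian(*e2)
    r1 = sqrtm_spd(S1)
    cross = sqrtm_spd(r1 @ S2 @ r1)
    d2 = np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))

e     = ((0.0, 0.0), 0.3, 2.0, 1.0)
e_rot = ((0.0, 0.0), 0.3 + np.pi, 2.0, 1.0)  # same ellipse, rotated 180 degrees
# gw_distance(e, e_rot) is 0, while the Euclidean parameter distance is pi
```

Because a rotation by pi leaves the covariance R diag(a^2, b^2) R^T unchanged, the GW distance correctly reports the two parameterizations as the same ellipse.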

    Robust GNSS Carrier Phase-based Position and Attitude Estimation Theory and Applications

    International Mention in the doctoral degree. Navigation information is an essential element for the functioning of robotic platforms and intelligent transportation systems. Among the existing technologies, Global Navigation Satellite Systems (GNSS) have become established as the cornerstone for outdoor navigation, allowing for all-weather, all-time positioning and timing at a worldwide scale. GNSS is the generic term for a constellation of satellites which transmit radio signals used primarily for ranging information. Therefore, the successful operation and deployment of prospective autonomous systems is subject to our capabilities to support GNSS in the provision of robust and precise navigational estimates. GNSS signals enable two types of ranging observations: the code pseudorange, which is a measure of the time difference between the signal's emission and reception at the satellite and receiver, respectively, scaled by the speed of light; and the carrier phase pseudorange, which measures the beat of the carrier signal and the number of accumulated full carrier cycles. While code pseudoranges provide an unambiguous measure of the distance between satellites and receiver, with dm-level precision when disregarding atmospheric delays and clock offsets, carrier phase measurements present a much higher precision, at the cost of being ambiguous by an unknown number of integer cycles, commonly denoted as ambiguities. Thus, the maximum potential of GNSS, in terms of navigational precision, can be reached by the use of carrier phase observations which, in turn, lead to complicated estimation problems. This thesis deals with the estimation theory behind the provision of carrier phase-based precise navigation for vehicles traversing scenarios with harsh signal propagation conditions. Contributions to such a broad topic are made in three directions.
First, the ultimate positioning performance is addressed by proposing lower bounds on the signal processing realized at the receiver level and for the mixed real- and integer-valued problem related to carrier phase-based positioning. Second, multi-antenna configurations are considered for the computation of a vehicle's orientation, introducing a new model for the joint position and attitude estimation problem and proposing new deterministic and recursive estimators based on Lie theory. Finally, the framework of robust statistics is explored to propose new solutions to code- and carrier phase-based navigation, able to deal with outlying impulsive noises. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. President: José Manuel Molina López; Secretary: Giorgi Gabriele; Member: Fabio Dovi
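The complementary character of the two observation types can be sketched with toy numbers (a single satellite, no atmospheric or clock terms; all values are illustrative). Differencing the code and carrier measurements gives a float estimate of the ambiguity, and fixing it to an integer recovers carrier-level precision:

```python
# toy single-satellite example; all numbers are hypothetical
lam = 0.1903                 # GPS L1 wavelength [m] (c / 1575.42 MHz)
r = 21_456_789.37            # true receiver-satellite range [m]
N_true = 112_752_440         # unknown integer carrier-cycle ambiguity

rho = r + 0.06                     # code pseudorange: unambiguous, dm-level error
phi = r - lam * N_true + 0.002     # carrier phase range: mm-level error, ambiguous

# float ambiguity from the code-carrier difference, then integer rounding
N_float = (rho - phi) / lam
N_fixed = round(N_float)
r_precise = phi + lam * N_fixed    # carrier-based range once N is fixed (~2 mm error)
```

In practice the code noise is usually too large for single-epoch rounding to succeed, and integer least-squares methods (e.g., LAMBDA) are used instead; the toy errors above are chosen small enough that rounding works.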

    Robust GNSS Carrier Phase-based Position and Attitude Estimation

    Navigation information is an essential element for the functioning of robotic platforms and intelligent transportation systems. Among the existing technologies, Global Navigation Satellite Systems (GNSS) have become established as the cornerstone for outdoor navigation, allowing for all-weather, all-time positioning and timing at a worldwide scale. GNSS is the generic term for a constellation of satellites which transmit radio signals used primarily for ranging information. Therefore, the successful operation and deployment of prospective autonomous systems is subject to our capabilities to support GNSS in the provision of robust and precise navigational estimates. GNSS signals enable two types of ranging observations: the code pseudorange, which is a measure of the time difference between the signal's emission and reception at the satellite and receiver, respectively, scaled by the speed of light; and the carrier phase pseudorange, which measures the beat of the carrier signal and the number of accumulated full carrier cycles. While code pseudoranges provide an unambiguous measure of the distance between satellites and receiver, with dm-level precision when disregarding atmospheric delays and clock offsets, carrier phase measurements present a much higher precision, at the cost of being ambiguous by an unknown number of integer cycles, commonly denoted as ambiguities. Thus, the maximum potential of GNSS, in terms of navigational precision, can be reached by the use of carrier phase observations which, in turn, lead to complicated estimation problems. This thesis deals with the estimation theory behind the provision of carrier phase-based precise navigation for vehicles traversing scenarios with harsh signal propagation conditions. Contributions to such a broad topic are made in three directions. First, the ultimate positioning performance is addressed by proposing lower bounds on the signal processing realized at the receiver level and for the mixed real- and integer-valued problem related to carrier phase-based positioning. Second, multi-antenna configurations are considered for the computation of a vehicle's orientation, introducing a new model for the joint position and attitude estimation problem and proposing new deterministic and recursive estimators based on Lie theory. Finally, the framework of robust statistics is explored to propose new solutions to code- and carrier phase-based navigation, able to deal with outlying impulsive noises.

    Nonlinear Gaussian Filtering : Theory, Algorithms, and Applications

    By restricting to Gaussian distributions, the optimal Bayesian filtering problem can be transformed into an algebraically simple form, which allows for computationally efficient algorithms. Three problem settings are discussed in this thesis: (1) filtering with Gaussians only, (2) Gaussian mixture filtering for strong nonlinearities, and (3) Gaussian process filtering for purely data-driven scenarios. For each setting, efficient algorithms are derived and applied to real-world problems.
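In the linear special case, setting (1), filtering with Gaussians only, reduces to the Kalman filter, whose predict/update cycle shows the algebraically simple form the abstract refers to. A minimal sketch (variable names are ours):

```python
import numpy as np

def kalman_step(m, P, z, F, Q, H, R):
    """One predict/update cycle of the Kalman filter: the optimal Bayesian
    filter when all densities involved are Gaussian and the models linear."""
    # predict: propagate the Gaussian through the linear dynamics
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # update: condition the predicted Gaussian on the measurement z
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    m_new = m_pred + K @ (z - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# scalar example: static state, prior N(0, 1), measurement z = 1 with R = 1
m, P = kalman_step(np.array([0.0]), np.array([[1.0]]), np.array([1.0]),
                   np.eye(1), np.zeros((1, 1)), np.eye(1), np.eye(1))
# posterior is N(0.5, 0.5)
```

Settings (2) and (3) generalize this cycle to mixtures of Gaussians and to dynamics/measurement models learned as Gaussian processes, respectively.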

    Nonlinear Filtering based on Log-homotopy Particle Flow : Methodological Clarification and Numerical Evaluation

    The state estimation of dynamical systems based on measurements is a ubiquitous problem, relevant in applications such as robotics, industrial manufacturing, computer vision, and target tracking. Recursive Bayesian methodology can be used to estimate the hidden states of a dynamical system. The procedure consists of two steps: a process update based on solving the equations modelling the state evolution, and a measurement update in which the prior knowledge about the system is improved based on the measurements. For most real-world systems, both the evolution and the measurement models are nonlinear functions of the system states. Additionally, both models can be perturbed by random noise sources, which may be non-Gaussian in nature. Unlike the linear Gaussian case, no optimal estimation scheme exists for nonlinear/non-Gaussian scenarios. This thesis investigates a particular method for nonlinear and non-Gaussian data assimilation, termed the log-homotopy based particle flow. Practical filters based on such flows are known in the literature as Daum-Huang filters (DHF), named after their developers. The key concept behind such filters is the gradual inclusion of measurements, which counters a major drawback of single-step update schemes like particle filters, namely degeneracy. Degeneracy refers to a situation where the likelihood function has its probability mass well separated from the prior density, and/or is sharply peaked in comparison. Conventional sampling or grid-based techniques do not perform well under such circumstances and may incur a high processing cost to achieve reasonable accuracy. The DHF is a sampling-based scheme that provides a unique way to tackle this challenge, thereby lowering the processing cost.
This is achieved by dividing the single measurement update step into multiple sub-steps, such that particles are moved incrementally from their prior locations until they reach their final locations. The motion is controlled by a differential equation, which is numerically solved to yield the updated states. DH filters, though not new in the literature, have not yet been explored in depth; they lack the detailed analysis that other contemporary filters have undergone, and their implementation details are very application specific. In this work, we have pursued four main objectives. The first is the exploration of the theoretical concepts behind the DHF. Second, we build an understanding of the existing implementation framework and highlight its potential shortcomings; as a subtask, we carry out a detailed study of the important factors that affect the performance of a DHF and suggest possible improvements for each of them. The third objective is to use the improved implementation to derive new filtering algorithms. Finally, we extend the DHF theory and derive new flow equations and filters to cater for more general scenarios. Improvement of the implementation architecture of the standard DHF is one of the key contributions of this thesis. The scope of applicability of the DHF is expanded by combining it with other schemes, such as sequential Markov chain Monte Carlo and the tensor-decomposition-based solution of the Fokker-Planck equation, resulting in new nonlinear filtering algorithms. The standard DHF with the improved implementation, together with the newly derived algorithms, is tested in challenging simulated scenarios. Detailed analyses have been carried out, together with comparisons against more established filtering schemes, using estimation error and processing time as performance measures. We show that our new filtering algorithms exhibit marked performance improvements over the traditional schemes.
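The gradual-inclusion idea can be illustrated in the scalar linear-Gaussian case, where the Daum-Huang "exact" flow has a closed-form drift and the result can be checked against the Kalman update. This is a sketch under those assumptions; the Euler integration and all values are ours, not the thesis's implementation.

```python
import numpy as np

def dh_exact_flow(particles, m0, P, H, R, z, n_steps=2000):
    """Scalar Daum-Huang exact flow for a linear-Gaussian measurement update:
    as the pseudo-time lam runs from 0 to 1, prior samples migrate to
    posterior samples instead of being reweighted in a single step."""
    x = particles.astype(float).copy()
    dlam = 1.0 / n_steps
    for i in range(n_steps):
        lam = i * dlam
        # closed-form drift coefficients of dx/dlam = A(lam) x + b(lam)
        A = -0.5 * H * P * H / (lam * H * P * H + R)
        b = (1.0 + 2.0 * lam * A) * ((1.0 + lam * A) * P * H / R * z + A * m0)
        x += dlam * (A * x + b)        # Euler sub-step of the particle motion
    return x

# prior N(0, 1), H = R = 1, measurement z = 1: the Kalman posterior is N(0.5, 0.5)
parts = dh_exact_flow(np.array([0.0, 2.0]), m0=0.0, P=1.0, H=1.0, R=1.0, z=1.0)
# the prior-mean particle flows to ~0.5; the particle at 2 to ~0.5 + sqrt(2)
```

Because the flow is integrated over many sub-steps, no particle is ever reweighted against a distant, peaked likelihood, which is exactly how the DHF sidesteps degeneracy.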