25 research outputs found

    Bibliographic Review on Distributed Kalman Filtering

    In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. This paper provides a bibliographic review of distributed Kalman filtering (DKF). The paper contains a classification of the different approaches and methods involved in DKF. The applications of DKF are also discussed and explained separately, and a comparison of the different approaches is briefly carried out. Contemporary research directions are also addressed, with emphasis on the practical applications of the techniques. An exhaustive list of publications linked directly or indirectly to DKF in the open literature is compiled to provide an overall picture of the different developing aspects of this area.

    Efficient delay-tolerant particle filtering

    This paper proposes a novel framework for delay-tolerant particle filtering that is computationally efficient and has limited memory requirements. Within this framework, the informativeness of a delayed (out-of-sequence) measurement (OOSM) is estimated using a lightweight procedure, and uninformative measurements are immediately discarded. The framework requires the identification of a threshold that separates informative from uninformative measurements; this threshold selection task is formulated as a constrained optimization problem, where the goal is to minimize tracking error whilst controlling the computational requirements. We develop an algorithm that provides an approximate solution to the optimization problem. Simulation experiments provide an example where the proposed framework processes less than 40% of all OOSMs with only a small reduction in tracking accuracy.
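The gate-then-update idea can be sketched as follows. The informativeness proxy used here (the shift of the weighted posterior mean caused by the delayed measurement's likelihood) and the Gaussian, direct-observation sensor model are illustrative assumptions, not the paper's exact statistic; the function names are ours.

```python
import numpy as np

def oosm_informativeness(particles, weights, z, h, noise_var):
    """Proxy for how much a delayed measurement z would change the
    current estimate: the shift of the weighted posterior mean after
    reweighting by the measurement likelihood (assumed Gaussian)."""
    lik = np.exp(-0.5 * (z - h(particles)) ** 2 / noise_var)
    if lik.sum() == 0.0:
        return 0.0
    new_w = weights * lik
    new_w /= new_w.sum()
    return abs(np.dot(new_w, particles) - np.dot(weights, particles))

def process_oosm(particles, weights, z, h, noise_var, threshold):
    """Discard the OOSM when uninformative; otherwise fold it into the
    particle weights.  Returns (weights, processed_flag)."""
    if oosm_informativeness(particles, weights, z, h, noise_var) < threshold:
        return weights, False      # cheap early discard, no reweighting cost
    new_w = weights * np.exp(-0.5 * (z - h(particles)) ** 2 / noise_var)
    return new_w / new_w.sum(), True
```

A delayed measurement that barely moves the estimate is dropped; one that would shift it beyond the tuned threshold is processed, which is the mechanism that lets the framework skip most OOSMs at a small accuracy cost.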

    Rao-Blackwellized Out-of-Sequence Processing for Mixed Linear/Nonlinear State-Space Models

    We investigate the out-of-sequence measurement particle filtering problem for a set of conditionally linear Gaussian state-space models, known as mixed linear/nonlinear state-space models. Two different algorithms are proposed, both of which exploit the conditionally linear substructure. The first approach is based on storing only a subset of the particles and their weights, which implies low memory and computation requirements. The second approach is based on a recently reported Rao-Blackwellized forward filter/backward simulator, adapted to the out-of-sequence filtering task with computational considerations that enable online implementations. Simulation studies on two examples show that both approaches outperform recently reported particle filters, with the second approach being superior in terms of tracking performance.

    Rao-Blackwellized Particle Filters with Out-of-Sequence Measurement Processing

    This paper addresses the out-of-sequence measurement (OOSM) problem for mixed linear/nonlinear state-space models, a class of nonlinear models with a tractable, conditionally linear substructure. We develop two novel algorithms that utilize the linear substructure. The first algorithm employs the Rao-Blackwellized particle filtering framework for updating with the OOSMs and is based on storing only a subset of the particles and their weights over an arbitrary, predefined interval. The second algorithm adapts a backward-simulation approach to update with the delayed (out-of-sequence) measurements, resulting in superior tracking performance. Extensive simulation studies show the efficacy of our approaches in terms of computation time and tracking performance. Both algorithms yield estimation improvements when compared with recent particle filter algorithms for OOSM processing; in the considered examples they achieve up to 10% improvements in estimation accuracy, and in some cases they even deliver accuracy close to the lower performance bounds. Because the considered setup is common in various estimation scenarios, the developed algorithms enable improvements in different types of filtering applications.
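The subset-storage idea behind the first class of algorithms above can be sketched for a scalar state as follows; the buffer class and its interface are our own illustrative names, and the Rao-Blackwellized treatment of the linear substructure and the backward-simulation variant are deliberately omitted.

```python
import numpy as np
from collections import deque

class OOSMBuffer:
    """Keeps a random subset of each step's particles so that a
    measurement arriving with lag l can still reweight the current
    particle set at low memory cost.  A scalar-state sketch of the
    subset-storage idea only."""

    def __init__(self, lag_max, subset_size, seed=0):
        self.subset_size = subset_size
        self.rng = np.random.default_rng(seed)
        self.history = deque(maxlen=lag_max)   # (indices, states) per step

    def store(self, particles):
        """Call once per filter step: retain a random subset of particles."""
        idx = self.rng.choice(len(particles), self.subset_size, replace=False)
        self.history.append((idx, particles[idx].copy()))

    def oosm_update(self, weights, lag, z, h, noise_var):
        """Reweight the current particles with the delayed measurement's
        likelihood, evaluated at the stored past states of the retained
        subset; particles outside the subset keep their weight."""
        idx, past = self.history[-lag]
        lik = np.exp(-0.5 * (z - h(past)) ** 2 / noise_var)
        new_w = weights.copy()
        new_w[idx] *= lik
        return new_w / new_w.sum()
```

Memory grows with `lag_max * subset_size` rather than with the full particle history, which is the trade-off the storage-based OOSM algorithms exploit.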

    Distributed Kalman Filtering


    Multisensor Out Of Sequence Data Fusion for Estimating the State of Discrete Control Systems

    The fusion center of a complex control system estimates its state using the information provided by different sensors. Physically distributed sensors, communication networks, pre-processing algorithms, multitasking, etc., introduce non-systematic delays in the arrival of information at the fusion center, making the information available out of sequence (OOS). For real-time control systems, the state has to be estimated efficiently with all the information received so far, and several solutions to the OOS problem for dynamic multiple-input multiple-output (MIMO) discrete control systems, traditionally solved by the Kalman filter (KF), have been proposed recently. This paper presents two new streamlined algorithms for the linear and nonlinear cases. IFAsyn, the linear algorithm, is equivalent to other optimal solutions but more general, efficient, and easy to implement. EIFAsyn, the nonlinear one, is a new solution to the OOS problem in the extended KF (EKF) framework.
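The abstract does not reproduce IFAsyn's update equations, so as context for the problem it streamlines, here is the brute-force baseline it improves on: buffer each step's prior and measurements, and on arrival of an OOS measurement roll back and re-filter. The class, its scalar model, and all names are assumptions for illustration.

```python
import numpy as np

class ReplayKF:
    """Scalar Kalman filter that buffers the prior and the measurements
    of every step and handles an out-of-sequence (OOS) measurement by
    rolling back and re-filtering -- the naive baseline that streamlined
    OOS algorithms such as IFAsyn avoid."""

    def __init__(self, x0, P0, f, q, h, r):
        self.f, self.q, self.h, self.r = f, q, h, r
        self.x, self.P = x0, P0
        self.log = []                   # per step: [prior_x, prior_P, [z...]]

    def _predict(self):
        self.x = self.f * self.x
        self.P = self.f * self.P * self.f + self.q

    def _update(self, z):
        S = self.h * self.P * self.h + self.r
        K = self.P * self.h / S
        self.x += K * (z - self.h * self.x)
        self.P *= (1.0 - K * self.h)

    def step(self, z=None):
        self._predict()
        zs = [] if z is None else [z]
        self.log.append([self.x, self.P, zs])
        for m in zs:
            self._update(m)

    def oosm(self, k, z):
        """Insert measurement z at past step k, then re-run the filter."""
        self.log[k][2].append(z)
        self.x, self.P = self.log[k][0], self.log[k][1]
        for i in range(k, len(self.log)):
            if i > k:                   # re-predict from corrected estimate
                self._predict()
                self.log[i][0], self.log[i][1] = self.x, self.P
            for m in self.log[i][2]:
                self._update(m)
```

Re-filtering recovers exactly the in-sequence estimate, but its cost grows with the delay, which is what motivates streamlined one-shot OOS updates.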

    Distributed Random Set Theoretic Soft/Hard Data Fusion

    Research on multisensor data fusion aims at providing the enabling technology to combine information from several sources in order to form a unified picture. The literature on fusion of conventional data provided by non-human (hard) sensors is vast and well-established. In comparison to conventional fusion systems, where input data are generated by calibrated electronic sensor systems with well-defined characteristics, research on soft data fusion considers combining human-based data expressed preferably in unconstrained natural language form. Fusion of soft and hard data is even more challenging, yet necessary in some applications, and has received little attention in the past. Due to being a rather new area of research, soft/hard data fusion is still in a fledgling stage, with even its challenging problems yet to be adequately defined and explored. This dissertation develops a framework to enable fusion of both soft and hard data with Random Set (RS) theory as the underlying mathematical foundation. Random set theory is an emerging theory within the data fusion community that, due to its powerful representational and computational capabilities, is gaining more and more attention among data fusion researchers. Motivated by the unique characteristics of random set theory and the main challenge of soft/hard data fusion systems, i.e. the need for a unifying framework capable of processing both unconventional soft data and conventional hard data, this dissertation argues in favor of a random set theoretic approach as the first step towards realizing a soft/hard data fusion framework. Several challenging problems related to soft/hard fusion systems are addressed in the proposed framework. First, an extension of the well-known Kalman filter within random set theory, called the Kalman evidential filter (KEF), is adopted as a common data processing framework for both soft and hard data. Second, a novel ontology (syntax + semantics) is developed to allow for modeling soft (human-generated) data, assuming target tracking as the application. Third, as soft/hard data fusion is mostly aimed at large networks of information processing, a new approach is proposed to enable distributed estimation of soft, as well as hard, data, addressing the scalability requirement of such fusion systems. Fourth, a method for modeling trust in the human agents is developed, which enables the fusion system to protect itself from erroneous/misleading soft data through discounting such data on-the-fly. Fifth, leveraging the recent developments in the RS theoretic data fusion literature, a novel soft data association algorithm is developed and deployed to extend the proposed target tracking framework to the multi-target tracking case. Finally, the multi-target tracking framework is complemented by introducing a distributed classification approach applicable to target classes described with soft human-generated data.
    In addition, this dissertation presents a novel data-centric taxonomy of data fusion methodologies. In particular, several categories of fusion algorithms have been identified and discussed based on the data-related challenging aspect(s) addressed. It is intended to provide the reader with a generic and comprehensive view of the contemporary data fusion literature, which could also serve as a reference for data fusion practitioners by providing them with conducive design guidelines, in terms of algorithm choice, regarding the specific data-related challenges expected in a given application.

    Particle filters for tracking in wireless sensor networks

    The goal of this thesis is the development, implementation and assessment of efficient particle filters (PFs) for various target tracking applications on wireless sensor networks (WSNs). We first focus on developing efficient models and particle filters for indoor tracking using received signal strength (RSS) in WSNs. RSS is a very appealing type of measurement for indoor tracking because of its availability on many existing communication networks. In particular, most current wireless communication networks (WiFi, ZigBee or even cellular networks) provide RSS measurements for each radio transmission. Unfortunately, RSS in indoor scenarios is highly influenced by multipath propagation and, thus, it turns out to be very hard to adequately model the correspondence between the received power and the transmitter-to-receiver distance. Further, the trajectories that the targets follow in indoor scenarios usually have abrupt changes that result from avoiding walls and furniture, and consequently the target dynamics are also difficult to model. In Chapter 3 we propose a flexible probabilistic scheme that allows the description of different classes of target dynamics and propagation environments through the use of multiple switching models. The resulting state-space structure is termed a generalized switching multiple model (GSMM) system. The drawback of the GSMM system is the increase in the dimension of the system state and, hence, the number of variables that the tracking algorithm has to estimate. In order to handle the added difficulty, we propose two Rao-Blackwellized particle filtering (RBPF) algorithms in which a subset of the state variables is integrated out to improve the tracking accuracy. As the main drawback of particle filters is their computational complexity, we then move on to investigate how to reduce it via the distribution of the processing.
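The switching-model idea behind the GSMM structure can be sketched as a multiple-model particle filter step in which every particle carries a discrete mode indexing its motion model; this scalar sketch, its function name, and the Gaussian direct-observation likelihood are our assumptions, and the Rao-Blackwellization of part of the state is omitted.

```python
import numpy as np

def gsmm_pf_step(states, modes, weights, z, trans, dyn, meas_var, rng):
    """One step of a switching multiple-model particle filter: each
    particle's mode evolves by a Markov chain and selects its dynamics.
    A simplified stand-in for the GSMM system."""
    n = len(states)
    # 1. sample each particle's next mode from the transition matrix
    modes = np.array([rng.choice(len(trans), p=trans[m]) for m in modes])
    # 2. propagate with the mode-dependent dynamics plus process noise
    states = np.array([dyn[m](x) for m, x in zip(modes, states)])
    states = states + rng.normal(0.0, 0.1, n)
    # 3. reweight with the (assumed Gaussian, direct-observation) likelihood
    weights = weights * np.exp(-0.5 * (z - states) ** 2 / meas_var)
    weights = weights / weights.sum()
    # 4. multinomial resampling
    idx = rng.choice(n, n, p=weights)
    return states[idx], modes[idx], np.full(n, 1.0 / n)
```

Particles whose mode matches the target's current regime survive resampling, so the filter effectively estimates the active model at each instant along with the state.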
Distributed applications of tracking are particularly interesting in situations where high-power centralized hardware cannot be used, for example in deployments where computational infrastructure and power are not available, or where there is no time or no simple way of connecting to them. The large majority of existing contributions related to particle filtering, however, only offer a theoretical perspective or computer simulation studies, owing in part to the complications of real-world deployment and testing on low-power hardware. In Chapter 4 we investigate the use of the distributed resampling with non-proportional allocation (DRNA) algorithm in order to obtain a distributed particle filtering (DPF) algorithm. The DRNA algorithm was devised to speed up the computations in particle filtering via the parallelization of the resampling step. Its basic assumption is the availability of a set of processors interconnected by a high-speed network, in the manner of state-of-the-art graphical processing unit (GPU) based systems. In a typical WSN, the communications among nodes are subject to various constraints (e.g., transmission capacity, power consumption or error rates), hence the hardware setup is fundamentally different. We first revisit the standard PF and its combination with the DRNA algorithm, providing a formal description of the methodology. This includes a simple analysis showing that (a) the importance weights are proper and (b) the resampling scheme is unbiased. Then we address the practical implementation of a distributed PF for target tracking, based on the DRNA scheme, that runs in real time over a WSN. For the practical implementation of the methodology on a real-time WSN, we have developed a software and hardware testbed with the required algorithmic and communication modules, working on a network of wireless light-intensity sensors.
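The core of DRNA, local resampling with a fixed per-node particle count plus a particle exchange between nodes, can be sketched as follows; the helper names and the simple ring exchange are our illustrative choices, not the thesis's exact protocol.

```python
import numpy as np

def drna_resample(node_particles, node_weights, rng):
    """Distributed resampling with non-proportional allocation (DRNA):
    each node resamples its own K particles locally, keeping K fixed
    rather than proportional to the node's aggregate weight W_n, and
    W_n is preserved so the global weighted approximation stays proper."""
    new_p, new_w = [], []
    for parts, w in zip(node_particles, node_weights):
        W = w.sum()                          # aggregate node weight
        k = len(parts)
        idx = rng.choice(k, k, p=w / W)      # local multinomial resampling
        new_p.append(parts[idx])
        new_w.append(np.full(k, W / k))      # node weight spread evenly
    return new_p, new_w

def ring_exchange(node_particles, node_weights):
    """Each node passes one particle (with its weight) to the next node
    on a ring, countering weight concentration on a single node."""
    heads_p = [p[0] for p in node_particles]
    heads_w = [w[0] for w in node_weights]
    for i in range(len(node_particles)):
        node_particles[i][0] = heads_p[i - 1]
        node_weights[i][0] = heads_w[i - 1]
```

Because resampling never moves the bulk of the particles off their node, the only inter-node traffic is the small periodic exchange, which is what makes the scheme attractive for bandwidth-constrained WSNs.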
The DPF scheme based on the DRNA algorithm guarantees the computation of proper weights and consistent estimators provided that the whole set of observations is available at every time instant at every node. Unfortunately, due to practical communication constraints, the technique described in Chapter 4 may turn out to be unrealistic for many WSNs of larger size. We thus investigate in Chapter 5 how to relax the communication requirements of the DPF algorithm using (a) a random model for the spread of data over the WSN and (b) methods that enable the out-of-sequence processing of sensor observations. The presented observation spread scheme is flexible and allows tuning of the observation spread over the network via the selection of a parameter. As the observation spread has a direct connection with the precision of the estimation, we have also introduced a methodology that allows the selection of the parameter a priori, without the need to perform any kind of experiment. The performance of the proposed scheme is assessed by way of an extensive simulation study.
In general terms, the goal of this doctoral thesis is the development and application of efficient particle filters (PFs) for various target tracking applications in wireless sensor networks (WSNs). We first focus on the development of models and particle filters for target tracking in indoor environments using received signal strength (RSS) measurements in WSNs. RSS measurements are widely used because of their availability in networks already deployed in many indoor environments. In fact, in many current wireless communication networks (WiFi, ZigBee, or even cellular networks), RSS measurements can be obtained without any modification.
Unfortunately, RSS measurements in indoor environments are usually distorted by multipath propagation, which makes it very difficult to adequately model the relationship between the received signal power and the transmitter-to-receiver distance. An added difficulty of indoor tracking is that the trajectories followed by the targets generally exhibit very abrupt changes, and modeling the target dynamics is consequently a very complex task. Chapter 3 proposes a flexible probabilistic scheme that allows different dynamic systems and propagation environments to be described through the use of multiple models that switch among themselves. This scheme describes various dynamic and propagation models very precisely, so that the filter only has to estimate the appropriate model at each instant in order to perform the tracking. The resulting state-space model (generalized switching multiple model, GSMM) has the drawback of increasing the dimension of the system state and, therefore, the number of variables that the tracking algorithm has to estimate. To overcome this difficulty, several Rao-Blackwellized particle filtering (RBPF) algorithms are proposed in which a subset of the state variables, including the observation indicator variables, is integrated out in order to improve the tracking accuracy. Since the main disadvantage of particle filters is their computational complexity, we then investigate how to reduce it by distributing the processing among the different nodes of the network.
Distributed tracking applications in sensor networks are of special interest in many real deployments, for example when the hardware in use lacks sufficient computational capacity, when the lifetime of the network is to be extended by using less energy, or when there is no time (or means) to connect to the whole network. The reduction in complexity is also of interest when the network is so large that using hardware with high processing capacity would make it excessively costly. Most existing contributions offer an exclusively theoretical perspective or present synthetic or simulated results, owing in part to the complications associated with implementing the algorithms and testing them on hardware with computationally limited nodes. Chapter 4 investigates the use of the distributed resampling with non-proportional allocation (DRNA) algorithm in order to obtain a distributed particle filter (DPF) for implementation on a real sensor network with computationally limited nodes. The DRNA algorithm was devised to speed up the computation of the particle filter by parallelizing one of its steps: resampling. To this end, DRNA assumes the availability of a set of processors interconnected by a high-speed network. In a wireless sensor network, the communications among nodes are usually constrained (by transmission capacity, energy consumption, or error rates), and the hardware setup is consequently fundamentally different. In this work we address the problem of applying the DRNA algorithm in a real WSN. First, we revisit the standard PF and its combination with the DRNA algorithm, providing a formal description of the methodology. This includes an analysis showing that (a) the weights are computed properly and (b) the resampling step does not introduce any bias.
We then describe the practical implementation of a distributed PF for target tracking, based on the DRNA scheme, that runs in real time over a WSN. We have developed a software and hardware testbed using nodes equipped with light-intensity sensors that in turn have limited processing and communication capabilities. We evaluate the performance of the tracking system in terms of the error of the estimated trajectory using synthetic data, and we evaluate the computational capacity with real data. The distributed particle filter based on the DRNA algorithm guarantees the correct computation of the weights and the estimators provided that the complete set of observations is available at every time instant at every node. Owing to communication constraints, this methodology may be unrealistic to implement in many large wireless sensor networks. For this reason, Chapter 5 investigates how to reduce the communication requirements of the above algorithm by (a) using a random model for the spread of observation data over the network and (b) adapting the filters to allow the processing of observations that arrive out of sequence. The presented scheme reduces the communication load on the network, at the cost of a reduction in the accuracy of the algorithm, through the selection of a design parameter. We also present a methodology that allows this parameter, which controls the spread of the observations, to be selected a priori without the need to carry out any kind of experiment. The performance of the proposed scheme has been assessed through an extensive simulation study.

    Leader-assisted localization approach for a heterogeneous multi-robot system

    This thesis presents the design, implementation, and validation of a novel leader-assisted localization framework for a heterogeneous multi-robot system (MRS) with sensing and communication range constraints. It is assumed that the given heterogeneous MRS has a more powerful robot (or group of robots) with accurate self-localization capabilities (leader robots), while the rest of the team, i.e. the less powerful robots (child robots), is localized with the assistance of the leader robots and inter-robot observations between teammates. This imposes the condition that the child robots operate within the sensing and communication range of the leader robots. The bounded navigation space may therefore require additional algorithms to avoid inter-robot collisions, and it limits the robots' maneuverability. To address this limitation, the thesis first introduces a novel distributed graph search and global pose composition algorithm to virtually extend the leader robots' sensing and communication range while avoiding possible double counting of common information. This allows child robots to navigate beyond the sensing and communication range of the leader robots, yet still receive localization services from them. A time-delayed measurement update algorithm and a memory optimization approach are then integrated into the proposed localization framework. This improves the robustness of the algorithm against the unknown processing and communication time delays associated with the inter-robot data exchange network. Finally, a novel hierarchical sensor fusion architecture is introduced so that the proposed localization scheme can be implemented using inter-robot relative range and bearing measurements. The performance of the proposed localization framework is evaluated through a series of indoor experiments, a publicly available multi-robot localization and mapping dataset, and a set of numerical simulations.
    The results illustrate that the proposed leader-assisted localization framework is capable of establishing accurate and non-overconfident localization for the child robots, even when the child robots operate beyond the sensing and communication boundaries of the leader robots.

    Linear Estimation in Interconnected Sensor Systems with Information Constraints

    A ubiquitous challenge in many technical applications is to estimate an unknown state by means of data that stems from several, often heterogeneous, sensor sources. In this book, information is interpreted stochastically, and techniques for the distributed processing of data are derived that minimize the error of estimates of the unknown state. Methods for the reconstruction of dependencies are proposed, and novel approaches for the distributed processing of noisy data are developed.