268 research outputs found

    Ternary and Hybrid Event-based Particle Filtering for Distributed State Estimation in Cyber-Physical Systems

    Get PDF
    The thesis is motivated by recent advancements and developments in large, distributed, autonomous, and self-aware Cyber-Physical Systems (CPSs), which are emerging engineering systems with integrated processing, control, and communication capabilities. Efficient usage of available resources (communication, computation, bandwidth, and energy) is a prerequisite for productive operation of CPSs, where security, privacy, and/or power considerations limit the number of information transfers between neighbouring sensors. In this regard, the focus of the thesis is on information acquisition, state estimation, and learning in the context of CPSs by adopting an Event-based Estimation (EBE) strategy, where information transfer is performed only upon the occurrence of specific events identified via localized triggering mechanisms. In particular, the thesis aims to address the following identified drawbacks of existing EBE methodologies: (i) on the one hand, while EBE using Gaussian-based approximations of the event-triggered posterior has been fairly well investigated, the application of non-linear, non-Gaussian filtering using particle filters is still in its infancy; (ii) on the other hand, the common assumption in existing EBE strategies is a binary (idle and event) decision process, where during idle epochs the sensor holds on to its local measurements, while during event epochs measurement communication takes place. Although the binary event-based transfer of measurements potentially reduces the communication overhead, communicating raw measurements during all event instances can still be very costly. To address the aforementioned shortcomings of existing EBE methodologies, first, an intuitively pleasing event-based particle filtering (EBPF) framework is proposed for centralized, hierarchical, and distributed state estimation architectures.
Furthermore, a novel ternary event-triggering framework, referred to as the TEB-PF, is proposed by introducing the ternary event-triggering (TET) mechanism coupled with a non-Gaussian fusion strategy that jointly incorporates hybrid measurements within the particle filtering framework. Instead of using binary decision criteria, the proposed TET mechanism uses three local decision cases, resulting in set-valued, quantized, and point-valued measurements. Owing to the joint utilization of quantized and set-valued measurements in addition to point-valued ones, the proposed TEB-PF simultaneously reduces the communication overhead, in comparison to its binary triggering counterparts, while also improving the estimation accuracy, especially at low communication rates.
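The three-way decision at the heart of such a ternary triggering mechanism can be sketched as follows. The threshold values, the quantizer step, and the function name are illustrative assumptions, not the thesis's actual TET rule:

```python
def ternary_trigger(innovation, delta_low, delta_high, q_step):
    """Hypothetical ternary event-triggering rule (illustration only).

    Returns one of three message types based on how far the local
    measurement deviates from the receiver-side prediction:
      - 'set':       small deviation -> transmit nothing; the receiver
                     infers the measurement lies in a known set.
      - 'quantized': moderate deviation -> send a coarse quantized value.
      - 'point':     large deviation -> send the raw measurement.
    """
    mag = abs(innovation)
    if mag <= delta_low:
        return ('set', None)                     # idle: implicit set-valued info
    elif mag <= delta_high:
        q = q_step * round(innovation / q_step)  # coarse quantization
        return ('quantized', q)
    else:
        return ('point', innovation)             # full-precision transfer
```

The idea is that the two thresholds split the binary idle/event dichotomy into three regimes, so cheap messages cover the middle ground that binary triggering would transmit at full cost.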

    Self-triggered Consensus of Multi-agent Systems with Quantized Relative State Measurements

    Full text link
    This paper addresses the consensus problem of first-order continuous-time multi-agent systems over undirected graphs. Each agent samples relative state measurements in a self-triggered fashion and transmits the sum of the measurements to its neighbors. Moreover, we use finite-level dynamic quantizers and apply the zooming-in technique. The proposed joint design method for quantization and self-triggered sampling achieves asymptotic consensus, and inter-event times are strictly positive. Sampling times are determined explicitly with iterative procedures including the computation of the Lambert W-function. A simulation example is provided to illustrate the effectiveness of the proposed method. Comment: 29 pages, 3 figures. To appear in IET Control Theory & Applications.
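For readers unfamiliar with the underlying dynamics, a minimal sketch of first-order consensus over an undirected graph is shown below. It uses plain periodic Euler steps in place of the paper's quantized, self-triggered scheme, and the graph, initial states, and step size are illustrative assumptions:

```python
import numpy as np

# Minimal first-order consensus over an undirected graph (Euler
# discretization; periodic sampling stands in for the paper's
# self-triggered, quantized scheme, which is not reproduced here).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # adjacency of a 3-node complete graph
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

x = np.array([1.0, 5.0, 9.0])            # initial agent states
dt = 0.05
for _ in range(400):
    x = x - dt * (L @ x)                 # x_dot = -L x, one Euler step

# For a connected undirected graph, all states converge to the average
# of the initial conditions (5.0 here).
```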

    Belief Condensation Filtering For Rssi-Based State Estimation In Indoor Localization

    Get PDF
    Recent advancements in signal processing and communication systems have resulted in the evolution of an intriguing concept referred to as the Internet of Things (IoT). By embracing the IoT evolution, there has been a surge of recent interest in localization/tracking within indoor environments based on Bluetooth Low Energy (BLE) technology. The basic motive behind BLE-enabled IoT applications is to provide advanced residential and enterprise solutions in an energy-efficient and reliable fashion. Although different state estimation (SE) methodologies, ranging from Kalman filters and particle filters to multiple-model solutions, have recently been utilized for BLE-based indoor localization, there is a need for ever more accurate and real-time algorithms. The main challenge here is that multipath fading and drastic fluctuations in the indoor environment result in complex non-linear, non-Gaussian estimation problems. The paper focuses on an alternative solution to the existing filtering techniques and introduces and discusses the incorporation of the Belief Condensation Filter (BCF) for localization via BLE-enabled beacons. The BCF belongs to the universal approximation family of densities, with performance-bound-achieving accuracy and efficiency in sequential SE and Bayesian tracking. It is a resilient filter in harsh environments where nonlinearities and non-Gaussian noise profiles persist, as seen in applications such as indoor localization.
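As background, RSSI-based ranging commonly inverts a log-distance path-loss model before any filtering is applied. The sketch below is a generic illustration of that step (the transmit-power and path-loss-exponent values are assumptions), not part of the BCF itself:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model
        RSSI = P_tx - 10 * n * log10(d)
    to obtain a rough range estimate in meters. tx_power_dbm is the
    RSSI measured at 1 m (a typical BLE beacon calibration constant);
    both defaults here are illustrative assumptions."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))
```

In practice the heavy multipath fading mentioned in the abstract makes these raw range estimates noisy and non-Gaussian, which is precisely why a resilient Bayesian filter is applied on top of them.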

    Sparsity-Aware Low-Power ADC Architecture with Advanced Reconstruction Algorithms

    Get PDF
    The compressive sensing (CS) technique enables universal sub-Nyquist sampling of sparse and compressible signals while still guaranteeing reliable signal recovery. Its potential lies in the reduced analog-to-digital conversion rate when sampling broadband and/or multi-channel sparse signals, where conventional Nyquist-rate sampling is either technologically impossible or extremely costly in hardware. Nevertheless, there are many challenges in CS hardware design. In coherent sampling, state-of-the-art mixed-signal CS front-ends, such as the random demodulator and the modulated wideband converter, suffer from high power consumption and hardware nonlinearity. In signal recovery, state-of-the-art CS reconstruction methods have tractable computational complexity and probabilistically guaranteed performance; however, they are still costly (basis pursuit) or noise-sensitive (matching pursuit). In this dissertation, we propose an asynchronous compressive sensing (ACS) front-end and advanced signal reconstruction algorithms to address these challenges. The ACS front-end consists of a continuous-time ternary encoding (CT-TE) scheme, which converts signal amplitude variations into a high-rate ternary timing signal, and a digital random sampler (DRS), which captures the ternary timing signal at a sub-Nyquist rate. The CT-TE employs an asynchronous sampling mechanism for pulse-like inputs and has a signal-dependent conversion rate. The DRS has low power, ease of massive integration, and excellent linearity in comparison to state-of-the-art mixed-signal CS front-ends. We propose two reconstruction algorithms. One is group-based total variation, which exploits piecewise-constant characteristics and achieves better mean squared error and a faster convergence rate than the conventional TV scheme under moderate noise. The second algorithm is split-projection least squares (SPLS), which relies on a series of low-complexity and independent l2-norm problems with a prior on the ternary-valued signal.
The SPLS scheme has good noise robustness and low-cost signal reconstruction, and facilitates parallel hardware for real-time signal recovery. In an application study, we propose a multi-channel filter-bank ACS front-end for interference-robust radar. The proposed receiver performs reliable target detection with nearly 8-fold data compression relative to Nyquist-rate sampling in the presence of -50 dBm wireless interference. We also propose an asynchronous compressed beamformer (ACB) for low-power portable diagnostic ultrasound. The proposed ACB achieves 9-fold data-volume compression with only a 4.4% contrast-to-noise ratio loss in the imaging results when compared with Nyquist-rate ADCs.
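To make the recovery side concrete, a toy orthogonal matching pursuit, from the greedy family the dissertation contrasts with basis pursuit, can be sketched as below. This is a generic illustration, not the proposed group-based total variation or SPLS algorithms:

```python
import numpy as np

def omp(Phi, y, k):
    """Toy orthogonal matching pursuit: greedily recover a k-sparse x
    from measurements y = Phi @ x. Illustrative only; real CS recovery
    uses far larger, random measurement matrices."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support, then update residual.
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x
```

Greedy methods like this are fast but, as the abstract notes, more noise-sensitive than convex basis-pursuit recovery, which motivates the dissertation's alternative algorithms.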

    Robust navigation for industrial service robots

    Get PDF
    Pla de Doctorats Industrials de la Generalitat de Catalunya

    Robust, reliable and safe navigation is one of the fundamental problems of robotics. Throughout the present thesis, we tackle the problem of navigation for robotic industrial mobile bases. We identify its components and analyze their respective challenges in order to address them. The research work presented here ultimately aims at improving the overall quality of the navigation stack of a commercially available industrial mobile base. To introduce and survey the overall problem, we first break the navigation framework down into clearly identified smaller problems. We examine the Simultaneous Localization and Mapping (SLAM) problem, recalling its mathematical grounding and exploring the state of the art. We then review the problem of planning the trajectory of a mobile base toward a desired goal in the generated environment representation. Finally, we investigate and clarify the use of the subset of Lie theory that is useful in robotics. The first problem tackled is the recognition of place for closing loops in SLAM. Loop closure refers to the ability of a robot to recognize a previously visited location and infer geometrical information between its current and past locations. Using only a 2D laser range finder sensor, we address the problem using a technique borrowed from the field of Natural Language Processing (NLP) which has been successfully applied to image-based place recognition, namely the Bag-of-Words. We further improve the method with two proposals inspired by NLP. Firstly, the comparison of places is strengthened by considering the natural relative order of features in each individual sensor reading. Secondly, topological correspondences between places in a corpus of visited places are established in order to promote together instances that are 'close' to one another. We then tackle the problem of motion model calibration for odometry estimation.
Given a mobile base embedding an exteroceptive sensor able to observe ego-motion, we propose a novel formulation for estimating the intrinsic parameters of an odometry motion model. Resorting to an adaptation of the pre-integration theory initially developed for inertial motion sensors, we employ iterative nonlinear on-manifold optimization to estimate the wheel radii and wheel separation. The method is further extended to jointly estimate both the intrinsic parameters of the odometry model and the extrinsic parameters of the embedded sensor. The method is shown to accommodate variations in model parameters quickly when the vehicle is subject to physical changes during operation. Following the generation of a map in which the robot is localized, we address the problem of estimating trajectories for motion planning. We devise a new method for estimating a sequence of robot poses forming a smooth trajectory. Regardless of the Lie group considered, the trajectory is seen as a collection of states lying on a spline with non-vanishing n-th derivatives at each point. Formulated as a multi-objective nonlinear optimization problem, it allows for the addition of cost functions such as velocity and acceleration limits, collision avoidance and more. The proposed method is evaluated for two different motion-planning tasks: the planning of trajectories for a mobile base evolving in the SE(2) manifold, and the planning of the motion of a multi-link robotic arm whose end-effector evolves in the SE(3) manifold. From our study of Lie theory, we developed a new, ready-to-use programming library called `manif`. The library is open source, publicly available and is developed following good software programming practices.
It is designed to be easy to integrate and manipulate, and allows for flexible use while facilitating extension beyond the already implemented Lie groups.

    Autonomous navigation is one of the fundamental problems of robotics, and its various challenges have been studied for decades. The development of robust, reliable and safe navigation methods is a key factor in building higher-level functionality into robots designed to operate in environments shared with humans. Throughout this thesis, we address the navigation problem for industrial robotic mobile bases; we identify the elements of a navigation system, and we analyze and address their challenges. The research work presented here ultimately aims to improve the overall quality of the complete navigation system of a commercially available industrial mobile base. To study the navigation problem, we first break it down into clearly identified smaller problems. We examine the subproblem of simultaneously mapping the environment and localizing the robot (SLAM) and study its state of the art. In doing so, we recall and detail the mathematical grounding of the SLAM problem. We then review the subproblem of planning trajectories toward a desired goal in the generated environment representation. In addition, as a tool for the solutions presented later in the thesis, we investigate and clarify the use of Lie theory, focusing on the subset of the theory that is useful for state estimation in robotics. As a first element identified for improvement, we address the problem of place recognition for closing loops in SLAM. Loop closure refers to the ability of a robot to recognize a previously visited location and infer geometric information between the robot's current location and the recognized ones. Using only a 2D laser sensor, the task is challenging since the perception of the environment the sensor provides is sparse and limited. We address the problem using bag-of-words, a technique borrowed from the field of Natural Language Processing (NLP) that has previously been applied successfully to image-based place recognition. Our method includes two new proposals, also inspired by NLP. First, the comparison between candidate places is strengthened by taking into account the natural relative order of the features in each individual sensor reading; and second, a corpus of visited places is established to promote together instances that are topologically 'close' to one another. We evaluate our proposals separately and jointly on several datasets, with and without noise, demonstrating improved loop-closure detection for 2D laser sensors with respect to the state of the art. We then address the problem of motion-model calibration for odometry estimation. Given that our mobile base includes an exteroceptive sensor capable of observing the platform's motion, we propose a new formulation that allows estimating the intrinsic parameters of the platform's kinematic model during the computation of the vehicle's odometry. We resort to an adaptation of the pre-integration theory initially developed for inertial measurement units, and apply the technique to our kinematic model. The method allows us, through iterative nonlinear optimization, to estimate the radius of each wheel independently as well as the separation between them. The method is later extended to jointly identify these intrinsic parameters together with the extrinsic parameters that locate the laser sensor with respect to the reference frame of the mobile base. The method is validated in simulation and in a real environment, and is shown to converge toward the true parameter values. It accommodates changes in the intrinsic parameters of the platform's kinematic model arising from physical changes during operation, such as the effect that a change of load on the platform has on the wheel diameters. As a third navigation subproblem, we address the challenge of planning smooth motion trajectories. We develop a method for planning the trajectory as a sequence of configurations on a spline with non-vanishing n-th derivatives at every point, regardless of the Lie group considered. Being formulated as a multi-objective nonlinear optimization problem, it is possible to add cost functions imposing velocity or acceleration limits, collision avoidance, and so on. The proposed method is evaluated on two different motion-planning tasks: the planning of trajectories for a mobile base evolving on the SE(2) manifold, and the planning of the motion of a robotic arm whose end effector evolves on the SE(3) manifold. Furthermore, each task is evaluated in scenarios of incrementally increasing complexity, showing performance comparable to or better than the state of the art while producing more consistent results. From our study of Lie theory, we developed a new programming library called `manif`. The library is open source, publicly available, and developed following good software programming practices. It is designed to be easy to integrate and manipulate, and allows flexible use while facilitating extension beyond the initially implemented Lie groups. Moreover, the library is shown to be efficient compared with other existing solutions. Finally, we conclude the doctoral study: we review the research work, outline lines of future research, and look back over the past years to share a personal view of, and experience with, carrying out an industrial doctorate.
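The intrinsic parameters that the calibration estimates (wheel radii and wheel separation) enter the odometry through standard differential-drive kinematics, which can be sketched as below. The parameter values and function name are illustrative assumptions:

```python
import math

def diff_drive_step(pose, wl, wr, radius_l, radius_r, separation, dt):
    """One Euler step of a differential-drive odometry model.

    pose = (x, y, theta); wl, wr are the left/right wheel angular
    speeds. The wheel radii and the wheel separation are the intrinsic
    parameters a calibration like the thesis's would estimate; the
    specific model form here is a textbook sketch, not the thesis's
    pre-integrated formulation."""
    x, y, th = pose
    vl = wl * radius_l                  # left wheel linear speed
    vr = wr * radius_r                  # right wheel linear speed
    v = 0.5 * (vl + vr)                 # body linear velocity
    w = (vr - vl) / separation          # body angular velocity
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)
```

Small errors in the radii or separation bias every integrated pose, which is why estimating them online, as the thesis does, matters for long-term odometry accuracy.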

    Robust Wide-Baseline Stereo Matching for Sparsely Textured Scenes

    Get PDF
    The task of wide baseline stereo matching algorithms is to identify corresponding elements in pairs of overlapping images taken from significantly different viewpoints. Such algorithms are a key ingredient in many computer vision applications, including object recognition, automatic camera orientation, 3D reconstruction and image registration. Although today's methods for wide baseline stereo matching produce reliable results in typical application scenarios, they assume properties of the image data that are not always given, for example a significant amount of distinctive surface texture. For such problems, highly advanced algorithms have been proposed, which are often very problem-specific, difficult to implement and hard to transfer to new matching problems. The motivation for our work comes from the belief that we can find a generic formulation for robust wide baseline image matching that is able to solve difficult matching problems and at the same time is applicable to a variety of applications. It should be easy to implement and have good semantic interpretability. Therefore, our key contribution is the development of a generic statistical model for wide baseline stereo matching, which seamlessly integrates different types of image features, similarity measures and spatial feature relationships as information cues. It unifies the ideas of existing approaches into a Bayesian formulation, which has a clear statistical interpretation as the MAP estimate of a binary classification problem. The model ultimately takes the form of a global minimization problem that can be solved with standard optimization techniques. The particular type of features, measures, and spatial relationships, however, is not prescribed. A major advantage of our model over existing approaches is its ability to compensate weaknesses in one information cue implicitly by exploiting the strengths of others.
In our experiments we concentrate on images of sparsely textured scenes as a specifically difficult matching problem. Here the amount of stable image features is typically rather small, and the distinctiveness of feature descriptions often low. We use the proposed framework to implement a wide baseline stereo matching algorithm that can deal better with poor texture than established methods. For demonstrating the practical relevance, we also apply this algorithm to a system for automatic image orientation. Here, the task is to reconstruct the relative 3D positions and orientations of the cameras corresponding to a set of overlapping images. We show that our implementation leads to more successful results in the case of sparsely textured scenes, while still retaining state-of-the-art performance on standard datasets.

    Robust feature matching for image pairs of sparsely textured scenes with a significant stereo baseline: the task of wide baseline stereo matching algorithms is to identify corresponding elements in pairs of overlapping images taken from significantly different camera positions. Such algorithms are a fundamental building block for numerous computer vision applications such as object recognition, automatic camera orientation, 3D reconstruction and image registration. Today's established methods for wide baseline stereo matching work very reliably in typical application scenarios. However, they assume properties of the image data that are not always given, such as a high proportion of distinctive texture. For such cases, very complex methods have been developed, which, however, are often applicable only to very specific problems, require considerable implementation effort, and are moreover difficult to transfer to new matching problems. The motivation for this work arose from the conviction that there must be a generally applicable formulation for robust wide baseline stereo matching that is suitable for solving difficult correspondence problems and can nevertheless easily be adapted to diverse applications. It should be easy to implement and exhibit high semantic interpretability. Our main contribution is therefore the development of a general statistical model for wide baseline stereo matching that seamlessly integrates different types of image features, similarity measures and spatial relationships as information cues. It unifies ideas of existing approaches in a Bayesian formulation that has a clear interpretation as the MAP estimate of a binary classification problem. The model ultimately takes the form of a global minimization problem that can be solved with conventional optimization methods. The concrete type of image features, similarity measures and spatial relationships used is not explicitly prescribed. An important advantage of our model over comparable methods is its ability to implicitly compensate weaknesses of one information cue through the strengths of other information cues. In our experiments we concentrate in particular on images of sparsely textured scenes as an example of difficult correspondence problems. The number of stable image features is typically low here, and the distinctiveness of the feature descriptions poor. Based on the proposed model, we implement a concrete wide baseline stereo matching algorithm that handles weak texture better than established methods. To demonstrate the practical relevance, we apply the algorithm to automatic image orientation. Here the task is to determine, for a set of overlapping images, the relative 3D camera positions and orientations. We show that the algorithm yields better results in the case of sparsely textured scenes, while still delivering comparable results on standard datasets.
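The MAP-as-binary-classification view can be illustrated with a toy energy over candidate correspondences. The brute-force enumeration below is only for illustration (the thesis solves the global minimization with standard large-scale optimization techniques), and the cost values are made up:

```python
import itertools

def map_labeling(unary, pairwise):
    """Brute-force MAP estimate of a binary labeling (correspondence
    rejected = 0 / accepted = 1), minimizing a toy energy made of
    per-correspondence similarity costs plus pairwise costs that
    penalize spatially related correspondences receiving different
    labels. Illustrative of the formulation only.

    unary:    list of (cost_if_0, cost_if_1) per candidate match.
    pairwise: {(i, j): cost} charged when labels[i] != labels[j].
    """
    n = len(unary)
    best, best_e = None, float('inf')
    for labels in itertools.product((0, 1), repeat=n):
        e = sum(unary[i][labels[i]] for i in range(n))
        e += sum(c for (i, j), c in pairwise.items() if labels[i] != labels[j])
        if e < best_e:
            best, best_e = labels, e
    return best, best_e
```

The point of the formulation is visible even at this scale: a weak unary (similarity) cue can be overruled by strong pairwise (spatial-consistency) cues, which is exactly the cue-compensation behavior the abstract highlights.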

    Near Sensor Artificial Intelligence on IoT Devices for Smart Cities

    Get PDF
    The IoT is in continuous evolution thanks to new technologies that open the doors to various applications. While the structure of the IoT network remains the same over the years, specifically composed of a server, gateways, and nodes, their tasks change according to new challenges: the use of multimedia information and the large amount of data created by millions of devices force the system to move from the cloud-centric approach to the thing-centric approach, where nodes partially process the information. Computing at the sensor node level solves well-known problems like scalability and privacy concerns. However, this study's primary focus is on the impact that bringing the computation to the edge has on energy: continuous transmission of multimedia data drains the battery, while processing information on the node reduces the amount of data transferred to event-based alerts. Nevertheless, most of the foundational services for IoT applications are provided by AI. Due to the complexity of this class of algorithms, they are usually delegated to GPUs or to devices with an energy budget orders of magnitude larger than that of an IoT node, which should be energy-neutral and powered only by energy harvesters. Enabling AI on IoT nodes is a challenging task. On the software side, this work explores the most recent compression techniques for neural networks, enabling the reduction of state-of-the-art networks to make them fit in microcontroller systems. On the hardware side, this thesis focuses on hardware selection, comparing the efficiency of AI algorithms running on both well-established microcontrollers and state-of-the-art processors. An additional contribution towards energy-efficient AI is the exploration of hardware for the acquisition and pre-processing of sound data, analyzing the data quality for further classification. Moreover, the combination of software and hardware co-design is the key point of this thesis for bringing AI to the very edge of the IoT network.
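One of the compression techniques such work typically explores, weight quantization, can be sketched in its simplest post-training, symmetric-scale form. This is a generic illustration rather than any specific toolchain's scheme:

```python
import numpy as np

def quantize_int8(w):
    """Minimal symmetric post-training quantization sketch: map float32
    weights to int8 values plus one scale factor, shrinking storage by
    roughly 4x, as done when fitting networks onto microcontrollers."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference or accuracy checks."""
    return q.astype(np.float32) * scale
```

Real deployments refine this with per-channel scales, zero points, and quantization-aware training, but the storage/accuracy trade-off is already visible in this form.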

    Object Tracking

    Get PDF
    Object tracking consists in estimating the trajectory of moving objects in a sequence of images. Automating computer-based object tracking is a difficult task: changes in the dynamics of the multiple parameters representing the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, have to be considered. This monograph presents the development of object tracking algorithms, methods and systems. Both the state of the art of object tracking methods and the new trends in research are described in this book. Fourteen chapters are split into two sections. Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics contained in this monograph, it constitutes a consistent body of knowledge in the field of computer object tracking. The editor's intention was to follow up on the very quick progress in the development of methods as well as the extension of their applications.
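As a minimal example of the trajectory estimation the book surveys, a constant-velocity Kalman filter tracking a single object's 1D position can be sketched as follows. The noise values and measurement sequence are illustrative assumptions:

```python
import numpy as np

# Minimal constant-velocity Kalman tracker for one object in 1D
# (an illustrative baseline; the monograph's chapters cover far
# richer appearance models, multi-object association and occlusion
# handling).
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])               # we only measure position
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2)                            # initial covariance

for z in [1.0, 2.1, 2.9, 4.2, 5.0]:      # noisy positions of the object
    x, P = F @ x, F @ P @ F.T + Q        # predict
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)  # update with the measurement
    P = (np.eye(2) - K @ H) @ P
```

During occlusion, a tracker like this simply keeps predicting with the motion model and skips the update step, which is one of the basic strategies discussed in the tracking literature.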