    Context Exploitation in Data Fusion

    Complex and dynamic environments constitute a challenge for existing tracking algorithms. For this reason, modern solutions try to exploit any available information that could help to constrain, improve or explain the measurements. So-called Context Information (CI) is understood as information that surrounds an element of interest and whose knowledge may help in understanding the (estimated) situation and in reacting to it. However, context discovery and exploitation are still largely unexplored research topics. Until now, context has been exploited mainly as a parameter in system and measurement models, which has led to numerous approaches for linear or non-linear constrained estimation and target tracking. More specifically, spatial or static context is the most common source of ambient information, i.e. features, used to recursively enhance the state variables in either the prediction or the measurement update of the filters. In the case of multiple-model estimators, context can be related not only to the state but also to a particular mode of the filter. Common practice for multiple-model scenarios is to represent states and context as a joint distribution of Gaussian mixtures; these approaches are commonly referred to as joint tracking and classification. Alternatively, the usefulness of context has also been demonstrated in aiding measurement data association. The process of formulating a hypothesis that assigns a particular measurement to a track is traditionally governed by empirical knowledge of the noise characteristics of the sensors and the operating environment, i.e. probability of detection, false alarms and clutter noise, which can be further refined by conditioning on context. We believe that interactions between the environment and the object can be classified into actions, activities and intents, and formed into structured graphs whose contextual links are translated into arcs. By learning the environment model we will be able to predict the target's future actions based on its past observations. The probability of a target's future action could then be used in the fusion process to adjust the tracker's confidence in its measurements. By incorporating contextual knowledge of the environment, in the form of a likelihood function, into the filter measurement update step, we have been able to reduce the uncertainty of the tracking solution and improve the consistency of the track. The promising results demonstrate that the fusion of CI brings a significant performance improvement compared to regular tracking approaches.
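    As a minimal sketch of the last point, the snippet below folds a context likelihood into the measurement update of a particle filter; the `measurement_likelihood` and `context_likelihood` callables are hypothetical placeholders rather than functions from the work described above.

```python
import numpy as np

def context_aware_update(particles, weights, z, measurement_likelihood, context_likelihood):
    """Particle-filter measurement update with Context Information (CI).

    Each particle weight is scaled by the sensor likelihood p(z | x) and by a
    context likelihood p(c | x), e.g. how plausible the hypothesised state is
    given a road network or terrain model.
    """
    for i, x in enumerate(particles):
        weights[i] *= measurement_likelihood(z, x) * context_likelihood(x)
    return weights / weights.sum()  # renormalise to a proper distribution

# Toy usage: 1D position state, Gaussian range sensor, context favouring x >= 0.
particles = np.random.randn(500)
weights = np.full(500, 1.0 / 500)
meas = lambda z, x: np.exp(-0.5 * ((z - x) / 0.2) ** 2)
ctx = lambda x: 1.0 if x >= 0 else 0.1
weights = context_aware_update(particles, weights, 0.3, meas, ctx)
```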

    A Review of the Bayesian Occupancy Filter

    Autonomous vehicle systems are currently the object of intense research within the scientific and industrial communities; however, many problems remain to be solved. One of the most critical aspects addressed in both autonomous driving and robotics is environment perception, i.e. the ability to understand the surroundings of the vehicle in order to estimate risks and make decisions on future movements. In recent years, the Bayesian Occupancy Filter (BOF) method has been developed to evaluate occupancy through a tessellation of the environment. A review of the BOF and its variants is presented in this paper. Moreover, we propose a detailed taxonomy in which the BOF is decomposed into five progressive layers, from the level closest to the sensor up to the most abstract level of risk assessment. In addition, we present a study of implemented use cases to provide a practical understanding of the main uses of the BOF and its taxonomy. This work has been funded by the Spanish Ministry of Economy and Competitiveness along with the European Structural and Investment Funds in the National Project TCAP-AUTO (RTC-2015-3942-4) in the programme “Retos Colaboración 2014”.
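    For orientation, the sketch below shows the per-cell binary Bayes occupancy update that underlies grid-based methods such as the BOF; the full BOF additionally propagates a velocity distribution per cell, which is omitted here, and the inverse sensor model values are illustrative assumptions.

```python
import numpy as np

def update_occupancy(prior, p_z_given_occ, p_z_given_free):
    """Binary Bayes update of a grid of occupancy probabilities.

    prior          : array of prior P(occupied) per cell
    p_z_given_occ  : likelihood of the current measurement if the cell is occupied
    p_z_given_free : likelihood of the current measurement if the cell is free
    """
    num = p_z_given_occ * prior
    return num / (num + p_z_given_free * (1.0 - prior))

grid = np.full((4, 4), 0.5)              # uninformative prior
grid = update_occupancy(grid, 0.7, 0.2)  # one measurement supporting "occupied"
```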

    Real-time people tracking in a camera network

    Visual tracking is fundamental to the recognition and analysis of human behaviour. In this thesis we present an approach to track several subjects using multiple cameras in real time. The tracking framework employs a numerical Bayesian estimator, also known as a particle filter, which has been developed for parallel implementation on a Graphics Processing Unit (GPU). In order to integrate multiple cameras into a single tracking unit we represent the human body by a parametric ellipsoid in a 3D world. The elliptical boundary can be projected rapidly, several hundred times per subject per frame, onto any image for comparison with the image data within a likelihood model. By adding variables that encode visibility and persistence to the state vector, we tackle the problems of distraction and short-period occlusion. However, subjects may also disappear for longer periods due to blind spots between the cameras' fields of view. To recognise a desired subject after such a long period, we add coloured texture to the ellipsoid surface, which is learnt and retained during the tracking process. This texture signature improves the recall rate from 60% to 70-80% compared to state-only data association. Compared to a standard Central Processing Unit (CPU) implementation, a significant speed-up is achieved.
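    As a rough illustration of the projection step described above (not the GPU implementation from the thesis), the snippet below projects a 3D point, e.g. the centre of the tracked ellipsoid, into a camera image with a standard pinhole model; the intrinsics and pose are made-up values.

```python
import numpy as np

def project_point(K, R, t, X):
    """Project a 3D world point X into pixel coordinates with a pinhole camera.
    K is the 3x3 intrinsic matrix, (R, t) the world-to-camera rotation and translation."""
    x_cam = R @ X + t        # world frame -> camera frame
    uvw = K @ x_cam          # camera frame -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]  # perspective division

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])                  # camera 4 m from the origin
print(project_point(K, R, t, np.array([0.2, -0.1, 1.0])))    # -> [352. 224.]
```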

    Laser-Based Detection and Tracking of Moving Obstacles to Improve Perception of Unmanned Ground Vehicles

    The goal of this thesis is to develop a system that improves the perception stage of heterogeneous unmanned ground vehicles (UGVs), thereby achieving navigation that is robust in terms of safety and energy efficiency in different real environments, both indoors and outdoors. Perception must deal with static and dynamic obstacles using heterogeneous sensors, such as odometry, laser range finder (LIDAR), inertial measurement unit (IMU) and global positioning system (GPS), in order to obtain environmental information with the highest possible accuracy and thus improve the planning and obstacle-avoidance stages. To achieve this goal, a dynamic obstacle mapping stage (DOMap) is proposed that contains the information of both static and dynamic obstacles. The proposal is based on an extension of the Bayesian Occupancy Filter (BOF) that includes non-discretised velocities. Velocities are detected with optical flow over a grid of discretised LIDAR measurements. In addition, occlusions between obstacles are handled and a multi-hypothesis tracking stage is added, improving the robustness of the proposal (iDOMap). The proposal has been tested in simulated and real environments with different robotic platforms, including commercial platforms and the platform (PROPINA) developed in this thesis to improve collaboration between teams of humans and robots within the ABSYNTHE project. Finally, methods have been proposed to calibrate the LIDAR position and to improve the odometry with an IMU.
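    A toy sketch of the per-cell velocity estimation idea, using simple block matching between two consecutive occupancy grids as a stand-in for the optical-flow computation over discretised LIDAR measurements; the window, search range, cell size and time step are illustrative assumptions.

```python
import numpy as np

def cell_velocity(prev_grid, curr_grid, i, j, win=1, search=2, dt=0.1, cell_size=0.2):
    """Estimate the velocity of the patch around cell (i, j) by finding the
    displacement that best matches it between two consecutive occupancy grids."""
    patch = prev_grid[i - win:i + win + 1, j - win:j + win + 1]
    best_err, best_d = np.inf, (0, 0)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            cand = curr_grid[i + di - win:i + di + win + 1, j + dj - win:j + dj + win + 1]
            err = np.sum((patch - cand) ** 2)
            if err < best_err:
                best_err, best_d = err, (di, dj)
    return np.array(best_d) * cell_size / dt   # cells per frame -> metres per second

prev = np.zeros((20, 20)); prev[10, 10] = 1.0
curr = np.zeros((20, 20)); curr[11, 10] = 1.0
print(cell_velocity(prev, curr, 10, 10))       # -> [2. 0.] m/s along the first axis
```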

    Contributions to Localization, Mapping and Navigation in Mobile Robotics

    This thesis focuses on the problem of enabling mobile robots to autonomously build world models of their environments and to employ them as a reference for self-localization and navigation. For mobile robots to become truly autonomous and useful, they must be able to move reliably to the locations required by their tasks. This simple requirement gives rise to countless problems that have populated research in the mobile robotics community for the last two decades. Among these issues, two of the most relevant are: (i) secure autonomous navigation, that is, moving to a target while avoiding collisions, and (ii) the employment of an adequate world model for robot self-referencing within the environment and for locating places of interest. The present thesis introduces several contributions to both research fields. Among them is a novel approach to extend SLAM to large-scale scenarios by means of a seamless integration of geometric and topological map building in a probabilistic framework that estimates the hybrid metric-topological (HMT) state space of the robot path. The proposed framework unifies the research areas of topological mapping, reasoning on topological maps and metric SLAM, and also provides a natural integration of SLAM and the “robot awakening” problem. Other contributions of this thesis cover a wide variety of topics, such as optimal estimation in particle filters, a new probabilistic observation model for laser scanners based on consensus theory, a novel measure of the uncertainty in grid mapping, an efficient method for range-only SLAM, a grounded method for partitioning large maps into submaps, a multi-hypothesis approach to grid map matching, and a mathematical framework for extending simple obstacle avoidance methods to realistic robots.
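    As a point of reference for the grid-map uncertainty topic mentioned above, the snippet below computes the classical entropy of an occupancy grid; the thesis proposes its own measure, so this is only the textbook baseline.

```python
import numpy as np

def grid_entropy(p_occ, eps=1e-12):
    """Mean binary entropy (bits) of an occupancy grid: 1.0 means every cell is
    fully unknown, 0.0 means every cell is certainly free or certainly occupied."""
    p = np.clip(p_occ, eps, 1.0 - eps)
    h = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
    return float(h.mean())

print(grid_entropy(np.array([[0.5, 0.5], [0.99, 0.01]])))   # ~0.54 bits
```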

    Fusion of heterogenous sensor data in border surveillance

    Wide-area surveillance has become of critical importance, particularly for border control between countries where vast forested land border areas are to be monitored. In this paper we address the problem of automatically detecting activity in forbidden areas, namely forested land border areas. In order to avoid the false detections often triggered in dense vegetation by single sensors such as radar, we present a multi-sensor fusion and tracking system that uses passive infrared detectors in combination with automatic person detection from thermal and visual video camera images. The approach combines weighted maps with a rule engine that associates data from multiple weighted maps. The proposed approach is tested on real data collected by the EU FOLDOUT project in a location representative of a range of forested EU borders. The results show that the proposed approach can eliminate single-sensor false detections and enhance accuracy by up to 50%.
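    A minimal sketch of the weighted-map idea described above; the sensor weights, grid size and alarm rule are illustrative assumptions, not the FOLDOUT configuration.

```python
import numpy as np

def fuse_weighted_maps(maps, weights):
    """Combine per-sensor detection maps (values in [0, 1]) into one weighted evidence map."""
    fused = sum(w * m for w, m in zip(weights, maps))
    return fused / sum(weights)

def alarm_rule(fused, maps, threshold=0.6, min_sensors=2):
    """Raise an alarm where the fused evidence is high AND at least `min_sensors`
    individual sensors report something at that cell (suppresses single-sensor false alarms)."""
    support = sum((m > 0.5).astype(int) for m in maps)
    return (fused > threshold) & (support >= min_sensors)

pir     = np.array([[0.0, 0.9], [0.0, 0.0]])   # passive infrared detections
thermal = np.array([[0.2, 0.8], [0.0, 0.1]])   # person detector on thermal video
visual  = np.array([[0.1, 0.7], [0.6, 0.0]])   # person detector on visual video
fused = fuse_weighted_maps([pir, thermal, visual], [1.0, 1.5, 1.0])
print(alarm_rule(fused, [pir, thermal, visual]))   # only the cell supported by several sensors fires
```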

    Underwater localization using imaging sonars in 3D environments

    This work proposes a localization method using a mechanically scanned imaging sonar (MSIS), which stands out for its low cost and weight. The proposed method implements a particle filter, a Bayesian estimator, and introduces a measurement model based on sonar simulation theory. To the best of the authors' knowledge, there is no similar approach in the literature, as current sonar simulation methods target synthetic data generation, mostly for object recognition. This stands as the major contribution of the thesis, as it allows the computation of the intensity values provided by imaging sonars while maintaining compatibility with already established methods, such as range extraction. Simulations show the efficiency of the method as well as the viability of using imaging sonars for underwater localization. Under certain constraints, the new approach makes it possible to extract 3D information from a sensor regarded in the literature as 2D, and to operate in situations where there is no reference in the same horizontal plane as the transducer's scanning axis. Localization in complex 3D environments is a further advantage of the proposed method.
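    A simplified sketch of this kind of measurement model: for each particle, a simulated intensity profile of the sonar beam is compared with the measured one. The toy simulator below is a made-up stand-in, not the sonar simulation developed in the thesis.

```python
import numpy as np

def intensity_likelihood(measured, simulated, sigma=0.1):
    """Likelihood of a measured sonar intensity profile given the profile simulated
    from a particle's pose (independent Gaussian noise per range bin)."""
    return float(np.exp(-0.5 * np.sum(((measured - simulated) / sigma) ** 2)))

def toy_simulator(pose, n_bins=50, wall_x=5.0, bin_size=0.2):
    """Stand-in simulator: a single intensity peak at the range of a wall at x = wall_x,
    as seen from a pose whose first component is the x position."""
    profile = np.zeros(n_bins)
    hit_bin = int((wall_x - pose[0]) / bin_size)
    if 0 <= hit_bin < n_bins:
        profile[hit_bin] = 1.0
    return profile

measured = toy_simulator(np.array([1.0, 0.0]))                               # pretend measurement
print(intensity_likelihood(measured, toy_simulator(np.array([1.0, 0.0]))))  # high: pose matches
print(intensity_likelihood(measured, toy_simulator(np.array([2.0, 0.0]))))  # low: pose is wrong
```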

    Concept of a Robust & Training-free Probabilistic System for Real-time Intention Analysis in Teams

    This work deals with the analysis of team intentions in smart environments (SE). Its fundamental claim is that the development and integration of explicit models of user tasks can make an important contribution to the development of mobile and ubiquitous software systems. The work collects descriptions of human behaviour in both group situations and problem-solving situations. It examines how SE projects model a user's activities and provides a team intention model for inferring and selecting planned team activities from the observation of multiple users through noisy and heterogeneous sensors. To this end, an approach based on hierarchical dynamic Bayesian networks is chosen.
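    To make the inference idea concrete, here is a single forward-filtering step over hidden team activities given a noisy observation, i.e. one slice of a (non-hierarchical) dynamic Bayesian network; the activity set, transition matrix and sensor model are purely illustrative.

```python
import numpy as np

activities = ["presenting", "discussing", "break"]
T = np.array([[0.80, 0.15, 0.05],   # P(next activity | current activity), rows = current activity
              [0.20, 0.70, 0.10],
              [0.10, 0.30, 0.60]])
O = np.array([[0.7, 0.2, 0.1],      # P(sensor symbol | activity), rows = activity
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

def forward_step(belief, obs_symbol):
    """One predict-update step: propagate the belief through the transition model,
    then weight it by the likelihood of the observed sensor symbol."""
    predicted = T.T @ belief
    updated = O[:, obs_symbol] * predicted
    return updated / updated.sum()

belief = np.array([1 / 3, 1 / 3, 1 / 3])        # initially unknown team activity
belief = forward_step(belief, obs_symbol=1)     # sensors suggest "discussing"
print(dict(zip(activities, np.round(belief, 2))))
```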

    Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected Works), Vol. 4

    The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics. The contributions (see the list of articles published in this book, at the end of the volume) have been published or presented after the dissemination of the third volume (2009, http://fs.unm.edu/DSmT-book3.pdf) in international conferences, seminars, workshops and journals. The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importance and reliabilities, importance of sources, the pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, the 2-tuple linguistic label, the Electre Tri method, hierarchical proportional redistribution, basic belief assignments, subjective probability measures, the Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer Theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others. More applications of DSmT have emerged in the years since the appearance of the third DSmT book in 2009. The second part of this volume is therefore about applications of DSmT in connection with Electronic Support Measures, belief functions, sensor networks, ground moving target and multiple target tracking, Vehicle-Borne Improvised Explosive Devices, the Belief Interacting Multiple Model filter, seismic and acoustic sensors, Support Vector Machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, Dynamic Data-Driven Application Systems, the adjustment of secure communication trust analysis, and so on. Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, in chronological order.
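    For readers new to DSmT, the sketch below implements one of the combination rules discussed in the volume, the PCR5 rule for two sources over a small frame; focal elements are represented as frozensets and the numerical masses are a toy example.

```python
from itertools import product

def pcr5_combine(m1, m2):
    """Combine two basic belief assignments with the PCR5 rule: conjunctive consensus
    on compatible focal elements, then proportional redistribution of each partial
    conflict back to the two elements involved in it."""
    out = {}
    for (x, a), (y, b) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            out[inter] = out.get(inter, 0.0) + a * b           # consensus mass
        elif a + b > 0:
            out[x] = out.get(x, 0.0) + a * a * b / (a + b)     # x's share of the conflict a*b
            out[y] = out.get(y, 0.0) + a * b * b / (a + b)     # y's share of the conflict a*b
    return out

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.3, A | B: 0.1}
m2 = {A: 0.2, B: 0.3, A | B: 0.5}
print(pcr5_combine(m1, m2))   # masses: A -> 0.584, B -> 0.366, A|B -> 0.05
```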

    Machine Learning in Robotic Navigation: Deep Visual Localization and Adaptive Control

    The work conducted in this thesis contributes to the field of robotic navigation by focusing on different machine learning solutions: supervised learning with (deep) neural networks, unsupervised learning, and reinforcement learning. First, we propose a semi-supervised machine learning approach that can dynamically update the robot controller's parameters using situational analysis through feature extraction and unsupervised clustering. The results show that the robot can adapt to changes in its surroundings, resulting in a thirty percent improvement in navigation speed and stability. Then, we train multiple deep neural networks to estimate the robot's position in the environment using ground truth information provided by a classical localization and mapping approach. We prepare two image-based localization datasets in 3D simulation and compare the results of a traditional multilayer perceptron, a stacked denoising autoencoder, and a convolutional neural network (CNN). The experimental results show that our proposed Inception-based CNNs without pooling layers perform very well in all the environments. Finally, we propose a two-stage learning framework for visual navigation in which the experience gathered by the agent while exploring one goal is shared to learn to navigate to other goals. The multi-goal Q-function learns to traverse the environment using the provided discretized map. Transfer learning is applied to the multi-goal Q-function from a maze structure to a 2D simulator, and the function is finally deployed in a 3D simulator where the robot uses the estimated locations from the deep CNN position estimator. The results show a significant improvement when multi-goal reinforcement learning is used.
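    A minimal sketch of a goal-conditioned (multi-goal) tabular Q-learning update in the spirit of the framework above; the state, action and goal sizes and the hyper-parameters are illustrative, and the experience sharing across goals described in the thesis is not reproduced here.

```python
import numpy as np

n_states, n_actions, n_goals = 25, 4, 3
Q = np.zeros((n_states, n_actions, n_goals))   # goal-conditioned Q-table Q[s, a, g]

def q_update(Q, s, a, r, s_next, g, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step for the goal-conditioned table Q[s, a, g]."""
    td_target = r + gamma * Q[s_next, :, g].max()
    Q[s, a, g] += alpha * (td_target - Q[s, a, g])
    return Q

# Example transition: from state 3 the agent took action 1 while pursuing goal 0,
# reached state 4 and received a small step penalty.
Q = q_update(Q, s=3, a=1, r=-0.01, s_next=4, g=0)
```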