
    Context Exploitation in Data Fusion

    Complex and dynamic environments pose a challenge for existing tracking algorithms. For this reason, modern solutions try to exploit any available information that could help to constrain, improve or explain the measurements. So-called Context Information (CI) is understood as information that surrounds an element of interest and whose knowledge may help in understanding the (estimated) situation and in reacting to it. However, context discovery and exploitation are still largely unexplored research topics. Until now, context has been exploited mainly as a parameter in system and measurement models, which has led to numerous approaches for linear and non-linear constrained estimation and target tracking. More specifically, spatial or static context is the most common source of ambient information, i.e. features, used to recursively enhance the state variables in either the prediction or the measurement update of the filters. In the case of multiple-model estimators, context can be related not only to the state but also to a particular mode of the filter. Common practice in multiple-model scenarios is to represent states and context as a joint distribution of Gaussian mixtures; these approaches are commonly referred to as joint tracking and classification. Alternatively, the usefulness of context has also been demonstrated in aiding measurement data association. The process of formulating a hypothesis that assigns a particular measurement to a track is traditionally governed by empirical knowledge of the noise characteristics of the sensors and the operating environment, i.e. probability of detection, false-alarm rate and clutter noise, and it can be further enhanced by conditioning on context. We believe that interactions between the environment and the object can be classified into actions, activities and intents, and formed into structured graphs with contextual links translated into arcs. By learning the environment model we will be able to predict the target's future actions from its past observations. The probability of a future target action can be used in the fusion process to adjust the tracker's confidence in the measurements. By incorporating contextual knowledge of the environment, in the form of a likelihood function, in the filter measurement-update step, we have been able to reduce the uncertainties of the tracking solution and improve the consistency of the track. The promising results demonstrate that the fusion of CI brings a significant performance improvement compared to conventional tracking approaches.
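
    The measurement-update idea described above can be sketched concretely. The following is a minimal, hypothetical illustration, not the authors' implementation: a particle-filter measurement update in which a context likelihood, here an assumed "stay on the road" model derived from a binary road mask, re-weights the particles alongside the sensor likelihood.

        import numpy as np

        def context_likelihood(particles, road_mask, cell_size=1.0, off_road_prob=0.1):
            # Assumed context model: targets are much more likely to stay on the road.
            ix = (particles[:, 0] / cell_size).astype(int).clip(0, road_mask.shape[0] - 1)
            iy = (particles[:, 1] / cell_size).astype(int).clip(0, road_mask.shape[1] - 1)
            return np.where(road_mask[ix, iy], 1.0, off_road_prob)

        def measurement_update(particles, weights, z, meas_std, road_mask):
            # Sensor likelihood p(z|x) times context likelihood p(c|x), then renormalise.
            d = np.linalg.norm(particles[:, :2] - z, axis=1)
            sensor_lik = np.exp(-0.5 * (d / meas_std) ** 2)
            weights = weights * sensor_lik * context_likelihood(particles, road_mask)
            return weights / weights.sum()

        # Usage: 500 particles in a 20 m x 20 m area with a 4 m wide road corridor.
        rng = np.random.default_rng(0)
        particles = rng.uniform(0.0, 20.0, size=(500, 2))
        weights = np.full(500, 1.0 / 500)
        road_mask = np.zeros((20, 20), dtype=bool)
        road_mask[:, 8:12] = True
        weights = measurement_update(particles, weights, z=np.array([5.0, 10.0]),
                                     meas_std=1.0, road_mask=road_mask)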

    Underwater mapping using a SONAR

    This study explores the physical capabilities of the beam of a mechanical scanning imaging sonar to build a map of the surrounding environment and to find the location of the vehicle within that map. The data from the sonar feed into a feature extractor and a SLAM algorithm. The SLAM algorithm is composed of an Octomaps implementation together with a particle filter. Several tests were run within a structured environment, and the resulting map of that environment as well as the location of the vehicle are presented.
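
    As a hedged illustration of the kind of update such a pipeline performs (this is not the thesis code, and the parameters are assumptions), the sketch below applies a log-odds occupancy update along a single sonar beam on a 2-D grid, the planar analogue of the Octomap update: cells along the beam are marked free and the cell at the returned range is marked occupied.

        import numpy as np

        L_FREE, L_OCC = -0.4, 0.85   # assumed log-odds increments for free/occupied cells

        def update_beam(log_odds, pose, bearing, range_m, cell=0.1, max_range=20.0):
            # Ray-march from the sonar pose along the beam direction, one cell at a time.
            x, y, yaw = pose
            n = int(min(range_m, max_range) / cell)
            for k in range(n + 1):
                r = k * cell
                cx = int((x + r * np.cos(yaw + bearing)) / cell)
                cy = int((y + r * np.sin(yaw + bearing)) / cell)
                if not (0 <= cx < log_odds.shape[0] and 0 <= cy < log_odds.shape[1]):
                    break
                hit = range_m < max_range and k == n    # echo returned at the last cell
                log_odds[cx, cy] += L_OCC if hit else L_FREE
            return log_odds

        # Usage: a 20 m x 20 m grid at 0.1 m resolution, one beam at 30 degrees
        # returning an echo at 4.2 m from a vehicle at (10, 10).
        grid = np.zeros((200, 200))
        grid = update_beam(grid, pose=(10.0, 10.0, 0.0),
                           bearing=np.deg2rad(30.0), range_m=4.2)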

    Active Mapping and Robot Exploration: A Survey

    Simultaneous localization and mapping addresses the problem of building a map of the environment without any prior information, based on the data obtained from one or more sensors. In most situations the robot is driven by a human operator, but some systems are capable of navigating autonomously while mapping, which is called active simultaneous localization and mapping. This strategy focuses on actively computing trajectories that explore the environment while building a map with minimum error. In this paper, a comprehensive review of the research work developed in this field is provided, targeting the most relevant contributions in indoor mobile robotics. This research was funded by the ELKARTEK project ELKARBOT KK-2020/00092 of the Basque Government.
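
    As an illustration of the exploration objective the survey covers (the survey itself proposes no single algorithm; the utility below is a generic, assumed formulation), an active-SLAM planner typically picks the next goal by trading expected map information gain against travel cost.

        import numpy as np

        def cell_entropy(p):
            # Binary entropy of occupancy probabilities, a common map-uncertainty measure.
            p = np.clip(p, 1e-6, 1 - 1e-6)
            return -(p * np.log(p) + (1 - p) * np.log(1 - p))

        def choose_goal(occ_prob, candidates, robot_xy, radius=5, cost_weight=0.2):
            # Utility = entropy around the candidate (information-gain proxy) - weighted travel cost.
            best, best_u = None, -np.inf
            for cx, cy in candidates:
                window = occ_prob[max(cx - radius, 0):cx + radius,
                                  max(cy - radius, 0):cy + radius]
                gain = cell_entropy(window).sum()
                cost = np.hypot(cx - robot_xy[0], cy - robot_xy[1])
                u = gain - cost_weight * cost
                if u > best_u:
                    best, best_u = (cx, cy), u
            return best

        # Usage: an entirely unknown map (p = 0.5 everywhere) and three candidate frontier cells.
        grid = np.full((100, 100), 0.5)
        goal = choose_goal(grid, [(20, 20), (80, 30), (50, 90)], robot_xy=(10, 10))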

    Robot Mapping with Real-Time Incremental Localization Using Expectation Maximization

    This research effort explores and develops a real-time sonar-based robot mapping and localization algorithm that provides pose correction within the context of a single room, to be combined with pre-existing global localization techniques and thus produce a single, well-formed map of an unknown environment. Our algorithm implements an expectation maximization algorithm based on the notion of the alpha-beta functions of a Hidden Markov Model. It performs a forward alpha calculation as an integral component of the occupancy grid mapping procedure, using local maps in place of a single global map, and a backward beta calculation that considers the prior local map, a limited step that enables real-time processing. Real-time localization is an extremely difficult task that continues to be the focus of much research in the field, and most advances in localization have been achieved in an off-line context. The results of our research into and implementation of real-time localization showed limited success, generating improved maps in a number of cases but not all, reflecting the trade-off between real-time and off-line processing. However, we believe there is ample room for extension to our approach that promises a more consistently successful real-time localization algorithm.
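
    To make the alpha-beta machinery concrete, here is a hedged sketch, not the thesis implementation (which restricts the backward pass to the prior local map): a standard forward-backward pass over a small discrete set of pose-offset hypotheses, where obs_lik[t, s] would come from matching the sonar scan at time t against the local occupancy grid under offset hypothesis s.

        import numpy as np

        def forward_backward(obs_lik, trans, prior):
            T, S = obs_lik.shape
            alpha = np.zeros((T, S))
            beta = np.ones((T, S))
            alpha[0] = prior * obs_lik[0]
            alpha[0] /= alpha[0].sum()
            for t in range(1, T):                       # forward (alpha) pass
                alpha[t] = obs_lik[t] * (alpha[t - 1] @ trans)
                alpha[t] /= alpha[t].sum()
            for t in range(T - 2, -1, -1):              # backward (beta) pass
                beta[t] = trans @ (obs_lik[t + 1] * beta[t + 1])
                beta[t] /= beta[t].sum()
            gamma = alpha * beta                        # smoothed offset posterior
            return gamma / gamma.sum(axis=1, keepdims=True)

        # Usage: 4 time steps, 3 pose-offset hypotheses, made-up scan-match likelihoods.
        rng = np.random.default_rng(1)
        obs = rng.uniform(0.1, 1.0, size=(4, 3))
        trans = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
        post = forward_backward(obs, trans, prior=np.full(3, 1.0 / 3))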

    Multiple Integrated Navigation Sensors for Improving Occupancy Grid FastSLAM

    An autonomous vehicle must accurately observe its location within the environment to interact with objects and accomplish its mission. When its environment is unknown, the vehicle must construct a map detailing its surroundings while using that map to maintain an accurate location. Such a vehicle faces the circularly defined Simultaneous Localization and Mapping (SLAM) problem. However difficult, SLAM is a critical component of autonomous vehicle exploration, with applications to search and rescue. To the best of current knowledge, this research presents the first SLAM solution to integrate stereo cameras, inertial measurements, and vehicle odometry into a Multiple Integrated Navigation Sensor (MINS) path. The implementation combines the MINS path with LIDAR to observe and map the environment using the FastSLAM algorithm. In real-world tests, a mobile ground vehicle equipped with these sensors completed a 140-meter loop around indoor hallways. This SLAM solution produces a path that closes the loop and remains within 1 meter of truth, reducing the error by 92% relative to an image-inertial navigation system and by 79% relative to odometry FastSLAM.
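
    The role the fused path plays in the filter can be sketched as follows. This is a hedged illustration, not the thesis code; mins_delta and map_match are hypothetical names. The fused navigation increment drives the particle proposal, while a LIDAR map-match score against each particle's own occupancy grid supplies the weight.

        import numpy as np

        rng = np.random.default_rng(2)

        def map_match(scan, grid, pose):
            # Placeholder likelihood; a real system correlates the LIDAR scan with the grid.
            return 1.0

        def fastslam_step(poses, weights, maps, mins_delta, scan, motion_std=(0.05, 0.05, 0.01)):
            # Proposal: apply the fused (MINS) pose increment with per-particle noise.
            poses = poses + mins_delta + rng.normal(0.0, motion_std, size=poses.shape)
            # Weighting: likelihood of the LIDAR scan given each particle's map and pose.
            weights = weights * np.array([map_match(scan, m, p) for m, p in zip(maps, poses)])
            weights = weights / weights.sum()
            # Resample when the effective sample size falls below half the particle count.
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
                idx = rng.choice(len(weights), size=len(weights), p=weights)
                poses = poses[idx]
                maps = [maps[i] for i in idx]
                weights = np.full(len(weights), 1.0 / len(weights))
            return poses, weights, maps

        # Usage: 100 particles with (x, y, yaw) poses and per-particle occupancy grids.
        poses = np.zeros((100, 3))
        weights = np.full(100, 1.0 / 100)
        maps = [np.zeros((200, 200)) for _ in range(100)]
        poses, weights, maps = fastslam_step(poses, weights, maps,
                                             mins_delta=np.array([0.1, 0.0, 0.01]),
                                             scan=None)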