12 research outputs found

    Consistent Map Building Based on Sensor Fusion for Indoor Service Robot

    Get PDF

    A Bayesian framework for optimal motion planning with uncertainty

    Get PDF
    Modeling robot motion planning with uncertainty in a Bayesian framework leads to a computationally intractable stochastic control problem. We seek hypotheses that can justify a separate implementation of control, localization, and planning. In the end, we reduce the stochastic control problem to path planning in the extended space of poses × covariances; the transitions between states are modeled through the use of the Fisher information matrix. In this framework, we consider two problems: minimizing the execution time, and minimizing the final covariance with an upper bound on the execution time. Two correct and complete algorithms are presented. The first is the direct extension of classical graph-search algorithms in the extended space. The second is a back-projection algorithm: uncertainty constraints are propagated backward from the goal towards the start state.
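    As a rough illustration of the first algorithm, here is a minimal sketch (all names hypothetical, not the authors' implementation) of a graph search in the extended pose-covariance space, where each transition fuses an assumed Fisher information gain. Pruning on the covariance trace alone is a simplification of the dominance test a complete implementation would need:

```python
import heapq
import itertools

import numpy as np

def propagate_covariance(cov, motion_noise, fisher_info):
    """Grow the covariance by the motion noise, then fuse the Fisher
    information contributed by measurements at the new pose."""
    predicted = cov + motion_noise
    return np.linalg.inv(np.linalg.inv(predicted) + fisher_info)

def plan_min_final_covariance(start, goal, neighbors, fisher_at,
                              motion_noise, time_budget):
    """Dijkstra-style search over (pose, covariance) states, minimizing
    the trace of the final covariance under an execution-time budget."""
    tie = itertools.count()                 # tiebreaker for the heap
    init_cov = 0.01 * np.eye(2)
    frontier = [(np.trace(init_cov), 0, next(tie), start, init_cov)]
    best = {}
    while frontier:
        cost, t, _, pose, cov = heapq.heappop(frontier)
        if pose == goal:
            return cov
        # Simplified dominance check: keep only the lowest-trace visit.
        if t >= time_budget or best.get(pose, np.inf) <= cost:
            continue
        best[pose] = cost
        for nxt in neighbors(pose):
            new_cov = propagate_covariance(cov, motion_noise, fisher_at(nxt))
            heapq.heappush(frontier,
                           (np.trace(new_cov), t + 1, next(tie), nxt, new_cov))
    return None  # no path meets the time budget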

    Learning-based Localizability Estimation for Robust LiDAR Localization

    Full text link
    LiDAR-based localization and mapping is one of the core components in many modern robotic systems due to the direct integration of range and geometry, allowing for precise motion estimation and generation of high-quality maps in real time. Yet, as a consequence of insufficient environmental constraints present in the scene, this dependence on geometry can result in localization failure in self-symmetric surroundings such as tunnels. This work addresses precisely this issue by proposing a neural network-based estimation approach for detecting (non-)localizability during robot operation. Special attention is given to the localizability of scan-to-scan registration, as it is a crucial component in many LiDAR odometry estimation pipelines. In contrast to previous, mostly traditional detection approaches, the proposed method enables early detection of failure by estimating the localizability on raw sensor measurements without evaluating the underlying registration optimization. Moreover, previous approaches remain limited in their ability to generalize across environments and sensor types, as heuristic tuning of degeneracy detection thresholds is required. The proposed approach avoids this problem by learning from a collection of different environments, allowing the network to function over various scenarios. Furthermore, the network is trained exclusively on simulated data, avoiding arduous data collection in challenging and degenerate, often hard-to-access, environments. The presented method is tested during field experiments conducted across challenging environments and on two different sensor types without any modifications. The observed detection performance is on par with state-of-the-art methods after environment-specific threshold tuning. (8 pages, 7 figures, 4 tables)
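    The geometric signal such a detector learns can be illustrated with a drastically simplified stand-in (the actual method is a neural network on raw measurements; the names and the logistic model below are assumptions for illustration only):

```python
import numpy as np

def scan_constraint_features(normals):
    """Summarize how well a raw scan constrains translation: the eigenvalues
    of the surface-normal covariance reveal degenerate directions (in a
    tunnel, few normals oppose motion along the tunnel axis)."""
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    constraint = unit.T @ unit / len(unit)      # 3x3 constraint matrix
    return np.linalg.eigvalsh(constraint)       # ascending eigenvalues

def localizability_score(features, weights, bias):
    """Tiny logistic model standing in for the paper's learned detector:
    a small eigenvalue pushes the score towards 'not localizable'."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
```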

    A Power-Performance Approach to Comparing Sensor Families, with application to comparing neuromorphic to traditional vision sensors

    Full text link
    There is considerable freedom in choosing the sensors to be equipped on a robot. Currently many sensing technologies are available (radar, lidar, vision sensors, time-of-flight cameras, etc.). For each class, there are additional choices regarding the exact sensor parameters (spatial resolution, frame rate, etc.). Which sensor is best? In general, this question needs to be qualified: it depends on the task. In an estimation task, the answer depends on the prior for the signal. In a control task, the answer depends exactly on which are the sufficient statistics for computing the control signal. This paper shows that a further qualification needs to be made: the answer depends on the power available for sensing, even when the task is fixed. We define the “power-performance” curve as the performance attainable on a task for a given level of sensing power. We show that this approach is well suited to comparing a traditional CMOS sensor with the recently available “neuromorphic” sensors. We discuss estimation tasks with different priors for the signal. We find priors for which one sensor dominates the other and vice versa, priors for which they are equivalent, and priors for which the answer depends on the power available. This shows that comparing sensors is a quite delicate problem. It also suggests that the optimal architecture might have more than one sensor, and would switch sensors on and off according to the performance level required at each instant.
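    A toy numerical reading of the power-performance idea (the error model, rates, and noise figures below are invented for illustration and are not from the paper): map a power budget to a sample rate per sensor, then to an attainable estimation error, and compare the two curves:

```python
import numpy as np

def estimation_error(prior_bandwidth, sample_rate, noise_var):
    """Toy steady-state error for tracking a signal with a given prior
    bandwidth: faster sampling (more power) lowers the error, noisier
    samples raise it."""
    return noise_var * prior_bandwidth / (prior_bandwidth + sample_rate)

def power_performance_curve(powers, rate_per_watt, noise_var, prior_bandwidth):
    """Map each sensing-power budget to the attainable task performance."""
    return [estimation_error(prior_bandwidth, p * rate_per_watt, noise_var)
            for p in powers]

# Compare a frame-based sensor against an event-based one; all parameter
# values here are purely illustrative, not measured figures.
powers = np.linspace(0.1, 2.0, 8)
cmos = power_performance_curve(powers, rate_per_watt=30.0,
                               noise_var=1.0, prior_bandwidth=5.0)
dvs = power_performance_curve(powers, rate_per_watt=200.0,
                              noise_var=4.0, prior_bandwidth=5.0)
```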

    INSTRUCTIONS FOR PREPARATION OF CAMERA-READY MANUSCRIPTS FOR BULLETIN OF GRADUATE SCIENCE AND ENGINEERING, ENGINEERING STUDIES

    Get PDF
    In the field of autonomous mobile robotics, reliable localization performance is essential. However, there are real environments in which localization fails. In this paper, we propose a method for estimating localizability based on occupancy grid maps, where localizability indicates the reliability of localization. Among several possible approaches to estimating localizability, we propose a method using local map correlations: the covariance matrix of the Gaussian distribution obtained from local map correlations is used to estimate localizability. In this way, we can estimate both the magnitude of the localization error and the characteristics of the error. The experiment confirmed the characteristics of the distribution of correlations for each location on occupancy grid maps, and the localizability of the whole map was estimated using an occupancy grid map of a vast and complex environment. The simulation results showed that the proposed method could estimate the localization error and its characteristics on occupancy grid maps, confirming that the proposed method is effective for estimating localizability.
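    A minimal sketch of such a correlation-based estimate (hypothetical names and a brute-force shift search; the paper's exact formulation may differ): correlate a local map against the grid over candidate shifts, then read the localizability off the second moments of the resulting distribution:

```python
import numpy as np

def correlation_surface(grid, patch, center, radius):
    """Correlate a local map patch against the occupancy grid over a window
    of candidate shifts around its true location. The caller must keep the
    shifted window inside the grid bounds."""
    h, w = patch.shape
    cy, cx = center
    shifts = range(-radius, radius + 1)
    surf = np.zeros((len(shifts), len(shifts)))
    for i, dy in enumerate(shifts):
        for j, dx in enumerate(shifts):
            window = grid[cy + dy: cy + dy + h, cx + dx: cx + dx + w]
            surf[i, j] = np.sum(window * patch)
    return surf

def localizability_covariance(surf):
    """Treat the normalized correlation surface as a probability mass over
    shifts and return its 2x2 covariance; a broad or elongated distribution
    signals poor or anisotropic localizability at that location."""
    p = surf - surf.min()
    p = p / p.sum()
    radius = (surf.shape[0] - 1) // 2
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mean_y, mean_x = (p * ys).sum(), (p * xs).sum()
    cyy = (p * (ys - mean_y) ** 2).sum()
    cxx = (p * (xs - mean_x) ** 2).sum()
    cyx = (p * (ys - mean_y) * (xs - mean_x)).sum()
    return np.array([[cyy, cyx], [cyx, cxx]])
```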

    GPS-LiDAR sensor fusion aided by 3D city models for UAVs

    Get PDF
    Recently, there has been an increase in outdoor applications for small-scale Unmanned Aerial Vehicles (UAVs), such as 3D modelling, filming, surveillance, and search and rescue. To perform these tasks safely and reliably, a continuous and accurate estimate of the UAVs’ positions is needed. Global Positioning System (GPS) receivers are commonly used for this purpose. However, navigating in urban areas using only GPS is challenging, since satellite signals might be reflected or blocked by buildings, resulting in multipath errors or non-line-of-sight (NLOS) situations. In such cases, additional on-board sensors are desirable to improve global positioning of the UAV. Light Detection and Ranging (LiDAR), one such sensor, provides a real-time point cloud of its surroundings. In a dense urban environment, LiDAR is able to detect a large number of features of surrounding structures, such as buildings, unlike in an open-sky environment. This characteristic of LiDAR complements GPS, which is accurate in open-sky environments but may suffer large errors in urban areas. To fuse GPS and LiDAR measurements, Kalman filtering and its variations are commonly used. However, it is important, yet challenging, to accurately characterize the error covariance of the sensor measurements. In this thesis, we propose a GPS-LiDAR fusion technique with a novel method for efficiently modelling the error covariance in position measurements based on LiDAR point clouds. For GPS measurements, we eliminate NLOS satellites and model the covariance based on the measurement signal-to-noise ratio (SNR) values. We use the LiDAR point clouds in two ways: to estimate incremental motion by matching consecutive point clouds, and to estimate global pose by matching with a 3D city model. We aim to characterize the error covariance matrices in these two aspects as a function of the distribution of features in the LiDAR point cloud. To estimate the incremental motion between two consecutive LiDAR point clouds, we use the Iterative Closest Point (ICP) algorithm. We perform simulations in different environments to showcase the dependence of ICP on features in the point cloud. While navigating in urban areas, we expect the LiDAR to detect structured objects, such as buildings, which are primarily composed of surfaces and edges. Thus, we develop an efficient way of modelling the error covariance in the estimated incremental position based on each surface and edge feature point in the point cloud. A surface point helps to estimate motion of the LiDAR perpendicular to the surface, while an edge point helps to estimate motion of the LiDAR perpendicular to the edge. We treat each feature point independently and combine their individual error covariances to obtain a total error covariance ellipsoid for the estimated incremental position. For our 3D city model, we use elevation data of the State of Illinois available online and combine it with building information extracted from OpenStreetMap, a crowd-sourced mapping platform. We again use the ICP algorithm to match the LiDAR point cloud with our 3D city model, which provides us with an estimate of the UAV's global pose. Additionally, we also use the 3D city model to determine and eliminate NLOS GPS satellites. We use the remaining pseudorange measurements from the on-board GPS receiver and a stationary reference receiver to create a vector of double-difference measurements.
    We create a covariance matrix for the GPS double-difference measurement vector based on the SNR of the individual pseudorange measurements. Finally, all the above measurements and error covariance matrices are provided as input to an Unscented Kalman Filter (UKF), whose states include the globally referenced pose of the UAV. Before implementation, we perform an observability analysis for our filter. To validate our algorithm, we conduct UAV experiments in GPS-challenged urban environments on the University of Illinois at Urbana-Champaign campus. We observe that our model for the covariance ellipsoid from on-board LiDAR point clouds accurately represents the position errors and improves the filter output. We demonstrate a clear improvement in the UAV's global pose estimates using the proposed sensor fusion technique.
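    The per-feature covariance construction lends itself to a compact sketch. The following (hypothetical names, simplified from the thesis description; the sigma values are assumed per-point noise levels, not figures from the thesis) accumulates the information contributed by surface and edge points and inverts it into the translation covariance ellipsoid:

```python
import numpy as np

def icp_translation_covariance(surface_normals, edge_directions,
                               sigma_surface=0.05, sigma_edge=0.05):
    """Combine per-feature constraints into a covariance ellipsoid for the
    ICP translation estimate. A surface point constrains motion along its
    normal n (information n n^T / sigma^2); an edge point constrains the
    plane perpendicular to its direction d (information (I - d d^T) / sigma^2)."""
    info = np.zeros((3, 3))
    for n in surface_normals:
        n = n / np.linalg.norm(n)
        info += np.outer(n, n) / sigma_surface ** 2
    for d in edge_directions:
        d = d / np.linalg.norm(d)
        info += (np.eye(3) - np.outer(d, d)) / sigma_edge ** 2
    # Invert the accumulated information; near-singular information means
    # the environment leaves some direction essentially unconstrained.
    return np.linalg.inv(info + 1e-9 * np.eye(3))
```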

    Interlacing Self-Localization, Moving Object Tracking and Mapping for 3D Range Sensors

    Get PDF
    This work presents a solution for autonomous vehicles to detect arbitrary moving traffic participants and to precisely determine the motion of the vehicle. The solution is based on three-dimensional images captured with modern range sensors such as high-resolution laser scanners. As a result, objects are tracked and a detailed 3D model is built for each object and for the static environment. The performance is demonstrated in challenging urban environments that contain many different objects.

    Study and application of motion measurement methods by means of opto-electronic systems

    Get PDF
    This thesis addresses the problem of localizing a vehicle in unstructured environments through on-board instrumentation that does not require infrastructure modifications. Two widely used opto-electronic systems that allow for non-contact measurements have been chosen: camera and laser range finder. Particular attention is paid to the definition of a set of procedures for processing the environment information acquired with the instruments, in order to provide both accuracy and robustness to measurement noise. An important contribution of this work is the development of a robust and reliable data association algorithm that has been integrated in a graph-based SLAM framework which also takes uncertainty into account, leading to an optimal vehicle motion estimate. Moreover, the vehicle can be localized in a generic environment, since the developed global localization solution does not require the identification of landmarks, either natural or artificial. Part of the work is dedicated to a thorough comparative analysis of state-of-the-art scan matching methods in order to choose the best one for the solution pipeline. In particular, this investigation has highlighted that a dense scan matching approach can ensure good performance in many typical environments. Several experiments in different environments, including large-scale ones, demonstrate the effectiveness of the developed global localization system. While the laser range data have been exploited for global localization, a robust visual odometry has also been investigated. The results suggest that the use of a camera can overcome situations in which the laser scanner solution has low accuracy. In particular, the global localization framework can also be applied to the camera sensor, in order to fuse two complementary instruments and thus obtain a more reliable localization system. The algorithms have been tested in 2D indoor environments; nevertheless, they are expected to be well suited to 3D and outdoor settings as well.
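    As a flavor of what a dense scan matching score looks like, here is a generic correlative/likelihood-field formulation (assumed for illustration rather than taken from the thesis): every scan point contributes to the score of a candidate pose, with no explicit feature correspondences:

```python
import numpy as np

def dense_match_score(scan_points, likelihood_field, resolution, origin, pose):
    """Score a candidate 2D pose (x, y, theta) by projecting scan points
    into a precomputed likelihood field of the reference map and summing
    the per-point likelihoods."""
    c, s = np.cos(pose[2]), np.sin(pose[2])
    rot = np.array([[c, -s], [s, c]])
    world = scan_points @ rot.T + pose[:2]          # scan -> world frame
    cells = np.floor((world - origin) / resolution).astype(int)
    h, w = likelihood_field.shape
    valid = (cells[:, 0] >= 0) & (cells[:, 0] < w) & \
            (cells[:, 1] >= 0) & (cells[:, 1] < h)  # drop off-map points
    return likelihood_field[cells[valid, 1], cells[valid, 0]].sum()
```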

    Vehicle localization with enhanced robustness for urban automated driving

    Get PDF

    Statistical modelling of algorithms for signal processing in systems based on environment perception

    Get PDF
    One cornerstone for realising automated driving systems is an appropriate handling of uncertainties in environment perception and situation interpretation. Uncertainties arise due to noisy sensor measurements or the unknown future evolution of a traffic situation. This work contributes to the understanding of these uncertainties by modelling and propagating them with parametric probability distributions.
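    A minimal sketch of propagating a parametric (Gaussian) uncertainty through a nonlinear model via finite-difference linearization; the motion model and all numbers below are illustrative assumptions, not material from the thesis:

```python
import numpy as np

def propagate_gaussian(mean, cov, f, eps=1e-6):
    """First-order propagation of a Gaussian state through a nonlinear
    model f; the Jacobian is approximated by finite differences."""
    n = len(mean)
    fx = f(mean)
    jac = np.zeros((len(fx), n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        jac[:, i] = (f(mean + d) - fx) / eps
    return fx, jac @ cov @ jac.T

# Example: predicted position of a tracked vehicle after dt seconds,
# from a state of (x, y, speed, heading).
dt = 0.5
f = lambda x: np.array([x[0] + dt * x[2] * np.cos(x[3]),
                        x[1] + dt * x[2] * np.sin(x[3])])
mean = np.array([10.0, 4.0, 8.0, 0.3])
cov = np.diag([0.2, 0.2, 0.5, 0.05])
pred_mean, pred_cov = propagate_gaussian(mean, cov, f)
```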