10,593 research outputs found

    Continually improving large scale long term visual navigation of a vehicle in dynamic urban environments

    This paper is about long-term navigation in dynamic environments. In previous work we introduced a framework which stores distinct visual appearances of a workspace, known as experiences; these are used to improve localisation on future visits. In this work we introduce a new introspective process, executed between sorties, that aims to further improve the performance of our system by careful discovery of the relationships between experiences. We evaluate our new approach on 37 km of stereo data captured over a three-month period.
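
    A minimal sketch of how such an introspective step might work (hypothetical names; the abstract does not specify the discovery mechanism, so simple co-observation counting stands in for it): experiences that successfully localise at the same places during sorties are recorded, and between sorties each experience's most related peers are ranked so future visits can try those first.

```python
from collections import defaultdict

class ExperienceGraph:
    """Hypothetical sketch: rank stored experiences by how often they
    co-localise, so future visits try the most promising ones first."""

    def __init__(self):
        # co_count[a][b]: times experience b localised successfully
        # at a place where experience a also localised.
        self.co_count = defaultdict(lambda: defaultdict(int))

    def record_sortie(self, successful_ids):
        # Called online: all experiences that matched at one place.
        for a in successful_ids:
            for b in successful_ids:
                if a != b:
                    self.co_count[a][b] += 1

    def related(self, exp_id, k=3):
        # Called between sorties: the k experiences most related to exp_id.
        neighbours = self.co_count[exp_id]
        return sorted(neighbours, key=neighbours.get, reverse=True)[:k]

graph = ExperienceGraph()
graph.record_sortie(["sunny_am", "overcast", "rain"])
graph.record_sortie(["sunny_am", "overcast"])
print(graph.related("sunny_am"))  # ['overcast', 'rain']
```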

    An optimization technique for positioning multiple maps for self-driving car's autonomous navigation

    Self-driving car navigation requires very precise localization covering wide areas and long distances, and it must operate at higher speeds than conventional mobile robots. This paper reports on an efficient technique to optimize the position of a sequence of maps along a journey. We take advantage of the short-term precision and small on-disk footprint of localization using 2D occupancy grid maps (hereafter called sub-maps), as well as the long-term global consistency of an extended Kalman filter (EKF) that fuses odometry and GPS measurements. In our approach, horizontal planar LiDARs and odometry measurements are used to perform 2D SLAM, generating the sub-maps, while the EKF generates the trajectory followed by the car in global coordinates. During the trip, after each sub-map is finished, a relaxation process is applied to a set of the most recent sub-maps to position them globally, using both the global path and each map's local path. The importance of this method lies in its performance and robustness: it consumes few computing resources, so it can work in real time on a computer with conventional specifications, and it does not depend excessively on the availability of a GPS signal or on moving objects appearing around the car, which makes it suitable for use on a self-driving car. Extensive testing has been performed in the suburbs and downtown of Nantes, France, covering a distance of 25 kilometers under different traffic conditions, with satisfactory results for autonomous driving.
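
    As a rough illustration of the relaxation step (hypothetical code, not the authors' implementation), each finished sub-map's locally estimated path can be rigidly aligned to the corresponding stretch of the EKF global trajectory with a least-squares (Kabsch) fit:

```python
import numpy as np

def align_submap_2d(local_path, global_path):
    """Hypothetical sketch: find the 2D rigid transform (R, t) that
    best maps a sub-map's local trajectory onto the corresponding EKF
    global trajectory, in the least-squares (Kabsch) sense."""
    P = np.asarray(local_path, dtype=float)   # (N, 2) poses, sub-map frame
    Q = np.asarray(global_path, dtype=float)  # (N, 2) same poses from EKF
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t  # apply as: x_global = R @ x_local + t

# Toy check: a local path along +x that the EKF says runs along +y.
R, t = align_submap_2d([(0, 0), (1, 0), (2, 0)], [(5, 5), (5, 6), (5, 7)])
print(np.round(R @ np.array([2.0, 0.0]) + t, 6))  # -> [5. 7.]
```

    The abstract describes relaxing a window of the last several sub-maps jointly; a per-sub-map fit like this one is only the simplest variant of that idea.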

    Duckietown: An Innovative Way to Teach Autonomy

    Teaching robotics is challenging because it is a multidisciplinary, rapidly evolving and experimental discipline that integrates cutting-edge hardware and software. This paper describes the course design and first implementation of Duckietown, a vehicle autonomy class that experiments with teaching innovations in addition to leveraging modern educational theory to improve student learning. We provide a robot to every student, thanks to a minimalist platform design, to maximize active learning, and introduce a role-play aspect to increase team spirit by modeling the entire class as a fictional start-up (Duckietown Engineering Co.). The course formulation leverages backward design by formalizing intended learning outcomes (ILOs), enabling students to appreciate the challenges of: (a) heterogeneous disciplines converging in the design of a minimal self-driving car, (b) integrating subsystems to create complex system behaviors, and (c) allocating constrained computational resources. Students learn how to assemble, program, test and operate a self-driving car (Duckiebot) in a model urban environment (Duckietown), as well as how to implement and document new features in the system. Traditional course assessment tools are complemented by a full-scale demonstration to the general public. The “duckie” theme was chosen to give a gender-neutral, friendly identity to the robots so as to improve student involvement and outreach possibilities. All of the teaching materials and code are released online in the hope that other institutions will adopt the platform and continue to evolve and improve it, so as to keep pace with the fast evolution of the field. (National Science Foundation (U.S.) Award IIS #1318392; National Science Foundation (U.S.) Award #1405259)

    A multisensor SLAM for dense maps of large scale environments under poor lighting conditions

    This thesis describes the development and implementation of a multisensor large-scale autonomous mapping system for surveying tasks in underground mines. The hazardous nature of the underground mining industry has resulted in a push towards autonomous solutions to the most dangerous operations, including surveying tasks. Many existing autonomous mapping techniques rely on approaches to the Simultaneous Localization and Mapping (SLAM) problem which are not suited to the extreme characteristics of active underground mining environments. Our proposed multisensor system has been designed from the outset to address the unique challenges associated with underground SLAM. The robustness, self-containment and portability of the system maximize its potential applications. The multisensor mapping solution proposed as a result of this work is based on a fusion of omnidirectional bearing-only vision-based localization and 3D laser point cloud registration. By combining these two SLAM techniques it is possible to achieve some of the advantages of both approaches: the real-time attributes of vision-based SLAM and the dense, high-precision maps obtained through 3D lasers. The result is a viable autonomous mapping solution suitable for application in challenging underground mining environments. A further improvement to the robustness of the proposed multisensor SLAM system comes from incorporating colour information into vision-based localization. Underground mining environments are often dominated by dynamic sources of illumination, which can cause inconsistent feature motion during localization. Colour information is utilized to identify and remove features resulting from illumination artefacts and to improve the monochrome-based feature matching between frames. Finally, the proposed multisensor mapping system is implemented and evaluated in both above-ground and underground scenarios. The resulting large-scale maps contained a maximum offset error of ±30 mm for mapping tasks with lengths over 100 m.
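
    The colour check could look something like the following sketch (hypothetical names and thresholds, not the thesis code), which assumes hue stays more stable than brightness under moving light sources and drops feature matches whose local hue shifts too much between frames:

```python
import numpy as np

def filter_matches_by_hue(img_a, img_b, matches, max_hue_diff=0.1):
    """Hypothetical sketch: reject feature matches whose local hue
    changes too much between frames, on the assumption that hue is
    more stable than brightness under dynamic underground lighting.

    img_a, img_b : HxWx3 float RGB images in [0, 1]
    matches      : list of ((xa, ya), (xb, yb)) pixel pairs
    """
    def hue_at(img, x, y, r=3):
        patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
        rgb = patch.reshape(-1, 3).mean(axis=0)
        mx, mn = rgb.max(), rgb.min()
        if mx == mn:
            return 0.0  # grey patch, hue undefined
        # standard HSV hue, expressed as a fraction of a full turn
        if mx == rgb[0]:
            h = ((rgb[1] - rgb[2]) / (mx - mn)) % 6
        elif mx == rgb[1]:
            h = (rgb[2] - rgb[0]) / (mx - mn) + 2
        else:
            h = (rgb[0] - rgb[1]) / (mx - mn) + 4
        return h / 6.0

    kept = []
    for (xa, ya), (xb, yb) in matches:
        da, db = hue_at(img_a, xa, ya), hue_at(img_b, xb, yb)
        # hue is circular: 0.95 and 0.05 are close
        diff = min(abs(da - db), 1.0 - abs(da - db))
        if diff <= max_hue_diff:
            kept.append(((xa, ya), (xb, yb)))
    return kept
```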

    A modular hybrid SLAM for the 3D mapping of large scale environments

    Underground mining environments pose many unique challenges to the task of creating extensive, survey-quality 3D maps. The extreme characteristics of such environments require a modular mapping solution with no dependency on Global Positioning Systems (GPS), physical odometry, a priori information or motion-model simplification. These restrictions rule out many existing 3D mapping approaches. This work examines a hybrid approach to mapping, fusing omnidirectional vision and 3D range data to produce an automatically registered, accurate and dense 3D map. A series of discrete 3D laser scans are registered through a combination of vision-based bearing-only localization and scan matching with the Iterative Closest Point (ICP) algorithm. Depth information provided by the laser scans is used to correctly scale the bearing-only feature map, which in turn supplies an initial pose estimate for a registration algorithm to build the 3D map and correct localization drift. The resulting extensive maps require no external instrumentation or a priori information. Preliminary testing demonstrated the ability of the hybrid system to produce a highly accurate 3D map of an extensive indoor space.
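
    As a sketch of the registration chain (hypothetical variable names; Open3D is used here purely for illustration, not because the thesis uses it), the vision-derived pose seeds ICP, which then refines the alignment:

```python
import open3d as o3d

def register_scan(source_pcd, target_pcd, init_pose, max_dist=0.5):
    """Hypothetical sketch: refine a vision-derived initial pose with
    point-to-point ICP so residual drift in the bearing-only estimate
    is corrected by the dense range data."""
    result = o3d.pipelines.registration.registration_icp(
        source_pcd, target_pcd, max_dist, init_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # refined 4x4 homogeneous transform

# Usage sketch: T_init comes from the scaled bearing-only feature map;
# the refined transform places the new scan in the global 3D map.
# T_map_scan = register_scan(new_scan, map_cloud, T_init)
```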

    Autonomous navigation for guide following in crowded indoor environments

    The requirements for assisted living are rapidly changing as the number of elderly patients over the age of 60 continues to increase. This rise places a high level of stress on nurse practitioners, who must care for more patients than they can manage. As this trend is expected to continue, new technology will be required to help care for patients. Mobile robots present an opportunity to alleviate the stress on nurse practitioners by monitoring and performing remedial tasks for elderly patients. To produce mobile robots with the ability to perform these tasks, however, many challenges must be overcome. The hospital environment requires a high level of safety to prevent patient injury, so any facility that uses mobile robots must be able to ensure that no harm will come to patients in a care environment. This requires the robot to build a high level of understanding about the environment and the people in close proximity to it. Hitherto, most mobile robots have used vision-based sensors or 2D laser range finders. 3D time-of-flight sensors have recently been introduced and provide dense 3D point clouds of the environment at real-time frame rates, giving mobile robots previously unavailable dense information in real time. In this thesis, I investigate the use of time-of-flight cameras for mobile robot navigation in crowded environments. A unified framework is presented to allow the robot to follow a guide through an indoor environment safely and efficiently; each component of the framework is analyzed in detail, with real-world scenarios illustrating its practical use. Time-of-flight cameras are relatively new sensors and therefore have inherent problems that must be overcome to receive consistent and accurate data. I propose a novel and practical probabilistic framework to overcome many of these problems: it fuses multiple depth maps with color information, forming a reliable and consistent view of the world. For the robot to interact with the environment, contextual information is required. To this end, I propose a region-growing segmentation algorithm that groups points based on surface characteristics, namely surface normal and surface curvature. The segmentation process creates a distinct set of surfaces; however, only a limited amount of contextual information is available to allow for interaction, so a novel classifier using spherical harmonics is proposed to differentiate people from all other objects. The ability to identify people allows the robot to find potential candidates to follow. For safe navigation, however, the robot must continuously track all visible objects to obtain position and velocity information. A multi-object tracking system is investigated to track visible objects reliably using multiple cues: shape and color. The tracking system allows the robot to react to the dynamic nature of people by building an estimate of the motion flow, which provides the robot with the information needed to determine where, and at what speeds, it is safe to drive. In addition, a novel search strategy is proposed to allow the robot to recover a guide who has left the field of view: a search map is constructed in which areas of the environment are ranked by how likely they are to reveal the guide's true location, and the robot approaches the most likely search area to recover the guide. Finally, all of these components are combined to follow a guide through an indoor environment. The results achieved demonstrate the efficacy of the proposed components.
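
    A minimal sketch of a region-growing segmentation of this flavour (hypothetical thresholds and neighbourhood radius, not the thesis implementation): smooth patches grow outward from low-curvature seeds, and a neighbour joins a region only while the surface normals stay closely aligned.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, curvature,
                angle_thresh=np.deg2rad(10), curv_thresh=0.05, radius=0.05):
    """Hypothetical sketch: grow smooth surface patches from
    low-curvature seeds, adding neighbours with aligned normals."""
    points = np.asarray(points)
    normals = np.asarray(normals)
    curvature = np.asarray(curvature)
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    next_label = 0
    for seed in np.argsort(curvature):  # flattest points seed first
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] != -1:
                    continue
                # join the region only if the surface stays smooth here
                if abs(np.dot(normals[i], normals[j])) >= np.cos(angle_thresh):
                    labels[j] = next_label
                    # only low-curvature points keep growing the region
                    if curvature[j] < curv_thresh:
                        frontier.append(j)
        next_label += 1
    return labels  # per-point region id
```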

    Don't Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition

    When a human drives a car along a road for the first time, they can later recognize where they are on the return journey, typically without needing to look in their rear-view mirror or turn around to look back, despite significant viewpoint and appearance change. Such navigation capabilities are typically attributed to our semantic visual understanding of the environment [1], beyond geometry, to recognizing the types of places we are passing through, such as "passing a shop on the left" or "moving through a forested area". Humans are in effect using place categorization [2] to perform specific place recognition even when the viewpoint is reversed by 180 degrees. Recent advances in deep neural networks have enabled high-performance semantic understanding of visual places and scenes, opening up the possibility of emulating what humans do. In this work, we develop a novel methodology for using the semantics-aware higher-order layers of deep neural networks to recognize specific places from within a reference database. To further improve robustness to appearance change, we develop a descriptor normalization scheme that builds on the success of normalization schemes for pure appearance-based techniques such as SeqSLAM [3]. Using two different datasets, one road-based and one pedestrian-based, we evaluate the performance of the system at place recognition on reverse traversals of a route with a limited-field-of-view camera and no turn-back-and-look behaviours, and compare it to existing state-of-the-art techniques and vanilla off-the-shelf features. The results demonstrate significant improvements over the existing state of the art, especially for extreme perceptual challenges involving both great viewpoint change and environmental appearance change. We also provide experimental analyses of the contributions of the various system components.
    Comment: 9 pages, 11 figures, ICRA 201
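
    As a rough sketch of this kind of descriptor normalization (hypothetical code, not the paper's exact scheme), each feature dimension of the deep descriptors can be standardized across the reference traverse before cosine matching, so that no single dimension or global appearance shift dominates the match:

```python
import numpy as np

def normalize_descriptors(D):
    """Hypothetical sketch: standardize each feature dimension across
    the traverse, then scale each descriptor to unit length."""
    mu = D.mean(axis=0, keepdims=True)
    sigma = D.std(axis=0, keepdims=True) + 1e-8
    Z = (D - mu) / sigma
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def match(query_desc, ref_descs):
    """Return the index of the best-matching reference place (cosine)."""
    Z = normalize_descriptors(np.vstack([ref_descs, query_desc[None, :]]))
    return int(np.argmax(Z[:-1] @ Z[-1]))

# Toy usage with random stand-ins for higher-order layer activations:
rng = np.random.default_rng(0)
refs = rng.standard_normal((100, 2048))              # 100 reference places
query = refs[42] + 0.1 * rng.standard_normal(2048)   # appearance change
print(match(query, refs))  # -> 42
```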