288 research outputs found

    Two different tools for three-dimensional mapping: DE-based scan matching and feature-based loop detection

    Get PDF
    An autonomous robot must obtain information about its surroundings to accomplish multiple tasks that are greatly improved when this information is efficiently incorporated into a map. Some examples are navigation, manipulation, and localization. This mapping problem has been an important research area in mobile robotics during the last decades. It does not have a unique solution and can be divided into multiple sub-problems. Two different aspects of the mobile robot mapping problem are addressed in this work. First, we have developed a Differential Evolution-based scan matching algorithm that operates with high accuracy in three-dimensional environments. The map obtained by an autonomous robot must be consistent after registration. It is essential to detect when the robot is navigating around a previously visited place in order to minimize the accumulated error. This phase, called loop detection, is the second aspect studied here. We have developed an algorithm that extracts the most important features from two different three-dimensional laser scans in order to obtain a loop indicator that is used to detect when the robot is visiting a known place. This approach allows the introduction of very different characteristics into the descriptor. First, the surface features describe the geometric forms of the scan (lines, planes, and spheres). Second, the numerical features are values that describe several numerical properties of the measurements: volume, average range, curvature, etc. Both algorithms have been tested with real data to demonstrate that they are efficient tools for mapping tasks.
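
    As a rough illustration of the first tool, the sketch below uses SciPy's differential evolution to search a 6-DOF pose (translation plus Euler angles) that minimizes the mean nearest-neighbor distance between two point clouds. The array names, bounds, and optimizer settings are assumptions for the example; the paper's actual cost function and parameterization may differ.

```python
# A minimal sketch of Differential Evolution-based 3D scan matching.
# Hypothetical inputs: `source` and `target` are (N, 3) NumPy arrays of
# laser points; the 6-DOF (x, y, z, roll, pitch, yaw) parameterization is a
# common choice, not necessarily the authors' exact one.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation


def scan_matching_cost(params, source, target_tree):
    """Mean nearest-neighbor distance after applying the candidate pose."""
    t, euler = params[:3], params[3:]
    transformed = Rotation.from_euler("xyz", euler).apply(source) + t
    distances, _ = target_tree.query(transformed)
    return distances.mean()


def de_scan_match(source, target, trans_bound=5.0, rot_bound=np.pi):
    """Search the 6-DOF pose that best aligns `source` onto `target`."""
    target_tree = cKDTree(target)
    bounds = [(-trans_bound, trans_bound)] * 3 + [(-rot_bound, rot_bound)] * 3
    result = differential_evolution(
        scan_matching_cost,
        bounds,
        args=(source, target_tree),
        maxiter=100,
        popsize=20,
        seed=0,
    )
    return result.x, result.fun  # estimated pose and residual error


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.uniform(-10, 10, size=(300, 3))
    true_rot = Rotation.from_euler("xyz", [0.1, -0.05, 0.2])
    source = true_rot.inv().apply(target - np.array([1.0, 0.5, -0.2]))
    pose, err = de_scan_match(source, target)
    print("estimated pose:", np.round(pose, 2), "residual:", round(err, 4))
```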

    Autonomous Navigation in Complex Indoor and Outdoor Environments with Micro Aerial Vehicles

    Get PDF
    Micro aerial vehicles (MAVs) are ideal platforms for surveillance and search and rescue in confined indoor and outdoor environments due to their small size, superior mobility, and hover capability. In such missions, it is essential that the MAV is capable of autonomous flight to minimize operator workload. Despite recent successes in commercialization of GPS-based autonomous MAVs, autonomous navigation in complex and possibly GPS-denied environments gives rise to challenging engineering problems that require an integrated approach to perception, estimation, planning, control, and high-level situational awareness. Among these, state estimation is the first and most critical component for autonomous flight, especially because of the inherently fast dynamics of MAVs and the possibly unknown environmental conditions. In this thesis, we present methodologies and system designs, with a focus on state estimation, that enable a lightweight off-the-shelf quadrotor MAV to autonomously navigate complex unknown indoor and outdoor environments using only onboard sensing and computation. We start by developing laser and vision-based state estimation methodologies for indoor autonomous flight. We then investigate fusion from heterogeneous sensors to improve robustness and enable operations in complex indoor and outdoor environments. We further propose estimation algorithms for on-the-fly initialization and online failure recovery. Finally, we present planning, control, and environment coverage strategies for integrated high-level autonomy behaviors. Extensive online experimental results are presented throughout the thesis. We conclude by proposing future research opportunities.
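
    As a hedged illustration of the loosely-coupled estimation idea (not the thesis' actual filter design), the sketch below runs a linear Kalman filter that propagates position and velocity with accelerations and corrects with slower absolute position fixes such as those from laser or vision. The state layout, noise values, and update rates are illustrative assumptions.

```python
# A minimal sketch of loosely-coupled MAV state estimation: a linear Kalman
# filter that propagates position/velocity with world-frame accelerations
# (IMU) and corrects with intermittent position fixes (e.g. laser or vision).
# All matrices and noise values are illustrative assumptions.
import numpy as np


class SimpleMavFilter:
    def __init__(self, dt=0.01):
        self.dt = dt
        self.x = np.zeros(6)                 # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        # Constant-velocity model driven by measured acceleration.
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)
        self.B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
        self.Q = 0.05 * np.eye(6)            # process noise (assumed)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.R = 0.02 * np.eye(3)            # position measurement noise

    def predict(self, accel_world):
        """Propagate with a world-frame acceleration from the IMU."""
        self.x = self.F @ self.x + self.B @ accel_world
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update_position(self, z):
        """Correct with an absolute position fix (laser/vision/GPS)."""
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P


if __name__ == "__main__":
    kf = SimpleMavFilter()
    for step in range(100):
        kf.predict(np.array([0.0, 0.0, 0.1]))      # gentle climb
        if step % 10 == 0:                          # slower position sensor
            kf.update_position(kf.x[:3] + 0.01)
    print("state estimate:", np.round(kf.x, 3))
```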

    Two Dimensional Positioning and Heading Solution for Flying Vehicles Using a Line-Scanning Laser Radar (LADAR)

    Get PDF
    Emerging technology in small autonomous flying vehicles requires the systems to have a precise navigation solution in order to perform tasks. In many critical environments, such as indoors, GPS is unavailable, necessitating the development of supplemental aiding sensors to determine precise position. This research investigates the use of a line-scanning laser radar (LADAR) as a standalone two-dimensional position and heading navigation solution and sets up the device for augmentation into existing navigation systems. A fast histogram correlation method is developed to operate in real time on board the vehicle, providing position and heading updates at a rate of 10 Hz. LADAR navigation methods are adapted to three dimensions, with a simulation built to analyze performance loss due to attitude changes during flight. These simulations are then compared to experimental results collected using a SICK LD-OEM 1000 mounted on a moving cart. The histogram correlation algorithm applied in this work was shown to successfully navigate a realistic environment for a quadrotor in short flights of less than 5 min in larger rooms. Application in hallways shows great promise, providing a stable heading along with tracking movement perpendicular to the hallway.
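
    A minimal sketch of the histogram-correlation idea follows: build direction histograms from two 2D scans and circularly cross-correlate them to recover the heading change. The segment-direction histogram, bin count, and synthetic wall data are assumptions for illustration, not the thesis' exact formulation.

```python
# A minimal sketch of histogram-correlation heading estimation from two 2D
# LADAR scans. `scan_a` and `scan_b` are assumed to be (N, 2) point arrays
# in the sensor frame; the choices below are illustrative only.
import numpy as np


def direction_histogram(scan, bins=90):
    """Histogram of directions of segments joining consecutive scan points."""
    diffs = np.diff(scan, axis=0)
    angles = np.arctan2(diffs[:, 1], diffs[:, 0]) % np.pi   # 180-deg ambiguity
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    return hist.astype(float)


def heading_offset(scan_a, scan_b, bins=90):
    """Circularly cross-correlate direction histograms to find the rotation."""
    ha = direction_histogram(scan_a, bins)
    hb = direction_histogram(scan_b, bins)
    scores = [np.dot(hb, np.roll(ha, k)) for k in range(bins)]
    return int(np.argmax(scores)) * (np.pi / bins)   # heading change, mod pi


if __name__ == "__main__":
    # Synthetic L-shaped wall, then the same wall seen after a 20-degree turn.
    wall = np.vstack([np.column_stack([np.linspace(0, 5, 50), np.zeros(50)]),
                      np.column_stack([5 * np.ones(50), np.linspace(0, 3, 50)])])
    theta = np.deg2rad(20)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    rotated = wall @ R.T
    print("estimated heading change (deg):",
          round(np.degrees(heading_offset(wall, rotated)), 1))
```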

    Fast, Autonomous Flight in GPS-Denied and Cluttered Environments

    Full text link
    One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments. Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field Robotics.

    Vision-Inertial SLAM using Natural Features in Outdoor Environments

    Get PDF
    Simultaneous Localization and Mapping (SLAM) is a recursive probabilistic inferencing process used for robot navigation when Global Positioning Systems (GPS) are unavailable. SLAM operates by building a map of the robot environment, while concurrently localizing the robot within this map. The ultimate goal of SLAM is to operate anywhere using the environment's natural features as landmarks. Such a goal is difficult to achieve for several reasons. Firstly, different environments contain different types of natural features, each exhibiting large variance in its shape and appearance. Secondly, objects look different from different viewpoints, and it is therefore difficult to always recognize them. Thirdly, in most outdoor environments it is not possible to predict the motion of a vehicle using wheel encoders because of errors caused by slippage. Finally, the design of a SLAM system to operate in a large-scale outdoor setting is in itself a challenge. The above issues are addressed as follows. Firstly, a camera is used to recognize the environmental context (e.g., indoor office, outdoor park) by analyzing the holistic spectral content of images of the robot's surroundings. A type of feature (e.g., trees for a park) is then chosen for SLAM that is likely observable in the recognized setting. A novel tree detection system is introduced, which is based on perceptually organizing the content of images into quasi-vertical structures and marking those structures that intersect ground level as tree trunks. Secondly, a new tree recognition system is proposed, which is based on extracting Scale Invariant Feature Transform (SIFT) features on each tree trunk region and matching trees in feature space. Thirdly, dead-reckoning is performed via an Inertial Navigation System (INS), bounded by non-holonomic constraints. An INS is insensitive to slippage and varying ground conditions. Finally, the developed Computer Vision and Inertial systems are integrated within the framework of an Extended Kalman Filter into a working Vision-INS SLAM system, named VisSLAM. VisSLAM is tested on data collected during a real test run in an outdoor unstructured environment. Three test scenarios are proposed, ranging from semi-automatic detection, recognition, and initialization to a fully automated SLAM system. The first two scenarios are used to verify the presented inertial and Computer Vision algorithms in the context of localization, where results indicate accurate vehicle pose estimation for the majority of its journey. The final scenario evaluates the application of the proposed systems for SLAM, where results indicate successful operation for a long portion of the vehicle journey. Although the scope of this thesis is to operate in an outdoor park setting using tree trunks as landmarks, the developed techniques lend themselves to other environments using different natural objects as landmarks.
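
    The sketch below illustrates the SIFT-based tree recognition step with OpenCV: extract SIFT descriptors on two candidate trunk crops and count ratio-test matches. The file names, ratio threshold, and decision-by-count are assumptions for the example; the thesis' matching pipeline may differ.

```python
# A minimal sketch of SIFT-based landmark (tree trunk) re-recognition with
# OpenCV. `crop_a.png` and `crop_b.png` are hypothetical image crops of
# candidate trunk regions; the 0.75 ratio is a common default, not a tuned
# value from the thesis.
import cv2


def match_trunk_regions(path_a, path_b, ratio=0.75):
    """Count ratio-test SIFT matches between two candidate trunk crops."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    if img_a is None or img_b is None:
        raise FileNotFoundError("could not read one of the image crops")
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good


if __name__ == "__main__":
    # Two crops of the same trunk should yield many more good matches than
    # crops of different trees; a count threshold then decides recognition.
    print("good matches:", match_trunk_regions("crop_a.png", "crop_b.png"))
```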

    Camera Marker Networks for Pose Estimation and Scene Understanding in Construction Automation and Robotics.

    Full text link
    The construction industry faces challenges that include high workplace injuries and fatalities, stagnant productivity, and skill shortage. Automation and Robotics in Construction (ARC) has been proposed in the literature as a potential solution that makes machinery easier to collaborate with, facilitates better decision-making, or enables autonomous behavior. However, there are two primary technical challenges in ARC: 1) unstructured and featureless environments; and 2) differences between the as-designed and the as-built. It is therefore impossible to directly replicate conventional automation methods adopted in industries such as manufacturing on construction sites. In particular, two fundamental problems, pose estimation and scene understanding, must be addressed to realize the full potential of ARC. This dissertation proposes a pose estimation and scene understanding framework that addresses the identified research gaps by exploiting cameras, markers, and planar structures to mitigate the identified technical challenges. A fast plane extraction algorithm is developed for efficient modeling and understanding of built environments. A marker registration algorithm is designed for robust, accurate, cost-efficient, and rapidly reconfigurable pose estimation in unstructured and featureless environments. Camera marker networks are then established for unified and systematic design, estimation, and uncertainty analysis in larger scale applications. The proposed algorithms' efficiency has been validated through comprehensive experiments. Specifically, the speed, accuracy and robustness of the fast plane extraction and the marker registration have been demonstrated to be superior to existing state-of-the-art algorithms. These algorithms have also been implemented in two groups of ARC applications to demonstrate the proposed framework's effectiveness, wherein the applications themselves have significant social and economic value. The first group is related to in-situ robotic machinery, including an autonomous manipulator for assembling digital architecture designs on construction sites to help improve productivity and quality; and an intelligent guidance and monitoring system for articulated machinery such as excavators to help improve safety. The second group emphasizes human-machine interaction to make ARC more effective, including a mobile Building Information Modeling and way-finding platform with discrete location recognition to increase indoor facility management efficiency; and a 3D scanning and modeling solution for rapid and cost-efficient dimension checking and concise as-built modeling. PhD dissertation, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113481/1/cforrest_1.pd
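
    As a stand-in for the fast plane extraction step, the sketch below fits the dominant plane of a point cloud with plain RANSAC. The iteration count, inlier threshold, and synthetic data are illustrative assumptions; the dissertation's algorithm is specifically optimized for speed and is not reproduced here.

```python
# A minimal sketch of RANSAC plane extraction from a point cloud, illustrating
# the basic idea behind plane extraction for built environments.
# `points` is assumed to be an (N, 3) NumPy array.
import numpy as np


def ransac_plane(points, iterations=200, threshold=0.02, rng=None):
    """Return (normal, d, inlier_mask) for the dominant plane n.x + d = 0."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iterations):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal.dot(p0)
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    floor = np.column_stack([rng.uniform(-2, 2, 800),
                             rng.uniform(-2, 2, 800),
                             rng.normal(0.0, 0.005, 800)])   # z ~ 0 plane
    clutter = rng.uniform(-2, 2, size=(200, 3))
    normal, d, inliers = ransac_plane(np.vstack([floor, clutter]))
    print("plane normal:", np.round(normal, 2), "inliers:", int(inliers.sum()))
```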

    Real-Time UAV Altitude, Attitude and Motion Estimation from Hybrid Stereovision

    Get PDF
    Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods which avoid matching between hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating the scale of the translation. The motion can be estimated robustly at the correct scale thanks to the knowledge of the altitude. We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.
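
    The sketch below conveys the plane-sweeping idea under simplifying assumptions: two pinhole views with known relative rotation and translation, where each candidate altitude induces a ground-plane homography, and the altitude whose warp best matches the second view is kept. The hybrid fisheye/perspective geometry of the paper is not modeled; K, R, t, and the sweep range are made-up values.

```python
# A minimal sketch of plane-sweeping altitude estimation with two pinhole
# views (an assumption made for brevity; the paper uses a hybrid rig).
import cv2
import numpy as np


def ground_homography(K, R, t, altitude):
    """Homography induced by the ground plane z = altitude (camera-1 frame)."""
    n = np.array([0.0, 0.0, 1.0])
    return K @ (R + np.outer(t, n) / altitude) @ np.linalg.inv(K)


def sweep_altitude(img1, img2, K, R, t, candidates):
    """Return the candidate altitude whose warp best explains the second view."""
    best_alt, best_err = None, np.inf
    h_img, w_img = img1.shape
    for alt in candidates:
        H = ground_homography(K, R, t, alt)
        pred = cv2.warpPerspective(img1, H, (w_img, h_img))
        # Compare on a crop away from the warp border.
        err = np.mean(np.abs(pred[:, 100:].astype(float) -
                             img2[:, 100:].astype(float)))
        if err < best_err:
            best_alt, best_err = alt, err
    return best_alt


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ground = cv2.GaussianBlur(
        rng.integers(0, 255, size=(240, 320), dtype=np.uint8), (7, 7), 0)
    K = np.array([[300.0, 0.0, 160.0], [0.0, 300.0, 120.0], [0.0, 0.0, 1.0]])
    R, t, true_alt = np.eye(3), np.array([0.3, 0.0, 0.0]), 2.0
    view2 = cv2.warpPerspective(ground, ground_homography(K, R, t, true_alt),
                                (320, 240))
    print("estimated altitude:",
          sweep_altitude(ground, view2, K, R, t, np.linspace(1.0, 4.0, 31)))
```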

    BO-ICP: Initialization of Iterative Closest Point Based on Bayesian Optimization

    Full text link
    Typical algorithms for point cloud registration such as Iterative Closest Point (ICP) require a favorable initial transform estimate between two point clouds in order to perform a successful registration. State-of-the-art methods for choosing this starting condition rely on stochastic sampling or global optimization techniques such as branch and bound. In this work, we present a new method based on Bayesian optimization for finding the critical initial ICP transform. We provide three different configurations for our method which highlights the versatility of the algorithm to both find rapid results and refine them in situations where more runtime is available such as offline map building. Experiments are run on popular data sets and we show that our approach outperforms state-of-the-art methods when given similar computation time. Furthermore, it is compatible with other improvements to ICP, as it focuses solely on the selection of an initial transform, a starting point for all ICP-based methods. Comment: IEEE International Conference on Robotics and Automation 202
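
    As a rough sketch of the idea (not the authors' implementation), the code below uses Gaussian-process Bayesian optimization from scikit-optimize to search a reduced (x, y, z, yaw) pose space for a transform that minimizes a nearest-neighbor residual, which would then seed a standard ICP refinement. The search space, call budget, and 4-DOF parameterization are assumptions chosen for brevity.

```python
# A minimal sketch of Bayesian-optimization-based search for an initial ICP
# transform, in the spirit of the approach above. Assumes scikit-optimize
# (`skopt`) and SciPy are available; the 4-DOF search space is illustrative.
import numpy as np
from scipy.spatial import cKDTree
from skopt import gp_minimize


def make_cost(source, target):
    tree = cKDTree(target)

    def cost(params):
        x, y, z, yaw = params
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        moved = source @ R.T + np.array([x, y, z])
        dists, _ = tree.query(moved)
        return float(dists.mean())          # residual a subsequent ICP refines

    return cost


def bo_initial_transform(source, target, n_calls=40):
    """Bayesian optimization over the pose space to seed standard ICP."""
    space = [(-5.0, 5.0)] * 3 + [(-np.pi, np.pi)]
    result = gp_minimize(make_cost(source, target), space,
                         n_calls=n_calls, random_state=0)
    return result.x, result.fun


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.uniform(-3, 3, size=(400, 3))
    yaw = 0.6
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    source = (target - np.array([1.0, -0.5, 0.2])) @ R  # inverse of the model
    params, residual = bo_initial_transform(source, target)
    print("initial transform guess:", np.round(params, 2),
          "residual:", round(residual, 3))
```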