
    Motion Planning

    Motion planning is a fundamental function in robotics and numerous intelligent machines. The global concept of planning involves multiple capabilities, such as path generation, dynamic planning, optimization, tracking, and control. This book organizes different planning topics into three general perspectives, classified by the type of robotic application. The chapters are a selection of recent developments in a) planning and tracking methods for unmanned aerial vehicles, b) heuristically based methods for navigation planning and route optimization, and c) control techniques developed for path planning of autonomous wheeled platforms.

    Robot Mapping and Navigation in Real-World Environments

    Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to completing their mission. Essential capabilities required for the operation of such robots are mapping, localization and navigation. Solving all of these tasks robustly is substantially difficult because the components are usually interconnected: a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze the surroundings and plan a path to efficiently explore an unknown environment. These tasks also depend heavily on the sensors used by the robot and on the type of environment in which the robot operates. For example, an RGB camera can be used in an outdoor scene for computing visual odometry or detecting dynamic objects, but it becomes less useful in an environment that does not have enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from these different sensors, which often leads to systems that are tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that work seamlessly with different types of sensors, in both indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that operates robustly in indoor and outdoor environments equally well and thus extends the application areas of mobile robots. The techniques presented in this thesis are designed to be used with both RGBD and LiDAR sensors, without adaptation to individual sensor models, by using a range image representation, and to provide methods for navigation and scene interpretation in both static and dynamic environments.

    For a static world, we present a number of approaches that address the core components of a typical robot navigation pipeline. At the core of building a consistent map of the environment with a mobile robot lies point cloud matching. To this end, we present a method for photometric point cloud matching that treats RGBD and LiDAR sensors in a uniform fashion and is able to accurately register point clouds at the frame rate of the sensor. This method serves as a building block for the rest of the mapping pipeline. In addition to the matching algorithm, we present a method for traversability analysis of the currently observed terrain in order to guide an autonomous robot to the safe parts of the surrounding environment. A source of danger when navigating difficult-to-access sites is that the robot may fail to build a correct map of the environment. This dramatically impacts the ability of an autonomous robot to navigate towards its goal in a robust way, so it is important for the robot to detect these situations and find its way home without relying on any kind of map. To address this challenge, we present a method for analyzing the quality of the map that the robot has built to date and for safely returning the robot to its starting point if the map is found to be in an inconsistent state.

    Scenes in dynamic environments are vastly different from those experienced in static ones. In a dynamic setting, objects can move, so static traversability estimates are no longer enough. With the approaches developed in this thesis, we aim to identify distinct objects and track them to aid navigation and scene understanding. We target these challenges by providing a method for clustering a scene taken with a LiDAR scanner, along with a similarity measure for clustered objects that aids tracking performance. All methods presented in this thesis are capable of supporting real-time robot operation, rely on RGBD or LiDAR sensors, and have been tested on real robots in real-world environments and on real-world datasets. All approaches have been published in peer-reviewed conference papers and journal articles, and most of the presented contributions have been released publicly as open source software.
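
    The range image representation that lets such a pipeline treat RGBD and LiDAR data uniformly is straightforward to illustrate. The following is a minimal sketch, not the thesis code: it projects an (N, 3) point cloud into a spherical range image, with the image size and the vertical field of view chosen as plausible values for a generic spinning LiDAR.

```python
import numpy as np

def point_cloud_to_range_image(points, h=64, w=870,
                               fov_up=np.radians(15.0),
                               fov_down=np.radians(-15.0)):
    """Project an (N, 3) point cloud into an h x w spherical range image.

    Each pixel stores the range (in metres) of the point that falls into
    it; pixels with no point are left at -1."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                 # range per point
    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

    # Map azimuth to columns and elevation to rows.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    image = np.full((h, w), -1.0, dtype=np.float32)
    order = np.argsort(-r)                 # write farthest points first ...
    image[v[order], u[order]] = r[order]   # ... so closer points overwrite
    return image

# A random cloud stands in for one sensor sweep.
cloud = np.random.uniform(-10.0, 10.0, size=(1000, 3))
print(point_cloud_to_range_image(cloud).shape)  # (64, 870)
```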

    Learning cognitive maps: Finding useful structure in an uncertain world

    In this chapter we will describe the central mechanisms that influence how people learn about large-scale space. We will focus particularly on how these mechanisms enable people to cope effectively with both the uncertainty inherent in a constantly changing world and the high information content of natural environments. The major lessons are that humans get by with a 'less is more' approach to building structure, and that they are able to adapt quickly to environmental changes thanks to a range of general-purpose mechanisms. By looking at abstract principles, instead of concrete implementation details, it is shown that the study of human learning can provide valuable lessons for robotics. Finally, these issues are discussed in the context of an implementation on a mobile robot. © 2007 Springer-Verlag Berlin Heidelberg

    Sampling-Based Exploration Strategies for Mobile Robot Autonomy

    A novel, sampling-based exploration strategy is introduced for Unmanned Ground Vehicles (UGVs) to efficiently map large, GPS-deprived underground environments. It is compared to state-of-the-art approaches and performs at a similar level, while not being designed for a specific robot or sensor configuration like the other approaches. The introduced exploration strategy, called Random-Sampling-Based Next-Best View Exploration (RNE), uses a Rapidly-exploring Random Graph (RRG) to find possible viewpoints in an area around the robot. These viewpoints are compared using computation-efficient Sparse Ray Polling (SRP) in a voxel grid to find the next-best view for the exploration. Each node in the exploration graph built with the RRG is evaluated with regard to the UGV's ability to traverse it, which is derived from an occupancy grid map. The map is also used to create a topology-based graph whose nodes are placed centrally to reduce the risk of collisions and increase the amount of observable space. Nodes that fall outside the local exploration area are stored in a global graph and connected with a Traveling Salesman Problem solver so they can be explored later.
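
    The core of picking a next-best view by polling sparse rays can be sketched as follows. This is an illustrative reconstruction, not the RNE implementation: it assumes a 2D occupancy grid with free/occupied/unknown cells, and the ray count, range, and function names are chosen for the example.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def information_gain(grid, view, n_rays=32, max_range=20.0, step=0.5):
    """Poll sparse rays from a candidate viewpoint through a 2D voxel
    grid and count the distinct unknown cells they would reveal.
    Rays stop at occupied cells, the map border, or max_range."""
    revealed = set()
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        direction = np.array([np.cos(theta), np.sin(theta)])
        for d in np.arange(step, max_range, step):
            i, j = np.floor(view + d * direction).astype(int)
            if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]):
                break
            if grid[i, j] == OCCUPIED:
                break                     # ray blocked by an obstacle
            if grid[i, j] == UNKNOWN:
                revealed.add((i, j))      # a new cell this view would observe
    return len(revealed)

def next_best_view(grid, candidates):
    """Pick the sampled viewpoint with the highest expected gain."""
    return max(candidates,
               key=lambda v: information_gain(grid, np.asarray(v, float)))

# Example: a mostly unknown map with one explored free pocket.
grid = np.full((100, 100), UNKNOWN)
grid[40:60, 40:60] = FREE
print(next_best_view(grid, [(50.0, 50.0), (45.0, 55.0), (58.0, 58.0)]))
```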

    Using an insect mushroom body circuit to encode route memory in complex natural environments

    Ants, like many other animals, use visual memory to follow extended routes through complex environments, but it is unknown how their small brains implement this capability. The mushroom body neuropils have been identified as a crucial memory circuit in the insect brain, but their function has mostly been explored for simple olfactory association tasks. We show that a spiking neural model of this circuit, originally developed to describe fruit fly (Drosophila melanogaster) olfactory association, can also account for the ability of desert ants (Cataglyphis velox) to rapidly learn visual routes through complex natural environments. We further demonstrate that abstracting the key computational principles of this circuit, which include one-shot learning of sparse codes, enables the theoretical storage capacity of the ant mushroom body to be estimated at hundreds of independent images.
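
    As a rough illustration of those computational principles, one-shot learning of sparse codes for familiarity discrimination can be sketched as a rate-based toy model. This is an abstraction, not the spiking model from the paper: the layer sizes, connectivity density, and sparseness level are assumed values.

```python
import numpy as np

class MushroomBodyMemory:
    """Toy mushroom-body-style familiarity memory: a fixed random
    projection onto a large Kenyon-cell layer, k-winner-take-all
    sparsification, and one-shot depression of the output synapses
    of active cells."""

    def __init__(self, n_input=360, n_kc=20000, k=200, seed=0):
        rng = np.random.default_rng(seed)
        # Sparse, fixed input -> KC connectivity.
        self.proj = (rng.random((n_kc, n_input)) < 0.05).astype(float)
        self.w_out = np.ones(n_kc)  # KC -> output weights, depressed by learning
        self.k = k

    def _sparse_code(self, view):
        act = self.proj @ view
        code = np.zeros_like(act)
        code[np.argpartition(act, -self.k)[-self.k:]] = 1.0  # k winners fire
        return code

    def learn(self, view):
        """One-shot learning: silence the output synapses of active KCs."""
        self.w_out[self._sparse_code(view) > 0] = 0.0

    def familiarity(self, view):
        """Low output = familiar (most of its active KCs were depressed)."""
        return float(self.w_out @ self._sparse_code(view))

mb = MushroomBodyMemory()
rng = np.random.default_rng(1)
stored = rng.random(360)
mb.learn(stored)
print(mb.familiarity(stored), mb.familiarity(rng.random(360)))  # low vs. high
```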

    Airborne Navigation by Fusing Inertial and Camera Data

    Unmanned aircraft systems (UASs) are often used as measuring systems, and therefore precise knowledge of their position and orientation is required. This thesis provides research into the conception and realization of a system that combines GPS-assisted inertial navigation systems with advances in the area of camera-based navigation. It is shown how these complementary approaches can be used in a joint framework. In contrast to widely used concepts utilizing only one of the two approaches, a more robust overall system is realized. The presented algorithms are based on the mathematical concepts of rigid body motions. After derivation of the underlying equations, the methods are evaluated in numerical studies and simulations. Based on the results, real-world systems are used to collect data, which is evaluated and discussed. Two approaches for the system calibration, which describes the offsets between the coordinate systems of the sensors, are proposed. The first approach integrates the parameters of the system calibration into the classical bundle adjustment. The optimization is presented very descriptively in a graph-based formulation; it requires a high-precision INS and data from a measurement flight. In contrast to classical methods, a flexible flight course can be used and no cost-intensive ground control points are required. The second approach enables the calibration of inertial navigation systems with a low positional accuracy. Line observations are used to optimize the rotational part of the offsets. Knowledge of the offsets between the coordinate systems of the sensors allows measurements to be transformed bidirectionally. This is the basis for a fusion concept that combines measurements from the inertial navigation system with an approach for visual navigation. As a result, more robust estimates of the vehicle's own position and orientation are achieved. Moreover, the map created from the camera images is georeferenced. It is shown how this map can be used to navigate an unmanned aerial system back to its starting position in the case of disturbed or failed GPS reception. The high precision of the map allows navigation through previously unexplored areas by taking into consideration the maximal drift of the camera-only navigation. The evaluated concept provides insight into the possibility of robust navigation of unmanned aerial systems with complementary sensors. Constantly increasing computing power allows the evaluation of large amounts of data and the development of new concepts for fusing the information. Future navigation systems will use the data of all available sensors to achieve the best navigation solution at any time.
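
    The role of the system calibration can be illustrated with a small example based on rigid body motions. The sketch below, with assumed numbers throughout, composes an INS pose (world from body) with a calibrated body-from-camera offset so that a camera-frame measurement can be expressed in the world frame.

```python
import numpy as np

def se3(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation matrix
    and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def rot_z(angle):
    """Rotation about the z axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# World <- body pose reported by the INS at one epoch (assumed values).
T_world_body = se3(rot_z(np.radians(30.0)), np.array([100.0, 50.0, -120.0]))

# Body <- camera offset: the system calibration estimated offline.
T_body_cam = se3(rot_z(np.radians(-1.5)), np.array([0.2, 0.0, 0.1]))

# Composing the transforms lets any camera-frame measurement be
# expressed in the world frame (and, via inversion, the other way).
T_world_cam = T_world_body @ T_body_cam

landmark_cam = np.array([2.0, 0.5, 10.0, 1.0])   # homogeneous point
print((T_world_cam @ landmark_cam)[:3])          # landmark in world frame
```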

    How is an ant navigation algorithm affected by visual parameters and ego-motion?

    Ants typically use path integration and vision for navigation when the environment precludes the use of pheromone trails. Recent simulations have been able to accurately mimic the retinotopic navigation behaviour of these ants using simple models of movement and memory of unprocessed visual images. Naturally, it is interesting to test these navigation algorithms in more realistic circumstances: with actual route data from the ant, in an accurate facsimile of the ant world, and with visual input that draws on the characteristics of the animal. While increasing the complexity of the visual processing to include skyline extraction, inhomogeneous sampling and motion processing was conjectured to improve the performance of the simulations, the reverse appears to be the case. Examining the assumptions about motion more closely, analysis of ants in the field shows that they experience considerable displacement of the head, which, when applied to the simulation, leads to significant degradation in performance. This family of simulations relies upon continuous visual monitoring of the scene to determine heading, so we tested whether the animals are similarly dependent on this input. A field study demonstrated that ants with only visual navigation cues can return to the nest while largely facing away from the direction of travel (moving backwards), so it appears that ant visual navigation is not a process of continuous retinotopic image matching. We conclude that ants may use vision to determine an initial heading by image matching and then continue to follow this direction using their celestial compass, or they may use a rotationally invariant form of the visual world for continuous course correction.
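
    The retinotopic image matching underlying this family of simulations can be sketched compactly. The toy example below, with an assumed one-column-per-degree panorama, recovers a heading by rolling the current view against a stored snapshot and picking the rotation with the smallest image difference.

```python
import numpy as np

def recover_heading(current, memory):
    """Rotational image difference: roll the current panoramic view
    through all azimuths and return the shift (in degrees, assuming
    one column per degree) that best matches the stored snapshot."""
    diffs = [np.sqrt(np.mean((np.roll(current, shift) - memory) ** 2))
             for shift in range(len(current))]
    return int(np.argmin(diffs))

# A synthetic 360-column panorama stands in for an ant's eye view.
rng = np.random.default_rng(0)
memory = rng.random(360)
current = np.roll(memory, -40) + 0.05 * rng.standard_normal(360)  # turned 40 deg
print(recover_heading(current, memory))  # ~40: rotate back to the stored heading
```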

    Exploration, navigation and localization for mobile robots.

    The main goal of this thesis is the advancement of the state of the art in mobile robot autonomy. To achieve this objective, several contributions are presented that tackle well-defined problems in the areas of localization, navigation and exploration. The first contribution is focused on the task of robustly localizing a mobile robot in an outdoor environment. Specifically, the presented technique introduces a key methodology for fusing a global localization sensor as ubiquitous as a GPS device into a particle-filter-based Monte Carlo localization system. We focus on the management of multiple sensor data sources under noisy and conflicting readings. This strategy reduces the uncertainty in the robot pose estimate and improves the robustness of the system. The second contribution presents a completely integrated navigation system, running on a constrained and highly dynamic platform such as a quadrotor, applied to full 3D environments. The navigation stack comprises a Simultaneous Localization and Mapping (SLAM) system for RGB-D cameras that provides both the robot pose and an obstacle map of the environment, as well as a 4D path planner capable of finding obstacle-free and kinematically feasible trajectories for the quadrotor to navigate this environment. The third contribution introduces a novel approach for autonomous exploration of unknown environments with robust homing. We present a technique to predict possible environment structures in the unseen parts of the robot's surroundings based on previously explored environments. We exploit this belief to predict possible loop closures that the robot may experience when exploring an unknown part of the scene, which allows the robot to actively reduce the uncertainty in its belief through its exploration actions. We also introduce a robust homing system that addresses the problem of returning a robot operating in an unknown environment to its starting position even if the underlying SLAM system fails. All contributions were designed, implemented and tested on real autonomous robots: a self-driving car, a micro aerial vehicle and an underground exploration platform.
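
    The GPS fusion step inside a particle-filter-based Monte Carlo localization system can be sketched as follows. This is a generic illustration, not the thesis implementation: the noise level and the systematic resampling scheme are standard textbook choices assumed for the example.

```python
import numpy as np

def gps_update(particles, weights, gps_xy, sigma=3.0):
    """Measurement update for a global position fix: re-weight each
    particle by the Gaussian likelihood of the GPS reading given the
    particle's (x, y) position. sigma is an assumed GPS noise level
    in metres."""
    d2 = np.sum((particles[:, :2] - gps_xy) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / sigma**2)
    return weights / np.sum(weights)          # normalise

def resample(particles, weights):
    """Systematic resampling: draw particles in proportion to weight."""
    n = len(particles)
    positions = (np.arange(n) + np.random.random()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)              # guard against round-off
    return particles[idx], np.full(n, 1.0 / n)

# 500 particles spread over a 100 m x 100 m area, state = (x, y, heading).
particles = np.random.uniform([0, 0, -np.pi], [100, 100, np.pi], (500, 3))
weights = np.full(500, 1.0 / 500)
weights = gps_update(particles, weights, gps_xy=np.array([42.0, 17.0]))
particles, weights = resample(particles, weights)
print(particles[:, :2].mean(axis=0))          # concentrates near the fix
```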

    Underwater Vehicles

    Over the last twenty to thirty years, a significant number of AUVs have been created to solve a wide spectrum of scientific and applied tasks in ocean development and research. In this short period, AUVs have demonstrated their efficiency in complex search and inspection work and have opened a number of important new applications. Initially, the available information about AUVs was mainly of a review and advertising character, but more attention is now paid to practical achievements, problems and systems technologies. AUVs are losing their prototype status and have become a fully operational, reliable and effective tool; modern multi-purpose AUVs represent a new class of underwater robotic objects with their own tasks and practical applications, particular features of technology, systems structure and functional properties.