194 research outputs found

    High-Dimensional Motion Planning and Learning Under Uncertain Conditions

    Get PDF
    Many existing path planning methods do not adequately account for uncertainty. These techniques work well when uncertainty is absent, but in real-world environments they struggle due to inaccurate sensor models, arbitrarily moving obstacles, and uncertain action consequences. For example, picking up and storing children's toys is a simple task for humans. Yet, for a household robot the task can be daunting. The room must be modeled with sensors, which may or may not detect all the strewn toys. The robot must be able to detect and avoid the child, who may be moving the very toys that the robot is tasked with cleaning. Finally, if the robot missteps and places a foot on a toy, it must be able to compensate for the unexpected consequences of its actions. This example demonstrates that even simple human tasks are fraught with uncertainties that must be accounted for in robotic path planning algorithms. This work presents the first steps towards migrating sampling-based path planning methods to real-world environments by addressing three different types of uncertainty: (1) model uncertainty, (2) spatio-temporal obstacle uncertainty (moving obstacles), and (3) action consequence uncertainty. Uncertainty is encoded directly into path planning through a data structure in order to successfully and efficiently identify safe robot paths in sensed environments with noise. This encoding produces paths with clearance comparable to other planning methods known for high clearance, but at an order of magnitude less computational cost. It also shows that formal control theory methods combined with path planning yield a technique with a 95% collision-free navigation rate among 300 moving obstacles. Finally, it demonstrates that reinforcement learning can be combined with planning data structures to autonomously learn motion controls of a seven-degree-of-freedom robot at low computational cost despite the number of dimensions.
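    The abstract does not spell out its data structure, but the idea of encoding clearance (a proxy for sensing uncertainty) into planning costs can be illustrated with a minimal sketch: a roadmap whose edge weights mix path length with a penalty for passing near sensed obstacles, searched with Dijkstra. All names and parameters below (obstacle points, the penalty weight w, the connection radius) are illustrative assumptions, not the thesis' actual method.

```python
# Hypothetical sketch: clearance-aware roadmap search in 2D.
import heapq
import math
import random

def clearance(p, obstacles):
    """Distance from point p to the nearest sensed obstacle point."""
    return min(math.dist(p, o) for o in obstacles)

def edge_cost(a, b, obstacles, w=5.0):
    """Edge length plus a penalty that grows as the midpoint nears obstacles."""
    mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return math.dist(a, b) + w / (clearance(mid, obstacles) + 1e-6)

def build_roadmap(samples, obstacles, radius=0.3):
    """Connect sampled configurations that lie within a fixed radius."""
    graph = {s: [] for s in samples}
    for a in samples:
        for b in samples:
            if a != b and math.dist(a, b) < radius:
                graph[a].append((b, edge_cost(a, b, obstacles)))
    return graph

def dijkstra(graph, start, goal):
    """Lowest-cost path under the clearance-penalized edge costs."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return math.inf, []

obstacles = [(0.5, 0.5), (0.2, 0.8)]
samples = [(0.0, 0.0), (1.0, 1.0)] + [(random.random(), random.random()) for _ in range(200)]
graph = build_roadmap(samples, obstacles)
print(dijkstra(graph, (0.0, 0.0), (1.0, 1.0))[0])
```

    Because the penalty term dominates near obstacles, the search is steered toward high-clearance corridors without an explicit (and expensive) medial-axis computation, which is the trade-off the abstract alludes to.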

    Neural Network based Robot 3D Mapping and Navigation using Depth Image Camera

    Get PDF
    Robotics research has developed rapidly over the past decade. However, bringing robots into household or office environments, where they must cooperate well with humans, still requires more research. One of the main problems is robot localization and navigation: to accomplish its missions, a mobile robot needs to localize itself in the environment, find the best path, and navigate to the goal. Navigation methods can be categorized into map-based and map-less navigation. In this research we propose a method based on neural networks, using a depth image camera, to solve the robot navigation problem. With a depth image camera, the surrounding environment can be recognized regardless of lighting conditions, and a neural network-based approach is fast enough for real-time navigation, which is important for developing fully autonomous robots.

    In our method, the robot maps and annotates the surrounding environment using a feed-forward neural network and a CNN. The resulting 3D map contains not only the geometric information of the environment but also its semantic contents, which are important for robots to accomplish their tasks. For instance, consider the task "Go to the cabinet to take a medicine": the robot needs to know the positions of the cabinet and the medicine, which a purely geometric map does not supply. The feed-forward neural network is trained to convert the depth information from depth images into 3D points in real-world coordinates, while the CNN is trained to segment the image into classes. By combining the two networks, the objects in the environment are segmented and their positions determined.

    We implemented the proposed method on a mobile humanoid robot. Initially, the robot moves through the environment and builds the 3D map with objects placed at their positions; it then uses the developed 3D map for goal-directed navigation. The experimental results show good performance in terms of 3D map accuracy and robot navigation. Most objects in the working environments are classified by the trained CNN, and unrecognized objects are classified by the feed-forward neural network. As a result, the generated maps accurately reflect the working environments and can be used by robots to navigate them safely. The 3D geometric maps can be generated regardless of lighting conditions, and the proposed localization method is robust even in texture-less environments, which are the toughest environments for vision-based localization.
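    The core combination the abstract describes, per-pixel depth turned into 3D points and fused with per-pixel class labels, can be sketched compactly. Here the classic pinhole back-projection stands in for the role the thesis assigns to the feed-forward network, and random arrays stand in for a real depth frame and CNN output; the intrinsics (fx, fy, cx, cy) and the five-class labeling are assumptions for the demo, not values from the paper.

```python
# Minimal sketch: semantic point cloud from a depth image plus CNN labels.
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # assumed depth-camera intrinsics

def depth_to_points(depth):
    """Pinhole back-projection: depth image (H, W) in meters -> (H*W, 3) points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def semantic_cloud(depth, labels):
    """Pair every valid 3D point with the segmentation class of its pixel."""
    pts = depth_to_points(depth)
    lab = labels.reshape(-1)
    valid = pts[:, 2] > 0  # drop pixels with no depth reading
    return pts[valid], lab[valid]

# Toy inputs standing in for a real depth frame and a CNN's segmentation map.
depth = np.random.uniform(0.5, 4.0, size=(480, 640))
labels = np.random.randint(0, 5, size=(480, 640))  # e.g. 5 semantic classes
points, classes = semantic_cloud(depth, labels)
print(points.shape, classes.shape)
```

    Accumulating these labeled points across robot poses yields the kind of semantic 3D map the abstract describes, where each object carries both a position and a class usable for tasks like "go to the cabinet".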

    Autonomous Navigation for Unmanned Aerial Systems - Visual Perception and Motion Planning

    Get PDF
    The abstract is in the attachment.

    Motion Planning under Uncertainty for Autonomous Navigation of Mobile Robots and UAVs

    Get PDF
    This thesis presents a reliable and efficient motion planning approach based on state lattices for the autonomous navigation of mobile robots and UAVs. The approach retrieves paths that are optimal in terms of safety and traversal time, and it handles the kinematic constraints and the motion and sensing uncertainty at planning time. Efficiency is improved by a novel graduated-fidelity state lattice, which adapts to the obstacles in the map and the maneuverability of the robot, and by a new multi-resolution heuristic that reduces the computational complexity. The motion planner also includes a novel method to reliably estimate the probability of collision along a path, accounting for the uncertainty in heading and the robot dimensions.
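    The abstract does not give its collision-probability estimator, but one common way to account for heading uncertainty and robot dimensions is a Monte Carlo sweep: sample heading noise, drag the rectangular footprint along the path, and count the fraction of samples that touch an obstacle. The noise model, footprint size, and toy occupancy test below are assumptions for illustration, not the thesis' method.

```python
# Hedged sketch: Monte Carlo collision-probability estimate under heading noise.
import numpy as np

def footprint_corners(x, y, theta, length=0.6, width=0.4):
    """Corners of a rectangular robot footprint at pose (x, y, theta)."""
    local = np.array([[ length/2,  width/2], [ length/2, -width/2],
                      [-length/2, -width/2], [-length/2,  width/2]])
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return local @ rot.T + np.array([x, y])

def collision_probability(path, occupied, heading_sigma=0.1, n_samples=500):
    """Fraction of noisy executions in which any footprint corner hits an obstacle."""
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(n_samples):
        noise = rng.normal(0.0, heading_sigma)
        if any(occupied(corner) for (x, y, th) in path
               for corner in footprint_corners(x, y, th + noise)):
            hits += 1
    return hits / n_samples

# Toy occupancy test: a single circular obstacle near the path.
occupied = lambda p: np.hypot(p[0] - 1.0, p[1] - 0.2) < 0.15
path = [(x, 0.0, 0.0) for x in np.linspace(0.0, 2.0, 20)]
print(collision_probability(path, occupied))
```

    A planner can then discard candidate lattice paths whose estimated collision probability exceeds a safety threshold, which is the role such an estimate plays at planning time.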

    A Comprehensive Review on Autonomous Navigation

    Full text link
    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the field of autonomous navigation evolves quickly, so writing survey papers regularly is crucial to keep the research community aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation, so the role of deep learning in autonomous navigation deserves the dedicated treatment it receives in this paper. Future work and research gaps are also discussed.

    Appearance and Geometry Assisted Visual Navigation in Urban Areas

    Get PDF
    Navigation is a fundamental task for mobile robots in applications such as exploration, surveillance, and search and rescue. The task involves solving the simultaneous localization and mapping (SLAM) problem, in which a map of the environment is constructed. For this map to be useful in a given application, a suitable scene representation needs to be defined that allows spatial information sharing between robots, and also between humans and robots. High-level scene representations have the benefit of being more robust and easier to exchange and interpret. Aiming at higher-level scene representation, in this work we explore high-level landmarks and their use, combining geometric and appearance information to assist mobile robot navigation in urban areas.

    In visual SLAM, image registration is a key problem. While feature-based methods such as scale-invariant feature transform (SIFT) matching are popular, they do not utilize appearance information as a whole and suffer from low-resolution images. We study appearance-based methods and propose a scale-space integrated Lucas-Kanade method that estimates geometric transformations while taking image appearance at different resolutions into account. We compare our method against state-of-the-art methods and show that it can register images efficiently with high accuracy.

    In urban areas, planar building facades (PBFs) are basic components of the quasi-rectilinear environment, so segmentation and mapping of PBFs can increase a robot's abilities of scene understanding and localization. We propose a vision-based PBF segmentation and mapping technique that combines appearance and geometric constraints to segment out planar regions. Geometric constraints such as reprojection errors, orientation constraints, and coplanarity constraints are then used in an optimization process to improve the mapping of PBFs.

    A major issue in monocular visual SLAM is scale drift. While depth sensors, such as lidar, are free from scale drift, this type of sensor is usually more expensive than a camera. To enable low-cost mobile robots equipped with monocular cameras to obtain accurate position information, we use a 2D lidar map to rectify imprecise visual SLAM results using planar structures, and we propose a two-step optimization approach assisted by a penalty function to improve on low-quality local minima.

    Robot paths for navigation can be either automatically generated by a motion planning algorithm or provided by a human. In both cases, a scene representation of the environment, i.e., a map, is useful for specifying meaningful tasks for the robot. However, SLAM usually produces a sparse scene representation consisting of low-level landmarks, such as point clouds, which are neither convenient nor intuitive for task specification. We present a system that allows users to program mobile robots using high-level landmarks derived from appearance data.
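    A close off-the-shelf relative of the scale-space Lucas-Kanade idea is OpenCV's pyramidal LK tracker, which tracks features coarse-to-fine across image resolutions; a geometric transformation can then be fit to the tracked pairs. The sketch below uses synthetic images and a known shift to make the demo self-contained; it illustrates the pyramid-plus-transform pattern, not the thesis' scale-space-integrated formulation.

```python
# Sketch: pyramidal Lucas-Kanade tracking followed by a similarity-transform fit.
import cv2
import numpy as np

rng = np.random.default_rng(1)
img0 = rng.uniform(0, 255, (240, 320)).astype(np.uint8)
img0 = cv2.GaussianBlur(img0, (7, 7), 2)        # give LK some smooth texture
M = np.float32([[1, 0, 3.0], [0, 1, -2.0]])     # known shift for the demo
img1 = cv2.warpAffine(img0, M, (320, 240))

# Track good features with pyramidal (coarse-to-fine) Lucas-Kanade.
p0 = cv2.goodFeaturesToTrack(img0, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None,
                                         winSize=(21, 21), maxLevel=3)

# Fit a similarity transform (rotation + scale + translation) to the matches.
good0, good1 = p0[status.ravel() == 1], p1[status.ravel() == 1]
T, _ = cv2.estimateAffinePartial2D(good0, good1)
print(T)  # should roughly recover the (3, -2) pixel shift
```

    Working over the pyramid is what lets this family of methods handle low-resolution appearance that defeats pure descriptor matching, which is the motivation the abstract gives for moving beyond SIFT.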