
    Learning to Fly by Crashing

    How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts; however, high-capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation, but the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself! We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data, in conjunction with positive data sampled from the same trajectories, to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective in navigating the UAV even in extremely cluttered environments with dynamic obstacles, including humans. For a supplementary video see: https://youtu.be/u151hJaGKU
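    The self-supervision described above lends itself to a simple binary-classification setup. Below is a minimal sketch of that idea, assuming a generic CNN backbone: frames sampled far from a crash are labeled positive ("safe"), frames near the moment of impact are labeled negative, and a classifier learns to predict whether flying toward the current view is safe. The class names, backbone choice, and training step are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the crash-dataset idea: frames far from a crash are
# labeled "safe" (1), frames near the crash are labeled "unsafe" (0), and a
# binary CNN learns flyability. Backbone and names are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class CrashAvoidanceNet(nn.Module):
    """Binary classifier: is it safe to keep flying toward this view?"""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any small CNN works here
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)  # logit; sigmoid > 0.5 means "safe"

def train_step(model, optimizer, frames, labels):
    """frames: (B, 3, H, W) images; labels: 1 = far from crash, 0 = near crash."""
    logits = model(frames).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

    At flight time, the paper's policy steers by comparing such predictions over crops of the image (left, center, right) and moving toward the most confidently safe region.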

    An expanded square pattern technique in swarm of quadcopters for exploration algorithm

    The exploration algorithm plays one of the most important roles in a searching mechanism. In robotics, an exploration algorithm governs how a robot enlarges its information about a particular environment; in other words, it is implemented so that the robot can survey the situation or condition of a specific area. A variety of techniques have been developed, and even biological systems have become an inspiration to be reckoned with. In this paper, we propose a swarm-based exploration algorithm with an expanded square pattern, in which quadcopters explore an unknown area. In this algorithm, the expanded square pattern is conducted as a series of increasing distances around a fixed reference point. We simulate the swarm-based exploration algorithm with the expanded square pattern in the V-REP simulator. The existing exploration algorithms that have been identified are also simulated for comparison with the proposed algorithm. To analyse and evaluate the performance of all algorithms, the simulation data are documented. Several comparisons are conducted, including the performance of all algorithms, the performance of a group of quadcopters, the covered space, and the cooperation among groups.
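    As a concrete illustration of the pattern itself, the sketch below generates waypoints for a single quadcopter flying an expanding square around a fixed reference point: the leg length grows by one step every two legs. The function name, step size, and axis conventions are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of an expanded-square waypoint generator around a fixed
# reference point. Step size and axis order are illustrative assumptions.
def expanded_square_waypoints(origin, step, n_legs):
    """Yield (x, y) waypoints spiraling outward in an expanding square.

    origin: (x, y) fixed reference point the pattern expands around.
    step:   distance increment added to the leg length every two legs.
    n_legs: number of straight legs to generate.
    """
    x, y = origin
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # E, N, W, S
    leg_length = step
    for leg in range(n_legs):
        dx, dy = directions[leg % 4]
        x += dx * leg_length
        y += dy * leg_length
        yield (x, y)
        if leg % 2 == 1:          # lengthen the leg after every second turn
            leg_length += step

# Example: first legs of a 5 m expanding square around the origin.
for wp in expanded_square_waypoints((0.0, 0.0), 5.0, 6):
    print(wp)
```

    For a swarm, each quadcopter would fly this pattern around its own reference point, so the group covers disjoint regions while a coordination layer shares what has been explored.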

    J-MOD²: Joint Monocular Obstacle Detection and Depth Estimation

    In this work, we propose an end-to-end deep architecture that jointly learns to detect obstacles and estimate their depth for MAV flight applications. Most existing approaches rely either on Visual SLAM systems or on depth estimation models to build 3D maps and detect obstacles; however, for the task of avoiding obstacles this level of complexity is not required. Recent works have proposed multi-task architectures that perform both scene understanding and depth estimation. We follow this track and propose a specific architecture to jointly estimate depth and detect obstacles, without the need to compute a global map, while maintaining compatibility with a global SLAM system if needed. The network architecture is devised to exploit the joint information of the obstacle detection task, which produces more reliable bounding boxes, and the depth estimation task, increasing the robustness of both to scenario changes. We call this architecture J-MOD². We test the effectiveness of our approach in experiments on sequences with different appearances and focal lengths, and compare it to state-of-the-art (SotA) multi-task methods that jointly perform semantic segmentation and depth estimation. In addition, we show the integration in a full system using a set of simulated navigation experiments in which a MAV explores an unknown scenario and plans safe trajectories by using our detection model.
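    The joint architecture can be pictured as one shared encoder feeding two task heads. The sketch below shows that structure in PyTorch, with a dense depth-regression head and a coarse per-cell obstacle head trained with a summed loss; all layer sizes, names, and the stand-in detection loss are illustrative assumptions, not the actual J-MOD² network.

```python
# A minimal sketch of a joint depth + obstacle-detection network: a shared
# encoder feeds a dense depth head and a per-cell detection head. Sizes and
# losses here are placeholders, not the J-MOD² architecture itself.
import torch
import torch.nn as nn

class JointDepthObstacleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # shared features, 1/8 resolution
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Sequential(       # dense depth regression
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )
        # per-cell obstacle outputs: objectness + box (x, y, w, h)
        self.detect_head = nn.Conv2d(128, 5, 1)

    def forward(self, x):
        f = self.encoder(x)
        return self.depth_head(f), self.detect_head(f)

def joint_loss(depth_pred, depth_gt, det_pred, det_gt, w_det=1.0):
    """Sum of a depth regression loss and a detection loss, trained jointly."""
    depth_loss = nn.functional.l1_loss(depth_pred, depth_gt)
    det_loss = nn.functional.mse_loss(det_pred, det_gt)  # stand-in for a detector loss
    return depth_loss + w_det * det_loss
```

    Training both heads against one shared encoder is what lets each task regularize the other, which is the robustness argument the abstract makes.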

    A Simulation-Based Study of Maze-Solving-Robot Navigation for Educational Purposes

    The point of education in the early stage of studying robotics is to understand its basic principles joyfully. This paper therefore presents a simulation program for indoor navigation, written in open-source Python code, to make navigation and control algorithms easier and more attractive to understand and develop. We propose the maze-solving-robot simulation as a teaching medium in class to help students imagine and connect robot theory to actual robot movement. The simulation code is free to learn from, improve, and extend in robotics courses or assignments. A maze-solving-robot case study is then carried out as an example of implementing navigation algorithms. Five algorithms are compared: Random Mouse, Wall Follower, Pledge, Trémaux, and Dead-End Filling. Each algorithm is simulated a hundred times in every type of the proposed mazes, namely mazes with dead ends only, mazes with loops only, and mazes with both. The observed indicators are the success rate of the robots reaching the finish line and the number of steps taken. The simulation results show that each algorithm has different characteristics that should be considered before it is chosen; a recommendation of when to use each algorithm is discussed in this paper as an example of analysing the simulation output for studying robotics.
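    To give a flavour of the compared strategies, the sketch below implements one of them, the right-hand Wall Follower, on a simple grid maze: at each cell the robot tries to turn right, then go straight, then left, then turn back. The maze encoding and function names are assumptions for illustration, not the paper's simulator code.

```python
# A minimal sketch of the right-hand Wall Follower on a grid maze.
# maze[r][c] == 1 means wall; the robot keeps its right hand on the wall.
def wall_follower(maze, start, goal, max_steps=10_000):
    headings = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W
    r, c = start
    h = 1  # start facing East
    for step in range(max_steps):
        if (r, c) == goal:
            return step  # number of steps taken to reach the goal
        # try right, straight, left, then back, in that order
        for turn in (1, 0, -1, 2):
            nh = (h + turn) % 4
            dr, dc = headings[nh]
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(maze) and 0 <= nc < len(maze[0]) and maze[nr][nc] == 0:
                r, c, h = nr, nc, nh
                break
    return None  # step budget exhausted without reaching the goal

maze = [[0, 0, 1],
        [1, 0, 0],
        [1, 1, 0]]
print(wall_follower(maze, start=(0, 0), goal=(2, 2)))  # -> 4 steps
```

    A pure wall follower can circle forever in mazes containing loops, which is one reason the compared algorithms behave so differently across the three maze types studied in the paper.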

    Stereo vision-based obstacle avoidance for micro air vehicles using an egocylindrical image space representation

    Micro air vehicles that operate autonomously at low altitude in cluttered environments require a method for onboard obstacle avoidance to ensure safe operation. Previous methods deploy either purely reactive approaches, mapping low-level visual features directly to actuator inputs to maneuver the vehicle around an obstacle, or deliberative methods that use on-board 3D sensors to create a voxel-based 3D world model, which is then used to generate collision-free 3D trajectories. In this paper, we use forward-looking stereo vision with a large horizontal and vertical field of view and project range from stereo into a novel robot-centered, cylindrical, inverse-range map we call an egocylinder. With this implementation we reduce the complexity of our world representation from a 3D map to a 2.5D image-space representation, which supports very efficient motion planning and collision checking, and allows configuration-space expansion to be implemented as an image-processing function directly on the egocylinder. Deploying a fast reactive motion planner directly on the configuration-space-expanded egocylinder image, we demonstrate the effectiveness of this new approach experimentally in an indoor environment.
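    The core data structure can be sketched as follows: 3D points from stereo are converted to cylindrical coordinates (azimuth around the cylinder, height along it) and binned into an image that stores inverse range, so nearer obstacles dominate each cell. Resolutions, axis conventions, and names below are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of binning stereo points into a cylindrical inverse-range
# ("egocylinder"-style) image. Conventions here are assumptions.
import numpy as np

def project_to_egocylinder(points, n_az=360, n_v=90, t_range=(-1.0, 1.0)):
    """points: (N, 3) array of (x, y, z) in the robot frame, x forward, z up.

    Returns an (n_v, n_az) image where each cell keeps the largest inverse
    range (i.e. the nearest obstacle) that projects into it; 0 = free space.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                            # angle around the cylinder
    t = z / np.maximum(np.hypot(x, y), 1e-6)         # height on the unit cylinder
    u = ((az + np.pi) / (2 * np.pi) * n_az).astype(int) % n_az
    v = np.clip(((t - t_range[0]) / (t_range[1] - t_range[0]) * n_v).astype(int),
                0, n_v - 1)
    img = np.zeros((n_v, n_az))
    np.maximum.at(img, (v, u), 1.0 / np.maximum(rng, 1e-6))
    return img
```

    On such a 2.5D image, the configuration-space expansion mentioned in the abstract becomes an image operation, for example growing each obstacle cell by the vehicle's angular extent at that cell's range.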