322 research outputs found

    Vision based obstacle detection for all-terrain robots

    Get PDF
    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Master in Electrical and Computer Engineering. This dissertation presents a solution to the problem of obstacle detection in all-terrain environments, with particular focus on mobile robots equipped with a stereo vision sensor. Despite the advantages of vision over other kinds of sensors, such as low cost, light weight, and a reduced energy footprint, its use still presents a series of challenges. These include the difficulty of dealing with the considerable amount of data generated and the robustness required to cope with high levels of noise. Such problems can be mitigated by making strong assumptions, for example that the terrain in front of the robot is planar. Although this saves considerable computation, such simplifications are not necessarily acceptable in more complex environments, where the terrain may be considerably uneven. This dissertation proposes to extend a well-known obstacle detector that relaxes the aforementioned planar-terrain assumption, rendering it more adequate for unstructured environments. The proposed extensions involve: (1) the introduction of a visual saliency mechanism to focus detection on the regions most likely to contain obstacles; (2) voting filters to reduce sensitivity to noise; and (3) the fusion of the detector with a complementary method to create a hybrid, and thus more robust, solution. Experimental results obtained with demanding all-terrain images show that, with the proposed extensions, an improvement in robustness and computational efficiency over the original algorithm is observed.
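
    The abstract does not detail how the voting filters are implemented; the sketch below is one plausible interpretation, in which a per-pixel obstacle mask (for example, the output of the stereo detector) is kept only where enough neighbouring pixels also voted "obstacle". The window size and vote threshold are illustrative parameters, not values from the dissertation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def voting_filter(obstacle_mask: np.ndarray, window: int = 5, min_votes: int = 13) -> np.ndarray:
    """Suppress isolated detections: a pixel stays flagged as an obstacle only
    if at least `min_votes` pixels in its window x window neighbourhood were
    also flagged. Parameters are illustrative, not taken from the dissertation."""
    votes = uniform_filter(obstacle_mask.astype(np.float32), size=window) * window * window
    return (obstacle_mask > 0) & (votes >= min_votes)

# Example: an isolated false positive is removed, a dense obstacle blob survives.
mask = np.zeros((50, 50), dtype=bool)
mask[10, 10] = True            # isolated spurious detection
mask[30:36, 30:36] = True      # genuine obstacle region
filtered = voting_filter(mask)
assert not filtered[10, 10] and filtered[32, 32]
```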

    Systems for Safety and Autonomous Behavior in Cars: The DARPA Grand Challenge Experience

    Get PDF

    Reactive, Safe Navigation for Lunar and Planetary Robots

    Get PDF
    When humans return to the Moon, astronauts will be accompanied by robotic helpers. Enabling robots to operate safely near astronauts on the lunar surface has the potential to significantly improve the efficiency of crew surface operations. Safely operating robots in close proximity to astronauts on the lunar surface requires reactive obstacle avoidance capabilities not available on existing planetary robots. In this paper we present work on safe, reactive navigation using a stereo-based, high-speed terrain analysis and obstacle avoidance system. Advances in the design of the algorithms allow terrain analysis and obstacle avoidance to run at full frame rate (30 Hz) on off-the-shelf hardware. The results of this analysis are fed into a fast, reactive path selection module that enforces the safety of the chosen actions. The key components of the system are discussed and test results are presented.
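
    As a rough illustration of the kind of terrain analysis described above (the abstract does not give the actual algorithm), the sketch below bins stereo-derived 3D points into a robot-centred grid and flags cells whose height spread exceeds a step threshold; the cell size, grid extent, and threshold are assumed values.

```python
import numpy as np

def hazard_grid(points: np.ndarray, cell: float = 0.2, extent: float = 5.0,
                step_thresh: float = 0.15) -> np.ndarray:
    """Bin 3D points (N x 3, metres, robot-centred x/y, z up) into a 2D grid
    and mark a cell hazardous when the height spread of its points exceeds
    `step_thresh`. A crude stand-in for the paper's terrain analysis."""
    n = int(2 * extent / cell)
    zmin = np.full((n, n), np.inf)
    zmax = np.full((n, n), -np.inf)
    ix = ((points[:, 0] + extent) / cell).astype(int)
    iy = ((points[:, 1] + extent) / cell).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    np.minimum.at(zmin, (ix[ok], iy[ok]), points[ok, 2])
    np.maximum.at(zmax, (ix[ok], iy[ok]), points[ok, 2])
    # Empty cells have zmax - zmin = -inf and are therefore not flagged.
    return (zmax - zmin) > step_thresh
```

    In a reactive scheme like the one described, a grid of this kind would be recomputed every frame and candidate steering arcs that cross hazardous cells would be discarded.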

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Get PDF
    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems.

    In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles.

    For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen, as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal.

    The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
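
    The abstract mentions occupancy grid mapping for fusing classified detections along the vehicle path; a minimal log-odds grid of the usual textbook form is sketched below. The grid size, resolution, and log-odds increments are assumed values, not parameters from the thesis.

```python
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid for fusing per-point obstacle
    classifications over time. Illustrative only; parameters are made up."""

    def __init__(self, size_m: float = 40.0, cell_m: float = 0.2,
                 l_occ: float = 0.85, l_free: float = -0.4):
        self.cell = cell_m
        self.n = int(size_m / cell_m)
        self.logodds = np.zeros((self.n, self.n))
        self.l_occ, self.l_free = l_occ, l_free

    def update(self, xy: np.ndarray, is_obstacle: np.ndarray) -> None:
        """Accumulate evidence for N points (N x 2, already in the map frame)
        labelled obstacle (True) or traversable (False) by the classifier."""
        ix = (xy[:, 0] / self.cell + self.n / 2).astype(int)
        iy = (xy[:, 1] / self.cell + self.n / 2).astype(int)
        ok = (ix >= 0) & (ix < self.n) & (iy >= 0) & (iy < self.n)
        delta = np.where(is_obstacle[ok], self.l_occ, self.l_free)
        np.add.at(self.logodds, (ix[ok], iy[ok]), delta)

    def occupied(self, threshold: float = 0.5) -> np.ndarray:
        """Cells whose posterior occupancy probability exceeds `threshold`."""
        prob = 1.0 - 1.0 / (1.0 + np.exp(self.logodds))
        return prob > threshold
```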

    Desert RHex Technical Report: Jornada and White Sands Trip

    Get PDF
    Researchers in a variety of fields, including aeolian science, biology, and environmental science, have already made use of stationary and mobile remote sensing equipment to expand their data collection opportunities. However, due to mobility challenges, remote sensing opportunities relevant to desert environments, and in particular dune fields, have been limited to stationary equipment. We describe here an investigative trip to two well-studied experimental deserts in New Mexico with D-RHex, a mobile remote sensing platform oriented towards desert research. D-RHex is the latest iteration of the RHex family of robots, which are six-legged, biologically inspired, small (10 kg) platforms with good mobility in a variety of rough terrains, including on inclines and over obstacles higher than the robot's hip height. For more information: Kod*La

    LADAR based mapping and obstacle detection system for service robots

    Get PDF
    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Master in Electrical and Computer Engineering. When travelling in unfamiliar environments, a mobile service robot needs to acquire information about its surroundings in order to detect and avoid obstacles and arrive safely at its destination. This dissertation presents a solution to the problem of mapping and obstacle detection in structured indoor/outdoor environments, with particular application to service robots equipped with a LADAR. Since the system was designed for structured environments, off-road terrain is outside the scope of this work. In addition, no a priori knowledge about the LADAR's surroundings is used, i.e. the developed mapping and obstacle detection system works in unknown environments. In this solution it is assumed that the robot, which carries the LADAR and the mapping and obstacle detection system, moves on a planar surface that is taken to be the ground plane. The LADAR is positioned so as to sense the three-dimensional world, and an AHRS sensor is used to increase the robustness of the system to variations in the robot's attitude, which could otherwise cause false positives in obstacle detection. The results of experimental tests conducted in real environments, with the system integrated on a physical robot, suggest that the developed solution is a good option for service robots driving in structured indoor/outdoor environments.
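
    A minimal sketch of the attitude-compensation idea described above, assuming AHRS roll and pitch angles and a fixed sensor mounting height; the rotation convention, axis order, and clearance threshold are assumptions for illustration, not details from the dissertation.

```python
import numpy as np

def level_points(points: np.ndarray, roll: float, pitch: float) -> np.ndarray:
    """Rotate LADAR points (N x 3, sensor frame, x-forward/y-left/z-up assumed)
    by the AHRS roll and pitch so the assumed ground plane stays horizontal
    despite changes in the robot's attitude."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    return points @ (Ry @ Rx).T

def obstacle_points(points: np.ndarray, sensor_height: float,
                    clearance: float = 0.10) -> np.ndarray:
    """Flag levelled points rising more than `clearance` above the assumed
    ground plane, which lies `sensor_height` below the sensor origin."""
    return points[:, 2] > (clearance - sensor_height)
```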

    Progress toward multi‐robot reconnaissance and the MAGIC 2010 competition

    Full text link
    Tasks like search‐and‐rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges, including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human‐robot interfaces. This paper describes our 14‐robot team, designed to perform urban reconnaissance missions, which won the MAGIC 2010 competition. We describe a variety of autonomous systems that require only minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, which is essential for autonomous planning and for giving humans situational awareness, required the development of fast loop‐closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. We describe technical contributions throughout our system that played a significant role in its performance, and we present results from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain. © 2012 Wiley Periodicals, Inc.
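
    The decoupled centralized planner itself is not specified in this abstract; purely to illustrate the "central allocation, myopic execution" split it describes, the toy sketch below greedily assigns exploration tasks to robots by straight-line distance, leaving local planning and execution to each robot.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    x: float
    y: float

def assign_tasks(robot_pos: dict[str, tuple[float, float]],
                 tasks: list[Task]) -> dict[str, Task]:
    """Toy centralized allocator: repeatedly hand the globally closest
    (robot, task) pair to that robot; each robot then executes its task
    myopically. Illustrative only; not the MAGIC 2010 planner."""
    assignment: dict[str, Task] = {}
    free = set(robot_pos)
    remaining = list(tasks)
    while free and remaining:
        _, robot, task = min(
            ((math.dist(robot_pos[r], (t.x, t.y)), r, t)
             for r in free for t in remaining),
            key=lambda c: c[0],
        )
        assignment[robot] = task
        free.remove(robot)
        remaining.remove(task)
    return assignment

# Example: two robots, three candidate frontier tasks.
print(assign_tasks({"r1": (0.0, 0.0), "r2": (10.0, 0.0)},
                   [Task(1.0, 1.0), Task(9.0, 1.0), Task(5.0, 5.0)]))
```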