    3-D Site Mapping with the CMU Autonomous Helicopter

    This paper describes a scanning laser rangefinder developed for integration with the Carnegie Mellon University autonomous helicopter. The combination of an unmanned, autonomous helicopter with a 3-D scanning laser rangefinder has many potential applications, such as terrain modeling or structure inspection. To achieve high accuracy (10 cm) in each 3-D measurement, careful attention must be paid to minimizing errors, in particular errors in measuring the direction of the laser’s beam. This paper discusses our approach to minimizing these errors. Results are presented from early test flights.
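
    Each 3-D measurement here is a direct-georeferencing computation: the point is the sensor origin plus the attitude-rotated beam direction scaled by the measured range, so a beam-direction error grows linearly with range. A minimal sketch of that computation and its sensitivity, assuming a simple pan/tilt beam parameterization and a level attitude as placeholders rather than the CMU scanner's actual geometry:

```python
import numpy as np

def beam_direction(pan, tilt):
    """Unit beam vector in the sensor frame for given scan angles
    (radians). The pan/tilt parameterization is an illustrative
    assumption, not the CMU scanner's actual mirror geometry."""
    return np.array([
        np.cos(tilt) * np.cos(pan),
        np.cos(tilt) * np.sin(pan),
        -np.sin(tilt),
    ])

def georeference(range_m, pan, tilt, R_world_sensor, sensor_origin):
    """Map one range return to a 3-D world point: p = o + R @ (d * r)."""
    return sensor_origin + R_world_sensor @ (beam_direction(pan, tilt) * range_m)

# Sensitivity of the point to beam-direction error: at 40 m range,
# a 1 mrad pan error displaces the point by about 4 cm, a large
# fraction of a 10 cm accuracy budget.
R, o = np.eye(3), np.zeros(3)            # level attitude, for illustration
p_true = georeference(40.0, 0.0, 0.2, R, o)
p_off = georeference(40.0, 1e-3, 0.2, R, o)
print(np.linalg.norm(p_off - p_true))    # ~0.04 m
```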

    Vision-Based Autonomous Helicopter Research at Carnegie Mellon Robotics Institute 1991-1997

    This paper presents an overview of the Autonomous Helicopter Project at the Carnegie Mellon Robotics Institute. The advantages of an autonomous vision-guided helicopter for a number of goal applications are enumerated through possible mission scenarios, and the requirements of these applications are addressed by a central goal mission for the project. Current capabilities are presented, including vision-based stability and control; autonomous takeoff, trajectory following, and landing; aerial mapping; and object recognition and manipulation. Finally, the project’s future directions are discussed.

    Arctic Test Flights of the CMU Autonomous Helicopter

    This paper presents our experiences during test flights of the CMU autonomous helicopter in the Canadian Arctic, the first deployment of this technology for a real-world application. The mission required building dense topographic maps of Devon Island’s Haughton crater for NASA scientists studying Mars-analog environments. The paper presents our system design and preparation, flight test results, and example maps produced by the onboard laser-based mapping system during the mission.
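
    One plausible reading of how an onboard system turns georeferenced laser returns into dense terrain maps is to bin them into a regular elevation grid; the cell size and mean-per-cell statistic below are placeholders, not the mission system's actual parameters:

```python
import numpy as np

def points_to_dem(points, cell=0.5):
    """Bin georeferenced (x, y, z) laser points into a regular grid,
    keeping the mean elevation per cell. Cells with no returns come
    out as NaN."""
    xy_min = points[:, :2].min(axis=0)
    idx = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    zsum = np.zeros(shape)
    count = np.zeros(shape)
    np.add.at(zsum, (idx[:, 0], idx[:, 1]), points[:, 2])
    np.add.at(count, (idx[:, 0], idx[:, 1]), 1)
    with np.errstate(invalid="ignore"):
        return zsum / count          # NaN where no returns landed

# Example: 10,000 synthetic returns over a 20 m x 20 m patch.
pts = np.random.rand(10000, 3) * [20, 20, 2]
print(points_to_dem(pts).shape)      # ~(40, 40) grid of mean elevations
```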

    Research on an Autonomous Vision-Guided Helicopter

    We present an overview of the autonomous helicopter project at Carnegie Mellon’s Robotics Institute. The goal of this project is to fly helicopters autonomously using computer vision closely integrated with other on-board sensors. We discuss a concrete example mission designed to demonstrate the viability of vision-based helicopter flight and specify the components necessary to accomplish this mission. Major components include customized vision processing hardware designed for high-bandwidth, low-latency processing, and a 6-degree-of-freedom test stand designed for realistic and safe indoor experiments using model helicopters. We describe our progress in accomplishing an indoor mission and show experimental results of estimating helicopter state with computer vision during actual flight experiments.
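
    As one concrete illustration of vision-based state estimation of this kind, pixel motion of a tracked ground feature seen by a downward-looking camera can be converted into metric drift with a pinhole model at known altitude. The intrinsics and flat-ground assumption below are placeholders, not the project's camera calibration or actual method:

```python
import numpy as np

def position_from_feature(pixel, altitude, f_px, cx, cy):
    """Camera-relative ground position of a tracked feature from its
    pixel location, assuming a downward-looking camera over flat
    ground (pinhole model). Intrinsics here are placeholders."""
    u, v = pixel
    return np.array([u - cx, v - cy]) * altitude / f_px

def velocity_from_track(track_px, altitude, f_px, cx, cy, dt):
    """Differentiate successive feature positions to estimate lateral
    drift velocity, the quantity a hover controller needs to null."""
    p = np.array([position_from_feature(px, altitude, f_px, cx, cy)
                  for px in track_px])
    return np.diff(p, axis=0) / dt

# A feature drifting 2 px per frame at 30 Hz, seen from 10 m altitude:
track = [(320 + 2 * k, 240) for k in range(5)]
v = velocity_from_track(track, altitude=10.0, f_px=800.0,
                        cx=320.0, cy=240.0, dt=1 / 30)
print(v[0])   # ~[0.75, 0.0] m/s lateral drift
```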

    Toward Laser Pulse Waveform Analysis for Scene Interpretation

    Laser-based sensing for scene interpretation and obstacle detection is challenged by partially viewed targets, wiry structures, and porous objects. We propose to address such problems by examining the laser pulse waveform. We designed a new laser sensor from off-the-shelf components, and in this paper we report on the design and evaluation of this low-cost, compact sensor, suitable for mobile robot applications. We determine classical parameters such as operating range, repeatability, accuracy, and resolution, but we also analyze laser pulse waveform modes and mode shapes in order to extract additional information about the scene.
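
    The waveform analysis can be sketched as peak detection over the digitized return: each peak is one mode, its sample position gives a range, and its width characterizes mode shape, so a split return (e.g. a wire in front of a wall) shows up as two modes. The thresholds and synthetic waveform below are illustrative, not the sensor's calibrated parameters:

```python
import numpy as np
from scipy.signal import find_peaks

def waveform_modes(samples, dt_ns, noise_floor):
    """Extract return modes (peaks) from a digitized pulse waveform:
    range from time of flight, mode shape from peak width."""
    peaks, props = find_peaks(samples, height=noise_floor, width=1)
    c = 0.29979                               # speed of light, m/ns
    ranges = peaks * dt_ns * c / 2.0          # two-way travel time
    widths = props["widths"] * dt_ns          # mode shape: pulse spread
    return ranges, props["peak_heights"], widths

# Synthetic waveform, 1 ns sampling: weak return from a wire at ~15 m
# in front of a strong return from a wall at ~30 m.
t = np.arange(400)
wf = (0.3 * np.exp(-0.5 * ((t - 100) / 3) ** 2)
      + 1.0 * np.exp(-0.5 * ((t - 200) / 3) ** 2))
print(waveform_modes(wf, dt_ns=1.0, noise_floor=0.1))
```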

    Integrated Air/Ground Vehicle System for Semi-Autonomous Off-Road Navigation

    Current unmanned vehicle systems enable exploration of and travel through remote areas, but demand significant communications resources and constant human operation. DARPA and the US Army have recognized these limitations and are now pursuing semi-autonomous vehicle systems in the Future Combat Systems (FCS) program. FCS places high demands on robotic systems, which must assess mobility hazards under all weather conditions, day and night, in the presence of smoke and other airborne obscurants, and with the possibility of communications dropouts. Perhaps the most challenging mobility hazard is the "negative obstacle", such as a hole or ditch. These hazards are difficult to see from the ground, limiting maximum vehicle speed; from the air, however, they are often far easier to detect. In this context, we present a novel semi-autonomous unmanned ground vehicle (UGV) that includes a dedicated unmanned air vehicle - a "Flying Eye" (FE) - that flies ahead of the UGV to detect holes and other hazards before the onboard UGV sensors would otherwise be able to detect them. This concept is shown in Figure 1. The FE can be considered a longer-range "scout" that explores terrain before the UGV must traverse it, allowing the UGV to avoid certain areas entirely because of hazards or cul-de-sacs detected by the FE. The UGV itself carries its own extensive sensor suite to address the broad range of operating conditions at slower speeds and to confirm the hazards seen from the air. In the fully developed system, the UGV will deploy the FE from a landing bay on the back of the UGV, and for covert operations the FE will return to this landing bay. This paper presents the current prototype system as well as initial field experiments on the performance of the system.
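
    In its simplest formulation, spotting a negative obstacle from the Flying Eye's vantage reduces to flagging cells of the aerial elevation map that drop well below the local ground plane. A deliberately simplified sketch, assuming flat ground and a fixed depth threshold (a fielded system would fit the ground plane locally and reason about occlusion from the UGV's viewpoint):

```python
import numpy as np

def negative_obstacles(dem, ground_z, depth_thresh=0.3):
    """Flag elevation-map cells more than depth_thresh below the
    ground plane -- candidate holes or ditches. The flat plane and
    fixed threshold are simplifying assumptions for illustration."""
    return (ground_z - dem) > depth_thresh

# 10 m x 10 m patch at 0.5 m cells with a 1 m-deep ditch across it.
dem = np.zeros((20, 20))
dem[8:10, :] -= 1.0
mask = negative_obstacles(dem, ground_z=0.0)
print(mask.sum(), "hazard cells")   # 40 cells flagged
```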

    Real-Time, Multi-Perspective Perception for Unmanned Ground Vehicles

    The most challenging technical problems facing successful autonomous UGV operation in off-road environments are reliable sensing and perception. In this paper, we describe our progress over the last year toward solving these problems in Phase II of DARPA’s PerceptOR program. We have developed a perception system that combines laser, camera, and proprioceptive sensing elements on both ground and air platforms to detect and avoid obstacles in natural terrain environments. The perception system has been rigorously tested in a variety of environments and has improved over time as problems have been identified and systematically solved. The paper describes the perception system and the autonomous vehicles, presents results from some experiments, and summarizes the current capabilities and limitations.
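
    One way to realize multi-perspective perception on a common grid is confidence-weighted fusion of per-cell traversal costs from the two platforms, falling back to whichever source has coverage. The weights and the NaN-for-no-data convention below are illustrative assumptions, not the PerceptOR system's documented fusion rule:

```python
import numpy as np

def fuse_costs(air_cost, ground_cost, w_air=0.4, w_ground=0.6):
    """Confidence-weighted fusion of air and ground traversal-cost
    maps registered to a common grid; NaN marks cells a platform
    never observed. Weights here are placeholders, not tuned values."""
    maps = np.stack([air_cost, ground_cost])
    weights = np.array([w_air, w_ground])[:, None, None] * ~np.isnan(maps)
    with np.errstate(invalid="ignore"):
        # nansum skips unobserved cells; NaN remains where neither saw.
        return np.nansum(maps * weights, axis=0) / weights.sum(axis=0)

# Ground sensors miss a hazard the Flying Eye saw (and vice versa):
air = np.array([[0.1, 0.9], [np.nan, 0.1]])
ground = np.array([[0.2, np.nan], [0.3, 0.2]])
print(fuse_costs(air, ground))
```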