9 research outputs found

    Estimation, planning, and mapping for autonomous flight using an RGB-D camera in GPS-denied environments

    RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on an unreliable wireless link to a ground station. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. We show how the belief roadmap algorithm (Prentice and Roy, 2009), a belief-space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
    Funding: United States. Office of Naval Research (Grant MURI N00014-07-1-0749); United States. Office of Naval Research (Science of Autonomy Program N00014-09-1-0641); United States. Army Research Office (MAST CTA); United States. Office of Naval Research. Multidisciplinary University Research Initiative (Grant N00014-09-1-1052); National Science Foundation (U.S.) (Contract IIS-0812671); United States. Army Research Office (Robotics Consortium Agreement W911NF-10-2-0016); National Science Foundation (U.S.). Division of Information, Robotics, and Intelligent Systems (Grant 0546467)
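
    The core idea of belief-space roadmap planning described in this abstract, searching a roadmap graph while propagating state uncertainty along each candidate edge rather than scoring edges by length alone, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the graph, the per-edge process/measurement noise parameters (Q, R), and the scalar (1-D) covariance model are all simplifying assumptions made here.

    ```python
    import heapq

    def propagate(cov, edge):
        """One EKF-style step along an edge: motion noise Q grows the
        covariance, a measurement with noise R shrinks it (1-D model)."""
        Q, R = edge                        # hypothetical per-edge noise terms
        pred = cov + Q                     # predict: uncertainty grows
        gain = pred / (pred + R)           # Kalman gain (scalar)
        return (1.0 - gain) * pred         # update: uncertainty shrinks

    def belief_roadmap(graph, start, goal, cov0):
        """Dijkstra-like search that scores a path by the uncertainty
        it accumulates, not by its length."""
        best = {start: cov0}
        pq = [(cov0, start, [start])]
        while pq:
            cov, node, path = heapq.heappop(pq)
            if node == goal:
                return path, cov
            for nxt, edge in graph.get(node, []):
                c = propagate(cov, edge)
                if c < best.get(nxt, float("inf")):
                    best[nxt] = c
                    heapq.heappush(pq, (c, nxt, path + [nxt]))
        return None, float("inf")

    # Two routes from A to G: via B (poor sensing, R = 100) or via C
    # (good sensing, R = 0.1). The planner prefers the well-sensed route.
    graph = {
        "A": [("B", (1.0, 100.0)), ("C", (1.0, 0.1))],
        "B": [("G", (1.0, 100.0))],
        "C": [("G", (1.0, 0.1))],
    }
    path, cov = belief_roadmap(graph, "A", "G", 1.0)
    ```

    The greedy pop-best-first search is only a sketch; unlike shortest-path cost, covariance propagation is not strictly additive, which is part of what the belief roadmap machinery in the cited work handles properly.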

    Carotid stenosis - determining the indication for intervention beyond the degree of stenosis


    On the Design and Use of a Micro Air Vehicle to Track and Avoid Adversaries

    The MAV ’08 competition focused on the problem of using air and ground vehicles to locate and rescue hostages being held in a remote building. To execute this mission, a number of technical challenges were addressed, including designing the micro air vehicle (MAV), using the MAV to geo-locate ground targets, and planning the motion of ground vehicles to reach the hostage location without detection. In this paper, we describe the complete system designed for the MAV ’08 competition, and present our solutions to three technical challenges that were addressed within this system. First, we summarize the design of our micro air vehicle, focusing on the navigation and sensing payload. Second, we describe the vision and state estimation algorithms used to track ground features, including stationary obstacles and moving adversaries, from a sequence of images collected by the MAV. Third, we describe the planning algorithm used to generate motion plans for the ground vehicles to approach the hostage building undetected by adversaries; these adversaries are tracked by the MAV from the air. We examine different variants of a search algorithm and describe their performance under different conditions. Finally, we provide results of our system’s performance during the mission execution.
    Funding: United States. Army Research Office (MAST CTA); Singapore Armed Forces; United States. Air Force Office of Scientific Research (Contract F9550-06-C-0088); Aurora Flight Sciences Corp.; Boeing Company; National Energy Research Scientific Computing Center (U.S.); National Science Foundation (U.S.). Division of Information and Intelligent Systems (Grant 0546467); Massachusetts Institute of Technology. Air Vehicle Research Center (MAVRC)
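
    The third challenge above, planning ground-vehicle motion that stays undetected by adversaries tracked from the air, can be sketched as a graph search that treats cells within an adversary's sensing range as untraversable. This is an illustrative toy, not the paper's planner: the grid, the 4-connected motion model, and the precomputed `danger` set are assumptions made here.

    ```python
    from collections import deque

    def undetected_path(grid_w, grid_h, start, goal, danger):
        """Breadth-first search over a grid, skipping any cell inside
        an adversary's sensing footprint (the 'danger' set)."""
        if start in danger or goal in danger:
            return None
        frontier, came = deque([start]), {start: None}
        while frontier:
            cur = frontier.popleft()
            if cur == goal:
                path = []           # reconstruct by walking parents back
                while cur is not None:
                    path.append(cur)
                    cur = came[cur]
                return path[::-1]
            x, y = cur
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h
                        and nxt not in danger and nxt not in came):
                    came[nxt] = cur
                    frontier.append(nxt)
        return None  # goal unreachable without being detected

    # An adversary watches the column x = 2 except for a gap at y = 4,
    # so the only undetected route detours through (2, 4).
    danger = {(2, y) for y in range(5) if y != 4}
    path = undetected_path(5, 5, (0, 0), (4, 0), danger)
    ```

    BFS gives the shortest undetected route in an unweighted grid; the paper's search-algorithm variants additionally cope with adversaries that move over time.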

    Optimal surveillance coverage for teams of micro aerial vehicles in GPS-Denied environments using onboard vision

    This paper deals with the problem of deploying a team of flying robots to perform surveillance-coverage missions over a terrain of arbitrary morphology. In such missions, a key factor for the successful completion of the mission is the knowledge of the terrain’s morphology. The focus of this paper is on the implementation of a two-step procedure that allows us to optimally align a team of flying vehicles for the aforementioned task. Initially, a single robot constructs a map of the area using a novel monocular-vision-based approach. A state-of-the-art visual-SLAM algorithm tracks the pose of the camera while simultaneously and autonomously building an incremental map of the environment. The map generated is processed and serves as an input to an optimization procedure using the cognitive, adaptive methodology initially introduced in Renzaglia et al. (Proceedings of the IEEE international conference on robotics and intelligent system (IROS), Taipei, Taiwan, pp. 3314–3320, 2010). The output of this procedure is the optimal arrangement of the robot team, which maximizes the monitored area. The efficiency of our approach is demonstrated using real data collected from aerial robots in different outdoor areas.
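
    The second step of the procedure above, arranging the team so that the monitored area is maximized, can be sketched with a simple greedy placement over a terrain grid. This is a hedged illustration, not the cognitive adaptive optimization of Renzaglia et al.: the square sensing footprint, the candidate-position set, and the greedy strategy are assumptions made here.

    ```python
    def covered(cells, position, radius):
        """Terrain cells within Chebyshev distance `radius` of a robot."""
        x, y = position
        return {c for c in cells
                if max(abs(c[0] - x), abs(c[1] - y)) <= radius}

    def greedy_coverage(cells, candidates, n_robots, radius):
        """Place robots one at a time, each at the candidate position that
        adds the most newly covered cells; return placement and fraction."""
        seen, placement = set(), []
        for _ in range(n_robots):
            best = max(candidates,
                       key=lambda p: len(covered(cells, p, radius) - seen))
            placement.append(best)
            seen |= covered(cells, best, radius)
        return placement, len(seen) / len(cells)

    # Two robots with a 3x3 footprint on a 6x6 terrain grid: each can
    # cover at most 9 cells, so the best achievable fraction is 18/36.
    cells = [(x, y) for x in range(6) for y in range(6)]
    placement, frac = greedy_coverage(cells, cells, 2, 1)
    ```

    Greedy placement is a common baseline for such coverage objectives; the paper's method instead adapts the robots' positions online using the learned terrain map.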