
    Mobile-manipulating UAVs for Sensor Installation, Bridge Inspection and Maintenance

    Mobile-manipulating UAVs have great potential for bridge inspection and maintenance. Since 2002, the PI has developed UAVs that can fly through and around buildings and tunnels. Collision avoidance in such cluttered near-Earth environments has been a key challenge. The advent of light-weight, computationally powerful cameras led to breakthroughs in SLAM, even though SLAM-based autonomous aerial navigation around bridges remains an unsolved problem. In 2007, the PI integrated a mobile manipulation function into UAVs, greatly extending their capabilities from passive survey of environments with cameras to active interaction with environments using limbs. Mobile-manipulating UAVs have since been demonstrated to successfully turn valves, install sensors, open doors, and drag ropes. Their research and development face several challenges. First, limbs add weight to the aircraft. Second, rotorcraft such as quadcopters are under-actuated systems whose stability can be easily affected by limb motions. Third, when performing a task like turning a valve, limbs demand compensation for torque-force interactions. Thus, even if battery technologies afford the additional payload of limbs, current knowledge of manipulation with under-actuated systems remains sparse. This project aims to develop and prototype a mobile-manipulating UAV for bridge maintenance and disaster cleanup through further study of SLAM technology for robust navigation, impedance controllers to ensure the UAV's stability during limb motion, and coordinated and cooperative motions of multiple limbs to perform simple tasks like bearing cleaning and crack sealing in concrete bridges. Two strategies will be explored for bridge maintenance: (a) a UAV brings and uses a can of compressed air for bridge cleaning, and (b) two UAVs airlift, position, and operate hoses from the ground, and clean bridges with air or water. The latter could be implemented by including a station-keeping, lighter-than-air UAV, such as a blimp, that can airlift a hose and remain airborne for extended periods. The mobile-limbed UAVs can then pull and drag the hose into areas that need to be cleaned. The blimp-based approach is attractive because it is easier for a UAV to drop hose lengths than to pull the hose up into the air.
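
The impedance-control idea mentioned in this project summary, keeping an under-actuated airframe stable while a limb exchanges forces with the environment, can be illustrated with a minimal sketch. Everything below is a hypothetical illustration rather than the project's controller: the `impedance_accel` helper and its gains are invented placeholders for the standard one-axis virtual mass-spring-damper law.

```python
def impedance_accel(x, x_d, v, v_d, f_ext, M=1.5, B=8.0, K=20.0):
    """One-axis impedance law: M*a + B*(v - v_d) + K*(x - x_d) = f_ext.
    Instead of rigidly rejecting the contact force f_ext (N) that a limb
    transmits to the airframe, the commanded acceleration lets the body
    deviate compliantly from its setpoint (x_d, v_d)."""
    return (f_ext - B * (v - v_d) - K * (x - x_d)) / M

# A limb pushing with 2 N has displaced the body 0.1 m from hover;
# the virtual spring force (K * 0.1 m = 2 N) now balances the contact
# force, so the commanded acceleration is approximately zero.
a = impedance_accel(x=0.1, x_d=0.0, v=0.0, v_d=0.0, f_ext=2.0)
```

The point of such compliance is that contact forces from valve turning or hose dragging shift the hover setpoint smoothly instead of fighting a stiff position controller, which is what tends to destabilise an under-actuated rotorcraft.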

    Autonomous aerial robot for high-speed search and intercept applications

    In recent years, high-speed navigation and environment interaction in the context of aerial robotics have become fields of interest for several academic and industrial research studies. In particular, Search and Intercept (SaI) applications for aerial robots pose a compelling research area due to their potential usability in several environments. Nevertheless, SaI tasks involve challenging development in terms of sensor weight, onboard computational resources, actuation design, and algorithms for perception and control, among others. In this work, a fully autonomous aerial robot for high-speed object grasping is proposed. As an additional subtask, our system is able to autonomously pierce balloons located on poles close to the surface. Our first contribution is the design of the aerial robot at the actuation and sensory level, consisting of a novel gripper design with additional sensors that enable the robot to grasp objects at high speeds. The second contribution is a complete software framework consisting of perception, state estimation, motion planning, motion control, and mission control in order to rapidly and robustly perform the autonomous grasping mission. Our approach has been validated in a challenging international competition and has shown outstanding results, being able to autonomously search for, follow, and grasp a moving object at 6 m/s in an outdoor environment.

    Georgia Tech Team Entry for the 2012 AUVSI International Aerial Robotics Competition

    Presented at the Third International Aerial Robotics Symposium (IASR), 2012. This paper describes the details of a Quadrotor Unmanned Aerial Vehicle capable of exploring cluttered indoor areas without relying on any external navigational aids. A Simultaneous Localization and Mapping (SLAM) algorithm is used to fuse information from a laser range sensor, an inertial measurement unit, and an altitude sonar to provide relative position, velocity, and attitude information. A wall avoidance and guidance system is implemented to ensure that the vehicle explores the maximum indoor area. A model reference adaptive control architecture is used to ensure stability and mitigation of uncertainties. Finally, an object detection system is implemented to identify target objects for retrieval.

    A Survey of Computer Vision Methods for 2D Object Detection from Unmanned Aerial Vehicles

    The spread of Unmanned Aerial Vehicles (UAVs) in the last decade has revolutionized many application fields. The most investigated research topics focus on increasing autonomy during operational campaigns, environmental monitoring, surveillance, mapping, and labeling. To achieve such complex goals, a high-level module is exploited to build semantic knowledge, leveraging the outputs of a low-level module that takes data acquired from multiple sensors and extracts information about what is sensed. All in all, object detection is undoubtedly the most important low-level task, and the sensors most employed to accomplish it are by far RGB cameras, due to their cost, dimensions, and the wide literature on RGB-based object detection. This survey presents recent advancements in 2D object detection for the case of UAVs, focusing on the differences, strategies, and trade-offs between the generic problem of object detection and the adaptation of such solutions for UAV operations. Moreover, a new taxonomy is proposed that considers different height intervals and is driven by the methodological approaches introduced by works in the state of the art, rather than by hardware, physical, and/or technological constraints.

    Georgia Tech Team Entry for the 2013 AUVSI International Aerial Robotics Competition

    Presented at the Fifth International Aerial Robotics Competition (IARC) Symposium on Indoor Flight Issues, Grand Forks, ND, August, 201

    How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV

    This work explores the feasibility of steering a drone with a (recurrent) neural network, based on input from a forward-looking camera, in the context of a high-level navigation task. We set up a generic framework for training a network to perform navigation tasks based on imitation learning. It can be applied to both aerial and land vehicles. As a proof of concept we apply it to a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a room containing a number of obstacles. So far only feedforward neural networks (FNNs) have been used to train UAV control. To cope with more complex tasks, we propose the use of recurrent neural networks (RNNs) instead, and successfully train an LSTM (Long Short-Term Memory) network for controlling UAVs. Vision-based control is a sequential prediction problem, known for its highly correlated input data. This correlation makes training a network hard, especially an RNN. To overcome this issue, we investigate an alternative sampling method during training, namely window-wise truncated backpropagation through time (WW-TBPTT). Further, end-to-end training requires a lot of data, which is often not available. Therefore, we compare the performance of retraining only the Fully Connected (FC) and LSTM control layers with networks that are trained end-to-end. Performing the relatively simple task of crossing a room already reveals important guidelines and good practices for training neural control networks. Different visualizations help to explain the behavior learned.
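
The windowing trick at the heart of WW-TBPTT can be sketched briefly. The helper below is a hypothetical illustration, not the paper's code: it only shows how one long, highly correlated flight sequence is cut into fixed-size windows whose order can then be shuffled, so that gradients are backpropagated only within a window while the recurrent state crossing a boundary would be passed along detached.

```python
import numpy as np

def sample_tbptt_windows(seq_len, window, stride, rng=None):
    """Enumerate (start, end) training windows over one recorded sequence.
    Gradients are truncated at window boundaries; shuffling the windows
    breaks up the strong temporal correlation between successive batches."""
    starts = list(range(0, seq_len - window + 1, stride))
    if rng is not None:
        rng.shuffle(starts)
    return [(s, s + window) for s in starts]

# A 100-step flight log cut into 20-step windows with 50 % overlap
windows = sample_tbptt_windows(seq_len=100, window=20, stride=10,
                               rng=np.random.default_rng(0))
```

In an actual training loop each window would be unrolled through the LSTM, with the hidden state carried across windows of the same sequence but excluded from the gradient computation.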

    Low computational SLAM for an autonomous indoor aerial inspection vehicle

    The past decade has seen an increase in the capability of small-scale Unmanned Aerial Vehicle (UAV) systems, made possible through technological advancements in battery, computing and sensor miniaturisation technology. This has opened a new and rapidly growing branch of robotics research and has sparked the imagination of industry, leading to new UAV-based services, from the inspection of power lines to remote police surveillance. Miniaturisation has also made UAVs small enough to be practically flown indoors, for example for the inspection of elevated areas in hazardous or damaged structures where the use of conventional ground-based robots is unsuitable. Sellafield Ltd, a nuclear reprocessing facility in the U.K., has many buildings that require frequent safety inspections. UAV inspections eliminate the current risk to personnel of radiation exposure and other hazards in tall structures where scaffolding or hoists are required. This project focused on the development of a UAV for the novel application of semi-autonomously navigating and inspecting these structures without the need for personnel to enter the building. Development exposed a significant gap in knowledge concerning indoor localisation, specifically Simultaneous Localisation and Mapping (SLAM) for use on-board UAVs. To lower the on-board processing requirements of SLAM, other UAV research groups have employed techniques such as off-board processing, reduced dimensionality or prior knowledge of the structure, techniques not suitable for this application given the unknown nature of the structures and the risk of radio shadows. In this thesis a novel localisation algorithm is proposed that enables real-time, three-dimensional SLAM running solely on-board a computationally constrained UAV in heavily cluttered and unknown environments. The algorithm, based on the Iterative Closest Point (ICP) method and utilising approximate nearest-neighbour searches and point-cloud decimation to reduce processing requirements, has been successfully tested in environments similar to those specified by Sellafield Ltd.
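
The core of such an ICP pipeline can be sketched compactly. The code below is an illustrative sketch, not the thesis implementation: it uses simple stride decimation and a brute-force nearest-neighbour search where the thesis relies on approximate searches, but the structure, decimate, match, then solve the rigid transform with an SVD (the Kabsch method), is the same.

```python
import numpy as np

def decimate(cloud, keep_every=4):
    """Point-cloud decimation: keep every k-th point so the correspondence
    search stays tractable on a computationally constrained UAV."""
    return cloud[::keep_every]

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest destination
    point (brute force here), then fit the rigid transform R, t via SVD."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    matched = dst[d2.argmin(axis=1)]
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a small rigid motion between two 2D scans of the same corner
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.05, -0.02])
src = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
dst = src @ R_true.T + t_true
R_est, t_est = icp_step(src, dst)
```

In a full loop the estimated transform is applied to the source cloud and the matching repeated until convergence; decimation and approximate matching trade a little registration accuracy for the constant-factor speedup that makes on-board, real-time operation possible.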

    Operational Validation of Search and Rescue Robots

    This chapter describes how the different ICARUS unmanned search and rescue tools have been evaluated and validated using operational benchmarking techniques. Two large-scale simulated disaster scenarios were organized: a simulated shipwreck and an earthquake response scenario. Next to these simulated response scenarios, where ICARUS tools were deployed in tight interaction with real end users, the ICARUS tools also participated in a real relief operation, embedded in a team of end users for a flood response mission. These validation trials allow us to conclude that the ICARUS tools fulfil the user requirements and goals set out at the beginning of the project.
