
    UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether

    This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a flying sensor but also as a tether attachment device. The two robots are connected by a tether, allowing the UAV to anchor the tether to a structure located at the top of steep terrain that the UGV cannot reach on its own. This enhances the poor traversability of the UGV not only by providing a wider range of scanning and mapping from the air, but also by allowing the UGV to climb steep terrain by winding the tether. In addition, we present an autonomous framework for collaborative navigation and tether attachment in an unknown environment. The UAV employs visual-inertial navigation with 3D voxel mapping and obstacle-avoidance planning. The UGV makes use of the voxel map and generates an elevation map to execute path planning based on a traversability analysis. Furthermore, we compared the pros and cons of possible tether-anchoring methods from multiple points of view, and evaluated the anchoring strategy experimentally to increase the probability of successful anchoring. Finally, the feasibility and capability of the proposed system were demonstrated by an autonomous mission experiment in the field with an obstacle and a cliff.
    Comment: 7 pages, 8 figures, accepted to the 2019 International Conference on Robotics & Automation. Video: https://youtu.be/UzTT8Ckjz1
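
    As an illustration of the traversability analysis step described in the abstract, the sketch below marks elevation-map cells whose local slope exceeds a threshold as non-traversable. It is a minimal example under assumed values (grid resolution, maximum slope, function and variable names), not the authors' implementation.

```python
import numpy as np

def traversability_mask(elevation, cell_size=0.1, max_slope_deg=30.0):
    """Mark grid cells whose local slope exceeds a threshold as non-traversable.

    elevation     : 2-D array of heights (metres), one value per grid cell
    cell_size     : edge length of a cell in metres (assumed resolution)
    max_slope_deg : steepest slope the UGV is assumed to handle on its own
    """
    # Finite-difference gradients of the height field.
    dz_dy, dz_dx = np.gradient(elevation, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg <= max_slope_deg  # True where the UGV can drive

# Toy elevation map: a flat plane that ends in a steep cliff.
x = np.linspace(0.0, 5.0, 50)
profile = np.where(x < 4.0, 0.0, (x - 4.0) * 5.0)   # 5 m rise per metre past x = 4
grid = np.tile(profile, (50, 1))
mask = traversability_mask(grid, cell_size=0.1)
print(mask.sum(), "of", grid.size, "cells traversable")
```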

    A survey on fractional order control techniques for unmanned aerial and ground vehicles

    In recent years, numerous science and engineering applications of fractional calculus to the modeling and control of unmanned aerial vehicle (UAV) and unmanned ground vehicle (UGV) systems have been realized. The extra fractional-order derivative terms make it possible to further optimize the performance of these systems. The review presented in this paper focuses on the UAV and UGV control problems that have been addressed by fractional-order techniques over the last decade.
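
    As a concrete illustration of the fractional-order derivative terms such controllers rely on, the sketch below numerically approximates an order-alpha derivative with the Grünwald-Letnikov scheme, one common discretization used for PI^λD^μ controllers. The sampling period, memory length, and test signal are illustrative assumptions, not taken from any particular paper in the survey.

```python
import numpy as np

def gl_fractional_derivative(signal, alpha, h):
    """Grünwald-Letnikov estimate of the order-alpha derivative of a sampled signal.

    signal : 1-D array of samples f(0), f(h), f(2h), ...
    alpha  : fractional order (e.g. the D^mu term of a PI^lambda D^mu controller)
    h      : sampling period in seconds
    """
    n = len(signal)
    # Generalized binomial weights (-1)^k * C(alpha, k), computed recursively.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    # Weighted sum over past samples at every time step.
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(w[: i + 1], signal[i::-1]) / h**alpha
    return out

# Sanity check: the 0.5-order derivative of f(t) = t is 2*sqrt(t/pi).
t = np.arange(0.0, 2.0, 0.01)
approx = gl_fractional_derivative(t, 0.5, 0.01)
print(approx[-1], 2.0 * np.sqrt(t[-1] / np.pi))
```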

    Transfer Learning-Based Crack Detection by Autonomous UAVs

    Unmanned Aerial Vehicles (UAVs) have recently shown great performance in collecting visual data through autonomous exploration and mapping for building inspection. Yet, few studies consider the post-processing of these data and its integration with autonomous UAVs, both of which would enable major steps toward fully automated building inspection. In this regard, this work presents a decision-making tool for revisiting tasks in visual building inspection by autonomous UAVs. The tool is an implementation of fine-tuning a pretrained Convolutional Neural Network (CNN) for surface crack detection, and it offers an optional mechanism for planning revisits to pinpoint locations during inspection. It is integrated into a quadrotor UAV system that can navigate autonomously in GPS-denied environments. The UAV is equipped with onboard sensors and computers for autonomous localization, mapping, and motion planning. The integrated system is tested through simulations and real-world experiments. The results show that the system achieves crack detection and autonomous navigation in GPS-denied environments for building inspection.
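
    A minimal sketch of the fine-tuning idea the abstract describes is shown below. The paper's exact backbone, dataset, and hyper-parameters are not given here, so the sketch assumes an ImageNet-pretrained ResNet-18 and a binary crack / no-crack label, implemented with PyTorch.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed backbone: ResNet-18 with ImageNet weights (the paper's exact CNN is not specified here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head is trained at first.
for p in model.parameters():
    p.requires_grad = False

# Replace the classifier with a 2-way head: crack / no-crack.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One gradient step on a batch of inspection image crops (N, 3, 224, 224) with 0/1 labels."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative call with random tensors standing in for real inspection crops.
print(finetune_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])))
```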

    In-Time UAV Flight-Trajectory Estimation and Tracking Using Bayesian Filters

    The rapid increase of UAV operations expected in the next decade in areas such as on-demand delivery, medical transportation services, law enforcement, and traffic surveillance poses potential risks to the low-altitude airspace above densely populated areas. Safety assessment of this airspace demands a novel UAV traffic management (UTM) framework for regulating and tracking the vehicles. Particularly for low-altitude UAV operations, the quality of GPS measurements available to the UAV is often compromised by loss of the communication link caused by trees or tall buildings near the flight path. Inaccurate GPS locations may lead to unreliable monitoring and inaccurate prognosis of remaining battery life and other safety metrics that rely on the expected future trajectory of the UAV. This work therefore proposes a generalized monitoring and prediction methodology for autonomous UAVs using in-time GPS measurements. First, a technique for generating a smooth 4D trajectory from a series of waypoint locations with associated expected times of arrival, based on B-spline curves, is presented. Initial uncertainty in the vehicle's expected cruise velocity is quantified to compute confidence intervals along the entire flight trajectory using an error-interval propagation approach. The generated planned trajectory is then treated as prior knowledge and updated during flight with incoming GPS measurements in order to estimate the current location and the corresponding kinematic profiles. Position estimation is expressed in a discrete state-space representation such that the position at a future time step is derived from the position and velocity at the current time step and the expected velocity at the future time step. A linear Bayesian filtering algorithm is employed to efficiently refine the position estimate from noisy GPS measurements and update the confidence intervals. Further, a dynamic re-planning strategy is implemented to handle unexpected detour or delay scenarios. Finally, critical challenges related to uncertainty quantification in trajectory prognosis for autonomous vehicles are identified, and potential solutions are discussed at the end of the paper. The entire monitoring framework is demonstrated on real UAV flight experiments conducted at the NASA Langley Research Center.
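
    A minimal sketch of the linear Bayesian (Kalman) filtering step described above is given below, for a single horizontal axis with a constant-velocity state-space model and position-only GPS measurements. The noise levels, update rate, and simulated GPS fixes are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

# Minimal linear Kalman filter for one horizontal axis: state x = [position, velocity].
dt = 1.0                                   # assumed GPS update period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0]])                 # GPS observes position only
Q = np.diag([0.1, 0.5])                    # assumed process noise covariance
R = np.array([[9.0]])                      # assumed GPS variance (3 m standard deviation)

def kalman_step(x, P, z):
    """Predict with the motion model, then correct with one noisy GPS fix z (metres)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.atleast_1d(z) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 5.0]), np.eye(2)     # start at 0 m, expected cruise speed 5 m/s
rng = np.random.default_rng(0)
for k in range(1, 11):
    gps_fix = 5.0 * k + rng.normal(0.0, 3.0)   # simulated noisy GPS position
    x, P = kalman_step(x, P, gps_fix)
print("estimated position/velocity:", x, "+/-", np.sqrt(np.diag(P)))
```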

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs through a number of contributions: a novel coupling of perception and control via relevant and diverse multiple interpretations of the scene around the robot, leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection, and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of the pipeline in real-world experiments covering more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, the pipeline is designed to also combine information from other modalities, such as stereo and lidar, when available.
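
    The sketch below illustrates the flavor of receding-horizon selection from a monocular depth prediction: a small set of candidate headings is scored against a predicted depth image and the first one with sufficient clearance is chosen. It omits the paper's budgeted feature selection and depth regressor; the angle-to-column mapping, clearance threshold, and toy depth map are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def pick_candidate(depth_map, candidates, hfov_deg=90.0, min_clearance=2.0):
    """Score candidate headings against a monocular depth prediction and keep the
    first one (in preference order) with enough free distance ahead.

    depth_map     : (H, W) predicted metric depth in metres (any monocular regressor)
    candidates    : headings in degrees relative to the optical axis, ordered by preference
    min_clearance : assumed minimum free distance the quadrotor needs along a heading
    """
    h, w = depth_map.shape
    band = w // 10                                   # image columns checked per heading
    for heading in candidates:
        # Map the heading to an image column with a simple linear angle-to-pixel assumption.
        col = int((heading / hfov_deg + 0.5) * (w - 1))
        lo, hi = max(0, col - band // 2), min(w, col + band // 2 + 1)
        clearance = depth_map[h // 3 : 2 * h // 3, lo:hi].min()
        if clearance >= min_clearance:
            return heading, clearance
    return None, 0.0                                 # no safe heading: hover / re-plan

# Toy depth image: open space on the left, an obstacle 1.5 m away on the right.
depth = np.full((60, 80), 10.0)
depth[:, 50:] = 1.5
print(pick_candidate(depth, candidates=[0.0, -20.0, 20.0]))
```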