
    Real-time Heading Estimation using Perspective Features

    A large number of quad-rotor helicopters are commercially available from various manufacturers. All of these systems rely on a low-cost MEMS-based inertial measurement system for stabilization and navigation, and these low-cost inertial systems are subject to rapid error growth in their attitude and position estimates unless bounded by external measurements. This thesis developed a real-time algorithm that integrates measurements from visual cues with measurements from onboard sensors to estimate the attitude, position, and velocity of a quad-rotor helicopter in a local navigation frame, along with a system model for the ARDrone and a feedback controller for the vehicle's heading. The ARDrone, by Parrot SA, is a low-cost quad-rotor helicopter equipped with a variety of sensors, including a forward-looking high-definition camera. The vehicle can use its onboard sensors to adequately constrain the errors in pitch and roll in all environments; however, the yaw axis is still subject to drift. This work uses a RANSAC-based vanishing-point detection algorithm to provide a reliable heading reference and integrates the vanishing-point heading measurements with the system's onboard heading measurements through an extended Kalman filter. In addition to estimating the drone's heading, the Kalman filter also estimates the position and velocity of the drone as it moves through its environment. The system provided a heading reference with an error of one degree and was shown to be capable of transitioning between vanishing points when the vehicle needed to change direction. The system also demonstrated that it could generate an estimate of position and velocity; however, because the position error was on the order of one meter, the estimate was not accurate enough for autonomous navigation.
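    As a concrete illustration of the fusion step described in this abstract, the following is a minimal sketch (not the thesis code) of a Kalman filter that propagates yaw with the z-axis gyro and corrects it with a vanishing-point heading measurement; the noise values and the wrap_angle helper are illustrative assumptions.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

class HeadingKF:
    def __init__(self, yaw0=0.0, var0=0.1):
        self.yaw = yaw0                       # heading estimate [rad]
        self.P = var0                         # estimate variance
        self.q = 1e-4                         # gyro process noise per second (assumed)
        self.r_vp = np.radians(1.0) ** 2      # ~1 deg vanishing-point noise (assumed)

    def predict(self, gyro_z, dt):
        # Propagate yaw with the gyro; uncertainty grows with process noise.
        self.yaw = wrap_angle(self.yaw + gyro_z * dt)
        self.P += self.q * dt

    def update_vanishing_point(self, yaw_meas):
        # Innovation is the wrapped angular difference; H = 1 for a direct measurement.
        y = wrap_angle(yaw_meas - self.yaw)
        K = self.P / (self.P + self.r_vp)     # Kalman gain
        self.yaw = wrap_angle(self.yaw + K * y)
        self.P *= (1.0 - K)
```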

    Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data

    In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in the accuracy of the pose estimation, up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.
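    The abstract does not specify the network, so the PyTorch sketch below shows one plausible shape for such a visual-inertial pose estimator: a small CNN encodes the camera frame, an LSTM summarizes the IMU window between frames, and a linear head regresses position plus a unit quaternion. All layer sizes and the 7-D output parameterization are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class VIOPoseNet(nn.Module):
    def __init__(self, imu_dim=6, imu_hidden=64):
        super().__init__()
        # Small convolutional encoder for a grayscale camera frame.
        self.visual = nn.Sequential(
            nn.Conv2d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LSTM summarizes the window of IMU samples between frames.
        self.imu = nn.LSTM(imu_dim, imu_hidden, batch_first=True)
        # Fused features regress position (3) and a quaternion (4).
        self.head = nn.Linear(64 + imu_hidden, 7)

    def forward(self, image, imu_seq):
        v = self.visual(image)                # (B, 64)
        _, (h, _) = self.imu(imu_seq)         # h: (1, B, imu_hidden)
        out = self.head(torch.cat([v, h[-1]], dim=1))
        pos, quat = out[:, :3], out[:, 3:]
        quat = quat / quat.norm(dim=1, keepdim=True)  # normalize to a unit quaternion
        return pos, quat

# Example: batch of two 128x128 frames plus 10 IMU samples (gyro + accel) each.
pos, quat = VIOPoseNet()(torch.randn(2, 1, 128, 128), torch.randn(2, 10, 6))
```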

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
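    The vehicle-in-the-loop pattern described here reduces to a short cycle: the real vehicle's pose, measured by motion capture, drives a synthetic render of its exteroceptive sensors. The sketch below illustrates that cycle with hypothetical stand-in classes; it is not the FlightGoggles API.

```python
import time

class MotionCaptureClient:                  # hypothetical stand-in
    def latest_pose(self):
        return (0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0)   # position, quaternion

class PhotorealisticRenderer:               # hypothetical stand-in
    def render_camera(self, position, orientation):
        return b""                          # synthetic image bytes

mocap, renderer = MotionCaptureClient(), PhotorealisticRenderer()
for _ in range(3):
    pos, quat = mocap.latest_pose()               # real dynamics, measured externally
    image = renderer.render_camera(pos, quat)     # exteroceptive sensing in silico
    # ...feed `image` to the vehicle's onboard perception stack...
    time.sleep(1.0 / 60.0)                        # assumed 60 Hz camera rate
```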

    An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor

    This paper presents a novel tightly-coupled keyframe-based Simultaneous Localization and Mapping (SLAM) system with loop-closing and relocalization capabilities targeted at the underwater domain. Our previous work, SVIn, augmented the state-of-the-art visual-inertial state estimation package OKVIS to accommodate acoustic data from sonar in a nonlinear optimization-based framework. This paper addresses drift and loss of localization -- one of the main problems affecting other packages in the underwater domain -- by providing the following main contributions: a robust initialization method that refines scale using depth measurements, a fast preprocessing step that enhances image quality, and a real-time loop-closing and relocalization method using a bag of words (BoW). An additional contribution is the addition of depth measurements from a pressure sensor to the tightly-coupled optimization formulation. Experimental results on datasets collected with a custom-made underwater sensor suite and an autonomous underwater vehicle in challenging underwater environments with poor visibility demonstrate accuracy and robustness beyond what has previously been achieved.
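    As a toy illustration of the bag-of-words loop-closure step mentioned above, the sketch below builds a small visual vocabulary with k-means over ORB descriptors and scores keyframe pairs by cosine similarity of their word histograms. The vocabulary size and similarity threshold are illustrative assumptions; the paper's system embeds BoW in a full SLAM pipeline.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

orb = cv2.ORB_create(nfeatures=500)

def descriptors(gray):
    """Extract ORB descriptors from a grayscale image as float32 rows."""
    _, des = orb.detectAndCompute(gray, None)
    return np.float32(des) if des is not None else np.zeros((0, 32), np.float32)

def train_vocabulary(training_images, k=256):
    """Cluster descriptors from training frames into k visual words."""
    stacked = np.vstack([descriptors(im) for im in training_images])
    return MiniBatchKMeans(n_clusters=k).fit(stacked)

def bow_vector(gray, vocab):
    """Normalized histogram of visual-word occurrences for one frame."""
    words = vocab.predict(descriptors(gray))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-9)

def is_loop_closure(query, keyframe, vocab, threshold=0.8):
    # Cosine similarity between normalized BoW histograms.
    return float(bow_vector(query, vocab) @ bow_vector(keyframe, vocab)) > threshold
```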

    Transfer Learning-Based Crack Detection by Autonomous UAVs

    Unmanned Aerial Vehicles (UAVs) have recently shown great performance in collecting visual data through autonomous exploration and mapping for building inspection. Yet few studies consider the post-processing of that data and its integration with autonomous UAVs, steps that would enable full automation of building inspection. In this regard, this work presents a decision-making tool for revisiting tasks in visual building inspection by autonomous UAVs. The tool is an implementation of fine-tuning a pretrained Convolutional Neural Network (CNN) for surface crack detection, and it offers an optional mechanism for task planning of revisiting pinpointed locations during inspection. It is integrated into a quadrotor UAV system that can autonomously navigate in GPS-denied environments. The UAV is equipped with onboard sensors and computers for autonomous localization, mapping, and motion planning. The integrated system is tested through simulations and real-world experiments. The results show that the system achieves crack detection and autonomous navigation in GPS-denied environments for building inspection.
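    The fine-tuning recipe described above follows a standard transfer-learning pattern; the sketch below shows one common form of it, freezing a pretrained ResNet-18 backbone and training a new binary crack/no-crack head. The backbone choice and hyperparameters are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                     # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # new crack / no-crack head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def training_step(images, labels):
    """One fine-tuning step on a batch of inspection image patches."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()                              # gradients flow only into the new head
    optimizer.step()
    return loss.item()
```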

    Toward a unified PNT, Part 1: Complexity and context: Key challenges of multisensor positioning

    The next generation of navigation and positioning systems must provide greater accuracy and reliability in a range of challenging environments to meet the needs of a variety of mission-critical applications. No single navigation technology is robust enough to meet these requirements on its own. Known environmental features, such as signs, buildings, terrain height variation, and magnetic anomalies, may or may not be available for positioning; the system could be stationary, carried by a pedestrian, or mounted on any type of land, sea, or air vehicle; and, for many applications, the environment and host behavior are subject to change. A multisensor solution is thus required. The expert-knowledge problem is compounded by the fact that different modules in an integrated navigation system are often supplied by different organizations, who may be reluctant to share necessary design information if they consider it intellectual property that must be protected.
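    One elementary building block behind such a multisensor solution is inverse-variance weighting of independent position fixes, sketched below; the sensor mix and noise figures are illustrative assumptions, not taken from the article.

```python
import numpy as np

def fuse(estimates, variances):
    """Combine independent scalar estimates by inverse-variance weighting."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)          # fused estimate is never less certain
    return fused, fused_var

# East-position fixes (m) from, e.g., GNSS, a terrain-height match, and a sign match.
x, var = fuse([12.4, 11.9, 12.8], [4.0, 9.0, 1.0])
print(f"fused east position: {x:.2f} m (variance {var:.2f} m^2)")
```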

    Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. To improve the observability properties, measurements of the relative distance between the UAVs are included in the system; these relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis, and an extensive set of computer simulations is presented to validate it further. The numerical simulation results show that the proposed system provides good position and orientation estimates of the aerial vehicles flying in formation.
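    The relative-distance measurements described above enter a filter as a range between two vehicle positions. The sketch below shows a generic EKF-style update for such a measurement over a planar two-UAV state; the state layout and noise values are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def range_update(x, P, z, r_var=0.05):
    """EKF update with a range measurement between UAV A and UAV B.

    x = [xa, ya, xb, yb] stacks the planar positions of the two vehicles.
    """
    d = x[2:4] - x[0:2]
    rng = np.linalg.norm(d)
    # Jacobian of ||p_b - p_a|| with respect to the stacked state.
    H = np.hstack([-d / rng, d / rng]).reshape(1, 4)
    S = H @ P @ H.T + r_var              # innovation covariance
    K = P @ H.T / S                      # Kalman gain, shape (4, 1)
    x = x + (K * (z - rng)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P

x0 = np.array([0.0, 0.0, 3.0, 4.0])     # true separation: 5 m
P0 = np.eye(4) * 0.5
x1, P1 = range_update(x0, P0, z=5.2)    # vision-derived range measurement
```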