1,382 research outputs found

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    Full text link
    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main testbed for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes a description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
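    The vehicle-in-the-loop pattern the abstract describes amounts to a simple loop: the real vehicle's pose comes from motion capture, a photorealistic image is rendered for that pose, and perception and control run on the rendered image before commands return to the physical vehicle. Below is a minimal sketch of that loop in Python; the interfaces (get_mocap_pose, render_camera_frame, send_motor_commands) are hypothetical placeholders for illustration only, not the FlightGoggles API.

```python
import time

def vehicle_in_the_loop_step(mocap, renderer, perception, controller, vehicle):
    """One iteration: real dynamics, synthetic exteroception."""
    # Real vehicle pose measured by the motion-capture system ("in motio").
    pose = mocap.get_mocap_pose()                # hypothetical interface
    # Photorealistic camera frame rendered for that pose ("in silico").
    image = renderer.render_camera_frame(pose)   # hypothetical interface
    # Perception and control run exactly as they would on real imagery.
    state_estimate = perception.update(image, pose)
    motor_commands = controller.update(state_estimate)
    # Commands go to the physical vehicle, so aerodynamics, motor mechanics,
    # and battery effects are produced by the real hardware, not a model.
    vehicle.send_motor_commands(motor_commands)  # hypothetical interface

def run(mocap, renderer, perception, controller, vehicle, rate_hz=60.0):
    """Run the loop at a fixed rate (rate chosen here only for illustration)."""
    period = 1.0 / rate_hz
    while True:
        start = time.time()
        vehicle_in_the_loop_step(mocap, renderer, perception, controller, vehicle)
        time.sleep(max(0.0, period - (time.time() - start)))
```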

    Real Time and High Fidelity Quadcopter Tracking System

    Get PDF
    This project was conceived out of the desire for an affordable, flexible, and physically compact tracking system for high-accuracy spatial and orientation tracking. Specifically, this implementation is focused on providing a low-cost motion capture system for future research. It is a tool to enable the creation of systems that require the accurate placement of landing pads and payload acquisition and delivery. This system will provide the quadcopter platform with a coordinate system that can be used in addition to GPS. Field research was conducted in which quadcopter manufacturers, photographers, and agricultural and research organizations were contacted and interviewed for information on which components of a quadcopter system were lacking and which barriers currently limit desired drone operation. After distilling this information and exploring various projects in the fields of quadcopter and autonomous control, the idea emerged to develop a system that could track the motion of quadcopters to jump-start other projects. Specifically, live feedback was explored for use in hardware-in-the-loop testing, where commands are relayed to the quadcopter and its response can be accurately measured. This can be extremely beneficial when testing new equipment, such as new propeller designs, motor designs, and frame response. A further stretch objective for this project is to unify input commands to the quadcopter with its physical position in order to train control systems to fly platforms running manually piloted firmware such as BetaFlight, RaceFlight, and KISS (typically associated with drone racing), as well as hobby-grade semi-autonomous flight controllers such as the ArduPilot Mega (APM) and PixHawk.

    Urban Air Mobility System Testbed Using CAVE Virtual Reality Environment

    Get PDF
    Urban Air Mobility (UAM) refers to a system of air passenger and small-cargo transportation within an urban area. The UAM framework also includes other urban Unmanned Aerial Systems (UAS) services that will be supported by a mix of onboard, ground, piloted, and autonomous operations. Over the past few years, UAM research has gained wide interest from companies and federal agencies as an on-demand, innovative transportation option that can help reduce traffic congestion and pollution as well as increase mobility in metropolitan areas. The concept of UAM/UAS operations in the National Airspace System (NAS) remains an active area of research to ensure safe and efficient operations. With new developments in smart vehicle design and infrastructure for air traffic management, there is a need for methods to integrate and test various components of the UAM framework. In this work, we report on the development of a virtual reality (VR) testbed using Cave Automatic Virtual Environment (CAVE) technology for human-automation teaming and airspace operation research for UAM. Using a four-wall projection system with motion capture, the CAVE provides an immersive virtual environment with real-time full-body tracking capability. We created a virtual environment consisting of the city of San Francisco and a vertical take-off and landing passenger aircraft that can fly between a downtown location and San Francisco International Airport. The aircraft can be operated autonomously or manually by a single pilot who maneuvers the aircraft using a flight control joystick. The interior of the aircraft includes a virtual cockpit display with vehicle heading, location, and speed information. The system can record simulation events and flight data for post-processing. The system parameters are customizable for different flight scenarios; hence, the CAVE VR testbed provides a flexible method for the development and evaluation of the UAM framework.

    A follow-me algorithm for AR.Drone using MobileNet-SSD and PID control

    Get PDF
    Final degree project (Treball Final de Grau) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, 2018. Advisor: Lluís Garrido Ostermann. In recent years, the quadcopter industry has experienced a boost. The appearance of inexpensive drones has led to the growth of the recreational use of these vehicles, which opens the door to the creation of new applications and technologies. This thesis presents a vision-based autonomous control system for an AR.Drone 2.0. A tracking algorithm is developed using onboard vision systems without relying on additional external inputs. In particular, the tracking algorithm is the combination of a trained MobileNet-SSD object detector and a KCF tracker. The noise induced by the tracker is decreased with a Kalman filter. Furthermore, PID controllers are implemented for the motion control of the quadcopter; they process the output of the tracking algorithm to move the drone to the desired position. The final implementation was tested indoors and the system yields acceptable results.
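    A minimal sketch of the detect-track-filter-control pipeline described above, written in Python with OpenCV and NumPy. The PID gains, the constant-velocity Kalman model, and the mapping from pixel errors to normalized AR.Drone-style velocity commands are illustrative assumptions, not the thesis's actual parameters; the bounding box is assumed to come from the MobileNet-SSD detector or the KCF tracker.

```python
import cv2
import numpy as np

class PID:
    """Minimal PID controller; gains and output limit are illustrative only."""
    def __init__(self, kp, ki, kd, limit=1.0):
        self.kp, self.ki, self.kd, self.limit = kp, ki, kd, limit
        self.integral, self.prev_error = 0.0, None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return float(np.clip(out, -self.limit, self.limit))

def make_bbox_kalman():
    """Constant-velocity Kalman filter over the bbox centre: state (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    return kf

def follow_step(bbox, frame_size, kf, yaw_pid, alt_pid, fwd_pid, dt, target_area=0.05):
    """Turn a (possibly noisy) tracker bbox into yaw/climb/forward commands."""
    x, y, w, h = bbox                         # from MobileNet-SSD or the KCF tracker
    cx, cy = x + w / 2.0, y + h / 2.0
    kf.predict()
    state = kf.correct(np.array([[np.float32(cx)], [np.float32(cy)]]))
    sx, sy = float(state[0, 0]), float(state[1, 0])   # Kalman-smoothed target centre
    width, height = frame_size
    yaw_cmd = yaw_pid.update((sx - width / 2.0) / width, dt)    # centre target horizontally
    alt_cmd = alt_pid.update((height / 2.0 - sy) / height, dt)  # centre target vertically
    fwd_cmd = fwd_pid.update(target_area - (w * h) / float(width * height), dt)  # hold distance
    return yaw_cmd, alt_cmd, fwd_cmd          # normalized velocity commands for the drone
```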

    Heterogeneous parallelization for object detection and tracking in UAVs.

    Get PDF
    Recent technical advancements in both unmanned aerial vehicle (UAV) control and artificial intelligence (AI) have made a range of new applications possible. However, one of the main problems in integrating these two areas is the bottleneck of running AI applications on the UAV's resource-limited platform. One of the main solutions to this problem is to adapt the AI and control software on one side, and the computing hardware mounted on the UAV on the other, to the main constraints of the UAV's resource-limited computing platform. The target constraints of such adaptation are performance, energy efficiency, and accuracy. In this paper, we propose a strategy to integrate and adapt commonly used object detection and tracking algorithms and UAV control software for execution on heterogeneous, resource-limited computing units on a UAV. For object detection, a convolutional neural network (CNN) algorithm is used. For object tracking, a novel algorithm is proposed that can execute alongside object detection on sequential stream data. For UAV control, a gain-scheduled PID controller is designed that steers the UAV by continuously manipulating the actuators based on the stream data from the tracking unit and the dynamics of the UAV. All the algorithms are adapted to execute on a heterogeneous platform consisting of an NVIDIA Jetson TX2 embedded computer and an ARM Cortex-M4. Observations from real-time operation of the platform show that the proposed approach reduces power consumption by 53.69% compared with other existing methods, with only a marginal penalty in the object detection and tracking parts.
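    The gain-scheduled PID controller mentioned above can be illustrated with a short Python sketch in which the PID gains are interpolated from a lookup table keyed on an operating-point variable (for example, forward speed). The schedule values and the choice of scheduling variable below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

class GainScheduledPID:
    """PID whose gains are interpolated from a schedule keyed on an
    operating-point variable; all numbers here are placeholders."""
    def __init__(self, schedule):
        # schedule: list of (operating_point, kp, ki, kd), sorted by operating_point
        self.points = np.array([s[0] for s in schedule], dtype=float)
        self.gains = np.array([s[1:] for s in schedule], dtype=float)
        self.integral, self.prev_error = 0.0, None

    def update(self, error, operating_point, dt):
        # Interpolate each gain at the current operating point.
        kp, ki, kd = (np.interp(operating_point, self.points, self.gains[:, i])
                      for i in range(3))
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * deriv

# Example: schedule yaw gains on forward speed (m/s); numbers are placeholders.
yaw_controller = GainScheduledPID([(0.0, 1.2, 0.05, 0.10),
                                   (5.0, 0.8, 0.03, 0.15),
                                   (10.0, 0.5, 0.02, 0.20)])
command = yaw_controller.update(error=0.3, operating_point=4.2, dt=0.02)
```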

    Position Control of an Unmanned Aerial Vehicle From a Mobile Ground Vehicle

    Get PDF
    Quadcopters have been developed with controls providing good maneuverability, simple mechanics, and the ability to hover and to take off and land vertically with precision. Due to their small size, they can get close to targets of interest and, furthermore, stay undetected at low heights. The main drawbacks of a quadcopter are its high power consumption and payload restriction, which constrain the number of onboard sensors. To overcome this limitation, vision-based localization techniques and remote control of the quadcopter are essential areas of current research. The core objective of this research is to develop a closed-loop feedback system between an Unmanned Aerial Vehicle (UAV) and a mobile ground vehicle. With this closed-loop system, the moving ground vehicle aims to navigate the UAV remotely. The ground vehicle uses a pure pursuit algorithm to traverse a pre-defined path. A Proportional-Integral-Derivative (PID) controller is implemented for position control and attitude stabilization of the UAV. The problem of tracking and 3D pose estimation of the UAV based on vision sensing is explored. An estimator is developed to track the states of the UAV using images obtained from a single camera mounted on the ground vehicle. This estimator, coupled with a Kalman filter, determines the UAV's three-dimensional position. The relative position of the UAV with respect to the moving ground vehicle and the control output from a joint centralized PD controller are used to navigate the UAV and follow the motion of the ground vehicle in closed loop, avoiding time delays. This closed-loop system is simulated in MATLAB and Simulink to validate the proposed control and estimation approach. The results obtained validate the control architecture proposed to attain closed-loop feedback between the UAV and the mobile ground vehicle.
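    The pure pursuit algorithm used by the ground vehicle can be sketched as follows: pick a point on the pre-defined path one lookahead distance ahead of the vehicle, compute the curvature of the arc that reaches it, and convert that curvature to a steering angle with a bicycle model. The lookahead distance, wheelbase, and waypoints in this Python sketch are placeholder values, not those used in the thesis.

```python
import math

def pure_pursuit_steering(pose, path, lookahead, wheelbase):
    """Classic pure pursuit steering law.
    pose = (x, y, heading) of the ground vehicle; path = list of (x, y) waypoints."""
    x, y, heading = pose
    # Pick the first waypoint at least `lookahead` away; fall back to the last one.
    goal = path[-1]
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            goal = (wx, wy)
            break
    # Express the goal point in the vehicle frame.
    dx, dy = goal[0] - x, goal[1] - y
    local_x = math.cos(-heading) * dx - math.sin(-heading) * dy
    local_y = math.sin(-heading) * dx + math.cos(-heading) * dy
    # Curvature of the arc through the goal, then bicycle-model steering angle.
    curvature = 2.0 * local_y / (local_x ** 2 + local_y ** 2)
    return math.atan(wheelbase * curvature)

# Illustrative call: 0.5 m lookahead, 0.3 m wheelbase (placeholder values).
steer = pure_pursuit_steering((0.0, 0.0, 0.0), [(1.0, 0.2), (2.0, 0.5)], 0.5, 0.3)
```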