
    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes a description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
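    The vehicle-in-the-loop idea above pairs real dynamics with rendered sensing. Below is a minimal sketch of such a loop using hypothetical placeholder functions rather than the actual FlightGoggles API: the measured motion-capture pose drives a synthetic camera render, and the autonomy stack consumes the rendered image as if it were real.

```python
# Minimal sketch (not the FlightGoggles API) of the vehicle-in-the-loop idea:
# the real vehicle flies in a motion-capture volume, its measured pose drives
# a synthetic camera render, and the perception/control stack never knows the
# imagery is virtual. All names below are hypothetical placeholders.

import numpy as np

def read_mocap_pose(t):
    """Stand-in for a motion-capture query: position (m) and yaw (rad)."""
    return np.array([0.5 * t, 0.0, 1.0]), 0.1 * t  # straight, slowly yawing flight

def render_synthetic_camera(position, yaw):
    """Stand-in for the photorealistic renderer: returns a fake image array."""
    rng = np.random.default_rng(int(position[0] * 100))
    return rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)

def perception_and_control(image):
    """Stand-in for the autonomy stack consuming rendered imagery."""
    brightness = image.mean()
    return {"thrust": 0.5, "yaw_rate": 0.01 * (brightness > 127)}

for step in range(5):                        # real-time loop at the sim rate
    t = step * 0.05                          # e.g. a 20 Hz exteroceptive rate
    pos, yaw = read_mocap_pose(t)            # real dynamics, measured by mocap
    img = render_synthetic_camera(pos, yaw)  # exteroceptive data, in silico
    cmd = perception_and_control(img)        # commands sent back to the vehicle
    print(step, pos.round(2), cmd)
```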

    Motorcycle Helmet Crash Detection/Prevention System

    This project proposes a motorcycle safety system that actively helps prevent crashes and also assists in the event that an accident does occur. The system helps prevent accidents by keeping the user’s eyes on the road through three additions to a typical helmet. The helmet has a heads-up display (HUD) showing the motorcycle’s speed and turn-by-turn directions; instead of tilting their head down, the user can see their speed and directions by moving their eyes, which keeps the road in their field of view. Blind-spot detection increases the user’s overall awareness of their surroundings. The helmet also speeds up the response when an emergency occurs: after a crash has been detected, emergency responders are notified via call and text from the user’s phone, and to draw attention to the accident, external LEDs flash and an external speaker sounds. Keeping the driver’s eyes on the road makes for a safer driving experience, which decreases the number of crashes, and having emergency responders arrive more quickly improves the chances of a speedy recovery or could even save a person’s life.
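    As a rough illustration of the crash-detection-and-notify step, here is a minimal sketch; the impact threshold, cancel window, and notification hook are assumptions for illustration, not the project’s actual parameters.

```python
# Hypothetical sketch of crash detection and emergency notification:
# flag a crash when the acceleration magnitude exceeds a threshold, then give
# the rider a short window to cancel before emergency contacts are alerted.
# The threshold, window, and notify hook below are illustrative assumptions.

import math
import time

CRASH_THRESHOLD_G = 8.0      # assumed impact threshold in g
CANCEL_WINDOW_S = 0.0        # set to e.g. 30.0 on real hardware

def accel_magnitude_g(ax, ay, az):
    return math.sqrt(ax * ax + ay * ay + az * az)

def notify_emergency_contacts():
    # Placeholder for the phone call/text trigger and LED/speaker activation.
    print("ALERT: crash detected, notifying emergency contacts")

def check_sample(ax, ay, az):
    if accel_magnitude_g(ax, ay, az) >= CRASH_THRESHOLD_G:
        time.sleep(CANCEL_WINDOW_S)   # rider may cancel a false positive here
        notify_emergency_contacts()
        return True
    return False

# Normal riding (~1 g) followed by a simulated impact spike.
for sample in [(0.1, 0.2, 1.0), (0.0, 0.1, 1.1), (6.5, 4.8, 2.0)]:
    if check_sample(*sample):
        break
```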

    Experimental Verification of Inertial Navigation with MEMS for Forensic Investigation of Vehicle Collision

    This paper studies whether low-grade inertial sensors can be an adequate source of data for accident characterization and for estimating the vehicle trajectory near a crash. The paper presents the outcomes of an experiment carried out in an accredited safety performance assessment facility, in which a full-size passenger car was crashed and the recordings of different types of motion sensors were compared to investigate the practical accuracy of consumer-grade sensors versus reference equipment and cameras. An inertial navigation system was developed by combining motion sensors of different dynamic ranges to acquire and process vehicle crash data. The vehicle position was reconstructed in three-dimensional space using strap-down inertial mechanization. The difference between the computed trajectory and the ground-truth position acquired by cameras was at the decimeter level within a short time window of 750 ms. The experiment findings suggest that inertial sensors of this grade, despite significant stochastic variations and imperfections, can be valuable for estimating the velocity-vector change, crash severity, and direction of the impact force, and for estimating the vehicle trajectory in the proximity of the crash.
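    As a worked illustration of strap-down mechanization, the sketch below integrates a planar (2D) simplification with synthetic IMU samples: the gyro is integrated for heading, body-frame accelerations are rotated into the navigation frame, and velocity and position follow by double integration. The sample rate, the gravity-compensated accelerations, and the 750 ms window are assumptions chosen only to mirror the setup described above, not the authors’ implementation.

```python
# Simplified planar strap-down mechanization sketch (illustrative only).
# Synthetic data: a strong braking deceleration with a slight yaw rate.

import numpy as np

DT = 0.001                    # assumed 1 kHz IMU
N = 750                       # 750 ms window, as in the paper

yaw = 0.0
vel = np.zeros(2)             # nav-frame velocity [m/s]
pos = np.zeros(2)             # nav-frame position [m]

gyro_z = np.full(N, 0.2)                       # yaw rate [rad/s]
accel_body = np.tile([-60.0, 5.0], (N, 1))     # body accel [m/s^2], gravity removed

for k in range(N):
    yaw += gyro_z[k] * DT                      # attitude update
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])            # body -> nav rotation
    a_nav = R @ accel_body[k]
    vel += a_nav * DT                          # velocity update
    pos += vel * DT                            # position update

print("delta-v [m/s]:", np.round(vel, 2), " displacement [m]:", np.round(pos, 2))
```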

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs via a number of contributions: a novel coupling of perception and control via relevant, diverse, multiple interpretations of the scene around the robot; leveraging recent advances in machine learning to showcase anytime budgeted cost-sensitive feature selection; and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our novel pipeline via real-world experiments of more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to also combine information from other modalities, such as stereo and lidar, if available.
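    A minimal sketch of the receding-horizon loop described above, with an assumed stand-in for the learned monocular depth regressor: predict per-column depths, score a small library of candidate arcs by worst-case clearance, and command the safest one before replanning at the next frame.

```python
# Illustrative sketch of monocular receding-horizon planning; the depth
# "predictor" is a stand-in for the learned non-linear regressor in the paper.

import numpy as np

def predict_depth_columns(image):
    """Stand-in for learned monocular depth: nearest obstacle depth per column."""
    return image.mean(axis=(0, 2)) / 255.0 * 10.0   # fake depths in [0, 10] m

def score_arc(depths, steer, horizon_m=5.0):
    """Worst-case clearance over the image columns this steering sweeps across."""
    n = len(depths)
    center = int((0.5 + 0.4 * steer) * n)           # steer in [-1, 1]
    band = depths[max(0, center - 20): center + 20]
    return float(min(band.min(), horizon_m))

def plan_step(image, arcs=np.linspace(-1.0, 1.0, 7)):
    depths = predict_depth_columns(image)
    scores = [score_arc(depths, s) for s in arcs]
    return float(arcs[int(np.argmax(scores))])      # steering of safest arc

rng = np.random.default_rng(0)
frame = rng.integers(0, 255, size=(96, 128, 3), dtype=np.uint8)
print("chosen steering command:", plan_step(frame))
```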

    Learning to Fly by Crashing

    How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts; however, high-capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation, but the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself. We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data in conjunction with positive data sampled from the same trajectories to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective at navigating the UAV even in extremely cluttered environments with dynamic obstacles, including humans. For the supplementary video, see: https://youtu.be/u151hJaGKU
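    A hedged sketch of the self-supervised recipe the abstract outlines, with a stand-in for the paper’s learned classifier: frames far from impact are labeled safe, frames just before impact unsafe, and at test time the policy steers toward whichever image crop (left, straight, or right) is judged safest.

```python
# Illustrative sketch of self-supervised crash labeling and the steering rule;
# the "classifier" below is a stand-in, not the network from the paper.

import numpy as np

def label_trajectory(num_frames, crash_margin=10):
    """Last `crash_margin` frames before impact -> 0 (unsafe), earlier -> 1 (safe)."""
    labels = np.ones(num_frames, dtype=int)
    labels[-crash_margin:] = 0
    return labels

def safe_probability(crop):
    """Stand-in for the learned binary classifier: brighter crop = 'more open'."""
    return float(crop.mean()) / 255.0

def choose_action(image):
    h, w, _ = image.shape
    crops = {"left": image[:, : w // 3],
             "straight": image[:, w // 3: 2 * w // 3],
             "right": image[:, 2 * w // 3:]}
    return max(crops, key=lambda k: safe_probability(crops[k]))

print("labels for a 30-frame crash run:", label_trajectory(30))
rng = np.random.default_rng(1)
frame = rng.integers(0, 255, size=(60, 90, 3), dtype=np.uint8)
print("action:", choose_action(frame))
```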