
    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test environment for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
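    The vehicle-in-the-loop combination of real dynamics and synthetic sensing described above can be summarized as a short render loop. The sketch below is a hedged illustration: MotionCaptureClient, PhotorealisticRenderer, and feed_perception_stack are hypothetical names, not the FlightGoggles API. The physical vehicle's motion-capture pose drives a synthetic camera render each frame.

```python
# A minimal sketch of the vehicle-in-the-loop concept: the real vehicle
# supplies dynamics and proprioception via motion capture, and only the
# exteroceptive sensor (here, a camera) is rendered synthetically.
# All class and function names are illustrative, not the FlightGoggles API.

import time

class MotionCaptureClient:
    """Stands in for a motion-capture feed streaming the vehicle's pose."""
    def get_pose(self):
        # Would return the live (position, quaternion) of the physical vehicle.
        return (0.0, 0.0, 1.5), (1.0, 0.0, 0.0, 0.0)

class PhotorealisticRenderer:
    """Stands in for the game-engine renderer with photogrammetry assets."""
    def render_camera(self, position, orientation):
        # Would return a synthetic RGB frame for the given camera pose.
        return b"<image bytes>"

def feed_perception_stack(image):
    """Placeholder for handing the rendered frame to the autonomy software."""
    pass

def vehicle_in_the_loop(hz=60.0):
    mocap, renderer = MotionCaptureClient(), PhotorealisticRenderer()
    while True:
        # Dynamics come "for free" from the physical vehicle in the mocap
        # room; the virtual environment is rendered around its measured pose.
        position, orientation = mocap.get_pose()
        feed_perception_stack(renderer.render_camera(position, orientation))
        time.sleep(1.0 / hz)
```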

    Urban Air Mobility System Testbed Using CAVE Virtual Reality Environment

    Urban Air Mobility (UAM) refers to a system of air passenger and small cargo transportation within an urban area. The UAM framework also includes other urban Unmanned Aerial Systems (UAS) services that will be supported by a mix of onboard, ground, piloted, and autonomous operations. Over the past few years, UAM research has gained wide interest from companies and federal agencies as an on-demand innovative transportation option that can help reduce traffic congestion and pollution as well as increase mobility in metropolitan areas. The concepts of UAM/UAS operation in the National Airspace System (NAS) remain an active area of research to ensure safe and efficient operations. With new developments in smart vehicle design and infrastructure for air traffic management, there is a need for methods to integrate and test various components of the UAM framework. In this work, we report on the development of a virtual reality (VR) testbed using the Cave Automatic Virtual Environment (CAVE) technology for human-automation teaming and airspace operation research of UAM. Using a four-wall projection system with motion capture, the CAVE provides an immersive virtual environment with real-time full-body tracking capability. We created a virtual environment consisting of San Francisco city and a vertical take-off-and-landing passenger aircraft that can fly between a downtown location and the San Francisco International Airport. The aircraft can be operated autonomously or manually by a single pilot who maneuvers the aircraft using a flight control joystick. The interior of the aircraft includes a virtual cockpit display with vehicle heading, location, and speed information. The system can record simulation events and flight data for post-processing. The system parameters are customizable for different flight scenarios; hence, the CAVE VR testbed provides a flexible method for development and evaluation of the UAM framework.
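    As a rough illustration of the last two capabilities mentioned above, customizable scenario parameters and recording of flight data for post-processing, the sketch below uses a hypothetical ScenarioConfig and a CSV logger; none of these names come from the actual testbed.

```python
# Illustrative sketch (not the actual testbed code) of scenario
# configuration and flight-data recording for post-processing.

import csv
import time
from dataclasses import dataclass

@dataclass
class ScenarioConfig:
    # Hypothetical parameters for a downtown-to-SFO flight scenario.
    origin: str = "Downtown San Francisco"
    destination: str = "SFO"
    autonomous: bool = True          # False => single pilot with joystick
    cruise_speed_mps: float = 45.0

def record_flight(config, duration_s=10.0, path="flight_log.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "heading_deg", "speed_mps"])  # cockpit values
        t0 = time.time()
        while time.time() - t0 < duration_s:
            t = time.time() - t0
            # A real testbed would sample live vehicle state here;
            # constants stand in for it in this sketch.
            writer.writerow([round(t, 2), 270.0, config.cruise_speed_mps])
            time.sleep(0.1)

record_flight(ScenarioConfig())
```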

    Control and communication systems for automated vehicles cooperation and coordination

    Technological advances in Intelligent Transportation Systems (ITS) have been improving exponentially over the last century. The objective is to provide intelligent and innovative services for the different modes of transportation, towards better, safer, more coordinated, and smarter transport networks. The ITS focus is divided into two main categories: the first is to improve existing components of transport networks, while the second is to develop intelligent vehicles that facilitate the transportation process. Different research efforts have been exerted to tackle various aspects of automated vehicles. Accordingly, this thesis addresses the problem of cooperation and coordination among multiple automated vehicles. First, the 3DCoAutoSim driving simulator was developed in the Unity game engine and connected to the Robot Operating System (ROS) framework and to Simulation of Urban Mobility (SUMO). 3DCoAutoSim is an abbreviation for "3D Simulator for Cooperative Advanced Driver Assistance Systems (ADAS) and Automated Vehicles Simulator". 3DCoAutoSim was tested under different circumstances and conditions; afterwards, it was validated by carrying out several controlled experiments and comparing the results against their real-world counterparts. The obtained results showed the efficiency of the simulator in handling different situations, emulating real-world vehicles. Next is the development of the iCab platforms, an abbreviation for "Intelligent Campus Automobile". The platforms are two electric golf carts that were modified mechanically, electronically, and electrically towards the goal of automated driving. Each iCab was equipped with several on-board embedded computers, perception sensors, and auxiliary devices in order to execute the actions necessary for self-driving. Moreover, the platforms are capable of several Vehicle-to-Everything (V2X) communication schemes, applying three layers of control, utilizing a cooperation architecture for platooning, and executing localization, mapping, perception, and planning systems. Hundreds of experiments were carried out to validate each system in the iCab platform. The results proved the ability of the platform to self-drive from one point to another with minimal human intervention.
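    To illustrate the kind of Unity-to-ROS connection the thesis describes, the following is a minimal sketch of a bridge node. The topic name, JSON framing, and TCP port are assumptions for illustration; the abstract does not specify the actual 3DCoAutoSim interface.

```python
# A hedged sketch of a Unity-to-ROS bridge: Unity streams vehicle state as
# one JSON object per line over TCP, and this node republishes it as a ROS
# Twist command. Topic name, message schema, and port are assumptions.

import json
import socket

import rospy
from geometry_msgs.msg import Twist

def unity_to_ros_bridge(host="127.0.0.1", port=5005):
    rospy.init_node("unity_bridge")
    pub = rospy.Publisher("/vehicle/cmd_vel", Twist, queue_size=10)
    sock = socket.create_connection((host, port))
    buf = sock.makefile("r")
    while not rospy.is_shutdown():
        line = buf.readline()          # one JSON message per line from Unity
        if not line:
            break                      # simulator closed the connection
        state = json.loads(line)       # e.g. {"speed": 3.2, "steer": 0.1}
        msg = Twist()
        msg.linear.x = state["speed"]
        msg.angular.z = state["steer"]
        pub.publish(msg)

if __name__ == "__main__":
    unity_to_ros_bridge()
```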

    Synthetic Datasets for Autonomous Driving: A Survey

    Autonomous driving techniques have been flourishing in recent years while thirsting for huge amounts of high-quality data. However, it is difficult for real-world datasets to keep up with the pace of changing requirements due to their expensive and time-consuming experimental and labeling costs. Therefore, more and more researchers are turning to synthetic datasets to easily generate rich and changeable data as an effective complement to the real world and to improve the performance of algorithms. In this paper, we summarize the evolution of synthetic dataset generation methods and review the work to date on synthetic datasets for single- and multi-task categories in autonomous driving research. We also discuss the role that synthetic datasets play in the evaluation of autonomous driving algorithms, including domain-gap testing and their positive effects, especially on trustworthiness and safety aspects. Finally, we discuss general trends and possible development directions. To the best of our knowledge, this is the first survey focusing on the application of synthetic datasets in autonomous driving. This survey also raises awareness of the problems of real-world deployment of autonomous driving technology and provides researchers with a possible solution.
    Comment: 19 pages, 5 figures

    Vision-based active safety system for automatic stopping

    Intelligent systems designed to reduce highway fatalities have been widely applied in the automotive sector in the last decade. Of all users of transport systems, pedestrians are the most vulnerable in crashes as they are unprotected. This paper deals with an autonomous intelligent emergency system designed to avoid collisions with pedestrians. The system consists of a fuzzy controller based on the time-to-collision estimate – obtained via a vision-based system – and the wheel-locking probability – obtained via the vehicle’s CAN bus – that generates a safe braking action. The system has been tested in a real car – a convertible Citroën C3 Pluriel – equipped with an automated electro-hydraulic braking system capable of working in parallel with the vehicle’s original braking circuit. The system is used as a last resort in the case that an unexpected pedestrian is in the lane and all the warnings have failed to produce a response from the driver.
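    The control law described above lends itself to a compact illustration. The following is a hedged sketch of a Sugeno-style fuzzy controller with the paper's two inputs, time-to-collision (from vision) and wheel-locking probability (from the CAN bus); the membership functions and rule base are illustrative assumptions, not the published design.

```python
# Hedged sketch of a fuzzy braking controller: inputs are time-to-collision
# (TTC, seconds) and wheel-locking probability in [0, 1]; the output is a
# braking action in [0, 1]. Memberships and rules are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_brake(ttc_s, lock_prob):
    # Fuzzify the inputs.
    ttc_low   = tri(ttc_s, -1.0, 0.0, 2.0)      # collision imminent
    ttc_med   = tri(ttc_s,  1.0, 2.5, 4.0)
    lock_low  = tri(lock_prob, -0.5, 0.0, 0.5)
    lock_high = tri(lock_prob,  0.4, 1.0, 1.5)

    # Rules (min as AND), each mapped to a crisp braking level.
    rules = [
        (min(ttc_low, lock_low),  1.0),   # brake hard while wheels grip
        (min(ttc_low, lock_high), 0.6),   # ease off to avoid wheel lock
        (min(ttc_med, lock_low),  0.4),   # moderate pre-braking
    ]
    # Weighted-average (Sugeno-style) defuzzification.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(fuzzy_brake(ttc_s=0.8, lock_prob=0.1))  # near-full braking expected
```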

    Adaptive SLAM with synthetic stereo dataset generation for real-time dense 3D reconstruction

    In robotic mapping and navigation, of prime importance today with the trend for autonomous cars, simultaneous localization and mapping (SLAM) algorithms often use stereo vision to extract 3D information about the surrounding world. Whereas the number of creative methods for stereo-based SLAM is continuously increasing, the variety of datasets is relatively poor and the size of their contents relatively small. This size issue is increasingly problematic with the recent explosion of deep-learning-based approaches, several of which require large amounts of data. These techniques enhance the precision of both localization and mapping estimation to a point where the accuracy of the sensors used to obtain the ground truth might be questioned. Finally, because most of these technologies are now embedded in on-board systems, power consumption and real-time constraints become key requirements. Our contribution is twofold: we propose an adaptive SLAM method that reduces the number of processed frames with minimal impact on error, and we make available a synthetic, flexible stereo dataset with absolute ground truth, which allows new benchmarks to be run for visual odometry challenges. This dataset is available online at http://alastor.labri.fr/
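    The adaptive idea, processing fewer frames with minimal impact on error, can be sketched as a simple keyframe filter. The difference metric and threshold below are assumptions for illustration, not the paper's actual criterion.

```python
# A minimal sketch of adaptive frame selection: process a frame only when it
# differs enough from the last processed one, so stereo SLAM work drops on
# slow or redundant sequences. Metric and threshold are illustrative.

import numpy as np

def adaptive_frame_filter(frames, threshold=12.0):
    """Yield (index, frame) for frames whose mean absolute intensity change
    relative to the last processed frame exceeds the threshold."""
    last = None
    for i, frame in enumerate(frames):
        gray = frame.mean(axis=2) if frame.ndim == 3 else frame
        if last is None or np.abs(gray - last).mean() > threshold:
            last = gray
            yield i, frame   # hand this frame to the SLAM pipeline

# Usage on a dummy sequence: only frames that change enough are kept.
rng = np.random.default_rng(0)
seq = [np.full((48, 64), 100.0) + rng.normal(0, s, (48, 64))
       for s in (0, 1, 30, 1, 40)]
print([i for i, _ in adaptive_frame_filter(seq)])
```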

    Learning high-speed flight in the wild

    Quadrotors are agile. Unlike most other machines, they can traverse extremely complex environments at high speeds. To date, only expert human pilots have been able to fully exploit their capabilities. Autonomous operation with onboard sensing and computation has been limited to low speeds. State-of-the-art methods generally separate the navigation problem into subtasks: sensing, mapping, and planning. Although this approach has proven successful at low speeds, the separation it builds upon can be problematic for high-speed navigation in cluttered environments. The subtasks are executed sequentially, leading to increased processing latency and a compounding of errors through the pipeline. Here, we propose an end-to-end approach that can autonomously fly quadrotors through complex natural and human-made environments at high speeds with purely onboard sensing and computation. The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion. This direct mapping drastically reduces processing latency and increases robustness to noisy and incomplete perception. The sensorimotor mapping is performed by a convolutional network that is trained exclusively in simulation via privileged learning: imitating an expert with access to privileged information. By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments that were never experienced during training: dense forests, snow-covered terrain, derailed trains, and collapsed buildings. Our work demonstrates that end-to-end policies trained in simulation enable high-speed autonomous flight through challenging environments, outperforming traditional obstacle avoidance pipelines.
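    As a rough sketch of the sensorimotor mapping described above, the following small PyTorch model maps a depth image and a vehicle-state vector to a short receding-horizon sequence of waypoints. The layer sizes, state dimensionality, and output parameterization are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: a convolutional network mapping (depth image, vehicle state)
# to a receding-horizon trajectory of future waypoints. Illustrative only.

import torch
import torch.nn as nn

class TrajectoryPolicy(nn.Module):
    def __init__(self, horizon=10):
        super().__init__()
        self.encoder = nn.Sequential(          # depth image -> feature vector
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 32 image features + 9 assumed state dims (velocity, attitude,
        # goal direction), concatenated and decoded into waypoints.
        self.head = nn.Sequential(
            nn.Linear(32 + 9, 64), nn.ReLU(),
            nn.Linear(64, horizon * 3),        # (x, y, z) per future waypoint
        )
        self.horizon = horizon

    def forward(self, depth, state):
        feat = self.encoder(depth)
        out = self.head(torch.cat([feat, state], dim=1))
        return out.view(-1, self.horizon, 3)   # batch of waypoint sequences

policy = TrajectoryPolicy()
waypoints = policy(torch.rand(1, 1, 96, 128), torch.rand(1, 9))
print(waypoints.shape)  # torch.Size([1, 10, 3])
```

    In the paper's pipeline such a network is trained by imitating a privileged expert in simulation; at deployment, only the newest waypoint plan is executed before the network replans, which is what makes the mapping receding-horizon.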