
    ART/ATK: A research platform for assessing and mitigating the sim-to-real gap in robotics and autonomous vehicle engineering

    We discuss a platform that has both software and hardware components and whose purpose is to support research into characterizing and mitigating the sim-to-real gap in robotics and vehicle autonomy engineering. The software is operating-system independent and has three main components: a simulation engine called Chrono, which supports high-fidelity vehicle and sensor simulation; an autonomy stack for algorithm design and testing; and a development environment that supports visualization and hardware-in-the-loop experimentation. The accompanying hardware platform is a 1/6th-scale vehicle augmented with reconfigurable mountings for computing, sensing, and tracking. Since this vehicle platform has a digital twin within the simulation environment, one can test the same autonomy perception, state estimation, or controls algorithms, as well as the processors they run on, in both simulation and reality. A demonstration shows the use of this platform for autonomy research. Future work will concentrate on augmenting ART/ATK with support for a full-sized Chevy Bolt EUV, which will be made available to this group in the immediate future.
    Comment: 4 pages. Presented at the IROS 2022 Workshop on Miniature Robot Platforms for Full Scale Autonomous Vehicle Research. arXiv admin note: substantial text overlap with arXiv:2206.0653
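
    As an illustration of the digital-twin workflow described above, the sketch below shows how a single autonomy routine can run unchanged against either the simulated or the physical vehicle behind a common interface. The interface and its method names are hypothetical, not ART/ATK's actual API.

```python
# A minimal sketch, assuming a shared vehicle interface; class and method
# names are hypothetical, not ART/ATK's actual API.

class VehicleInterface:
    """Common interface implemented by both the Chrono digital twin
    and the physical 1/6th-scale vehicle."""

    def read_sensors(self):
        raise NotImplementedError

    def send_command(self, steering, throttle):
        raise NotImplementedError

def autonomy_step(vehicle):
    # Perception, state estimation, and control all go through the shared
    # interface, so identical code is exercised in simulation and reality.
    obs = vehicle.read_sensors()
    steering, throttle = 0.0, 0.2  # trivial placeholder policy
    vehicle.send_command(steering, throttle)
```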

    MARBLER: An Open Platform for Standardized Evaluation of Multi-Robot Reinforcement Learning Algorithms

    Multi-agent reinforcement learning (MARL) has enjoyed significant recent progress, thanks to deep learning. This is naturally starting to benefit multi-robot systems (MRS) in the form of multi-robot RL (MRRL). However, existing infrastructure to train and evaluate policies predominantly focuses on challenges in coordinating virtual agents and ignores characteristics important to robotic systems. Few platforms support realistic robot dynamics, and fewer still can evaluate Sim2Real performance of learned behavior. To address these issues, we contribute MARBLER: Multi-Agent RL Benchmark and Learning Environment for the Robotarium. MARBLER offers a robust and comprehensive evaluation platform for MRRL by marrying Georgia Tech's Robotarium (which enables rapid prototyping on physical MRS) and OpenAI's Gym framework (which facilitates standardized use of modern learning algorithms). MARBLER offers a highly controllable environment with realistic dynamics, including barrier certificate-based obstacle avoidance. It allows anyone across the world to train and deploy MRRL algorithms on a physical testbed, with reproducibility. Further, we introduce five novel scenarios inspired by common challenges in MRS and provide support for new custom scenarios. Finally, we use MARBLER to evaluate popular MARL algorithms and provide insights into their suitability for MRRL. In summary, MARBLER can be a valuable tool for the MRS research community by facilitating comprehensive and standardized evaluation of learning algorithms on realistic simulations and physical hardware. Links to our open-source framework and videos of real-world experiments can be found at https://shubhlohiya.github.io/MARBLER/.
    Comment: 7 pages, 3 figures. Submitted to MRS 2023. For the associated website, see https://shubhlohiya.github.io/MARBLER
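
    Since MARBLER adopts OpenAI's Gym framework, training code can follow the standard Gym loop. The sketch below is a hedged usage example: the scenario id and the shape of the joint action are assumptions, not MARBLER's documented API.

```python
# A hedged usage sketch of a Gym-style MRRL loop; the environment id and
# action handling are assumptions, not MARBLER's documented API.
import gym  # MARBLER builds on OpenAI's Gym per the abstract

env = gym.make("PredatorCapturePrey-v0")  # hypothetical MARBLER scenario id

obs = env.reset()
done = False
while not done:
    # A trained MARL policy would map each robot's observation to its
    # action; a random joint action stands in here.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
env.close()
```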

    Learning Real-world Autonomous Navigation by Self-Supervised Environment Synthesis

    Machine learning approaches have recently enabled autonomous navigation for mobile robots in a data-driven manner. Since most existing learning-based navigation systems are trained with data generated in artificially created training environments, it is inevitable during large-scale real-world deployment that robots will encounter unseen scenarios that are out of the training distribution and therefore lead to poor real-world performance. On the other hand, directly training in the real world is generally unsafe and inefficient. To address this issue, we introduce Self-supervised Environment Synthesis (SES), in which autonomous mobile robots utilize experience gathered during real-world deployment (subject to safety and efficiency requirements) to reconstruct navigation scenarios and synthesize representative training environments in simulation. Training in these synthesized environments leads to improved future performance in the real world. The effectiveness of SES at synthesizing representative simulation environments and improving real-world navigation performance is evaluated via a large-scale deployment in a high-fidelity, realistic simulator and a small-scale deployment on a physical robot.
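
    To make the synthesis step concrete, here is a minimal, self-contained sketch of the idea: perturb obstacle layouts reconstructed from real-world logs to obtain a family of representative training environments. It is an illustration of the concept only, not the authors' implementation.

```python
# A minimal sketch of environment synthesis, assuming deployment logs reduce
# to obstacle positions where the robot struggled. Illustrative only; not
# the authors' SES implementation.
import random

def synthesize_environments(logged_obstacles, n_envs=10, jitter=0.5):
    """Generate training environments resembling logged hard cases.

    logged_obstacles: list of (x, y) obstacle positions recovered from
    real-world deployment logs.
    """
    envs = []
    for _ in range(n_envs):
        # Perturb the reconstructed layout so training covers a neighborhood
        # of the real scenario rather than a single point.
        env = [(x + random.uniform(-jitter, jitter),
                y + random.uniform(-jitter, jitter))
               for (x, y) in logged_obstacles]
        envs.append(env)
    return envs

# Example: obstacles reconstructed from one real-world episode.
hard_case = [(1.0, 2.0), (2.5, 0.5), (3.0, 3.0)]
training_envs = synthesize_environments(hard_case)
```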

    Difference-based Deep Convolutional Neural Network for Simulation-to-reality UAV Fault Diagnosis

    Identifying faults in propellers is important to keep quadrotors operating safely and efficiently. Simulation-to-reality (sim-to-real) UAV fault diagnosis methods provide a cost-effective and safe approach to detecting propeller faults. However, due to the gap between simulation and reality, classifiers trained with simulated data usually underperform in real flights. In this work, a new deep neural network (DNN) model is presented to address this issue. It uses difference features extracted by a difference-based deep convolutional neural network (DDCNN) to reduce the sim-to-real gap. Moreover, a new domain adaptation method is presented to further bring the distribution of the real-flight data closer to that of the simulation data. The experimental results show that the proposed approach can achieve an accuracy of 97.9% in detecting propeller faults in real flight. Feature visualization was performed to help better understand the DDCNN model.
    Comment: 7 pages, 8 figures
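
    The sketch below illustrates the difference-feature idea in PyTorch: a shared CNN encodes two related signal windows, and classification is performed on the difference of their deep features, which can cancel nuisance factors common to both inputs. Layer sizes and input shapes are illustrative assumptions, not the paper's DDCNN architecture.

```python
# A hedged sketch of classifying on deep feature differences; layer sizes
# and input shapes are assumptions, not the paper's DDCNN architecture.
import torch
import torch.nn as nn

class DifferenceCNN(nn.Module):
    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        # Shared 1-D CNN encoder for flight-signal windows
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x_a, x_b):
        # Classify the difference of deep features, intended to suppress
        # factors shared by both inputs (e.g. sim-vs-real discrepancies).
        diff = self.encoder(x_a) - self.encoder(x_b)
        return self.classifier(diff)

# Example: batches of 4-channel signal windows of length 128.
model = DifferenceCNN()
logits = model(torch.randn(8, 4, 128), torch.randn(8, 4, 128))
```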

    Sim2real and Digital Twins in Autonomous Driving: A Survey

    Safety and cost are two important concerns in the development of autonomous driving technologies. From academic research to commercial applications of autonomous driving vehicles, sufficient simulation and real-world testing are required. In general, large-scale testing is conducted in a simulation environment and the learned driving knowledge is then transferred to the real world, so how to adapt driving knowledge learned in simulation to reality becomes a critical issue. However, the virtual simulation world differs from the real world in many aspects, such as lighting, textures, vehicle dynamics, and agents' behaviors, which makes it difficult to bridge the gap between the virtual and real worlds. This gap is commonly referred to as the reality gap (RG). In recent years, researchers have explored various approaches to address the reality gap issue, which can be broadly classified into two categories: transferring knowledge from simulation to reality (sim2real) and learning in digital twins (DTs). In this paper, we consider solutions based on sim2real and DT technologies and review important applications and innovations in the field of autonomous driving. Meanwhile, we present the state of the art from the perspectives of algorithms, models, and simulators, and trace the development process from sim2real to DTs. The presentation also illustrates the far-reaching effects of the development of sim2real and DTs in autonomous driving.

    Autonomous shock sensing using bi-stable triboelectric generators and MEMS electrostatic levitation actuators

    This work presents an automatic threshold shock-sensing trigger system that consists of a bi-stable triboelectric transducer and a levitation-based electrostatic mechanism. The bi-stable mechanism is sensitive to mechanical shocks and releases impact energy when the shock is strong enough. A triboelectric generator produces a voltage when it receives a mechanical shock, and this voltage is proportional to the shock. When the voltage exceeds a certain level, the initially pulled-in microelectromechanical system (MEMS) switch opens and can disconnect the current in a safety electronic system. The MEMS switch combines two mechanisms, gap closing (parallel-plate electrodes) and electrostatic levitation (side electrodes), to provide bi-directional motion. The switch is initially closed by a small bias voltage on the gap-closing electrodes. The voltage from the bi-stable generator is connected to the side electrodes. When the shock goes beyond a threshold, the upward force exerted on the switch by the side electrodes becomes strong enough to peel the switch off from its closed position. The threshold shock the system can detect is tunable using two control parameters: the axial force on the bi-stable system (a clamped-clamped beam) and the bias voltage on the MEMS switch (gap-closing electrodes). Actuation at the macro scale is thus directly connected to a sensor-switch mechanism at the micro scale. This chain forms an autonomous, stand-alone actuation and sensing system with potential applications in airbag deployment devices and powerline protection systems. We provide a theoretical framework for the entire system, validated by experimental results.
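
    A toy model of the trigger chain may help fix the idea: the triboelectric voltage grows with the shock, and the switch releases once the levitation force from the side electrodes overcomes the holding force from the gap-closing electrodes. The linear voltage model and force coefficients below are simplifying assumptions, not the paper's validated equations.

```python
# A toy threshold model of the trigger chain; the linear voltage model and
# the force coefficients are assumptions, not the paper's equations.

def switch_releases(shock_g, k_tribo=0.5, v_bias=40.0,
                    alpha=1.0e-9, beta=2.0e-9):
    """Return True if the MEMS switch opens for a given shock.

    shock_g : shock amplitude (in g) applied to the bi-stable beam
    k_tribo : assumed triboelectric gain, volts per g
    v_bias  : bias voltage on the gap-closing electrodes (holds switch closed)
    alpha, beta : illustrative electrostatic force coefficients
    """
    v_side = k_tribo * shock_g   # voltage from the triboelectric generator
    f_hold = alpha * v_bias**2   # downward gap-closing (holding) force
    f_lift = beta * v_side**2    # upward levitation force from side electrodes
    return f_lift > f_hold       # the switch peels off when lift wins

# Raising v_bias raises the detectable shock threshold, mirroring one of the
# two tuning knobs described in the abstract.
print(switch_releases(shock_g=80.0))
```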

    Contrastive Learning for Enhancing Robust Scene Transfer in Vision-based Agile Flight

    Scene transfer for vision-based mobile robotics applications is a highly relevant and challenging problem. The utility of a robot greatly depends on its ability to perform a task in the real world, outside of a well-controlled lab environment. Existing end-to-end policy learning approaches to scene transfer often suffer from poor sample efficiency or limited generalization capabilities, making them unsuitable for mobile robotics applications. This work proposes an adaptive multi-pair contrastive learning strategy for visual representation learning that enables zero-shot scene transfer and real-world deployment. Control policies relying on the embedding are able to operate in unseen environments without the need for finetuning in the deployment environment. We demonstrate the performance of our approach on the task of agile, vision-based quadrotor flight. Extensive simulation and real-world experiments demonstrate that our approach successfully generalizes beyond the training domain and outperforms all baselines.
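
    A generic multi-pair contrastive objective of this kind can be sketched as follows: embeddings of the same scene under different visual conditions are pulled together while other scenes in the batch are pushed apart. This is a standard InfoNCE-style loss given as an illustration, not necessarily the authors' exact adaptive formulation.

```python
# A minimal InfoNCE-style multi-pair contrastive loss; an illustration of
# the general technique, not necessarily the authors' adaptive variant.
import torch
import torch.nn.functional as F

def multi_pair_contrastive_loss(z_anchor, z_positive, temperature=0.1):
    """z_anchor, z_positive: (N, D) embeddings; row i of each tensor comes
    from the same scene rendered under different visual conditions."""
    z_a = F.normalize(z_anchor, dim=1)
    z_p = F.normalize(z_positive, dim=1)
    logits = z_a @ z_p.t() / temperature  # (N, N) cross-view similarities
    labels = torch.arange(z_a.size(0))    # matching rows are the positives
    return F.cross_entropy(logits, labels)

loss = multi_pair_contrastive_loss(torch.randn(16, 64), torch.randn(16, 64))
```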

    Towards self-attention based visual navigation in the real world

    Vision-guided navigation requires processing complex visual information to inform task-orientated decisions. Applications include autonomous robots, self-driving cars, and assistive vision for humans. A key element is the extraction and selection of relevant features in pixel space upon which to base action choices, for which machine learning techniques are well suited. However, deep reinforcement learning agents trained in simulation often exhibit unsatisfactory results when deployed in the real world due to perceptual differences known as the reality gap. An approach that is yet to be explored to bridge this gap is self-attention. In this paper we (1) perform a systematic exploration of the hyperparameter space for self-attention-based navigation of 3D environments and qualitatively appraise the behaviour observed from different hyperparameter sets, including their ability to generalise; (2) present strategies to improve the agents' generalisation abilities and navigation behaviour; and (3) show how models trained in simulation are capable of processing real-world images meaningfully in real time. To our knowledge, this is the first demonstration of a self-attention-based agent successfully trained to navigate a 3D action space, using fewer than 4000 parameters.
    Comment: Submitted to the 2022 Australian Conference on Robotics and Automation (ACRA 2022)
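
    To give a sense of how small such an agent can be, the sketch below implements a self-attention block over image patches with only a few hundred parameters. The dimensions are illustrative assumptions, not the paper's architecture.

```python
# A tiny self-attention block over image patches; dimensions are
# illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class TinySelfAttention(nn.Module):
    def __init__(self, d_in=16, d_k=8):
        super().__init__()
        self.q = nn.Linear(d_in, d_k, bias=False)
        self.k = nn.Linear(d_in, d_k, bias=False)
        self.scale = d_k ** 0.5

    def forward(self, patches):
        # patches: (B, N, d_in) flattened image patches
        scores = self.q(patches) @ self.k(patches).transpose(1, 2) / self.scale
        attn = torch.softmax(scores, dim=-1)  # (B, N, N) patch-to-patch weights
        # Attention-weighted patch features; a downstream policy head would
        # select actions from the most salient patches.
        return attn @ patches

block = TinySelfAttention()           # 2 * 16 * 8 = 256 weights
out = block(torch.randn(1, 64, 16))   # 64 patches of dimension 16
```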

    Simulation and Visualisation Software for an Elastic Aircraft for High Altitudes based on Game Engine Technology

    The aim of this thesis work was to design and develop a simulation and visualization platform based on game engine technology that could be applied to any robotic system and would provide tools for representing the robot, visualizing the environment around it in a high level of detail, and sampling this environment in order to enable external simulation of interactions between the robot and its surroundings. The main design goal is for the platform to have external physics simulations (robot and robot-environment interactions) entirely separated from the game engine environment. To this end, Unreal Engine 4 (UE4) was chosen and the platform was implemented as a modular UE4 project, making use of engine-specific structures. Interfacing between these modules and external ones was achieved by designing and implementing a middleware interface for the platform, thereby enabling access to the middleware's data transfer system. Finally, this software-in-the-loop chain created between the UE4 modules and the external modules, with the middleware as a transfer point, was evaluated in terms of feasibility and functionality by conducting tests on the various modules and their interfaces. The outcome is a powerful, flexible, and ready-to-use simulation and visualization platform that can be easily adapted to any robotic system and provides the necessary means to accurately sample a customizable, high-quality environment in the vicinity of the robot.
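
    The decoupling described above can be sketched as a thin messaging layer: the external process owns the physics and streams robot state to the engine, which only renders and samples the environment. The transport and message format below are placeholder assumptions; the thesis's actual middleware is not specified here.

```python
# A sketch of the engine/physics decoupling via a middleware layer; the
# transport (UDP/JSON) and all names are placeholder assumptions.
import json
import socket

class MiddlewareInterface:
    """Exchanges robot state and environment samples with the engine."""

    def __init__(self, host="127.0.0.1", port=9870):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.addr = (host, port)

    def publish_state(self, pose, velocity):
        # External physics -> engine: where to draw the robot this frame.
        msg = json.dumps({"pose": pose, "velocity": velocity}).encode()
        self.sock.sendto(msg, self.addr)

# The engine-side module would read these messages each tick, update the
# visual robot model, and return sampled environment data (e.g. depth or
# contact candidates) for the external simulation to consume.
interface = MiddlewareInterface()
interface.publish_state(pose=[0.0, 0.0, 1.5, 0.0, 0.0, 0.0],
                        velocity=[0.1, 0.0, 0.0])
```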

    Dynamic Handover: Throw and Catch with Bimanual Hands

    Humans throw and catch objects all the time. However, this seemingly common skill poses significant challenges for robots: they need to perform such dynamic actions at high speed, collaborate precisely, and interact with diverse objects. In this paper, we design a system with two multi-finger hands attached to robot arms to solve this problem. We train our system using multi-agent reinforcement learning in simulation and perform Sim2Real transfer to deploy on the real robots. To overcome the Sim2Real gap, we provide multiple novel algorithm designs, including learning a trajectory prediction model for the object. Such a model gives the robot catcher a real-time estimate of where the object is heading so that it can react accordingly. We conduct our experiments with multiple objects in the real-world system and show significant improvements over multiple baselines. Our project page is available at https://binghao-huang.github.io/dynamic_handover/.
    Comment: Accepted at CoRL 2023. https://binghao-huang.github.io/dynamic_handover
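
    The trajectory prediction component can be sketched as a small network that maps recently tracked object positions to a short horizon of future positions, giving the catcher time to react. The model shape below is an illustrative assumption, not the authors' design.

```python
# A hedged sketch of an object trajectory predictor; the MLP shape is an
# illustrative assumption, not the authors' model.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, history=5, horizon=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history * 3, 64), nn.ReLU(),
            nn.Linear(64, horizon * 3),
        )
        self.horizon = horizon

    def forward(self, past_xyz):
        # past_xyz: (B, history, 3) recently tracked object positions
        out = self.net(past_xyz.flatten(1))
        return out.view(-1, self.horizon, 3)  # predicted future positions

predictor = TrajectoryPredictor()
future = predictor(torch.randn(1, 5, 3))  # fed with tracker output at runtime
```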