    Dynamic Motion Modelling for Legged Robots

    An accurate motion model is an important component of modern robotic systems, but building such a model for a complex system often requires an appreciable amount of manual effort. In this paper we present a motion model representation, the Dynamic Gaussian Mixture Model (DGMM), that alleviates the need to manually design the form of a motion model and provides a direct means of incorporating auxiliary sensory data into the model. This representation and its accompanying algorithms are validated experimentally using an 8-legged, kinematically complex robot, as well as a standard benchmark dataset. The presented method not only learns the robot's motion model but also improves the model's accuracy by incorporating information about the terrain surrounding the robot.
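
    A minimal sketch of the core idea (not the paper's DGMM implementation; the synthetic data, variable names, and dimensions are illustrative assumptions): fit a Gaussian mixture over joint (command, terrain, displacement) samples, then condition on the observed command and terrain feature to predict displacement.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic training set: [command velocity, terrain roughness] -> displacement
cmd = rng.uniform(0.0, 1.0, (500, 1))
terrain = rng.uniform(0.0, 1.0, (500, 1))
disp = 0.8 * cmd - 0.3 * terrain * cmd + 0.02 * rng.standard_normal((500, 1))
data = np.hstack([cmd, terrain, disp])  # joint samples over inputs and output

gmm = GaussianMixture(n_components=3, covariance_type="full").fit(data)

def predict_displacement(x, gmm, n_in=2):
    """Condition the mixture on inputs x = (command, terrain) to get E[disp | x]."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    cond_means, cond_w = [], []
    for k in range(gmm.n_components):
        mx, my = means[k][:n_in], means[k][n_in:]
        Sxx = covs[k][:n_in, :n_in]
        Syx = covs[k][n_in:, :n_in]
        # Conditional mean of component k, weighted by its responsibility for x
        cond_means.append(my + Syx @ np.linalg.solve(Sxx, x - mx))
        cond_w.append(w[k] * multivariate_normal.pdf(x, mx, Sxx))
    cond_w = np.array(cond_w) / np.sum(cond_w)
    return np.sum(cond_w[:, None] * np.array(cond_means), axis=0)

print(predict_displacement(np.array([0.5, 0.2]), gmm))
```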

    Proprioceptive localization for a quadrupedal robot on known terrain

    We present a novel method for the localization of a legged robot on known terrain using only proprioceptive sensors such as joint encoders and an inertial measurement unit. In contrast to other proprioceptive pose estimation techniques, this method allows for global localization (i.e., localization with large initial uncertainty) without the use of exteroceptive sensors. This is made possible by establishing a measurement model based on the feasibility of putative poses on the known terrain given observed joint angles and attitude measurements. Results demonstrate that the method performs better than dead reckoning and is also able to perform global localization from large initial uncertainty.
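
    A minimal sketch of the feasibility-based measurement model (assumptions throughout, not the authors' implementation): a pose particle is weighted by how well the feet, placed by forward kinematics from the measured joint angles, agree with the known terrain height map. The terrain, kinematics, and noise model below are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
terrain = lambda x, y: 0.1 * np.sin(x) * np.cos(y)   # known terrain height map

def foot_positions(pose, joint_angles):
    """Toy forward kinematics: body pose (x, y, z, yaw) plus four leg offsets."""
    x, y, z, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    hips = np.array([[0.2, 0.15], [0.2, -0.15], [-0.2, 0.15], [-0.2, -0.15]])
    feet_xy = (R @ hips.T).T + np.array([x, y])
    feet_z = z - 0.3 * np.cos(joint_angles)           # leg extension from angles
    return feet_xy, feet_z

def particle_weight(pose, joint_angles, sigma=0.02):
    """Likelihood that stance feet rest on the terrain under this pose hypothesis."""
    feet_xy, feet_z = foot_positions(pose, joint_angles)
    ground = terrain(feet_xy[:, 0], feet_xy[:, 1])
    residual = feet_z - ground                        # ~0 for a feasible stance
    return np.exp(-0.5 * np.sum((residual / sigma) ** 2))

# Global localization: score many pose hypotheses, then keep/resample the plausible ones.
particles = np.column_stack([rng.uniform(-5, 5, 1000), rng.uniform(-5, 5, 1000),
                             rng.uniform(0.2, 0.4, 1000), rng.uniform(-np.pi, np.pi, 1000)])
angles = np.full(4, 0.3)                              # measured joint angles (toy)
weights = np.array([particle_weight(p, angles) for p in particles])
weights /= weights.sum() + 1e-300
print("best hypothesis:", particles[np.argmax(weights)])
```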

    RobustStateNet: Robust ego vehicle state estimation for Autonomous Driving

    Control of an ego vehicle for Autonomous Driving (AD) requires an accurate definition of its state. Implementations of various model-based Kalman Filtering (KF) techniques for state estimation are prevalent in the literature. These algorithms use measurements from an IMU and input signals from steering and wheel encoders for motion prediction with physics-based models, and a Global Navigation Satellite System (GNSS) for global localization. Such methods are widely investigated and focus primarily on increasing the accuracy of the estimation. Ego-motion prediction in these approaches does not model sensor failure modes and assumes completely known dynamics with motion- and measurement-model noise. In this work, we propose a novel Recurrent Neural Network (RNN) based motion predictor that models the dynamics of each sensor stream in parallel and selectively fuses the features to increase the robustness of prediction, in particular in scenarios with sensor failures. This motion predictor is integrated into a KF-like framework, RobustStateNet, which takes a global position from the GNSS sensor and updates the predicted state. We demonstrate that the proposed state estimation routine outperforms the model-based KF and the KalmanNet architecture in terms of estimation accuracy and robustness. The proposed algorithms are validated on a modified NuScenes CAN bus dataset, designed to simulate various types of sensor failures.
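
    A minimal sketch of the idea (the architecture, dimensions, and names below are assumptions, not the paper's code): per-sensor recurrent encoders whose features are fused through learned gates, so a failed sensor stream can be down-weighted, followed by a KF-style correction against a GNSS position fix.

```python
import torch
import torch.nn as nn

class GatedRNNPredictor(nn.Module):
    def __init__(self, state_dim=4, imu_dim=6, enc_dim=3, hidden=32):
        super().__init__()
        self.imu_rnn = nn.GRU(imu_dim, hidden, batch_first=True)   # IMU stream
        self.enc_rnn = nn.GRU(enc_dim, hidden, batch_first=True)   # wheel/steering stream
        self.gate = nn.Sequential(nn.Linear(2 * hidden, 2), nn.Softmax(dim=-1))
        self.head = nn.Linear(hidden, state_dim)                   # predicted state delta

    def forward(self, imu_seq, enc_seq, state):
        _, h_imu = self.imu_rnn(imu_seq)
        _, h_enc = self.enc_rnn(enc_seq)
        h = torch.cat([h_imu[-1], h_enc[-1]], dim=-1)
        w = self.gate(h)                                    # soft per-sensor weights
        fused = w[:, :1] * h_imu[-1] + w[:, 1:] * h_enc[-1]
        return state + self.head(fused)                     # motion prediction

def gnss_update(x_pred, P, z_gnss, R):
    """KF-style correction of the predicted state with a GNSS position measurement."""
    H = torch.zeros(2, 4); H[0, 0] = H[1, 1] = 1.0          # observe (x, y) only
    S = H @ P @ H.T + R
    K = P @ H.T @ torch.linalg.inv(S)                       # Kalman gain
    x_new = x_pred + (K @ (z_gnss - H @ x_pred).unsqueeze(-1)).squeeze(-1)
    return x_new, (torch.eye(4) - K @ H) @ P

model = GatedRNNPredictor()
x = model(torch.randn(1, 10, 6), torch.randn(1, 10, 3), torch.zeros(1, 4))
x_upd, P_upd = gnss_update(x[0], torch.eye(4) * 0.1, torch.randn(2), torch.eye(2) * 0.5)
```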

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes descriptions of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
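
    A minimal sketch of the vehicle-in-the-loop pattern described above (all classes here are hypothetical stand-ins, not the FlightGoggles API): proprioception and dynamics come from a real vehicle tracked in a motion-capture volume, while exteroception is rendered synthetically from its captured pose and fed back to the autonomy stack.

```python
import numpy as np

class MotionCapture:
    """Stand-in for a mocap feed: returns the real vehicle's pose each tick."""
    def __init__(self):
        self.t = 0.0
    def pose(self):
        self.t += 0.02
        return np.array([np.cos(self.t), np.sin(self.t), 1.0])  # toy trajectory

class SyntheticCamera:
    """Stand-in renderer: turns a pose into an 'image' (here, a dummy array)."""
    def render(self, pose):
        return np.zeros((480, 640)) + pose[2]      # placeholder frame

def perception_and_control(frame):
    """Stand-in autonomy stack: consumes the rendered frame, emits a command."""
    return np.clip(frame.mean(), -1.0, 1.0)

mocap, camera = MotionCapture(), SyntheticCamera()
for _ in range(3):
    pose = mocap.pose()                  # real dynamics, measured in the mocap room
    frame = camera.render(pose)          # exteroception rendered in silico
    cmd = perception_and_control(frame)  # perception-driven control on synthetic data
    print(f"pose={pose.round(2)}, command={cmd:.2f}")
```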

    Adapting Monte Carlo Localization to Utilize Floor and Wall Texture Data

    Monte Carlo Localization (MCL) is an algorithm that allows a robot to determine its location when provided a map of its surroundings. Particles, each consisting of a location and an orientation, represent possible positions of the robot on the map. The probability of the robot being at each particle is calculated based on sensor input. Traditionally, MCL utilizes only the position of objects for localization. This thesis explores using wall and floor surface textures to help the algorithm determine locations more accurately. Wall textures are captured by using a laser range finder to detect patterns in the surface. Floor textures are determined by using an inertial measurement unit (IMU) to capture acceleration vectors that represent the roughness of the floor. Captured texture data is classified by an artificial neural network and used in the probability calculations. The best variations of Texture MCL improved accuracy by 19.1% and 25.1% when all particles and the top fifty particles, respectively, were used to calculate the robot's estimated position. All implementations achieved comparable performance speeds when run in real time on board a robot.
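
    A minimal sketch (illustrative, not the thesis implementation) of folding texture evidence into the MCL weight update: each particle's weight combines the usual range likelihood with the classifier's probability for the texture class stored in the map at that particle's location. The grid map, observation values, and classifier posterior below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N_TEXTURES = 3
# Toy map: expected range reading and floor-texture class on a coarse grid
map_range = rng.uniform(0.5, 3.0, (10, 10))
map_texture = rng.integers(0, N_TEXTURES, (10, 10))

def weight(particle, z_range, texture_probs, sigma=0.2):
    """Particle weight = range likelihood x texture likelihood."""
    i, j = np.clip(particle.astype(int), 0, 9)
    p_range = np.exp(-0.5 * ((z_range - map_range[i, j]) / sigma) ** 2)
    p_texture = texture_probs[map_texture[i, j]]  # classifier's belief in the map's class
    return p_range * p_texture

particles = rng.uniform(0, 10, (200, 2))
z_range = 1.7                                     # laser range observation
texture_probs = np.array([0.1, 0.8, 0.1])         # neural-net posterior over textures
weights = np.array([weight(p, z_range, texture_probs) for p in particles])
weights /= weights.sum()
estimate = weights @ particles                    # weighted-mean position estimate
print("estimated position:", estimate.round(2))
```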