
    Proprioceptive Invariant Robot State Estimation

    This paper reports on the development of DRIFT, a real-time invariant proprioceptive robot state estimation framework. A didactic introduction to invariant Kalman filtering is provided to make this cutting-edge symmetry-preserving approach accessible to a broader range of robotics applications. The work then develops a proprioceptive dead-reckoning framework that consumes only data from an onboard inertial measurement unit and the robot's kinematics, together with two optional modules, a contact estimator and a gyro filter for low-cost robots. This enables a variety of robotic platforms to track the robot's state over long trajectories in the absence of perceptual data. Extensive real-world experiments using a legged robot, an indoor wheeled robot, a field robot, and a full-size vehicle, as well as simulation results with a marine robot, are provided to understand the limits of DRIFT.
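
    As a rough illustration of the kind of data such a dead-reckoning pipeline consumes, the sketch below propagates a pose from IMU readings and blends in a body-frame velocity obtained from kinematics. It is a minimal, hypothetical example, not the DRIFT implementation or its invariant Kalman filter; the class, the constant correction gain, and all values are assumptions for illustration.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def so3_exp(w):
    """Exponential map: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

class DeadReckoner:
    """Toy IMU + leg-kinematics dead reckoning (not an invariant EKF)."""
    def __init__(self):
        self.R = np.eye(3)    # body-to-world orientation
        self.v = np.zeros(3)  # world-frame velocity
        self.p = np.zeros(3)  # world-frame position

    def propagate(self, gyro, accel, dt):
        """Integrate angular rate, then accelerate and translate in the world frame."""
        self.R = self.R @ so3_exp(gyro * dt)
        self.v = self.v + (self.R @ accel + GRAVITY) * dt
        self.p = self.p + self.v * dt

    def correct_with_kinematics(self, body_velocity, gain=0.1):
        """Blend a body-frame velocity from leg kinematics into the state.
        A real invariant EKF would derive this gain from the covariance."""
        self.v = self.v + gain * (self.R @ body_velocity - self.v)
```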

    Unsupervised Contact Learning for Humanoid Estimation and Control

    This work presents a method for contact state estimation using fuzzy clustering to learn contact probability for full, six-dimensional humanoid contacts. The data required for training comes solely from proprioceptive sensors - end-effector contact wrench sensors and inertial measurement units (IMUs) - and the method is completely unsupervised. The resulting cluster means are used to efficiently compute the probability of contact in each of the six end-effector degrees of freedom (DoFs) independently. This clustering-based contact probability estimator is validated in a kinematics-based base state estimator in a simulation environment with realistic added sensor noise, for locomotion over rough, low-friction terrain on which the robot is subject to foot slip and rotation. The proposed base state estimator, which utilizes these six-DoF contact probability estimates, is shown to perform considerably better than one that determines kinematic contact constraints purely from measured normal force. Comment: Submitted to the IEEE International Conference on Robotics and Automation (ICRA) 201
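
    The cluster means described above can be turned into per-DoF contact probabilities with a standard fuzzy c-means membership, as in the sketch below. The wrench features, cluster means, and DoF names are hypothetical stand-ins, not the paper's learned values.

```python
import numpy as np

def fuzzy_membership(x, means, m=2.0):
    """Standard fuzzy c-means membership of a sample in each cluster."""
    d = np.array([np.linalg.norm(x - mu) for mu in means])
    d = np.maximum(d, 1e-12)                 # avoid division by zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

# Hypothetical 1-D wrench feature per DoF: [no-contact mean, contact mean].
cluster_means = {
    "fz": [np.array([2.0]), np.array([180.0])],   # normal force [N]
    "tx": [np.array([0.1]), np.array([6.0])],     # roll torque [Nm]
}

def contact_probabilities(wrench_features):
    """Probability of contact, computed independently per DoF."""
    return {dof: fuzzy_membership(x, cluster_means[dof])[1]
            for dof, x in wrench_features.items()}

print(contact_probabilities({"fz": np.array([150.0]), "tx": np.array([0.2])}))
```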

    Exploration of a hybrid locomotion robot

    In this work, a hybrid locomotion robotic platform is evaluated. This system combines the benefits of both rolling and walking, with the intent of being able to traverse variable terrain. A quadruped leg-wheeled robot was designed, built, and tested. Experimental trials were conducted to demonstrate the overall feasibility of the design. Finally, important conclusions about the effectiveness and value of hybrid locomotion were reached. Posture control is specifically identified as an effective area with great potential.

    Mechatronics Design of a Mecanum Wheeled Mobile Robot


    Learning inertial odometry for dynamic legged robot state estimation

    This paper introduces a novel proprioceptive state estimator for legged robots based on a learned displacement measurement from IMU data. Recent research in pedestrian tracking has shown that motion can be inferred from inertial data using convolutional neural networks. A learned inertial displacement measurement can improve state estimation in challenging scenarios where leg odometry is unreliable, such as slipping and compressible terrains. Our work learns to estimate a displacement measurement from IMU data, which is then fused with traditional leg odometry. Our approach greatly reduces the drift of proprioceptive state estimation, which is critical for legged robots deployed in vision- and lidar-denied environments such as foggy sewers or dusty mines. We compared results from an EKF and an incremental fixed-lag factor-graph estimator using data from several real robot experiments crossing challenging terrains. Our results show a reduction of relative pose error by 37% in challenging scenarios when compared to a traditional kinematic-inertial estimator without the learned measurement. We also demonstrate a 22% reduction in error when used with vision systems in visually degraded environments such as an underground mine.
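
    The paper fuses the learned displacement in an EKF and in a fixed-lag factor graph; the sketch below shows only the flavour of a single Kalman-style position update from such a measurement. The covariances are made up, and treating the previous position as deterministic is a simplification for illustration.

```python
import numpy as np

def fuse_displacement(p_prev, p_curr, P_curr, d_learned, R_learned):
    """Correct the current position with a learned displacement d ~ p_curr - p_prev."""
    H = np.eye(3)                                  # measurement Jacobian w.r.t. p_curr
    innovation = d_learned - (p_curr - p_prev)
    S = H @ P_curr @ H.T + R_learned
    K = P_curr @ H.T @ np.linalg.inv(S)
    p_new = p_curr + K @ innovation
    P_new = (np.eye(3) - K @ H) @ P_curr
    return p_new, P_new

# Toy numbers: leg odometry over-estimated forward motion during a slip, and the
# network-predicted displacement pulls the estimate back.
p_prev = np.zeros(3)
p_curr = np.array([0.45, 0.02, 0.0])
P_curr = np.diag([0.05, 0.05, 0.02])
d_learned = np.array([0.30, 0.00, 0.0])
R_learned = np.diag([0.01, 0.01, 0.01])
print(fuse_displacement(p_prev, p_curr, P_curr, d_learned, R_learned))
```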

    New Method for Localization and Human Being Detection using UWB Technology: Helpful Solution for Rescue Robots

    Two challenges for rescue robots are to detect human beings and to have an accurate positioning system. In indoor positioning, GPS receivers cannot be used because of the reflections and attenuation caused by obstacles. To detect human beings, sensors such as thermal cameras, ultrasonic sensors, and microphones can be embedded on the rescue robot. The drawback of these sensors is their detection range: they have to be in close proximity to the victim in order to detect them. UWB technology is therefore very helpful to ensure precise localization of the rescue robot inside the disaster site and to detect human beings. We propose a new method that both detects human beings and locates the rescue robot at the same time. To achieve these goals, we optimize the design of UWB pulses based on B-splines. The spectral effectiveness is optimized so that the symbols are easier to detect and the impact of noise is reduced. Our positioning system locates the rescue robot with an accuracy of about 2 centimeters. During testing we discovered that the UWB signal characteristics change abruptly after passing through a human body, and our system uses this particular signature to detect human bodies.
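
    As a loose illustration of the pulse-design idea, the sketch below assembles a pulse from a cubic B-spline basis and inspects its spectrum. The knot placement, basis weights, and sampling rate are invented for the example and are not the authors' optimized design.

```python
import numpy as np
from scipy.interpolate import BSpline

fs = 20e9                                # 20 GS/s sampling rate (placeholder)
t = np.arange(0.0, 2e-9, 1.0 / fs)       # 2 ns pulse support
k = 3                                    # cubic B-splines
interior = np.linspace(0.0, 2e-9, 8)
knots = np.concatenate(([interior[0]] * k, interior, [interior[-1]] * k))
weights = np.array([0.0, 0.4, -1.0, 1.0, -0.4, 0.0, 0.0, 0.0, 0.0, 0.0])  # shapes the pulse
pulse = BSpline(knots, weights, k)(t)

# Quick spectral check: where does the pulse concentrate its energy?
spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(len(pulse), 1.0 / fs)
print(f"peak spectral energy near {freqs[np.argmax(spectrum)] / 1e9:.2f} GHz")
```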

    Inertial learning and haptics for legged robot state estimation in visually challenging environments

    Legged robots have enormous potential to automate dangerous or dirty jobs because they are capable of traversing a wide range of difficult terrains such as up stairs or through mud. However, a significant challenge preventing widespread deployment of legged robots is a lack of robust state estimation, particularly in visually challenging conditions such as darkness or smoke. In this thesis, I address these challenges by exploiting proprioceptive sensing from inertial, kinematic and haptic sensors to provide more accurate state estimation when visual sensors fail. Four different methods are presented, including the use of haptic localisation, terrain semantic localisation, learned inertial odometry, and deep learning to infer the evolution of IMU biases. The first approach exploits haptics as a source of proprioceptive localisation by comparing geometric information to a prior map. The second method expands on this concept by fusing both semantic and geometric information, allowing for accurate localisation on diverse terrain. Next, I combine new techniques in inertial learning with classical IMU integration and legged robot kinematics to provide more robust state estimation. This is further developed to use only IMU data, for an application entirely different from robotics: 3D reconstruction of bone with a handheld ultrasound scanner. Finally, I present the novel idea of using deep learning to infer the evolution of IMU biases, improving state estimation in exteroceptive systems where vision fails. Legged robots have the potential to benefit society by automating dangerous, dull, or dirty jobs and by assisting first responders in emergency situations. However, there remain many unsolved challenges to the real-world deployment of legged robots, including accurate state estimation in vision-denied environments. The work presented in this thesis takes a step towards solving these challenges and enabling the deployment of legged robots in a variety of applications
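
    One of the four methods, haptic localisation, compares geometric information gathered through contact with a prior map. The toy sketch below conveys that idea with a single particle-weighting step against a small elevation map; the map, noise model, and candidate poses are invented for illustration and are not the thesis' algorithm.

```python
import numpy as np

# Hypothetical prior elevation map: (x, y) grid cell -> terrain height [m].
terrain_map = {(0, 0): 0.00, (1, 0): 0.05, (2, 0): 0.20}

def weight_particles(particles, foot_height, sigma=0.02):
    """Gaussian likelihood of the touched height under each candidate pose."""
    weights = []
    for x, y in particles:
        expected = terrain_map.get((round(x), round(y)), 0.0)
        weights.append(np.exp(-0.5 * ((foot_height - expected) / sigma) ** 2))
    w = np.array(weights)
    return w / w.sum()

particles = [(0.1, 0.0), (1.1, 0.0), (2.0, 0.0)]
print(weight_particles(particles, foot_height=0.19))  # favours the pose near x = 2
```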