100 research outputs found

    Drift-free humanoid state estimation fusing kinematic, inertial and LIDAR sensing

    Get PDF
    This paper describes an algorithm for the probabilistic fusion of sensor data from a variety of modalities (inertial, kinematic and LIDAR) to produce a single consistent position estimate for a walking humanoid. Of specific interest is our approach for continuous LIDAR-based localization which maintains reliable drift-free alignment to a prior map using a Gaussian Particle Filter. This module can be bootstrapped by constructing the map on-the-fly and performs robustly in a variety of challenging field situations. We also discuss a two-tier estimation hierarchy which preserves registration to this map and other objects in the robot's vicinity while also contributing to direct low-level control of a Boston Dynamics Atlas robot. Extensive experimental demonstrations illustrate how the approach can enable the humanoid to walk over uneven terrain without stopping (for tens of minutes), which would otherwise not be possible. We characterize the performance of the estimator for each sensor modality and discuss the computational requirements. United States Air Force Research Laboratory (Award FA8750-12-1-0321).
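
    As a rough illustration of the measurement step such a localization module relies on, the following Python sketch performs one generic Gaussian Particle Filter update: sample particles from the Gaussian prior, weight them by a measurement likelihood, and refit a single Gaussian to the weighted set. The state layout and the toy likelihood are assumptions for illustration, not the paper's LIDAR measurement model.

    import numpy as np

    def gpf_update(mean, cov, likelihood, n_particles=500, rng=None):
        """One Gaussian Particle Filter measurement update."""
        rng = rng or np.random.default_rng()
        # 1. Sample particles from the Gaussian prior over the state.
        particles = rng.multivariate_normal(mean, cov, size=n_particles)
        # 2. Weight each particle by the measurement likelihood p(z | x).
        w = np.array([likelihood(p) for p in particles])
        w /= w.sum()
        # 3. Refit a single Gaussian to the weighted particle set.
        post_mean = w @ particles
        d = particles - post_mean
        post_cov = (w[:, None] * d).T @ d
        return post_mean, post_cov

    # Toy usage: a 2-D state observed through a noisy range measurement.
    lik = lambda p: np.exp(-0.5 * ((np.linalg.norm(p) - 1.0) / 0.1) ** 2)
    m, P = gpf_update(np.zeros(2), np.eye(2), lik)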

    Direct Visual SLAM Fusing Proprioception for a Humanoid Robot

    Get PDF

    Unsupervised Contact Learning for Humanoid Estimation and Control

    Full text link
    This work presents a method for contact state estimation using fuzzy clustering to learn contact probability for full, six-dimensional humanoid contacts. The data required for training comes solely from proprioceptive sensors - end-effector contact wrench sensors and inertial measurement units (IMUs) - and the method is completely unsupervised. The resulting cluster means are used to efficiently compute the probability of contact in each of the six end-effector degrees of freedom (DoFs) independently. This clustering-based contact probability estimator is validated in a kinematics-based base state estimator in a simulation environment with realistic added sensor noise for locomotion over rough, low-friction terrain on which the robot is subject to foot slip and rotation. The proposed base state estimator, which utilizes these six DoF contact probability estimates, is shown to perform considerably better than one which determines kinematic contact constraints purely from measured normal force. Comment: Submitted to the IEEE International Conference on Robotics and Automation (ICRA) 201
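
    At a high level, the per-DoF contact probability described here can be pictured as a two-cluster fuzzy c-means membership computed from learned cluster means. The sketch below shows that membership formula for a single scalar feature; the feature choice, scaling and fuzziness exponent m are assumptions, not details taken from the paper.

    import numpy as np

    def contact_probability(x, c_contact, c_free, m=2.0, eps=1e-9):
        """Fuzzy c-means membership of feature x in the 'contact' cluster."""
        d_c = max(abs(x - c_contact), eps)   # distance to in-contact mean
        d_f = max(abs(x - c_free), eps)      # distance to no-contact mean
        p = 2.0 / (m - 1.0)
        # Standard two-cluster fuzzy membership: inverse-distance ratio.
        return 1.0 / (1.0 + (d_c / d_f) ** p)

    # Toy usage: normal-force feature with learned means at 0 N and 300 N.
    print(contact_probability(250.0, c_contact=300.0, c_free=0.0))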

    Outlier-Robust State Estimation for Humanoid Robots

    Get PDF
    Contemporary humanoids are equipped with visual and LiDAR sensors that are effectively utilized for Visual Odometry (VO) and LiDAR Odometry (LO). Unfortunately, such measurements commonly suffer from outliers in a dynamic environment, since it is frequently assumed that only the robot is in motion and the world is static. To this end, robust state estimation schemes are mandatory in order for humanoids to symbiotically co-exist with humans in their daily dynamic environments. In this article, the robust Gaussian Error-State Kalman Filter for humanoid robot locomotion is presented. The introduced method automatically detects and rejects outliers without relying on any prior knowledge of measurement distributions or finely tuned thresholds. Subsequently, the proposed method is quantitatively and qualitatively assessed in realistic conditions with the full-size humanoid robot WALK-MAN v2.0 and the mini-size humanoid robot NAO to demonstrate its accuracy and robustness when outlier VO/LO measurements are present. Finally, in order to reinforce further research endeavours, our implementation is released as an open-source ROS/C++ package.
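
    The paper's scheme is explicitly threshold-free; for readers unfamiliar with the general idea, the sketch below shows the far simpler, classical alternative it improves upon: a chi-square (Mahalanobis) gate that discards improbable innovations before a Kalman update. Everything here is generic and not taken from the paper's implementation.

    import numpy as np
    from scipy.stats import chi2

    def gated_kf_update(x, P, z, H, R, confidence=0.997):
        """Kalman update that skips measurements failing a chi-square gate."""
        y = z - H @ x                              # innovation
        S = H @ P @ H.T + R                        # innovation covariance
        d2 = float(y @ np.linalg.solve(S, y))      # squared Mahalanobis distance
        if d2 > chi2.ppf(confidence, df=len(z)):
            return x, P                            # reject as outlier: no update
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        return x + K @ y, (np.eye(len(x)) - K @ H) @ P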

    Robust Contact State Estimation in Humanoid Walking Gaits

    Get PDF

    Hybrid Contact Preintegration for Visual-Inertial-Contact State Estimation Using Factor Graphs

    Full text link
    The factor graph framework is a convenient modeling technique for robotic state estimation where states are represented as nodes and measurements are modeled as factors. When designing a sensor fusion framework for legged robots, one often has access to visual, inertial, joint encoder, and contact sensors. While visual-inertial odometry has been studied extensively in this framework, the addition of a preintegrated contact factor for legged robots has been only recently proposed. This allowed for integration of encoder and contact measurements into existing factor graphs; however, new nodes had to be added to the graph every time contact was made or broken. In this work, to cope with the problem of switching contact frames, we propose a hybrid contact preintegration theory that allows contact information to be integrated through an arbitrary number of contact switches. The proposed hybrid modeling approach reduces the number of required variables in the nonlinear optimization problem by only requiring new states to be added alongside camera or selected keyframes. This method is evaluated using real experimental data collected from a Cassie-series robot, where the trajectory produced by a motion capture system is used as a proxy for ground truth. The evaluation shows that inclusion of the proposed preintegrated hybrid contact factor alongside visual-inertial navigation systems improves estimation accuracy as well as robustness to vision failure, while its generalization makes it more accessible for legged platforms. Comment: Detailed derivations are provided in the supplementary material document listed under "Ancillary files".
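
    The gain from preintegration is easiest to see in the kinematic chaining it enables. The sketch below composes base motion across contact switches under the usual assumption that the stance foot is pinned to the world, so one relative pose spans the whole keyframe interval; fk_foot_in_base is a placeholder for encoder-driven forward kinematics, and this is an illustration of the idea, not the paper's factor derivation.

    import numpy as np

    def integrate_contact_odometry(stance_frames, fk_foot_in_base):
        """Compose base motion over a keyframe interval with contact switches.

        stance_frames   : list of (t, foot_id), which foot is in stance when
        fk_foot_in_base : callable (t, foot_id) -> 4x4 foot pose in base frame
        """
        T = np.eye(4)  # accumulated base_k -> base_{k+1} transform
        for (t0, foot), (t1, _) in zip(stance_frames[:-1], stance_frames[1:]):
            # While `foot` is pinned to the world, the base moves by
            # T_bf(t0) * inv(T_bf(t1)); chaining these spans every switch.
            T_bf0 = fk_foot_in_base(t0, foot)
            T_bf1 = fk_foot_in_base(t1, foot)
            T = T @ T_bf0 @ np.linalg.inv(T_bf1)
        return T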

    A Study on Low-Drift State Estimation for Humanoid Locomotion, Using LiDAR and Kinematic-Inertial Data Fusion

    Get PDF
    Humanoid robots will be required to navigate in unsafe and unstructured environments, such as those left after a disaster, for human assistance and support. To achieve this, humanoids must construct accurate maps of the environment in real time and localize within them by estimating their base/pelvis state without any drift, using computationally efficient mapping and state estimation algorithms. While a multitude of Simultaneous Localization and Mapping (SLAM) algorithms exist, their localization relies on the existence of repeatable landmarks, which might not always be available in unstructured environments. Several studies also use stop-and-map procedures to map the environment before traversal, but this is not ideal for scenarios where the robot needs to keep moving continuously, for instance to keep task completion time short. In this paper, we present a novel combination of state-of-the-art odometry and mapping based on LiDAR data with state estimation based on the kinematic-inertial data of the humanoid. We present an experimental evaluation of the introduced state estimation on the full-size humanoid robot WALK-MAN while performing locomotion tasks. Through this combination, we show that it is possible to obtain low-error, high-frequency estimates of the state of the robot while moving and mapping the environment on the go.
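
    A minimal sketch of the two-rate structure this abstract describes, assuming a high-frequency kinematic-inertial odometry stream corrected by low-frequency LiDAR localization; the complementary blend used here is an illustrative assumption, not the paper's exact correction scheme.

    import numpy as np

    class TwoRateEstimator:
        def __init__(self, alpha=0.1):
            self.pos = np.zeros(3)  # current base position estimate
            self.alpha = alpha      # trust placed in each LiDAR correction

        def predict(self, delta_pos):
            """High-rate step: integrate kinematic-inertial odometry."""
            self.pos = self.pos + np.asarray(delta_pos)

        def correct(self, lidar_pos):
            """Low-rate step: pull the drifting estimate toward LiDAR."""
            self.pos += self.alpha * (np.asarray(lidar_pos) - self.pos)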

    Real-time Coordinate Estimation for Self-Localization of the Humanoid Robot Soccer BarelangFC

    Get PDF
    In implementation, a humanoid robot soccer team consists of more than three robots playing on the field. Each robot is expected to play soccer as a human does: seeking, chasing, dribbling and kicking the ball. To carry out all of these behaviours, a real-time localization system is required so that each robot knows not only its own position but also the positions of the other robots and of objects in the field environment. However, for real-time implementation on the robots' limited computational hardware, a method is needed that is fast to compute and economical with memory. Therefore, in this paper we present a real-time localization method combining odometry with Monte Carlo Localization (MCL). To verify the performance of this method, experiments were carried out in a real-time application. The experimental results show that the proposed method is able to estimate the X and Y coordinates of each robot's position on the field.
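
    A compact sketch of one odometry-driven MCL step on a 2-D field, in the spirit of the method described; the noise levels and the landmark likelihood are placeholder assumptions.

    import numpy as np

    def mcl_step(particles, weights, odom, likelihood, rng):
        """particles: (N,3) of (x, y, theta); odom: (dx, dy, dtheta)."""
        N = len(particles)
        # Motion update: apply odometry in each particle's frame, plus noise.
        c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
        particles[:, 0] += c * odom[0] - s * odom[1]
        particles[:, 1] += s * odom[0] + c * odom[1]
        particles[:, 2] += odom[2]
        particles += rng.normal(0.0, [0.02, 0.02, 0.01], size=(N, 3))
        # Measurement update: reweight by the observation likelihood.
        weights *= np.array([likelihood(p) for p in particles])
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < N / 2:
            idx = rng.choice(N, size=N, p=weights)
            particles, weights = particles[idx], np.full(N, 1.0 / N)
        # Pose estimate: weighted mean (theta averaged naively for brevity).
        return particles, weights, weights @ particles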

    Robust dense visual SLAM using sensor fusion and motion segmentation

    Get PDF
    Visual simultaneous localisation and mapping (SLAM) is an important technique for enabling mobile robots to navigate autonomously within their environments. Using cameras, robots reconstruct a representation of their environment and simultaneously localise themselves within it. A dense visual SLAM system produces a high-resolution, detailed reconstruction of the environment which can be used for obstacle avoidance or semantic reasoning. State-of-the-art dense visual SLAM systems demonstrate robust performance and impressive accuracy in ideal conditions. However, these techniques are based on requirements which limit the extent to which they can be deployed in real applications. Fundamentally, they require constant scene illumination, smooth camera motion and no moving objects in the scene. Overcoming these requirements is not trivial, and significant effort is needed to make dense visual SLAM approaches more robust to real-world conditions. The objective of this thesis is to develop dense visual SLAM systems which are more robust to real-world visually challenging conditions. For this, we leverage sensor fusion and motion segmentation for situations where camera data is unsuitable. The first contribution is a visual SLAM system for the NASA Valkyrie humanoid robot which is robust to the robot's operation. It is based on a sensor fusion approach which combines visual SLAM and leg odometry to demonstrate increased robustness to illumination changes and fast camera motion. Second, we research methods for robust visual odometry in the presence of moving objects. We propose a formulation for joint visual odometry and motion segmentation that demonstrates increased robustness in scenes with moving objects compared to state-of-the-art approaches. We then extend this method using inertial information from a gyroscope to compare the contributions of motion segmentation and motion prior integration for robustness to scene dynamics. As part of this study we provide a dataset recorded in scenes with different numbers of moving objects. In conclusion, we find that both motion segmentation and motion prior integration are necessary for achieving significantly better results in real-world conditions. While motion priors increase robustness, motion segmentation increases the accuracy of the reconstruction results through filtering of moving objects. Edinburgh Centre for Robotics; Engineering and Physical Sciences Research Council (EPSRC).
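
    One way to picture the motion-segmentation contribution is as a per-pixel reweighting of photometric residuals so that likely-dynamic pixels barely influence the odometry objective. The Cauchy-style weighting below is an illustrative stand-in, not the thesis's joint formulation.

    import numpy as np

    def segmentation_weights(residuals):
        """Per-pixel weights in (0, 1]; large residuals treated as dynamic."""
        r = np.abs(residuals)
        # Robust scale estimate via the median absolute deviation.
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-9
        return 1.0 / (1.0 + (r / (3.0 * scale)) ** 2)

    def weighted_sse(residuals):
        """Robust objective used in place of a plain sum of squares."""
        w = segmentation_weights(residuals)
        return np.sum(w * residuals ** 2)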