
    Learning to See Physical Properties with Active Sensing Motor Policies

    Knowledge of terrain's physical properties inferred from color images can aid in making efficient robotic locomotion plans. However, unlike image classification, it is unintuitive for humans to label image patches with physical properties. Without labeled data, building a vision system that takes as input the observed terrain and predicts physical properties remains challenging. We present a method that overcomes this challenge by self-supervised labeling of images captured by robots during real-world traversal with physical property estimators trained in simulation. To ensure accurate labeling, we introduce Active Sensing Motor Policies (ASMP), which are trained to explore locomotion behaviors that increase the accuracy of estimating physical parameters. For instance, the quadruped robot learns to swipe its foot against the ground to estimate the friction coefficient accurately. We show that the visual system trained with a small amount of real-world traversal data accurately predicts physical parameters. The trained system is robust and works even with overhead images captured by a drone, despite being trained on data collected by cameras attached to a quadruped robot walking on the ground. Comment: In CoRL 2023. Website: https://gmargo11.github.io/active-sensing-loco
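    The self-supervised labeling idea lends itself to a compact sketch: image patches collected during traversal are paired with friction estimates produced by the simulator-trained estimator, and a vision model is regressed onto those targets. The sketch below assumes PyTorch; the class names, layer sizes, and training step are illustrative stand-ins, not the paper's code.

        # Hypothetical sketch of the self-supervised labeling loop: friction
        # estimates from a simulator-trained estimator serve as regression
        # targets for a vision model, so no human labels are needed.
        import torch
        import torch.nn as nn

        class FrictionRegressor(nn.Module):
            """Predicts a friction coefficient from a terrain image patch."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(32, 1),
                )
            def forward(self, patch):
                return self.net(patch).squeeze(-1)

        def train_step(model, optimizer, patches, friction_targets):
            # friction_targets come from proprioception, not human annotation
            optimizer.zero_grad()
            loss = nn.functional.mse_loss(model(patches), friction_targets)
            loss.backward()
            optimizer.step()
            return loss.item()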

    Integrating Reconfigurable Foot Design, Multi-modal Contact Sensing, and Terrain Classification for Bipedal Locomotion

    The ability of bipedal robots to adapt to diverse and unstructured terrain conditions is crucial for their deployment in real-world environments. To this end, we present a novel, bio-inspired robot foot design with stabilizing tarsal segments and a multi-modal sensor suite comprising acoustic, capacitive, tactile, temperature, and acceleration sensors. A real-time signal processing and terrain classification system is developed and evaluated. The sensed terrain information is used to control actuated segments of the foot, leading to improved ground contact and stability. The proposed framework highlights the potential of the sensor-integrated adaptive foot for intelligent and adaptive locomotion. Comment: 7 pages, 6 figures
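    One minimal way to picture the classification stage: summary statistics from each sensing modality are concatenated and fed to a standard classifier. The sketch below (numpy + scikit-learn) is an assumed late-fusion design; the paper's actual features, classifier, and real-time pipeline are not specified here.

        # Hedged sketch of late-fusion terrain classification from the
        # multi-modal foot sensors named above. Feature choices and the
        # classifier are assumptions for illustration.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def extract_features(acoustic, capacitive, tactile, temperature, accel):
            """Concatenate simple per-channel statistics for one sensing window."""
            feats = []
            for sig in (acoustic, capacitive, tactile, temperature, accel):
                sig = np.asarray(sig, dtype=float)
                feats += [sig.mean(), sig.std(), sig.min(), sig.max()]
            return np.array(feats)

        clf = RandomForestClassifier(n_estimators=100)
        # clf.fit(X_train, y_train)   # X: windows x features, y: terrain labels
        # terrain = clf.predict([extract_features(...)])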

    FootTile: a Rugged Foot Sensor for Force and Center of Pressure Sensing in Soft Terrain

    In this paper we present FootTile, a foot sensor for reaction force and center of pressure sensing in challenging terrain. We compare our sensor design to standard biomechanical devices, force plates and pressure plates. We show that FootTile can accurately estimate force and pressure distribution during legged locomotion. FootTile weighs 0.9 g, has a sampling rate of 330 Hz and a footprint of 10 by 10 mm, and its sensor range can easily be adapted to the required load case. In three experiments we validate: first, the performance of the individual sensor; second, an array of FootTiles for center of pressure sensing; and third, ground reaction force estimation during locomotion in granular substrate. We then go on to show the accurate sensing capabilities of the waterproof sensor in liquid mud, as a showcase for real-world rough-terrain use.
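    Center of pressure from an array of such tiles reduces to a force-weighted mean of the sensor positions. The sketch below assumes a planar 2x2 grid of 10 mm tiles; the layout and units are illustrative, not the paper's experimental setup.

        # Center of pressure as the force-weighted mean of sensor positions.
        import numpy as np

        def center_of_pressure(positions, forces):
            """positions: (N, 2) sensor xy in mm; forces: (N,) normal forces in N."""
            forces = np.asarray(forces, dtype=float)
            total = forces.sum()
            if total <= 0:
                raise ValueError("no ground contact")
            return (np.asarray(positions) * forces[:, None]).sum(axis=0) / total

        # 2x2 array of 10 mm x 10 mm tiles, positions at the tile centers:
        pos = np.array([[5, 5], [15, 5], [5, 15], [15, 15]], dtype=float)
        print(center_of_pressure(pos, [1.0, 1.0, 2.0, 2.0]))  # approx [10.0, 11.67]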

    Comparative Study of Different Methods in Vibration-Based Terrain Classification for Wheeled Robots with Shock Absorbers

    Autonomous robots that operate in the field can enhance their safety and efficiency through accurate terrain classification, which can be realized by means of vibration signals generated by robot-terrain interaction. In this paper, we explore vibration-based terrain classification (VTC), in particular for a wheeled robot with shock absorbers. Because the vibration sensors are usually mounted on the main body of the robot, the shock absorbers dampen the vibration signals significantly, which makes the signals collected on different terrains harder to discriminate; hence, existing VTC methods may degrade when applied to a robot with shock absorbers. The contributions are two-fold: (1) several experiments are conducted to exhibit the performance of existing feature-engineering and feature-learning classification methods; and (2) building on the long short-term memory (LSTM) network, we propose a one-dimensional convolutional LSTM (1DCL)-based VTC method to learn both spatial and temporal characteristics of the dampened vibration signals. The experimental results demonstrate that: (1) the feature-engineering methods, which are efficient for VTC of a robot without shock absorbers, are less accurate in this setting, while the feature-learning methods are better choices; and (2) the 1DCL-based VTC method outperforms the conventional methods with an accuracy of 80.18%, exceeding the second-best method (LSTM) by 8.23%.
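    The 1DCL idea, 1D convolutions for the local (spatial) structure of a vibration window followed by an LSTM for its temporal evolution, can be sketched in PyTorch as follows. Layer sizes, window length, and the number of terrain classes are placeholders, not the paper's settings.

        # Assumed sketch of a 1D convolutional LSTM for vibration-based
        # terrain classification: convolutions extract local structure,
        # the LSTM models its evolution over the window.
        import torch
        import torch.nn as nn

        class ConvLSTMVTC(nn.Module):
            def __init__(self, n_classes=6):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                )
                self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
                self.head = nn.Linear(64, n_classes)

            def forward(self, x):            # x: (batch, 1, samples)
                z = self.conv(x)             # (batch, 32, samples / 4)
                z = z.transpose(1, 2)        # (batch, time, 32) for the LSTM
                _, (h, _) = self.lstm(z)
                return self.head(h[-1])      # terrain class logits

        logits = ConvLSTMVTC()(torch.randn(8, 1, 256))  # 8 windows of 256 samples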

    Multi-segmented Adaptive Feet for Versatile Legged Locomotion in Natural Terrain

    Most legged robots are built with leg structures of serially mounted links and actuators and are controlled through complex controllers and sensor feedback. In comparison, animals developed multi-segment legs, mechanical coupling between joints, and multi-segmented feet. They run agilely over all terrains, arguably with simpler locomotion control. Here we focus on developing foot mechanisms that resist slipping and sinking, including in natural terrain. We present first results for multi-segment feet mounted on a bird-inspired robot leg with multi-joint mechanical tendon coupling. Our one- and two-segment, mechanically adaptive feet sustain increased horizontal forces on multiple soft and hard substrates before starting to slip. We also observe that segmented feet reduce sinking on soft substrates compared to ball feet and cylinder feet. We report how multi-segmented feet provide a large range of viable centre of pressure points, well suited for bipedal robots but also for quadruped robots on slopes and natural terrain. Our results also offer a functional understanding of segmented feet in animals such as ratite birds.

    Learning Image-Conditioned Dynamics Models for Control of Under-actuated Legged Millirobots

    Millirobots are a promising robotic platform for many applications due to their small size and low manufacturing costs. Legged millirobots, in particular, can provide increased mobility in complex environments and an improved ability to scale obstacles. However, controlling these small, highly dynamic, and underactuated legged systems is difficult. Hand-engineered controllers can sometimes control these legged millirobots, but they have difficulties with dynamic maneuvers and complex terrains. We present an approach for controlling a real-world legged millirobot that is based on learned neural network models. Using less than 17 minutes of data, our method can learn a predictive model of the robot's dynamics that enables effective gaits to be synthesized on the fly for following user-specified waypoints on a given terrain. Furthermore, by leveraging expressive, high-capacity neural network models, our approach allows these predictions to be directly conditioned on camera images, endowing the robot with the ability to predict how different terrains might affect its dynamics. This enables sample-efficient and effective learning for locomotion of a dynamic legged millirobot on various terrains, including gravel, turf, carpet, and styrofoam. Experiment videos can be found at https://sites.google.com/view/imageconddy
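    Synthesizing gaits on the fly from a learned, image-conditioned dynamics model is commonly realized with sampling-based model-predictive control. The sketch below shows one plausible form (random-shooting MPC); the dynamics function, two-dimensional action space, and distance-to-waypoint cost are stand-ins, not the paper's implementation.

        # Hedged sketch: sample candidate action sequences, roll each out
        # through the learned model, execute the first action of the best
        # sequence, then replan at the next step (random-shooting MPC).
        import numpy as np

        def plan(dynamics, state, image_feat, goal, horizon=10, n_samples=512):
            """dynamics(state, action, image_feat) -> next state (learned model)."""
            actions = np.random.uniform(-1, 1, size=(n_samples, horizon, 2))
            costs = np.zeros(n_samples)
            for i in range(n_samples):
                s = state.copy()
                for t in range(horizon):
                    s = dynamics(s, actions[i, t], image_feat)
                    costs[i] += np.linalg.norm(s[:2] - goal)  # distance to waypoint
            return actions[costs.argmin(), 0]  # first action of the best sequence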

    Real-time Digital Double Framework to Predict Collapsible Terrains for Legged Robots

    Inspired by digital twinning systems, a novel real-time digital double framework is developed to enhance robot perception of terrain conditions. Built on the very same physical model and motion control, this work exploits a simulated digital double synchronized with the real robot to capture and extract discrepancy information between the two systems, which provides high-dimensional cues across multiple physical quantities to represent differences between the modelled and the real world. Soft, non-rigid terrains cause common failures in legged locomotion, and visual perception alone is insufficient for estimating such physical properties of terrain. We use the digital double to estimate terrain collapsibility, addressing this issue through physical interaction during dynamic walking. The discrepancy in sensory measurements between the real robot and its digital double is used as the input to a learning-based algorithm for terrain collapsibility analysis. Although trained only in simulation, the learned model performs collapsibility estimation successfully in both simulation and the real world. Our evaluation shows generalization to different scenarios and the advantage of the digital double in reliably detecting nuances in ground conditions. Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Preprint version. Accepted June 202
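    The core signal here is a discrepancy vector: the same channels are read from the real robot and its synchronized digital double, and their difference over a gait window feeds the learned collapsibility estimator. The sketch below is illustrative; the channel set, window statistics, and estimator are assumptions, not the paper's feature design.

        # Illustrative sketch of the discrepancy input: per-channel
        # differences between real and simulated measurements, summarized
        # over one gait window.
        import numpy as np

        def discrepancy_features(real, sim):
            """real, sim: dicts of channel name -> (T,) arrays over one window."""
            feats = []
            for ch in sorted(real):
                d = np.asarray(real[ch]) - np.asarray(sim[ch])
                feats += [d.mean(), d.std(), np.abs(d).max()]
            return np.array(feats)

        # collapsibility = estimator(discrepancy_features(real_win, sim_win)),
        # where `estimator` is a model trained entirely in simulation.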

    Inertial learning and haptics for legged robot state estimation in visually challenging environments

    Legged robots have enormous potential to automate dangerous or dirty jobs because they are capable of traversing a wide range of difficult terrains, such as stairs or mud. However, a significant challenge preventing widespread deployment of legged robots is a lack of robust state estimation, particularly in visually challenging conditions such as darkness or smoke. In this thesis, I address these challenges by exploiting proprioceptive sensing from inertial, kinematic, and haptic sensors to provide more accurate state estimation when visual sensors fail. Four different methods are presented: haptic localisation, terrain semantic localisation, learned inertial odometry, and deep learning to infer the evolution of IMU biases. The first approach exploits haptics as a source of proprioceptive localisation by comparing geometric information to a prior map. The second method expands on this concept by fusing both semantic and geometric information, allowing for accurate localisation on diverse terrain. Next, I combine new techniques in inertial learning with classical IMU integration and legged robot kinematics to provide more robust state estimation. This is further developed to use only IMU data, for an application entirely different from robotics: 3D reconstruction of bone with a handheld ultrasound scanner. Finally, I present the novel idea of using deep learning to infer the evolution of IMU biases, improving state estimation in exteroceptive systems where vision fails. By automating dangerous, dull, or dirty jobs and assisting first responders in emergency situations, legged robots have the potential to benefit society; the work presented in this thesis takes a step towards solving the remaining challenges, including accurate state estimation in vision-denied environments, and enabling the deployment of legged robots in a variety of applications.
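    As one concrete piece of the pipeline above, bias-corrected strapdown IMU integration can be sketched as follows: a learned model (the hypothetical `predict_biases` below) supplies current gyroscope and accelerometer biases, which are subtracted from the raw measurements before integration. This is a minimal sketch of the general technique under simplifying assumptions, not the thesis's estimator.

        # Minimal strapdown update with learned bias correction.
        # `predict_biases` is a hypothetical stand-in for the learned model.
        import numpy as np
        from scipy.spatial.transform import Rotation as R

        GRAVITY = np.array([0.0, 0.0, -9.81])

        def imu_step(rot, vel, pos, gyro, accel, biases, dt):
            """One strapdown update; biases = (gyro_bias, accel_bias)."""
            b_g, b_a = biases
            rot = rot * R.from_rotvec((gyro - b_g) * dt)   # attitude update
            a_world = rot.apply(accel - b_a) + GRAVITY     # specific force -> world accel
            vel = vel + a_world * dt
            pos = pos + vel * dt + 0.5 * a_world * dt**2
            return rot, vel, pos

        # rot = R.identity(); biases = predict_biases(imu_window)  # learned (hypothetical)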