12 research outputs found

    Embracing Safe Contacts with Contact-aware Planning and Control

    Unlike human beings, who can employ the entire surface of their limbs to establish contact with their environment, robots are typically programmed to interact with their environments via their end-effectors, in a collision-free fashion, to avoid damage. In a departure from this traditional approach, this work presents a contact-aware controller for reference tracking that keeps interaction forces on the surface of the robot below a safety threshold in the presence of both rigid and soft contacts. Furthermore, we leverage the proposed controller to extend the BiTRRT sampling-based planning method to be contact-aware, using a simplified contact model. The effectiveness of our framework is demonstrated in hardware experiments using a Franka robot in a setup inspired by the Amazon stowing task. A demo video of our results can be seen here: https://youtu.be/2WeYytauhNg
    Comment: RSS 2023. Workshop: Experiment-oriented Locomotion and Manipulation Research.
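The core idea of such a controller, tracking a reference while capping the predicted interaction force at a safety threshold, can be sketched in one dimension with a simple spring contact model. Everything below (the function name, the specific saturation rule, the gains) is an illustrative assumption, not the paper's actual formulation:

```python
def contact_aware_tracking_step(x, x_ref, k_p, k_env, x_surface, f_max):
    """One step of a toy 1D contact-aware tracking law (illustrative only).

    x         : current position of the robot surface point
    x_ref     : reference position (may lie beyond a contact surface)
    k_p       : proportional tracking gain
    k_env     : stiffness of the environment (spring contact model)
    x_surface : location of the surface; penetration beyond it creates force
    f_max     : safety threshold on the interaction force
    """
    # Nominal proportional tracking command.
    u = k_p * (x_ref - x)
    # Predicted interaction force from the simplified spring contact model.
    penetration = max(0.0, x + u - x_surface)
    f_contact = k_env * penetration
    # Scale the command back so the predicted force stays below f_max.
    if f_contact > f_max:
        allowed_penetration = f_max / k_env
        u = (x_surface + allowed_penetration) - x
    return u

# Reference 2.0 lies well past a surface at 0.5: the command is clipped
# so the spring force k_env * penetration never exceeds f_max = 1.0.
u_safe = contact_aware_tracking_step(0.0, 2.0, 1.0, 10.0, 0.5, 1.0)
# In free space the nominal tracking command passes through unchanged.
u_free = contact_aware_tracking_step(0.0, 0.3, 1.0, 10.0, 0.5, 1.0)
```

The design choice mirrored here is that safety is enforced inside the control law itself rather than by forbidding contact, which is what lets a planner built on top of it deliberately exploit contacts.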

    ViSE: Vision-Based 3D Online Shape Estimation of Continuously Deformable Robots

    The precise control of soft and continuum robots requires knowledge of their shape. In contrast to classical rigid robots, the shape of these robots has infinitely many degrees of freedom. To partially reconstruct the shape, proprioceptive techniques use built-in sensors, which yield inaccurate results and increase fabrication complexity. Exteroceptive methods so far rely on placing reflective markers on all tracked components and triangulating their positions using multiple motion-tracking cameras. Such tracking systems are expensive and infeasible for deformable robots interacting with their environment due to marker occlusion and damage. Here, we present a regression approach for 3D shape estimation using a convolutional neural network. The proposed approach takes advantage of data-driven supervised learning and is capable of real-time, marker-less shape estimation during inference. Two images of a robotic system are taken simultaneously at 25 Hz from two different perspectives and fed to the network, which returns the parameterized shape for each pair. The proposed approach outperforms marker-less state-of-the-art methods by up to 4.4% in estimation accuracy while being more robust and requiring no prior knowledge of the shape. The approach is easy to implement, since it requires only two color cameras without depth and no explicit calibration of the extrinsic parameters. Evaluations on two types of soft robotic arms and a soft robotic fish demonstrate the method's real-time accuracy and versatility on highly deformable systems. The robust performance of the approach against different scene modifications (camera alignment and brightness) suggests its generalizability to a wider range of experimental setups, which will benefit downstream tasks such as robotic grasping and manipulation.
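The interface described in the abstract, two synchronized camera views in, a parameterized shape out, can be sketched as follows. A single linear map stands in for the paper's convolutional network, and the image sizes, parameter count, and function name are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_view_shape_regression(img_a, img_b, weights, bias):
    """Toy stand-in for the paper's CNN: flatten a pair of camera views
    and regress parameterized shape coefficients with one linear map.
    (The actual method uses a trained convolutional network; this only
    illustrates the two-images-in, shape-parameters-out interface.)"""
    features = np.concatenate([img_a.ravel(), img_b.ravel()])
    return weights @ features + bias

# Two synthetic 8x8 grayscale "views" captured at the same timestep.
img_a = rng.random((8, 8))
img_b = rng.random((8, 8))

# Regress 4 hypothetical shape parameters (e.g. curve coefficients).
W = rng.random((4, 2 * 64)) * 0.01
b = np.zeros(4)
params = two_view_shape_regression(img_a, img_b, W, b)
```

Because the network regresses a low-dimensional shape parameterization directly, no markers, depth sensing, or extrinsic calibration enter the pipeline, which is the property the abstract emphasizes.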

    Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline Data in the Real World

    Experimentation on real robots is demanding in terms of time and cost. For this reason, a large part of the reinforcement learning (RL) community uses simulators to develop and benchmark algorithms. However, insights gained in simulation do not necessarily translate to real robots, in particular for tasks involving complex interactions with the environment. The Real Robot Challenge 2022 therefore served as a bridge between the RL and robotics communities by allowing participants to experiment remotely with a real robot as easily as in simulation. In recent years, offline reinforcement learning has matured into a promising paradigm for learning from pre-collected datasets, alleviating the reliance on expensive online interactions. We therefore asked the participants to learn two dexterous manipulation tasks involving pushing, grasping, and in-hand orientation from provided real-robot datasets. Extensive software documentation and an initial stage based on a simulation of the real set-up made the competition particularly accessible. By giving each team a generous access budget to evaluate their offline-learned policies on a cluster of seven identical real TriFinger platforms, we organized an exciting competition for machine learners and roboticists alike. In this work we state the rules of the competition, present the methods used by the winning teams, and compare their results with a benchmark of state-of-the-art offline RL algorithms on the challenge datasets.
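The offline-learning setting described here, fitting a policy from a logged dataset with no online interaction, is captured in its simplest form by behavior cloning. The sketch below is a minimal linear-policy baseline under synthetic data, not any competitor's method or the challenge API:

```python
import numpy as np

def behavior_cloning(observations, actions):
    """Fit a linear policy a = obs @ W by least squares on an offline
    dataset of (observation, action) pairs -- the simplest offline
    baseline (behavior cloning), against which offline RL methods
    are typically compared."""
    W, *_ = np.linalg.lstsq(observations, actions, rcond=None)
    return W

rng = np.random.default_rng(1)

# Synthetic logged dataset: 200 observations, 6-dim obs, 2-dim actions.
obs = rng.random((200, 6))
true_W = rng.random((6, 2))          # unknown "expert" policy
acts = obs @ true_W                  # actions recorded in the dataset

W_hat = behavior_cloning(obs, acts)  # recovered policy weights
```

Offline RL methods go beyond this baseline by also using reward labels and by constraining the learned policy to stay close to the data distribution, but the dataset-in, policy-out workflow is the same.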

    Modeling and Preliminary Analysis of the Impact of Meteorological Conditions on the COVID-19 Epidemic

    Since the COVID-19 epidemic outbreak at the end of 2019, many studies regarding the impact of meteorological factors on the epidemic have been carried out, with inconsistent conclusions, indicating the issue’s complexity. To more accurately identify the effects and patterns of meteorological factors on the epidemic, we used a combination of logistic regression (LgR) and partial least squares regression (PLSR) modeling to investigate the possible effects of common meteorological factors, including air temperature, relative humidity, wind speed, and surface pressure, on the transmission of the COVID-19 epidemic. Our analysis shows that: (1) different countries and regions show spatial heterogeneity in the number of diagnosed patients, but the trajectories can be roughly classified into three types: “continuous growth”, “staged shock”, and “finished”; (2) air temperature is the most significant meteorological factor influencing the transmission of the COVID-19 epidemic; except for a few areas, regional air temperature changes and the transmission of the epidemic show a significant positive correlation, i.e., an increase in air temperature is conducive to the spread of the epidemic; (3) in the countries and regions studied, wind speed, relative humidity, and surface pressure show inconsistent correlations (and significance) with the number of diagnosed cases but exhibit some regularity.
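The PLSR half of the pipeline can be sketched with a one-component NIPALS fit: the four meteorological predictors are projected onto a single latent direction, and the response is regressed on that latent score. The data below are synthetic and the function names are hypothetical; this only illustrates the technique, not the paper's fitted model:

```python
import numpy as np

def pls1_fit(X, y):
    """One-component PLS regression (NIPALS) -- a minimal stand-in for
    the PLSR step relating meteorological factors to case counts."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    w = Xc.T @ yc                    # covariance-maximizing direction
    w /= np.linalg.norm(w)
    t = Xc @ w                       # latent scores
    q = (t @ yc) / (t @ t)           # regress y on the latent score
    return w, q, x_mean, y_mean

def pls1_predict(X, w, q, x_mean, y_mean):
    return y_mean + q * ((X - x_mean) @ w)

rng = np.random.default_rng(2)

# Synthetic data: one latent driver generates all four predictors
# (temperature, relative humidity, wind speed, surface pressure).
t_latent = rng.random(60)
loadings = np.array([1.0, 2.0, 3.0, 4.0])
X = np.outer(t_latent, loadings)
y = 2.0 * t_latent + 1.0             # response driven by the same factor

w, q, xm, ym = pls1_fit(X, y)
pred = pls1_predict(X, w, q, xm, ym)
```

PLSR is a natural choice here because the predictors (e.g. temperature and surface pressure) are strongly collinear, which destabilizes ordinary least squares but not a projection onto a few latent components.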

    Wearable LIG Flexible Stress Sensor Based on Spider Web Bionic Structure

    Bionic structures are widely used in scientific research. Observation and study of natural biological structures show that a spider web is composed of many radial silk lines protruding from the center and spiral silk lines surrounding it. This structure offers high stability and high sensitivity, making it especially suitable for sensors. In this study, a flexible graphene sensor based on a spider web bionic structure is reported. Graphene, with its excellent mechanical properties and high electrical conductivity, is an ideal material for making sensors. In this paper, laser-induced graphene (LIG) is used as the sensing material, patterned into a spider web structure, and encapsulated onto a polydimethylsiloxane (PDMS) substrate to form a spider-web-structured flexible graphene strain sensor. The study found that the stress generated in the spider web sensor during stretching and torsion is distributed evenly across the web, which has excellent resonance ability, and the overall structure shows good robustness. Experimental tests show that the flexible stress sensor with the spider web structure achieves high sensitivity (a gauge factor, GF, of 36.8), a wide working range (0–35%), low hysteresis (260 ms), high repeatability and stability, and long-term durability. In addition, the manufacturing process of the whole sensor is simple and convenient, and the manufactured sensor is economical and durable. It shows excellent stability in finger flexion and extension, fist clenching, and arm flexion and extension applications. This shows that the sensor can be widely used in wearable sensing devices and the detection of human biological signals. Finally, it has development potential in practical applications such as medical health, motion detection, and human-computer interaction.
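The sensitivity figure quoted in the abstract is a gauge factor, the relative resistance change per unit strain. The GF of 36.8 comes from the abstract; the resistance values in the example below are purely illustrative:

```python
def gauge_factor(r0, r, strain):
    """Gauge factor of a resistive strain sensor:
    GF = (delta_R / R0) / strain.
    A higher GF means a larger resistance change for a given strain,
    i.e. a more sensitive sensor."""
    return ((r - r0) / r0) / strain

# Illustrative reading: baseline resistance 100 ohms rising to
# 136.8 ohms at 1% strain reproduces the reported GF of 36.8.
gf = gauge_factor(100.0, 136.8, 0.01)
```

For comparison, conventional metal-foil strain gauges have gauge factors around 2, so a GF in the tens is what makes such LIG sensors attractive for detecting subtle body motions.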