
    Multimodal material identification through recursive tactile sensing

    Tactile sensing has recently been used in robotics for object identification, grasping, and material identification. Although human tactile sensing is multimodal, existing material recognition approaches use vibration information only. Moreover, material identification through tactile sensing can be solved as a continuous process, yet state-of-the-art approaches use a batch approach in which readings are taken for at least one second. This work proposes a recursive multimodal (vibration and thermal) tactile material identification approach. Using the frequency response of the vibration induced by the material and a set of thermal features, we show that it is possible to accurately identify materials in less than half a second. We conducted an exhaustive comparison of our approach with commonly used vibration descriptors and machine learning algorithms for material identification, such as k-Nearest Neighbours, Artificial Neural Networks, and Support Vector Machines. Experimental results show that our approach identifies materials faster than existing techniques and increases classification accuracy when multiple sensor modalities are used.
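
    As a rough illustration of the recursive scheme described above (not the thesis's actual implementation), the sketch below folds one vibration/thermal reading at a time into a running class posterior; the feature choices, the Gaussian per-class likelihoods, and the material list are assumptions made for the example.

```python
import numpy as np

# Hypothetical material set; the thesis's actual label set is not given in the abstract.
MATERIALS = ["wood", "aluminium", "plastic", "foam"]

def vibration_features(window, n_bands=8):
    """Coarse frequency-response descriptor: mean FFT magnitude per band."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

def thermal_features(temps, dt=0.01):
    """Simple thermal descriptor: total temperature drop and its latest slope."""
    return np.array([temps[-1] - temps[0], np.gradient(temps, dt)[-1]])

def recursive_update(log_posterior, features, class_means, class_stds):
    """One recursion step: fold the likelihood of the newest reading into the
    running posterior, assuming per-class Gaussian likelihoods fitted offline."""
    log_lik = -0.5 * np.sum(((features - class_means) / class_stds) ** 2, axis=1)
    log_posterior = log_posterior + log_lik
    return log_posterior - np.logaddexp.reduce(log_posterior)  # renormalise

# Start from a uniform prior; identification can stop as soon as one class
# dominates, which is how a decision can be reached in well under a second.
log_post = np.full(len(MATERIALS), -np.log(len(MATERIALS)))
```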

    A force and thermal sensing skin for robots in human environments

    Working together, heated and unheated temperature sensors can recognize contact with different materials and contact with the human body. As such, distributing these sensors across a robot’s body could be beneficial for operation in human environments. We present a stretchable fabric-based skin with force and thermal sensors that is suitable for covering areas of a robot’s body, including curved surfaces. It also adds a layer of compliance that conforms to manipulated objects, improving thermal sensing. Our design addresses thermal sensing challenges, such as the time to heat the sensors, the efficiency of sensing, and the distribution of sensors across the skin. It incorporates small self-heated temperature sensors on the surface of the skin that directly make contact with objects, improving the sensors’ response times. Our approach seeks to fully cover the robot’s body with large force sensing taxels, but treats temperature sensors as small, point-like sensors sparsely distributed across the skin. We present a mathematical model to help predict how many of these point-like temperature sensors should be used in order to increase the likelihood of them making contact with an object. To evaluate our design, we conducted tests in which a robot arm used a cylindrical end effector covered with skin to slide objects and press on objects made from four different materials. After assessing the safety of our design, we also had the robot make contact with the forearms and clothed shoulders of 10 human participants. With 2.0 s of contact, the actively-heated temperature sensors enabled binary classification accuracy over 90% for the majority of material pairs. The system could more rapidly distinguish between materials with large differences in their thermal effusivities (e.g., 90% accuracy for pine wood vs. aluminum with 0.5 s of contact). For discrimination between humans vs. the four materials, the skin’s force and thermal sensing modalities achieved 93% classification accuracy with 0.5 s of contact. Overall, our results suggest that our skin design could enable robots to recognize contact with distinct task-relevant materials and humans while performing manipulation tasks in human environments.
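
    The abstract mentions a mathematical model for choosing how many point-like temperature sensors to distribute. The sketch below shows one plausible form of such a model, a simple uniform-coverage calculation; the formula and the example numbers are assumptions for illustration, not the paper's actual derivation.

```python
import numpy as np

def contact_probability(n_sensors, contact_area, skin_area):
    """Chance that at least one point-like temperature sensor lies inside the
    contact patch, assuming sensors are scattered independently and uniformly
    over the skin. This is a guessed form; the paper's model may differ."""
    p_single = contact_area / skin_area
    return 1.0 - (1.0 - p_single) ** n_sensors

def sensors_needed(target_prob, contact_area, skin_area):
    """Smallest sensor count whose contact probability reaches the target."""
    p_single = contact_area / skin_area
    return int(np.ceil(np.log(1.0 - target_prob) / np.log(1.0 - p_single)))

# Example: a 4 cm^2 contact patch on a 400 cm^2 skin segment, aiming for a
# 90% chance that some temperature sensor touches the object.
print(sensors_needed(0.90, contact_area=4.0, skin_area=400.0))  # ~230 sensors
```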

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, it suffers from a vicious cycle: immature sensor technology keeps industry demand low, which in turn leaves even less incentive to make the sensors existing in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment, which is avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise. Good reasons are steering the manipulation and locomotion communities’ attention towards deliberate physical interaction with the environment prior to, during, and after a task. We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision, such as darkness, smoky or dusty rooms in search and rescue, underwater scenes, transparent and reflective objects, and retrieving items inside a bag. Even in normal lighting conditions, during a manipulation task, the target object and fingers are usually occluded from view by the gripper. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. Then, we investigate making the tactile modality, local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. To combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
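
    As a hedged sketch of what "predicting the most cost-effective moves using active exploration" could look like, the snippet below scores candidate probe locations by expected information gain per unit travel cost; the criterion and the `predict_posterior` helper are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a class-probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def next_probe(candidates, posterior, predict_posterior, current_pos):
    """Pick the candidate probe location with the best expected information
    gain per unit travel cost. `predict_posterior(loc, posterior)` stands in
    for whatever model predicts the belief after probing `loc`."""
    best_loc, best_score = None, -np.inf
    for loc in candidates:
        gain = entropy(posterior) - entropy(predict_posterior(loc, posterior))
        cost = np.linalg.norm(np.asarray(loc) - np.asarray(current_pos)) + 1e-6
        if gain / cost > best_score:
            best_loc, best_score = loc, gain / cost
    return best_loc
```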

    Bringing a Humanoid Robot Closer to Human Versatility: Hard Realtime Software Architecture and Deep Learning Based Tactile Sensing

    For centuries, it has been a vision of man to create humanoid robots, i.e., machines that not only resemble the shape of the human body, but have similar capabilities, especially in dextrously manipulating their environment. But only in recent years has it become possible to build actual humanoid robots with many degrees of freedom (DOF) and equipped with torque-controlled joints, which are a prerequisite for acting sensitively in the world. In this thesis, we extend DLR's advanced mobile torque-controlled humanoid robot Agile Justin in two important directions to get closer to human versatility. First, we enable Agile Justin, which was originally built as a research platform for dextrous mobile manipulation, to also execute complex dynamic manipulation tasks. We demonstrate this with the challenging task of catching up to two simultaneously thrown balls with its hands. Second, we equip Agile Justin with highly developed, deep learning based tactile sensing capabilities that are critical for dextrous fine manipulation. We demonstrate its tactile capabilities with the delicate task of identifying an object's material simply by gently sweeping a fingertip over its surface. Key for the realization of complex dynamic manipulation tasks is a software framework that allows for a component-based system architecture to cope with the complexity and the parallel and distributed computational demands of deep sensor-perception-planning-action loops, all under tight timing constraints. This thesis presents the communication layer of our aRDx (agile robot development -- next generation) software framework, which provides hard realtime determinism and optimal transport of data packets with zero-copy for intra- and inter-process communication and copy-once for distributed communication. In the implementation of the challenging ball-catching application on Agile Justin, we take full advantage of aRDx's performance and advanced features like channel synchronization. Besides the challenging visual ball tracking, which uses only onboard sensing while everything is moving, and the automatic, self-contained calibration procedure that provides the necessary precision, the major contribution is the unified generation of the reaching motion for the arms. The catch-point selection, motion planning, and joint interpolation steps are subsumed in one nonlinear constrained optimization problem, which is solved in realtime and allows for the realization of different catch behaviors. For the highly sensitive task of tactile material classification with a flexible pressure-sensitive skin on Agile Justin's fingertip, we present our deep convolutional network architecture TactNet-II. The input is the raw 16,000-dimensional, complex and noisy spatio-temporal tactile signal generated when sweeping over an object's surface. For comparison, we performed a thorough human-performance experiment with 15 subjects, which shows that Agile Justin reaches superhuman performance in the high-level material classification task (What material is it?) as well as in the low-level material differentiation task (Are two materials the same?). To increase the sample efficiency of TactNet-II, we adapt state-of-the-art deep end-to-end transfer learning to tactile material classification, leading to an up to 15-fold reduction in the number of training samples needed. The presented methods led to six publication awards and award-finalist nominations as well as international media coverage, and also worked robustly at many trade fairs and lab demos.
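
    For a rough sense of what a compact tactile classification network and the described transfer-learning setup might look like, the sketch below defines a small convolutional model over a spatio-temporal tactile signal and freezes its feature extractor for fine-tuning; the layer sizes, input layout, and class count are assumptions and not the actual TactNet-II.

```python
import torch
import torch.nn as nn

class TactileNet(nn.Module):
    """Small spatio-temporal CNN for tactile material classification.
    The 16-taxel x 1000-step input layout and the number of material
    classes are illustrative guesses, not the actual TactNet-II."""
    def __init__(self, n_taxels=16, n_materials=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_taxels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the remaining time dimension
        )
        self.classifier = nn.Linear(64, n_materials)

    def forward(self, x):              # x: (batch, n_taxels, time_steps)
        return self.classifier(self.features(x).squeeze(-1))

# Transfer learning in the spirit described: keep the pretrained feature
# extractor frozen and retrain only the small head on a new material set.
model = TactileNet()
for p in model.features.parameters():
    p.requires_grad = False
```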