
    Active Clothing Material Perception using Tactile Sensing and Deep Learning

    Humans represent and discriminate objects in the same category by their properties, and an intelligent robot should be able to do the same. In this paper, we build a robot system that can autonomously perceive object properties through touch. We work on the common object category of clothing. The robot moves under the guidance of an external Kinect sensor and squeezes the clothes with a GelSight tactile sensor, then recognizes 11 properties of the clothing from the tactile data. These properties include physical properties, like thickness, fuzziness, softness, and durability, and semantic properties, like wearing season and preferred washing method. We collect a dataset of 153 varied pieces of clothing and conduct 6,616 robot exploration iterations on them. To extract useful information from the high-dimensional sensory output, we apply Convolutional Neural Networks (CNNs) to the tactile data for recognizing the clothing properties, and to the Kinect depth images for selecting exploration locations. Experiments show that, using the trained neural networks, the robot can autonomously explore unknown clothes and learn their properties. This work proposes a new framework for active tactile perception combining vision and touch, and has the potential to enable robots to help humans with varied clothing-related housework. Comment: ICRA 2018 accepted
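
    The abstract gives no implementation details, so the following is only a rough sketch of the kind of model it describes: a shared CNN trunk over GelSight tactile images with one classification head per clothing property. It is written in PyTorch; the property names, class counts, and layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch: a shared CNN trunk with one classification head per
# clothing property (a subset of assumed property names, not the paper's model).
import torch
import torch.nn as nn

PROPERTIES = {  # assumed property names and class counts
    "thickness": 5, "fuzziness": 4, "softness": 5, "durability": 4,
    "wearing_season": 4, "washing_method": 6,
}

class TactilePropertyNet(nn.Module):
    def __init__(self, properties=PROPERTIES):
        super().__init__()
        self.trunk = nn.Sequential(  # small convolutional feature extractor
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # one linear head per property, all sharing the tactile features
        self.heads = nn.ModuleDict(
            {name: nn.Linear(128, n) for name, n in properties.items()}
        )

    def forward(self, tactile_img):  # tactile_img: (B, 3, H, W) GelSight frame
        feat = self.trunk(tactile_img)
        return {name: head(feat) for name, head in self.heads.items()}

model = TactilePropertyNet()
logits = model(torch.randn(2, 3, 128, 128))  # dummy batch of tactile images
loss = sum(nn.functional.cross_entropy(out, torch.zeros(2, dtype=torch.long))
           for out in logits.values())       # per-property cross-entropy, summed
```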

    Connecting Look and Feel: Associating the visual and tactile properties of physical materials

    For machines to interact with the physical world, they must understand the physical properties of the objects and materials they encounter. We use fabrics as an example of a deformable material with a rich set of mechanical properties. A thin, flexible fabric, when draped, tends to look different from a heavy, stiff fabric. It also feels different when touched. Using a collection of 118 fabric samples, we captured color and depth images of draped fabrics along with tactile data from a high-resolution touch sensor. We then sought to associate the information from vision and touch by jointly training CNNs across the three modalities. Through the CNN, each input, regardless of modality, generates an embedding vector that encodes the fabric's physical properties. By comparing the embeddings, our system is able to look at a fabric image and predict how it will feel, and vice versa. We also show that a system jointly trained on vision and touch data can outperform a similar system trained only on visual data when tested purely with visual inputs.
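
    The joint-embedding idea described above (each modality mapped into a shared space where matching samples lie close together) can be sketched roughly as below, here with only two of the three modalities for brevity. The encoders, margin, and data shapes are illustrative assumptions in PyTorch, not the paper's networks.

```python
# Illustrative sketch of cross-modal embedding learning: each modality gets its
# own encoder, and matching fabric samples are pushed to nearby points in a
# shared embedding space via a simple contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(embed_dim=64):
    return nn.Sequential(
        nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, embed_dim),
    )

vision_enc, touch_enc = make_encoder(), make_encoder()

def contrastive_loss(z_img, z_touch, same_fabric, margin=1.0):
    """Pull embeddings of the same fabric together, push different fabrics apart."""
    d = F.pairwise_distance(z_img, z_touch)
    return torch.mean(same_fabric * d.pow(2) +
                      (1 - same_fabric) * F.relu(margin - d).pow(2))

img = torch.randn(8, 3, 128, 128)            # draped-fabric color images
touch = torch.randn(8, 3, 128, 128)          # tactile sensor images
labels = torch.randint(0, 2, (8,)).float()   # 1 = same fabric sample, 0 = different
loss = contrastive_loss(vision_enc(img), touch_enc(touch), labels)
```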

    Improved GelSight Tactile Sensor for Measuring Geometry and Slip

    A GelSight sensor uses an elastomeric slab covered with a reflective membrane to measure tactile signals. It measures 3D geometry and contact force information with high spatial resolution, and has successfully supported many challenging robot tasks. A previous sensor, based on a semi-specular membrane, produces high resolution but limited geometric accuracy. In this paper, we describe a new design of GelSight for a robot gripper, using a Lambertian membrane and a new illumination system, which gives greatly improved geometric accuracy while retaining the compact size. We demonstrate its use in measuring surface normals and reconstructing height maps using photometric stereo. We also use it for the task of slip detection, combining information about relative motions on the membrane surface with the shear distortions. Using a robotic arm and a set of 37 everyday objects with varied properties, we find that the sensor can detect translational and rotational slip in general cases, and can be used to improve the stability of the grasp. Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems
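
    The photometric-stereo step mentioned above has a standard least-squares form: with known light directions stacked in a matrix L and per-pixel intensities I, the albedo-scaled normal is recovered as g = L⁺I and then normalized. A minimal NumPy sketch with assumed light directions and synthetic intensities follows; it is not the sensor's calibrated pipeline.

```python
# Minimal photometric-stereo sketch: recover per-pixel surface normals from
# intensities under known illumination directions (illustrative only).
import numpy as np

# Assumed unit light directions for three illuminants (one per row), shape (3, 3).
L = np.array([[0.0,  0.7,   0.7],
              [0.6, -0.35,  0.7],
              [-0.6, -0.35, 0.7]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

def normals_from_intensities(I):
    """I: (3, H, W) images, one per light. Returns unit normals, shape (H, W, 3)."""
    H, W = I.shape[1:]
    pixels = I.reshape(3, -1)                      # (3, H*W)
    g = np.linalg.lstsq(L, pixels, rcond=None)[0]  # albedo-scaled normals, (3, H*W)
    n = g / (np.linalg.norm(g, axis=0, keepdims=True) + 1e-8)
    return n.T.reshape(H, W, 3)

I = np.random.rand(3, 64, 64)          # stand-in for the three measured channels
normals = normals_from_intensities(I)  # a height map would follow by integrating
                                       # the gradients -n_x/n_z and -n_y/n_z
```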

    Material perception and action: The role of material properties in object handling

    This dissertation is about the visual perception of material properties and their role in preparation for object handling. Usually, before an object is touched or picked up, we estimate its size and shape based on visual features to plan the grip size of our hand. After we have touched the object, the grip size is adjusted according to the haptic feedback, and the object is handled safely. Similarly, we anticipate the grip force required to handle the object without slippage, based on its visual features and prior experience with similar objects. Previous studies on object handling have mostly examined object characteristics that are typical for object recognition, e.g., size, shape, and weight, but in recent years there has been a growing interest in characteristics more typical of the type of material the object is made from. In a series of studies, we therefore investigated the role of perceived material properties in decision-making and object handling, in which both digitally rendered materials and real objects made of different types of materials were presented to human subjects and a humanoid robot.
    Paper I is a reach-to-grasp study in which human subjects were examined using motion capture technology. Participants grasped and lifted paper cups that varied in appearance (i.e., matte vs. glossy) and weight. Here we were interested in both the temporal and spatial components of prehension, to examine the role of material properties in grip preparation and how visual features contribute to inferred hardness before haptic feedback becomes available. We found that the temporal and spatial components were not exclusively governed by the expected weight of the paper cups; glossiness and expected hardness played a significant role as well.
    In Paper II, a follow-up on Paper I, we investigated the grip force component of prehension using the same experimental stimuli. In a similar experimental setup, we used force sensors to examine the early grip force magnitudes applied by human subjects when grasping and lifting the same paper cups as in Paper I. We found that early grip force scaling was guided not only by object weight, but also by the visual characteristics of the material (i.e., matte vs. glossy). Moreover, the results suggest that grip force scaling during the initial object lifts is guided by expected hardness that is, to some extent, based on visual material properties.
    Paper III is a visual judgment task in which psychophysical measurements were used to examine how the material properties roughness and glossiness influence perceived bounce height and, consequently, perceived hardness. In a paired-comparison task, human subjects observed a ball bounce on various surface planes and judged its bounce height. Here we investigated which combination of surface properties, i.e., roughness or glossiness, makes a surface plane be perceived as bounceable. The results demonstrate that surface planes with rough properties are believed to afford higher bounce heights for the bouncing ball, compared to surface planes with smooth properties. Interestingly, adding shiny properties to the rough and smooth surface planes reduced the judged difference, as if glossy surface planes are believed to afford higher bounce heights irrespective of how smooth or rough the surface underneath is. This suggests that perceived bounce height involves not only the physical elements of the bounce, but also the visual material properties of the surface planes the ball bounces on.
    In Paper IV we investigated the development of material knowledge using a robotic system. A humanoid robot explored real objects made of different types of materials, using both camera and haptic systems. The objects varied in visual appearance (e.g., texture, color, shape, size), weight, and hardness, and in two experiments the robot picked up and placed the experimental objects several times using its arm. We used the haptic signals from the servos controlling the robot's arm and shoulder to obtain measurements of the objects' weight and hardness, and the camera system to collect data on their visual features. After the robot had repeatedly explored the objects, an associative learning model was built from the training data to demonstrate how the robotic system could produce a multi-modal mapping between the visual and haptic features of the objects.
    In sum, this thesis shows that visual material properties and prior knowledge of how materials look and behave play a significant role in action planning.
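
    Paper IV's associative mapping between visual and haptic features can be illustrated with a simple supervised regressor that predicts haptic measurements (weight, hardness) from visual descriptors. The features, data, and model choice below are assumptions for illustration, not the dissertation's implementation.

```python
# Illustrative visual-to-haptic associative mapping: predict haptic measurements
# (weight, hardness) from visual object features, on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_objects = 200
visual = rng.normal(size=(n_objects, 6))   # e.g., texture, color, size descriptors
haptic = np.stack([
    visual[:, 0] * 2.0 + rng.normal(scale=0.1, size=n_objects),              # "weight"
    visual[:, 1] - visual[:, 2] + rng.normal(scale=0.1, size=n_objects),     # "hardness"
], axis=1)

X_train, X_test, y_train, y_test = train_test_split(visual, haptic, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print("R^2 on held-out objects:", model.score(X_test, y_test))
```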

    How do robots take two parts apart?

    This research is a natural progression of efforts that began with the introduction of a new research paradigm in machine perception, called Active Perception. There it was stated that Active Perception is a problem of intelligent control strategies applied to data acquisition processes, which depend on the current state of the data interpretation, including recognition. The disassembly/assembly problem is treated as an Active Perception problem, and a method for autonomous disassembly based on this framework is presented.

    Haptic Hybrid Prototyping (HHP): An AR Application for Texture Evaluation with Semantic Content in Product Design

    The manufacture of prototypes is costly in economic and temporal terms, and in order to carry it out it is necessary to accept certain deviations with respect to the final finishes. This article proposes Haptic Hybrid Prototyping (HHP), a haptic-visual product prototyping method created to help product design teams evaluate and select the semantic information conveyed between product and user through the texturing and ribs of a product in the early stages of conceptualization. To evaluate this tool, an experiment was conducted in which the haptic experience during interaction with final products was compared with that obtained through the HHP. The responses of the interviewees coincided in both situations in 81% of the cases. It was concluded that the HHP makes it possible to determine the semantic information transmitted through haptic-visual means between product and user, and to quantify the clarity with which this information is transmitted. This new tool therefore makes it possible to shorten both the manufacturing lead time of prototypes and the conceptualization phase of the product, providing information on the future success of the product in the market and its economic return.
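
    The 81% figure reported above corresponds to a simple percent-agreement measure between the two presentation conditions; a chance-corrected statistic such as Cohen's kappa is a common complement. A short sketch with invented response labels follows.

```python
# Sketch of the agreement analysis implied above: compare semantic responses
# given for the real product vs. the HHP prototype (labels are invented).
from sklearn.metrics import cohen_kappa_score

real_product = ["rough", "soft", "cold", "rough", "soft", "warm", "rough", "cold"]
hhp          = ["rough", "soft", "cold", "soft",  "soft", "warm", "rough", "cold"]

agreement = sum(a == b for a, b in zip(real_product, hhp)) / len(real_product)
kappa = cohen_kappa_score(real_product, hhp)   # chance-corrected agreement
print(f"percent agreement: {agreement:.0%}, Cohen's kappa: {kappa:.2f}")
```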

    Low-level Modality Specific and Higher-order Amodal Processing in the Haptic and Visual Domains

    The aim of the current study is to further investigate cross- and multi-modal object processing, with the intent of increasing our understanding of the differential contributions of modal and amodal object processing in the visual and haptic domains. The project is an identification and information-extraction study. The main factors are modality (vision or haptics), stimulus type (tools or animals), and level (naming and output). Each participant went through four trial types: visual naming, visual size, haptic naming, and haptic size. Naming consisted of verbally naming the item; size (size comparison) consisted of verbally indicating whether the current item is larger or smaller than a reference object. Stimuli consisted of plastic animals and tools. All stimuli are readily recognizable and can easily be manipulated with one hand. The actual figurines and tools were used for haptic trials, and digital photographs were used for visual trials (Appendices 1 and 2). The results suggest a strong effect of modality, with visual object recognition being faster than haptic object recognition, i.e., a modality-specific (visual-haptic) effect. Tools were also processed faster than animals regardless of modality. An interaction between the factors was reported, supporting the notion that once naming is accomplished, subsequent size processing that yields similar reaction times in the visual and haptic domains indicates non-modality-specific, or amodal, processing. Thus, using animal and tool figurines, we investigated modal and amodal processing in the visual and haptic domains.
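
    The 2 (modality) × 2 (stimulus type) within-subject design described above lends itself to a repeated-measures analysis of reaction times. The sketch below runs such an analysis on simulated data with statsmodels' AnovaRM, purely to illustrate the kind of test implied; the column names and effect sizes are made up.

```python
# Illustrative reaction-time analysis for a 2 (modality) x 2 (stimulus type)
# within-subject design, using simulated data rather than the study's results.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subject in range(20):
    for modality in ("visual", "haptic"):
        for stimulus in ("tool", "animal"):
            rt = (0.9
                  + (0.4 if modality == "haptic" else 0.0)   # haptics assumed slower
                  + (0.1 if stimulus == "animal" else 0.0)   # animals assumed slower
                  + rng.normal(scale=0.05))
            rows.append({"subject": subject, "modality": modality,
                         "stimulus": stimulus, "rt": rt})

df = pd.DataFrame(rows)
result = AnovaRM(df, depvar="rt", subject="subject",
                 within=["modality", "stimulus"]).fit()
print(result.anova_table)   # main effects and the modality x stimulus interaction
```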

    A comparison of three materials used for tactile symbols to communicate colour to children and young people with visual impairments

    A series of 14 tactile symbols was developed to represent different colours and shades for children and young people who are blind or have a visual impairment. A study compared three different methods of producing the symbols: (1) embroidered thread, (2) heated ‘swell’ paper, and (3) plastic produced by Additive Manufacturing (AM; three-dimensional printing). The results show that, across the three materials, recognition times for particular symbols varied between 2.40 and 3.95 s. The average times for the three materials across all colours were 2.26 s for the AM material, 3.20 s for swell paper, and 4.03 s for embroidered symbols. These findings can be explained by the fact that the AM material (polylactide) is firmer and more easily perceived tactually than the other two materials. While AM plastic offers a potentially useful means of communicating colour for appropriate objects, traditional media remain important in certain contexts.