
    Active Clothing Material Perception using Tactile Sensing and Deep Learning

    Humans represent and discriminate objects within a category by their properties, and an intelligent robot should be able to do the same. In this paper, we build a robot system that can autonomously perceive object properties through touch. We work on the common object category of clothing. The robot moves under the guidance of an external Kinect sensor and squeezes the clothes with a GelSight tactile sensor, then recognizes 11 properties of the clothing from the tactile data. These properties include physical properties, such as thickness, fuzziness, softness, and durability, and semantic properties, such as wearing season and preferred washing method. We collect a dataset of 153 varied pieces of clothing and conduct 6,616 robot exploration iterations on them. To extract useful information from the high-dimensional sensory output, we apply convolutional neural networks (CNNs) to the tactile data for recognizing the clothing properties, and to the Kinect depth images for selecting exploration locations. Experiments show that, using the trained neural networks, the robot can autonomously explore unknown clothes and learn their properties. This work proposes a new framework for active tactile perception combining vision and touch, and it has the potential to enable robots to help humans with varied clothing-related housework. Comment: Accepted to ICRA 2018.
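
    As a rough illustration of the kind of model described, the following is a minimal PyTorch sketch of a CNN mapping GelSight-style tactile images to eleven clothing-property predictions; the layer sizes, input resolution, and per-property class counts are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch (not the paper's architecture): a small CNN that maps
# GelSight-style tactile images to 11 clothing-property predictions.
import torch
import torch.nn as nn

class TactilePropertyCNN(nn.Module):
    def __init__(self, num_properties=11, classes_per_property=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # One classification head per property (e.g., thickness, fuzziness, ...).
        self.heads = nn.ModuleList(
            nn.Linear(64 * 4 * 4, classes_per_property) for _ in range(num_properties)
        )

    def forward(self, x):
        z = self.features(x).flatten(1)
        return [head(z) for head in self.heads]  # one logit vector per property

# Example: a batch of 8 tactile images, 3x128x128 (assumed resolution).
model = TactilePropertyCNN()
logits = model(torch.randn(8, 3, 128, 128))
print(len(logits), logits[0].shape)  # 11 heads, each (8, classes)
```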

    An AI-Based Model for Texture Classification from Vibrational Feedback: Towards Development of Self-Adapting Sensory Robotic Prosthesis

    This paper presents a novel method of tuning vibration parameters to elicit specific perceptions of texture using vibration artefacts detected in EMG signals. Though EMG is typically used for prosthetic control, sensory feedback modalities such as vibration can convey proprioceptive or sensory information. The literature has shown that the presence of sensory feedback in prostheses can improve embodiment and control of prosthetic devices. However, such feedback is not widely adopted in daily prosthesis use, due in large part to day-to-day changes in the perception and interpretation of the sensory modality. This necessitates daily parameter adjustments so that sensory perception can be maintained over time. A method therefore needs to be established to maintain the perceptions generated by modalities like vibration. This paper investigates modulating vibration parameters based on how the vibrations from the stimuli dissipate into the surrounding tissue, with the aim of correlating vibration dissipation with specific perceptions of texture. Participants were asked to control vibration motor parameters to elicit the perception of three different grades of sandpaper, provided to them for reference. Once the vibration parameters were chosen, a CNN identified and categorized the artefact features along equidistantly spaced EMG electrodes. Participants repeated this experiment on three separate days and on the fourth day completed a texture identification task. The task involved identifying the texture of the sandpaper based on their previously chosen parameters, and the results were compared to tuning against an AI-based algorithm that used the dissipation of the vibration artefacts.
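
    To make the classification step concrete, here is a minimal sketch, assuming an 8-electrode array and fixed-length signal windows, of a 1-D CNN that categorizes vibration artefacts in multi-channel EMG into three sandpaper grades; it is a generic stand-in, not the paper's model.

```python
# Illustrative sketch only: a 1-D CNN that classifies vibration artefacts in
# multi-channel EMG windows into three sandpaper grades. Channel count,
# window length, and layer sizes are assumptions, not the paper's model.
import torch
import torch.nn as nn

class EMGArtefactCNN(nn.Module):
    def __init__(self, n_electrodes=8, n_grades=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_electrodes, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_grades),
        )

    def forward(self, x):  # x: (batch, electrodes, samples)
        return self.net(x)

model = EMGArtefactCNN()
window = torch.randn(4, 8, 1000)  # 4 windows, 8 electrodes, 1000 samples
print(model(window).shape)        # (4, 3) grade logits
```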

    Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey

    Digital twin (DT) refers to a promising technique for digitally and accurately representing actual physical entities. One typical advantage of DT is that it can be used not only to virtually replicate a system's detailed operations but also to analyze its current condition, predict future behaviour, and refine control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For adoption in human-centric systems, a novel concept called the human digital twin (HDT) has thus been proposed. In particular, HDT allows in silico representation of an individual human body with the ability to dynamically reflect molecular, physiological, emotional, and psychological status, as well as lifestyle evolution. These capabilities prompt the expected application of HDT in personalized healthcare (PH), which can facilitate remote monitoring, diagnosis, prescription, surgery, and rehabilitation. Despite this large potential, however, HDT faces substantial research challenges in different aspects and has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, comprising a data acquisition layer, a data communication layer, a computation layer, a data management layer, and a data analysis and decision-making layer. Besides reviewing the key technologies for implementing this networking architecture in detail, we conclude the survey by presenting future research directions for HDT.
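
    To make the layering concrete, below is a schematic Python sketch of the five-layer networking architecture the survey outlines; the class, function, and data names are invented for illustration.

```python
# Schematic only: the five HDT networking layers named in the survey,
# modeled as a simple pass-through pipeline. All handling is a placeholder.
from dataclasses import dataclass, field

@dataclass
class HDTPipeline:
    layers: list = field(default_factory=lambda: [
        "data acquisition",                   # wearables, medical sensors
        "data communication",                 # transmission to edge/cloud
        "computation",                        # edge/cloud processing
        "data management",                    # storage, privacy, access control
        "data analysis and decision making",  # models, predictions, feedback
    ])

    def process(self, sample):
        # Each layer would transform the sample in a real system.
        for layer in self.layers:
            print(f"[{layer}] handling {sample!r}")
        return sample

HDTPipeline().process("heart-rate reading")
```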

    Tactile Avatar: Tactile Sensing System Mimicking Human Tactile Cognition

    As a surrogate for human tactile cognition, an artificial tactile perception and cognition system is proposed that produces smooth/soft and rough tactile sensations according to its user's tactile feeling; this system is named the "tactile avatar". A piezoelectric tactile sensor is developed to dynamically record various physical information, such as pressure, temperature, hardness, sliding velocity, and surface topography. For artificial tactile cognition, human tactile feelings toward various materials ranging from smooth/soft to rough are assessed and found to vary among participants. Because tactile responses vary among humans, a deep learning structure is designed to allow personalization through training based on individualized histograms of human tactile cognition and the recorded physical tactile information. The decision error of each avatar system is less than 2% when 42 materials are used to collect tactile data, with 100 trials per material under 1.2 N of contact force and 4 cm s−1 of sliding velocity. As a tactile avatar, the machine categorizes newly experienced materials based on the tactile knowledge obtained from the training data. The resulting tactile sensation shows a high correlation with the specific user's tendencies. This approach can be applied to electronic devices with tactile emotional exchange capabilities, as well as to advanced digital experiences. © 2021 The Authors. Advanced Science published by Wiley-VCH GmbH.
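
    As an illustration of the personalization idea, here is a minimal sketch, with assumed feature dimensions, of a per-user model mapping recorded tactile features to the 42 material categories; it is not the paper's network.

```python
# Minimal sketch, not the paper's model: a per-user classifier that maps
# recorded tactile features (pressure, temperature, hardness, sliding
# velocity, surface-topography statistics) to that user's material
# categories. The feature layout and sizes are assumptions.
import torch
import torch.nn as nn

class PersonalTactileModel(nn.Module):
    def __init__(self, n_features=32, n_materials=42):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_materials),
        )

    def forward(self, x):
        return self.net(x)

# One model per user, trained on that user's labeled trials.
model = PersonalTactileModel()
trials = torch.randn(100, 32)  # 100 trials of one material
print(model(trials).shape)     # (100, 42) material logits
```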

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, touch sensing suffers from a vicious cycle of immature sensor technology: industry demand stays low, so there is little incentive to make the sensors that exist in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment, which is avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise, and good reasons are steering the manipulation and locomotion communities' attention towards deliberate physical interaction with the environment prior to, during, and after a task. We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision, such as darkness, smoky or dusty rooms in search and rescue, underwater settings, transparent and reflective objects, and retrieving items from inside a bag. Even in normal lighting conditions, the target object and fingers are usually occluded from view by the gripper during a manipulation task. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. Then, we investigate making the tactile modality, local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. To combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
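
    The visuotactile integration step can be sketched generically as late fusion of visual and tactile feature vectors into a grasp score; the dimensions and architecture below are assumptions, not the thesis's method.

```python
# Illustrative sketch under assumed shapes: late fusion of a visual feature
# vector and a tactile feature vector to score a candidate grasp. This is a
# generic pattern, not the thesis's specific method.
import torch
import torch.nn as nn

class VisuoTactileGraspScorer(nn.Module):
    def __init__(self, d_vision=256, d_tactile=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(d_vision + d_tactile, 128), nn.ReLU(),
            nn.Linear(128, 1),  # grasp success logit
        )

    def forward(self, v_feat, t_feat):
        return self.fuse(torch.cat([v_feat, t_feat], dim=-1))

scorer = VisuoTactileGraspScorer()
score = scorer(torch.randn(1, 256), torch.randn(1, 64))
print(torch.sigmoid(score))  # estimated probability of grasp success
```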

    Study of soft materials, flexible electronics, and machine learning for fully portable and wireless brain-machine interfaces

    Over 300,000 individuals in the United States are afflicted with some form of limited motor function from brainstem- or spinal-cord-related injury resulting in quadriplegia or some form of locked-in syndrome. Conventional brain-machine interfaces used to enable communication or movement require heavy, rigid components, uncomfortable headgear, excessive numbers of electrodes, and bulky electronics with long wires, which result in greater data artifacts and generally inadequate performance. Wireless, wearable electroencephalography systems, along with dry non-invasive electrodes, can be used to record brain activity from a mobile subject, allowing unrestricted movement. Additionally, multilayer microfabricated flexible circuits, when combined with a soft-materials platform, allow for imperceptible wearable data-acquisition electronics for long-term recording. This dissertation aims to introduce new electronics and training paradigms for brain-machine interfaces to provide remedies, in the form of communication and movement, for these individuals. Here, training is optimized by generating a virtual environment in which a subject can achieve immersion using a VR headset in order to train on and familiarize themselves with the system. Advances in hardware and the implementation of convolutional neural networks allow for rapid classification and low-latency target control. Integration of materials, mechanics, circuit design, and electrode design results in an optimized brain-machine interface, allowing for rehabilitation and overall improved quality of life.
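
    As a rough sketch of the classification component, below is a compact 1-D CNN over multi-channel EEG windows of the kind used for low-latency target control; the channel count, window length, and layer sizes are assumed for illustration.

```python
# Minimal sketch (assumed channel count and window length): a compact CNN
# that classifies multi-channel EEG windows into control targets.
import torch
import torch.nn as nn

class EEGTargetCNN(nn.Module):
    def __init__(self, n_channels=16, n_targets=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=11, padding=5), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_targets),
        )

    def forward(self, x):  # x: (batch, channels, samples)
        return self.net(x)

model = EEGTargetCNN()
print(model(torch.randn(2, 16, 512)).shape)  # (2, 4) target logits
```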

    Instrumentation, Data, And Algorithms For Visually Understanding Haptic Surface Properties

    Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and the development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, covering 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, comprising approximately one minute per end-effector of tapping and dragging at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset. Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties from visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. We then trained an algorithm to perform the same rating task, as well as to infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved correlations between expected and actual regression output of approximately ρ = 0.3 to ρ = 0.5 on previously unseen surfaces.
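
    The hybrid model can be sketched generically as a deep feature extractor feeding a support-vector regressor; the toy extractor and random data below are stand-ins for illustration, not the thesis pipeline.

```python
# Sketch of the hybrid pattern described above (deep feature extractor
# feeding a support-vector regressor); the extractor and data are toy
# stand-ins, not the thesis pipeline.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR

# Toy visual feature extractor (a pretrained CNN would be used in practice).
extractor = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

images = torch.randn(50, 3, 64, 64)  # 50 surface images (placeholder data)
hardness = np.random.rand(50)        # haptic property labels (placeholder)
with torch.no_grad():
    feats = extractor(images).numpy()  # (50, 16) visual features

svr = SVR(kernel="rbf").fit(feats, hardness)  # regress hardness from vision
print(svr.predict(feats[:3]))
```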