
    Dexterous manipulation of unknown objects using virtual contact points

    The manipulation of unknown objects is a problem of special interest in robotics, since it is not always possible to have exact models of the objects with which a robot interacts. This paper presents a simple strategy for manipulating unknown objects using a robotic hand equipped with tactile sensors. The hand configurations that allow the rotation of an unknown object are computed using only tactile and kinematic information obtained during the manipulation process, by reasoning about the desired and real positions of the fingertips. This takes into account that the desired fingertip positions are not physically reachable, since they are located in the interior of the manipulated object; they are therefore virtual positions with associated virtual contact points. The proposed approach was satisfactorily validated using three fingers of an anthropomorphic robotic hand (Allegro Hand), with the original fingertips replaced by tactile sensors (WTS-FT). In the experimental validation, several everyday objects with different shapes were successfully manipulated, rotating them without needing to know their shape or any other physical property.
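
    A minimal sketch of the virtual-contact-point idea, assuming a simple proportional update (the gain, variable names, and update rule below are illustrative, not the paper's actual controller):

        import numpy as np

        def next_fingertip_target(p_desired, p_real, gain=0.5):
            """Step the commanded fingertip position toward a desired (virtual)
            position lying inside the object, so the finger keeps pressing on
            the surface; p_real comes from tactile + kinematic sensing."""
            error = p_desired - p_real        # direction pointing into the object
            return p_real + gain * error      # next commanded fingertip position

        # Example: desired contact 1 cm inside the object along -x
        p_real = np.array([0.10, 0.02, 0.15])
        p_desired = np.array([0.09, 0.02, 0.15])
        print(next_fingertip_target(p_desired, p_real))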

    Hierarchical tactile sensation integration from prosthetic fingertips enables multi-texture surface recognition†

    Multifunctional flexible tactile sensors could be useful for improving the control of prosthetic hands. To that end, highly stretchable liquid metal tactile sensors (LMS) were designed, manufactured via photolithography, and incorporated into the fingertips of a prosthetic hand. Three novel contributions were made with the LMS. First, individual fingertips were used to distinguish between different speeds of sliding contact with different surfaces. Second, differences in surface textures were reliably detected during sliding contact. Third, the capacity for hierarchical tactile sensor integration was demonstrated by using four LMS signals simultaneously to distinguish between ten complex multi-textured surfaces. Four machine learning algorithms were compared for their classification capabilities: K-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), and neural network (NN). Time-frequency features of the LMS signals were extracted to train and test the machine learning algorithms. The NN generally performed best at speed and texture detection with a single finger, and achieved 99.2 ± 0.8% accuracy in distinguishing between ten different multi-textured surfaces using four LMSs from four fingers simultaneously. This capability for hierarchical multi-finger tactile sensation integration could be useful for providing a higher level of intelligence to artificial hands.
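
    A rough sketch of this feature-extraction-and-classifier-comparison workflow, with synthetic signals standing in for the LMS recordings (the sampling rate, window length, and data below are assumptions for illustration):

        import numpy as np
        from scipy.signal import spectrogram
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        def time_frequency_features(sig, fs=1000):
            # Short-time spectrogram, summarized as mean log-power per band
            f, t, Sxx = spectrogram(sig, fs=fs, nperseg=256)
            return np.log1p(Sxx).mean(axis=1)

        rng = np.random.default_rng(0)
        recordings = rng.normal(size=(60, 4096))  # placeholder sliding-contact signals
        y = np.repeat(np.arange(10), 6)           # ten surface labels, six trials each

        X = np.array([time_frequency_features(s) for s in recordings])
        for name, clf in [("KNN", KNeighborsClassifier()),
                          ("SVM", SVC()),
                          ("RF", RandomForestClassifier()),
                          ("NN", MLPClassifier(max_iter=2000))]:
            print(name, cross_val_score(clf, X, y, cv=3).mean())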

    Signal and Information Processing Methods for Embedded Robotic Tactile Sensing Systems

    The human skin contains several types of sensors with different properties and responses that detect stimuli resulting from mechanical stimulation. Pressure sensors are the most important type of receptor for the exploration and manipulation of objects. In recent decades, smart tactile sensing based on different sensing techniques has been developed, as its application in robotics and prosthetics is considered of huge interest, mainly driven by the prospect of autonomous and intelligent robots that can interact with the environment. However, regarding the estimation of object properties on robots, hardness detection is still a major limitation due to the lack of techniques to estimate it. Furthermore, finding processing methods that can interpret the measurements from multiple sensors and extract relevant information is a challenging task. Moreover, embedding processing methods and machine learning algorithms in robotic applications to extract meaningful information, such as object properties, from tactile data is an ongoing challenge, constrained by the device (power and memory constraints, etc.), the computational complexity of the processing and machine learning algorithms, and the application requirements (real-time operation, high prediction performance).

    In this dissertation, we focus on the design and implementation of pre-processing methods and machine learning algorithms to handle the aforementioned challenges for a tactile sensing system in robotic applications. First, we propose a tactile sensing system for robotic applications. Then we present efficient pre-processing and feature extraction methods for our tactile sensors. Next, we propose a learning strategy to reduce the computational cost of our processing unit in object classification using a sensorized Baxter robot. Finally, we present a real-time robotic tactile sensing system for hardness classification on resource-constrained devices.

    The first study represents a further assessment of the sensing system, which is based on the PVDF sensors and the interface electronics developed in our lab. In particular, it presents the development of a skin patch (a multilayer structure) that allows us to use the sensors in several applications such as robotic hands and grippers, shows the characterization of the developed skin patch, and validates the sensing system. Moreover, we designed a filter to remove noise and detect touch. The experimental assessment demonstrated that the developed skin patch and interface electronics can indeed detect different touch patterns and stimulus waveforms. The results of the experiments also defined the frequency range of interest and the response of the system to realistic interactions, such as grasp and release events.

    In the next study, we presented an easy integration of our tactile sensing system into the Baxter gripper. Computationally efficient pre-processing techniques were designed to filter the signal and extract relevant information from multiple sensor signals, in addition to feature extraction methods. These processing methods also aim to reduce the computational complexity of the machine learning algorithms used for object classification. The proposed system and processing strategy were evaluated on an object classification task by integrating our system into the gripper and collecting data while grasping multiple objects.
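
    A rough illustration of the kind of low-cost filtering and touch-detection step described above (the sampling rate, band edges, threshold, and data are illustrative assumptions, not the dissertation's actual parameters):

        import numpy as np
        from scipy.signal import butter, filtfilt

        def detect_touch(raw, fs=2000.0, band=(5.0, 400.0), k=4.0):
            """Band-pass filter a raw tactile signal and flag samples where the
            filtered amplitude exceeds k times a robust noise estimate."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, raw)
            noise = np.median(np.abs(filtered)) / 0.6745   # robust sigma estimate
            return filtered, np.abs(filtered) > k * noise  # signal, touch mask

        rng = np.random.default_rng(0)
        raw = 0.02 * rng.normal(size=2000)     # 1 s of synthetic sensor noise
        raw[800:900] += np.hanning(100)        # synthetic touch burst
        filtered, touched = detect_touch(raw)
        print("touch samples:", int(touched.sum()))
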
    We further proposed a learning strategy to achieve a trade-off between generalization accuracy and the computational cost of the whole processing unit. The proposed pre-processing and feature extraction techniques, together with the learning strategy, led to models with extremely low complexity and very high generalization accuracy. Moreover, the support vector machine achieved the best trade-off between accuracy and computational cost on tactile data from our sensors.

    Finally, we presented the development and implementation on the edge of a real-time tactile sensing system for hardness classification on the Baxter robot, based on machine and deep learning algorithms. We developed and implemented in plain C a set of functions that provide the fundamental layer functionalities of the machine learning and deep learning (ML and DL) models, along with the pre-processing methods to extract the features and normalize the data. The models can be deployed to any device that supports C code, since they do not rely on any existing libraries. Shallow ML/DL algorithms were designed for deployment on resource-constrained devices. To evaluate our work, we collected data by grasping objects of different hardness and shape. Two classification problems were addressed: five levels of hardness classified on objects of the same shape, and five levels of hardness classified on objects of two different shapes. Furthermore, optimization techniques were employed. The models and pre-processing were implemented on a resource-constrained device, where we assessed the performance of the system in terms of accuracy, memory footprint, time latency, and energy consumption. For both classification problems we achieved real-time inference (< 0.08 ms), low energy consumption (3.35 μJ), extremely small models (1576 bytes), and high accuracy (above 98%).
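
    The dissertation's deployment code is in plain C; as a loose Python mock-up of the same dependency-free idea, inference for a tiny dense network can be written with fixed loops that map directly onto statically allocated C arrays (the layer sizes and weights below are placeholders, not the actual models):

        def dense(x, W, b):
            # y[j] = sum_i x[i] * W[i][j] + b[j], plain loops as in a C port
            return [sum(x[i] * W[i][j] for i in range(len(x))) + b[j]
                    for j in range(len(b))]

        def relu(x):
            return [v if v > 0.0 else 0.0 for v in x]

        def predict(features, layers):
            """layers: list of (W, b) pairs; returns the index of the max logit."""
            h = features
            for W, b in layers[:-1]:
                h = relu(dense(h, W, b))
            W, b = layers[-1]
            logits = dense(h, W, b)
            return max(range(len(logits)), key=logits.__getitem__)

        # Placeholder 3-feature -> 4-unit -> 5-class model (dummy weights)
        W1 = [[0.1] * 4 for _ in range(3)]; b1 = [0.0] * 4
        W2 = [[0.1] * 5 for _ in range(4)]; b2 = [0.0] * 5
        print(predict([0.2, -0.1, 0.4], [(W1, b1), (W2, b2)]))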

    Tactile Sensing for Assistive Robotics


    Zero-Shot Object Recognition Based on Haptic Attributes

    Robots operating in household environments need to recognize a variety of objects. Several touch-based object recognition systems have been proposed in recent years [2]–[5]. They map haptic data to object classes using machine learning techniques, and then use the learned mapping to recognize one of the previously encountered objects. The accuracy of these methods depends on the amount of training data available for each object class. On the other hand, haptic data collection is often system (robot) specific and labour intensive. One way to cope with this problem is to use a knowledge-transfer-based system that can exploit object relationships to share learned models between objects. However, while knowledge-based systems such as zero-shot learning [6] have regularly been proposed for visual object recognition, a similar system is not available for haptic recognition. Here we developed [1] the first haptic zero-shot learning system that enables a robot to recognize, using haptic exploration alone, objects that it encounters for the first time. Our system first uses the Direct Attribute Prediction (DAP) model [7] to train on the semantic representation of objects based on a list of haptic attributes, rather than on the objects themselves. The attributes (including physical properties such as shape, texture, and material) constitute an intermediate layer relating objects, and are used for knowledge transfer. Using this layering, our system can predict the attribute-based representation of a new (previously untrained) object and use it to infer its identity. This extended abstract is a summary of submission [1].

    A. System Overview

    An overview of our system is given in Fig. 1. Given distinct training and test data sets Y and Z, described by an attribute basis a, we first associate a binary label a_m^o with each object o, where o ∈ Y ∪ Z and m = 1, …, M. This results in a binary object-attribute matrix K. Given the attribute list, haptic data collected from Y are used during training to train a binary classifier for each attribute a_m. Finally, to classify a test sample x as one of the Z objects, x is fed to each of the learned attribute classifiers, and the output attribute posteriors p(a_m | x) are used to predict the corresponding object, provided that the ground truth is available in K.

    B. Experimental Setup

    To collect haptic data, we use the Shadow anthropomorphic robotic hand equipped with a BioTac multimodal tactile sensor on each fingertip. We developed a force-based grasp controller that enables the hand to enclose an object. The joint encoder readings provide information about object shape, while the BioTac sensors provide information about object material, texture, and compliance at each fingertip. To find an appropriate list of attributes describing our object set (illustrated in Fig. 2), we used online dictionaries to collect one or more textual definitions of each object. From these data, we extracted 11 haptic adjectives, i.e., descriptions that could be "felt" using our robot hand. These adjectives served as our attributes: made of porcelain, made of plastic, made of glass, made of cardboard, made of stainless steel, cylindrical, round, rectangular, concave, has a handle, has a narrow part. We grouped the attributes into material attributes and shape attributes.
    During the training phase, we use the Shadow hand joint readings x_sh to train an SVM classifier for each shape attribute, and the BioTac readings x_b to train an SVM classifier for each material attribute. SVM training returns, for each sample x, a distance score s_m(x) measuring how far x lies from the discriminant hyperplane. We transform this score into an attribute posterior p(a_m | x) using a sigmoid function.
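
    A condensed sketch of this DAP-style decision rule using scikit-learn in place of the authors' pipeline (the toy features, attribute matrix, and independent-attribute scoring below are illustrative assumptions):

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        M = 4                                   # number of haptic attributes
        X_train = rng.normal(size=(80, 6))      # toy haptic features from set Y
        A_train = rng.integers(0, 2, (80, M))   # per-sample binary attribute labels
        K_test = np.array([[1, 0, 1, 0],        # attribute signatures of unseen
                           [0, 1, 1, 1]])       # test objects (rows of matrix K)

        # One sigmoid-calibrated SVM per attribute, giving p(a_m | x)
        clfs = [SVC(probability=True).fit(X_train, A_train[:, m]) for m in range(M)]

        def predict_object(x):
            p = np.array([c.predict_proba([x])[0, 1] for c in clfs])
            # Score each test object's signature, treating attributes as independent
            scores = [np.prod(np.where(row == 1, p, 1 - p)) for row in K_test]
            return int(np.argmax(scores))

        print(predict_object(rng.normal(size=6)))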

    Controlled Tactile Exploration and Haptic Object Recognition

    In this paper we propose a novel method for in-hand object recognition. The method is composed of a grasp stabilization controller and two exploratory behaviours that capture the shape and the softness of an object. Grasp stabilization plays an important role in recognizing objects. First, it prevents the object from slipping and facilitates its exploration. Second, reaching a stable and repeatable position adds robustness to the learning algorithm and increases invariance with respect to the way in which the robot grasps the object. The stable poses are estimated using a Gaussian mixture model (GMM). We present experimental results showing that, using our method, the classifier can successfully distinguish 30 objects. We also compare our method with a benchmark experiment in which grasp stabilization is disabled. We show, with statistical significance, that our method outperforms the benchmark method.
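
    A minimal sketch of estimating stable poses with a Gaussian mixture model, as the paper describes, using scikit-learn on placeholder joint-angle data (the data, dimensionality, and number of components are assumptions):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # Toy "stable grasp" joint configurations clustered around two poses
        poses = np.vstack([rng.normal([0.2, 0.9, 0.4], 0.05, (50, 3)),
                           rng.normal([0.6, 0.3, 0.8], 0.05, (50, 3))])

        gmm = GaussianMixture(n_components=2, covariance_type="full").fit(poses)

        # For a current hand configuration, move toward the nearest stable pose
        q = np.array([0.25, 0.8, 0.45])
        nearest = gmm.means_[gmm.predict([q])[0]]
        print("target stable pose:", nearest)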