
    Learning Spatio-Temporal Tactile Features with a ConvLSTM for the Direction of Slip Detection

    Robotic manipulators must constantly deal with the complex task of detecting whether a grasp is stable or, in contrast, whether the grasped object is slipping. Recognising the type of slippage (translational or rotational) and its direction is more challenging than detecting stability alone, but is also of greater use for correcting such grasping issues. In this work, we propose a learning methodology for detecting the direction of a slip (seven categories) using spatio-temporal tactile features learnt from a single tactile sensor. Tactile readings are pre-processed and fed to a ConvLSTM that learns to detect these directions from just 50 ms of data. We have extensively evaluated the performance of the system, achieving 82.56% accuracy in detecting the direction of slip on unseen objects with familiar properties. Work funded by the Spanish Ministry of Economy, Industry and Competitiveness through the project DPI2015-68087-R (predoctoral grant BES-2016-078290), as well as by the European Commission and FEDER funds through the COMMANDIA (SOE2/P1/F0638) action supported by Interreg-V Sudoe.
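
    A rough illustration of the kind of model the abstract describes: a small ConvLSTM classifier over short windows of tactile frames. The 16x16 taxel grid, the five frames per 50 ms window, and all layer widths are assumptions made for this sketch, not the authors' architecture; only the seven output classes and the 50 ms window come from the abstract.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        T, H, W = 5, 16, 16  # frames per 50 ms window and taxel grid (assumed)
        NUM_CLASSES = 7      # seven slip-direction categories (from the abstract)

        model = models.Sequential([
            layers.Input(shape=(T, H, W, 1)),           # sequence of tactile "images"
            layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                              return_sequences=False),  # spatio-temporal features
            layers.BatchNormalization(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    The ConvLSTM layer is what lets the network learn spatial features (via convolution) and their temporal evolution (via recurrence) in a single block, which is the property the abstract relies on for distinguishing slip directions.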

    Tactile-Driven Grasp Stability and Slip Prediction

    One of the challenges in robotic grasping tasks is detecting whether a grip is stable or not. A lack of stability during a manipulation operation usually causes the grasped object to slip due to poor contact forces. Frequently, an unstable grip is caused by an inadequate pose of the robotic hand, by insufficient contact pressure, or both. Tactile data are essential for checking such conditions and, therefore, for predicting the stability of a grasp. In this work, we present and compare different deep-learning methodologies for representing and processing tactile data for both stability and slip prediction. Work funded by the Spanish Ministries of Economy, Industry and Competitiveness and of Science, Innovation and Universities through the grant BES-2016-078290 and the project RTI2018-094279-B-100, respectively, as well as by the European Commission and FEDER funds through the COMMANDIA project (SOE2/P1/F0638), an action supported by Interreg-V Sudoe.
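
    A minimal sketch of one such deep-learning baseline: a small CNN that treats a single tactile pressure map as a one-channel image and outputs a stable/slipping probability. The 8x8 taxel grid and all layer sizes are assumptions for illustration, not the models compared in the paper.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        H, W = 8, 8  # assumed taxel grid of one tactile pad

        model = models.Sequential([
            layers.Input(shape=(H, W, 1)),
            layers.Conv2D(16, 3, padding="same", activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, padding="same", activation="relu"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(1, activation="sigmoid"),  # 1 = stable, 0 = slipping
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])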

    Neuromorphic vision-based tactile sensor for robotic grasp

    Tactile sensors are developed to mimic the human sense of touch in robotics. The sense of touch is essential for machines to interact with their environment. Several approaches have been studied to obtain rich information from the contact point in order to correct a robot's actions and acquire further information about the objects it handles. Vision-based tactile sensors aim to extract tactile information by observing the contact point between the robot's hand and the environment and applying computer vision algorithms. In this thesis, a novel class of vision-based tactile sensor is proposed, the "Neuromorphic Vision-Based Tactile Sensor", to estimate the contact force and classify materials in a grasp. This approach utilises a neuromorphic vision sensor to capture intensity changes (events) at the contact point. The triggered events represent changes in the contact force at each pixel with microsecond resolution. The proposed sensor has a high temporal resolution and dynamic range, which are suitable for high-speed robotic applications. Initially, a general framework is demonstrated to show the sensor's operation, and the relationship between events and the contact force is presented. Afterwards, methods based on Time-Delay Neural Networks (TDNN), Gaussian Processes (GP) and Deep Neural Networks (DNN) are developed to estimate the contact force and classify object materials from the accumulation of events. The results indicate a low mean squared error of 0.17 N against a reference force sensor for force estimation using the TDNN. Moreover, object materials are classified with 79.12% accuracy, which is 30% higher than with piezoresistive force sensors. This is followed by an approach to preserve spatio-temporal information during the learning process: the triggered events are framed (event-frames) within a time window to preserve spatial information. Multiple types of Long Short-Term Memory (LSTM) networks with convolutional layers are then developed to estimate the contact force for objects of different sizes. The results are validated against a force sensor and achieve a mean squared error of less than 0.1 N. Finally, algorithmic augmentation techniques are investigated to improve the networks' accuracy over a wider range of forces. Image-based and time-series augmentation methods are developed to generate artificial samples for training the networks, and a novel time-domain approach, Temporal Event Shifting (TES), is proposed to augment events while preserving their spatial information. The results are validated in real experiments, which indicate that the time-domain and hybrid augmentation methods significantly improve the networks' accuracy for objects of different sizes.
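
    A sketch of two of the ideas above under assumed conventions: events are tuples (t, x, y, polarity), as in common event-camera pipelines, accumulated into event-frames over a fixed window. The TES function below (jittering timestamps while leaving pixel coordinates untouched) is a hypothetical reading of the abstract's description, not the thesis' exact definition; the 128x128 resolution and 1 ms window are likewise assumptions.

        import numpy as np

        def events_to_frame(events, h=128, w=128, t0=0.0, dt=1e-3):
            """Accumulate signed events falling inside [t0, t0 + dt) into a frame."""
            frame = np.zeros((h, w), dtype=np.float32)
            for t, x, y, p in events:
                if t0 <= t < t0 + dt:
                    frame[int(y), int(x)] += 1.0 if p > 0 else -1.0
            return frame

        def temporal_event_shift(events, max_shift=1e-4, rng=None):
            """Hypothetical reading of Temporal Event Shifting (TES): jitter each
            timestamp while leaving (x, y) untouched, preserving spatial structure."""
            rng = rng or np.random.default_rng()
            return sorted((t + rng.uniform(-max_shift, max_shift), x, y, p)
                          for t, x, y, p in events)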

    Predicción de la Estabilidad en Tareas de Agarre Robótico con Información Táctil

    In robotic manipulation tasks, it is of particular interest to detect whether a grasp is stable or whether, on the contrary, the grasped object is slipping between the fingers due to inadequate contact. Frequently, grasp instability is a consequence of a poor pose of the robotic hand or gripper during execution and/or insufficient contact pressure while performing the task. The use of tactile information, and how it is represented, is vital for predicting grasp stability. In this work, different methodologies for representing tactile information are presented and compared, together with the learning methods best suited to each chosen tactile representation. This work has been funded by the European Regional Development Fund (FEDER) and the Ministry of Economy, Industry and Competitiveness through the project RTI2018-094279-B-100 and the predoctoral grant BES-2016-078290, and also with the support of the European Commission and the Interreg V Sudoe programme through the project SOE2/P1/F0638.
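
    As a toy illustration of the representation choice the abstract discusses, the same taxel readings can be arranged as 2-D tactile images (a natural input for CNNs) or kept as a time series of flattened vectors (a natural input for recurrent models); all shapes below are assumptions.

        import numpy as np

        raw = np.random.rand(20, 64)          # 20 time steps of 64 taxels (assumed)
        as_images = raw.reshape(20, 8, 8, 1)  # CNN-style input: (T, H, W, C)
        as_sequence = raw                     # recurrent-style input: (T, features)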

    Human-Machine Interfaces using Distributed Sensing and Stimulation Systems

    As technology moves towards more natural human-machine interfaces (e.g. bionic limbs, teleoperation, virtual reality), it is necessary to develop a sensory feedback system in order to foster embodiment and achieve better immersion in the control system. Contemporary feedback interfaces presented in research use few sensors and stimulation units to feed back at most two discrete variables (e.g. grasping force and aperture), whereas the human sense of touch relies on a distributed network of mechanoreceptors providing a wide bandwidth of information. To provide this type of feedback, it is necessary to develop a distributed sensing system that can extract a wide range of information during the interaction between the robot and the environment, together with a distributed feedback interface to deliver such information to the user. This thesis proposes the development of a distributed sensing system (e-skin) to acquire tactile sensation, a first integration of the distributed sensing system on a robotic hand, the development of a sensory feedback system that comprises the distributed sensing system and a distributed stimulation system, and finally the implementation of deep learning methods for the classification of tactile data. Its core focus is the development and testing of a sensory feedback system based on the latest distributed sensing and stimulation techniques. To this end, the thesis comprises two introductory chapters that describe the state of the art in the field, the objectives, the methodology used, and the contributions, as well as six studies that tackle the development of human-machine interfaces.

    Incipient Slip-Based Rotation Measurement via Visuotactile Sensing During In-Hand Object Pivoting

    In typical in-hand manipulation tasks represented by object pivoting, the real-time perception of rotational slippage has proven beneficial for improving the dexterity and stability of robotic hands. An effective strategy is to obtain the contact properties for measuring the rotation angle through visuotactile sensing. However, existing methods for rotation estimation do not consider the impact of incipient slip during the pivoting process, which introduces measurement errors and makes it hard to determine the boundary between stable contact and macro slip. This paper describes a generalized 2-D contact model under pivoting and proposes a rotation measurement method based on line features in the stick region. The proposed method was applied to Tac3D vision-based tactile sensors using continuous marker patterns. Experiments show that the rotation measurement system achieves an average static measurement error of 0.17 degrees and an average dynamic measurement error of 1.34 degrees. Moreover, the proposed method requires no training data and achieves real-time sensing during in-hand object pivoting.
    Comment: 7 pages, 9 figures, submitted to ICRA 202
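
    A sketch of the geometric core of such a measurement, assuming markers are already tracked inside the stick region: a standard 2-D least-squares (Kabsch-style) fit of the rigid rotation between reference and current marker positions. This is a generic stand-in for illustration, not the paper's line-feature formulation.

        import numpy as np

        def rotation_angle(p_ref, p_cur):
            """Estimate the rigid rotation (degrees) mapping p_ref -> p_cur,
            both (N, 2) arrays of marker positions in the stick region."""
            a = p_ref - p_ref.mean(axis=0)     # center both point sets
            b = p_cur - p_cur.mean(axis=0)
            h = a.T @ b                        # 2x2 cross-covariance matrix
            u, _, vt = np.linalg.svd(h)
            r = vt.T @ u.T                     # optimal rotation
            if np.linalg.det(r) < 0:           # guard against a reflection
                vt[-1, :] *= -1
                r = vt.T @ u.T
            return np.degrees(np.arctan2(r[1, 0], r[0, 0]))

    Restricting the fit to the stick region is the point of the paper's incipient-slip analysis: markers already slipping would bias a naive whole-contact fit.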

    Human Inspired Multi-Modal Robot Touch


    Embedded Artificial Intelligence for Tactile Sensing

    Electronic tactile sensing has become an active research field, whether for prosthetic applications, robotics, virtual reality or post-stroke patient rehabilitation. To achieve such sensing, an array of sensors is used to retrieve human-skin-like information, called electronic skin (e-skin). Humans, through their skin, are able to collect different types of information (e.g. pressure, temperature, texture) which is passed to the nervous system and finally to the brain, where high-level information is extracted from these sensory data. To make e-skin capable of such a task, the data acquired from it must be filtered, processed, and then conveyed to the user (or robot). Processing this sensory information should occur in real time, taking into consideration the power limitations of such applications, especially prosthetic ones. The power consumption itself is related to several factors: one is the complexity of the algorithm (e.g. the number of FLOPs), another is the memory consumption. In this thesis, I focus on the processing of real tactile information by (1) exploring different algorithms and methods for tactile data classification, (2) organizing and preprocessing such tactile data, and (3) implementing the result in hardware. More precisely, the focus is on deep learning algorithms for tactile data processing, mainly CNNs and RNNs, with energy-efficient embedded implementations. The proposed solution requires less memory, fewer FLOPs, and lower latency than the state of the art (including tensorial SVMs), applied to data from real tactile sensors. Keywords: e-skin, tactile data processing, deep learning, CNN, RNN, LSTM, GRU, embedded, energy-efficient algorithms, edge computing, artificial intelligence
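
    A sketch of the kind of compact network weighed in such a design space, with the parameter count printed as a crude proxy for memory footprint; the 8x8 e-skin patch, the depthwise-separable layers, and the three output classes are illustrative assumptions, not the thesis' networks.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        model = models.Sequential([
            layers.Input(shape=(8, 8, 1)),              # assumed e-skin patch
            layers.SeparableConv2D(8, 3, padding="same", activation="relu"),
            layers.MaxPooling2D(),
            layers.SeparableConv2D(16, 3, padding="same", activation="relu"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(3, activation="softmax"),      # e.g. 3 touch classes
        ])
        print("parameters:", model.count_params())     # small => embedded-friendly

    Depthwise-separable convolutions are a common way to cut both FLOPs and parameters relative to standard convolutions, which is the trade-off the abstract highlights for embedded deployment.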

    Action Conditioned Tactile Prediction: a case study on slip prediction

    Tactile predictive models can be useful across several robotic manipulation tasks, e.g. robotic pushing, robotic grasping, slip avoidance, and in-hand manipulation. However, available tactile prediction models have mostly been studied for image-based tactile sensors, and there is no comparison study indicating the best-performing models. In this paper, we present two novel data-driven action-conditioned models for predicting tactile signals during real-world physical robot interaction tasks: (1) an action-conditioned tactile prediction model and (2) an action-conditioned tactile-video prediction model. We use a magnetic-based tactile sensor, whose signals are challenging to analyse, to test state-of-the-art predictive models as well as the only existing bespoke tactile prediction model, and we compare the performance of these models with that of our proposed models. The comparison study uses our novel tactile-enabled dataset containing 51,000 tactile frames of a real-world robotic manipulation task with 11 flat-surfaced household objects. Our experimental results demonstrate the superiority of our proposed tactile prediction models in terms of qualitative, quantitative and slip prediction scores.
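
    A minimal sketch of an action-conditioned tactile predictor in the spirit described above: past tactile vectors and the aligned robot actions are fused, and an LSTM context predicts a short horizon of future tactile frames. The 15-dimensional tactile vector, 6-D action, window lengths, and layer sizes are assumptions for the sketch, not the paper's architecture.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        T_PAST, T_FUT = 10, 5
        TACTILE_DIM = 15   # e.g. magnetic taxel readings per frame (assumed)
        ACTION_DIM = 6     # e.g. an end-effector twist command (assumed)

        tactile_in = layers.Input(shape=(T_PAST, TACTILE_DIM))
        action_in = layers.Input(shape=(T_PAST, ACTION_DIM))  # aligned actions
        x = layers.Concatenate(axis=-1)([tactile_in, action_in])
        x = layers.LSTM(64)(x)                         # fused context vector
        x = layers.Dense(T_FUT * TACTILE_DIM)(x)
        out = layers.Reshape((T_FUT, TACTILE_DIM))(x)  # predicted tactile frames

        model = models.Model([tactile_in, action_in], out)
        model.compile(optimizer="adam", loss="mse")

    Conditioning on the planned actions is what lets such a model anticipate slip before it happens, rather than merely extrapolating the tactile signal.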