On the Use of Large Area Tactile Feedback for Contact Data Processing and Robot Control
The progress in microelectronics and embedded systems has recently enabled the realization of devices for robots functionally similar to the human skin, providing large area tactile feedback over the whole robot body.
The availability of such systems, commonly referred to as robot skins, makes it possible to measure the contact pressure distribution applied to the robot body over an arbitrary area.
Large area tactile systems open new scenarios for contact processing, at both the control and cognitive levels, enabling the interpretation of physical contacts.
This thesis addresses these topics by proposing techniques that exploit large area tactile feedback for: (i) contact data processing and classification; and (ii) robot control.
Dataset with Tactile and Kinesthetic Information from a Human Forearm and Its Application to Deep Learning
There are physical Human–Robot Interaction (pHRI) applications where the robot has to grab the human body, such as rescue or assistive robotics. Being able to precisely estimate the grasping location when grabbing a human limb is crucial for safe manipulation of the human. Computer vision methods provide pre-grasp information, but with strong constraints imposed by field environments. Force-based compliant control, after grasping, limits the amount of applied strength. On the other hand, valuable tactile and proprioceptive information can be obtained from the pHRI gripper, which can be used to better characterize the human and the contact state between the human and the robot. This paper presents a novel dataset of tactile and kinesthetic data obtained from a robot gripper that grabs a human forearm. The dataset is collected with a three-fingered gripper with two underactuated fingers and a fixed finger with a high-resolution tactile sensor. A palpation procedure is performed to record the shape of the forearm and to recognize the bones and muscles in different sections. Moreover, an application of the dataset is included: a fusion approach that estimates the actual grasped forearm section from both kinesthetic and tactile information using a deep-learning regression network. First, tactile and kinesthetic data are trained separately with Long Short-Term Memory (LSTM) neural networks, since the data are sequential. Then, the outputs are fed to a fusion neural network to enhance the estimation. The experiments conducted show good results when training each source separately, with superior performance when the fusion approach is used. This research was funded by the University of Málaga; the Ministerio de Ciencia, Innovación y Universidades, Gobierno de España, grant number RTI2018-093421-B-I00; and the European Commission, grant number BES-2016-078237.
Partial funding for open access charge: Universidad de Málaga.
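The late-fusion idea described in the abstract can be sketched as follows: two LSTM branches summarize the tactile and kinesthetic sequences independently, and a small fusion head regresses the grasped forearm section from their concatenated outputs. All layer sizes and names here are illustrative assumptions, not the authors' actual hyperparameters.

```python
# Hypothetical sketch of the paper's fusion approach: one LSTM per modality,
# outputs concatenated and fed to a small regression head.
import torch
import torch.nn as nn

class BranchLSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):                 # x: (batch, time, features)
        _, (h, _) = self.lstm(x)          # final hidden state summarizes the sequence
        return h[-1]                      # (batch, hidden_dim)

class FusionRegressor(nn.Module):
    def __init__(self, tactile_dim, kinesthetic_dim, hidden_dim=64):
        super().__init__()
        self.tactile = BranchLSTM(tactile_dim, hidden_dim)
        self.kinesthetic = BranchLSTM(kinesthetic_dim, hidden_dim)
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),             # regressed forearm section
        )

    def forward(self, tactile_seq, kinesthetic_seq):
        z = torch.cat([self.tactile(tactile_seq),
                       self.kinesthetic(kinesthetic_seq)], dim=1)
        return self.fusion(z)

# Dummy batch: 4 grasps, 50 time steps, 16 tactile / 6 kinesthetic channels (assumed sizes)
model = FusionRegressor(tactile_dim=16, kinesthetic_dim=6)
out = model(torch.randn(4, 50, 16), torch.randn(4, 50, 6))
print(out.shape)  # torch.Size([4, 1])
```

Training each branch alone first, as the paper reports, amounts to fitting each `BranchLSTM` with its own regression head before attaching the fusion layers.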
In-Material Processing of High Bandwidth Sensor Measurements Using Modular Neural Networks
Robotic materials are a novel class of materials that tightly integrate sensing, computing, and actuation into an engineered material or composite, so that the behavior of the material can be defined algorithmically. Robotic materials are constructed using an embedded network of computing nodes based on small, inexpensive microcontrollers. Examples of such materials include morphable airfoils that change shape in response to flight conditions or mission parameters, robotic skins with rich tactile sensing capabilities that recognize texture or touch gestures, clothing with tightly integrated sensing to assist with or augment the wearer's perception of the environment, and materials with dynamic camouflage capabilities. In this thesis, I develop a framework for in-material processing that tightly couples modularized deep neural networks and high-bandwidth sensors using a network of embedded, material-scale components. This framework enables materials to learn multiple desired responses to stimuli, avoiding the need for accurate modeling of the dynamics of the material and stimuli. I utilize a modular neural network design consisting of convolutional (CNN) and long short-term memory (LSTM) layers implemented in each node in the material as a computational approach for robotic materials. This network architecture allows nodes in the material to process local sensor values, maintain local state information, and communicate with nodes in a local neighborhood of the material. A multiobjective optimization approach is employed to automatically design neural network architectures that maximize the performance of the network while ensuring that hardware budgets, such as memory requirements, are maintained.
A communication network design is also developed that allows network modules to learn a communication protocol limiting communication to a desired rate, ensuring in-network bandwidth constraints are maintained. I demonstrate the suitability of this computational model for robotic materials with examples in several domains. An RF-based e-textile gesture input device capable of distinguishing between user control gestures is used to control arbitrary external devices. A tire with embedded piezoelectric sensing capabilities for use in high-performance autonomous vehicles performs state-of-the-art identification of the terrain driven on. Two robotic skins are presented: one capable of detecting and localizing contact and identifying the texture of contacting objects, and a second that assists with avoiding collisions with obstacles and identifies affective touch gestures performed by a human collaborator. Finally, a distributed approach to human activity recognition is presented whose activity identification performance is comparable to that of a centralized approach, but which can be implemented on hardware designed for wearable applications, as opposed to a GPU-enabled device. The examples demonstrate that robotic materials can perform significant in-material processing; are loosely coupled from a host system, communicating a minimal number of low-bandwidth events to the host; and can exhibit multifunctional behavior that is analyzed for safety or performance considerations.
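The per-node architecture described above (a CNN over local sensor data, an LSTM carrying local state, and a low-bandwidth message exchanged with neighbors) can be sketched roughly as below. The module name, layer sizes, and message dimensions are my own illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch of one in-material node: a 1-D CNN over the node's
# local sensor window, an LSTM cell for local state, and a small head that
# produces the low-bandwidth message shared with neighboring nodes.
import torch
import torch.nn as nn

class MaterialNode(nn.Module):
    def __init__(self, sensor_channels=1, state_dim=32,
                 message_dim=4, n_neighbors=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(sensor_channels, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # local CNN feature + incoming neighbor messages drive the recurrent state
        self.rnn = nn.LSTMCell(16 + n_neighbors * message_dim, state_dim)
        self.msg_head = nn.Linear(state_dim, message_dim)

    def forward(self, sensor_window, neighbor_msgs, state):
        feat = self.cnn(sensor_window)          # (batch, 16) local feature
        x = torch.cat([feat, neighbor_msgs], dim=1)
        h, c = self.rnn(x, state)               # update local state
        return self.msg_head(h), (h, c)         # outgoing message, new state

# One time step: 2 nodes, 64-sample sensor window, 4 neighbors x 4-dim messages
node = MaterialNode()
state = (torch.zeros(2, 32), torch.zeros(2, 32))
msg, state = node(torch.randn(2, 1, 64), torch.randn(2, 16), state)
print(msg.shape)  # torch.Size([2, 4])
```

Keeping `message_dim` small is one way to express the bandwidth constraint the thesis optimizes for: each node emits only a few values per step regardless of how much raw sensor data it consumes.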
Multimodal barometric and inertial measurement unit based tactile sensor for robot control
In this article, we present a low-cost multimodal tactile sensor capable of providing accelerometer, gyroscope, and pressure data using a seven-axis chip as the sensing element. This approach reduces the complexity of the tactile sensor design and of collecting multimodal data. The tactile device is composed of a top layer (a printed circuit board (PCB) and a sensing element), a middle layer (soft rubber material), and a bottom layer (plastic base), forming a sandwich structure. This structure allows the measurement of multimodal data when force is applied to different parts of the top layer of the sensor. The multimodal tactile sensor is validated with analyses and experiments both offline and in real time. First, the spatial impulse response and sensitivity of the sensor are analyzed with accelerometer, gyroscope, and pressure data systematically collected from the sensor. Second, the estimation of contact location across a range of sensor positions and force values is evaluated using accelerometer and gyroscope data together with a convolutional neural network (CNN). Third, the estimated contact location is used to control the position of a robot arm. The results show that the proposed multimodal tactile sensor has potential for robotic applications such as tactile perception for robot control, human–robot interaction, and object exploration.