TactileGCN: A Graph Convolutional Network for Predicting Grasp Stability with Tactile Sensors
Tactile sensors provide useful contact data during interaction with an object, which can be used to learn to determine the stability of a grasp accurately. Most works in the literature represent tactile readings as plain feature vectors or matrix-like tactile images and use them to train machine learning models. In this work, we explore an alternative way of exploiting tactile information to predict grasp stability: graph-like representations of tactile data, which preserve the actual spatial arrangement of the sensor's taxels and their locality. In our experiments, we trained a Graph Neural Network to binary-classify grasps as stable or slippery. To train such a network and demonstrate its predictive capabilities for the problem at hand, we captured a novel dataset of approximately 5,000 three-fingered grasps across 41 objects for training and 1,000 grasps on 10 unknown objects for testing. Our experiments show that this novel approach can be effectively used to predict grasp stability.
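The abstract gives no implementation details; the sketch below is one plausible reading of the graph representation and classifier it describes, assuming PyTorch Geometric. The 4x4 taxel grid, 4-connectivity, and layer sizes are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of the graph-based idea, assuming PyTorch Geometric.
# The 4x4 taxel grid, 4-neighbour connectivity, and layer sizes are
# illustrative assumptions, not the paper's configuration.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool


def taxel_grid_graph(readings):
    """Build a graph whose nodes are the taxels of a 4x4 pad and whose
    edges connect 4-neighbours, preserving the sensor's spatial layout."""
    edges = []
    for r in range(4):
        for c in range(4):
            i = r * 4 + c
            if c < 3:
                edges += [[i, i + 1], [i + 1, i]]
            if r < 3:
                edges += [[i, i + 4], [i + 4, i]]
    edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()
    x = torch.tensor(readings, dtype=torch.float).view(16, 1)  # one pressure value per taxel
    return Data(x=x, edge_index=edge_index)


class GraspGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(1, 32)
        self.conv2 = GCNConv(32, 64)
        self.head = torch.nn.Linear(64, 2)  # stable vs. slippery

    def forward(self, data):
        x = F.relu(self.conv1(data.x, data.edge_index))
        x = F.relu(self.conv2(x, data.edge_index))
        x = global_mean_pool(x, data.batch)  # one vector per grasp
        return self.head(x)


pad = taxel_grid_graph([0.1] * 16)
pad.batch = torch.zeros(16, dtype=torch.long)  # single-graph "batch"
logits = GraspGCN()(pad)                       # shape (1, 2)
```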
MODELLING AND CONTROL OF MULTI-FINGERED ROBOT HAND USING INTELLIGENT TECHNIQUES
Research and development of robust multi-fingered robot hands (MFRH) have been going on for more than three decades, yet few can be found in industrial applications. The difficulties stem from many factors, one of which is the lack of general and effective control techniques for the manipulation of robot hands.
In this research, a MFRH with five fingers is proposed together with intelligent control algorithms. Initially, a mathematical model of the proposed MFRH is derived to obtain the forward kinematics, inverse kinematics, Jacobian, dynamics and the plant model. Thereafter, the MFRH is simulated using a PID controller, a Fuzzy Logic Controller, a Fuzzy-PID controller and a PID-PSO controller, and system performance is gauged based on parameters such as rise time, settling time and percent overshoot.
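The abstract does not list the controller code; as a rough illustration of the kind of evaluation it describes, the sketch below runs a discrete PID loop on a toy second-order joint model and computes the three reported metrics. All gains and plant constants are illustrative assumptions, not values from the thesis.

```python
# Hypothetical illustration: a discrete PID loop on a toy second-order
# joint model (m*theta'' + b*theta' = u), with the step-response metrics
# the thesis uses. Gains and plant constants are illustrative assumptions.
import numpy as np

dt, T = 0.001, 5.0
kp, ki, kd = 40.0, 20.0, 6.0            # illustrative PID gains
m, b = 1.0, 2.0                          # toy inertia and damping
setpoint = 1.0                           # unit step in joint angle (rad)

theta, omega, integral, prev_err = 0.0, 0.0, 0.0, setpoint
t = np.arange(0.0, T, dt)
y = np.empty_like(t)
for k in range(len(t)):
    err = setpoint - theta
    integral += err * dt
    u = kp * err + ki * integral + kd * (err - prev_err) / dt
    prev_err = err
    omega += (u - b * omega) / m * dt    # Euler step of the joint dynamics
    theta += omega * dt
    y[k] = theta

# Step-response metrics: rise time, 2% settling time, percent overshoot.
rise = t[np.argmax(y >= 0.9 * setpoint)] - t[np.argmax(y >= 0.1 * setpoint)]
overshoot = 100.0 * (y.max() - setpoint) / setpoint
outside = np.where(np.abs(y - setpoint) > 0.02 * setpoint)[0]
settle = t[outside[-1]] + dt if outside.size else 0.0
print(f"rise {rise:.3f}s  settle {settle:.3f}s  overshoot {overshoot:.1f}%")
```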
Smart Grasping using Laser and Tactile Array Sensors for UCF-MANUS: An Intelligent Assistive Robotic Manipulator
This thesis presents three improvements to the grasping abilities of the UCF-MANUS Assistive Robotic Manipulator. Firstly, the robot can now grasp objects that are deformable, heavy, or have uneven contact surfaces, such as a paper cup or a filled water bottle, without slippage during robotic operations. This is achieved by installing a high-precision non-contact laser sensor together with an algorithm that processes the raw sensor data and registers the smallest variations in the relative position of the object with respect to the gripper. Secondly, the robot can grasp objects as light and small as a single cereal grain without deforming them. To achieve this, a MEMS-barometer-based tactile sensor array that can measure forces as small as the equivalent of 1 gram is embedded into the gripper to enhance its pressure sensing capabilities. Thirdly, the robot gripper gloves are redesigned, aesthetically and conveniently, to accommodate the existing and newly added sensors, using 3D printing with lightweight ABS plastic as the fabrication material. Experiments with the newly designed system show that a high degree of adaptability to different kinds of objects can be attained, with better performance than the previous system.
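The thesis does not publish the laser algorithm; the sketch below is one plausible reading of the idea it describes: fix a baseline object position at grasp time, then flag slip and tighten the grip when the laser reading drifts past a small threshold. The threshold, the smoothing window, and the read_laser_mm()/increase_grip_force() callbacks are hypothetical placeholders, not the UCF-MANUS API.

```python
# Hypothetical sketch of the slip-detection idea described above. The
# threshold, smoothing window, and the two callbacks are illustrative
# placeholders, not the UCF-MANUS implementation.
from collections import deque


def monitor_slip(read_laser_mm, increase_grip_force,
                 threshold_mm=0.2, window=5):
    buf = deque(maxlen=window)
    # Establish the baseline object position right after the grasp closes.
    for _ in range(window):
        buf.append(read_laser_mm())
    baseline = sum(buf) / window
    while True:
        buf.append(read_laser_mm())
        smoothed = sum(buf) / window      # moving average rejects sensor noise
        if abs(smoothed - baseline) > threshold_mm:
            increase_grip_force()         # react to the detected slip
            baseline = smoothed           # re-reference after regrasping
```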
Learning To Grasp
Providing robots with the ability to grasp objects has, despite decades of research, remained a challenging problem. The problem is approachable in constrained environments where there is ample prior knowledge of the scene and the objects to be manipulated. The challenge is in building systems that scale beyond specific situational instances and operate gracefully in novel conditions. In the past, heuristic and simple rule-based strategies were used to accomplish tasks such as scene segmentation or reasoning about occlusion. These heuristic strategies work in constrained environments where a roboticist can make simplifying assumptions about everything from the geometries of the objects to be interacted with to the level of clutter, camera position, lighting, and a myriad of other relevant variables. With these assumptions in place, it becomes tractable for a roboticist to hardcode desired behavior and build a robotic system capable of completing repetitive tasks. These hardcoded behaviors will quickly fail if the assumptions about the environment are invalidated. In this thesis we demonstrate how a robust grasping system can be built that is capable of operating under a more variable set of conditions without requiring significant engineering of behavior by a roboticist.
This robustness is enabled by a newfound ability to empower novel machine learning techniques with massive amounts of synthetic training data. The ability of simulators to create realistic sensory data enables the generation of massive corpora of labeled training data for various grasping-related tasks. The use of simulation allows for the creation of a wide variety of environments and experiences, exposing the robotic system to a large number of scenarios before it ever operates in the real world. This thesis demonstrates that it is now possible to build systems, trained using deep learning on synthetic data, that work in the real world. The sheer volume of data that can be produced via simulation enables the use of powerful deep learning techniques whose performance scales with the amount of data available. This thesis explores how deep learning and other techniques can be used to encode these massive datasets for efficient runtime use. The ability to train and test on synthetic data allows for quick iterative development of new perception, planning and grasp-execution algorithms that work in a large number of environments. Creative applications of machine learning and massive synthetic datasets are allowing robotic systems to learn skills and move beyond repetitive hardcoded tasks.
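The thesis describes this simulate-label-train pipeline only at a high level; the sketch below shows its overall shape, with a toy stand-in for the simulator and a linear model in place of a deep network. Every function, feature, and labelling rule here is an illustrative assumption.

```python
# Illustrative shape of the simulate-label-train loop described above.
# A real pipeline would render sensor data and evaluate grasps in a
# physics simulator; a random "scene" and a synthetic rule stand in here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)


def simulate_grasp():
    """Stub for a simulator: returns (features, stable?) for one grasp."""
    features = rng.normal(size=8)            # stand-in for rendered sensor data
    stable = float(features[:4].sum() > 0)   # stand-in for a physics outcome
    return features, stable


# Massive synthetic corpora are cheap: just call the simulator in a loop.
X, y = map(np.array, zip(*(simulate_grasp() for _ in range(10_000))))
model = LogisticRegression().fit(X, y)       # a deep network in a real system

Xt, yt = map(np.array, zip(*(simulate_grasp() for _ in range(1_000))))
print("held-out accuracy:", model.score(Xt, yt))
```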