283 research outputs found
Design and Development of Sensor Integrated Robotic Hand
Most automated systems that use robots as agents employ only a few sensors, according to need. There are situations, however, where the tasks carried out by the end-effector, or by the robot hand itself, require multiple sensors. To make the best use of these sensors and behave autonomously, the hand requires a set of appropriate sensor types, integrated in a proper manner.
The present research work aims at developing a sensor-integrated robot hand that can collect information related to the assigned tasks, assimilate it correctly and then act on the task as appropriate. The development process involves selecting sensors of the right types and specifications, locating them at proper places in the hand, checking their functionality individually and calibrating them for the envisaged process. Since the sensors need to be integrated so that they perform in the desired manner collectively, an integration platform is created using an NI PXIe-1082.
A set of algorithms is developed for achieving the integrated model. The entire process is first modelled and simulated offline, allowing for modification, to ensure that all the sensors contribute towards the autonomy of the hand for the desired activity.
This work also involves the design of a two-fingered gripper, made in such a way that it can carry out the desired tasks and accommodate all the sensors within its fold. The developed sensor-integrated hand has been put to work and its performance has been tested. The hand can be very useful for part-assembly work in industry, for parts of any shape within a limited size range.
The broad aim is to design, model, simulate and develop an advanced robotic hand. Sensors for pick-up contact, pressure, force, torque, position, and surface profile and shape, using suitable sensing elements, are to be introduced into the robot hand. The human hand is a complex structure with a large number of degrees of freedom and multiple sensing capabilities, apart from the associated sensing assistance from other organs. The present work is envisaged to add multiple sensors to a two-fingered robotic hand with motion capabilities and constraints similar to those of the human hand. Although there has been a good amount of research and development in this field during the last two decades, a lot remains to be explored and achieved.
The objective of the proposed work is to design, simulate and develop a sensor-integrated robotic hand. Potential applications lie in industrial environments and in the healthcare field. Industrial applications include electronic assembly tasks, light inspection tasks, etc. Applications in healthcare could be in the areas of rehabilitation and assistive techniques.
The work also aims to establish the requirements of the robotic hand for the target application areas, and to identify the suitable kinds and models of sensors that can be integrated into the hand control system. The functioning of the motors in the robotic hand and the integration of appropriate sensors for the desired motion are explained for the control of the various elements of the hand. Additional sensors, capable of collecting external information and information about the object to be manipulated, are explored.
Processes are designed using various software and hardware tools, such as MATLAB for mathematical computation, the OpenCV library and a LabVIEW 2013 DAQ system as applicable, validated theoretically and finally implemented to develop an intelligent robotic hand. The multiple smart sensors are installed on a standard six-degree-of-freedom industrial robot, a KAWASAKI RS06L articulated manipulator, with a two-finger pneumatic SCHUNK robotic hand or the designed prototype; the robot control programs are integrated in a manner that allows easy application of grasping in industrial pick-and-place operations where the characteristics of the object can vary or are unknown. The effectiveness of the recommended structure is demonstrated by experiments involving calibration of the sensors and the manipulator. The dissertation concludes with a summary of the contributions and the scope of further work.
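The sensor calibration step described above is commonly implemented as a least-squares linear fit from raw readings to physical units. The sketch below illustrates that pattern only; the sensor values, units, and function names are illustrative assumptions, not taken from the thesis:

```python
# Least-squares linear calibration: fit physical = gain * raw + offset
# from paired (raw reading, reference value) calibration samples.
# Illustrative sketch; a real setup would read raw counts from the DAQ.

def fit_linear_calibration(raw, ref):
    """Return (gain, offset) minimising sum((gain*r + offset - y)^2)."""
    n = len(raw)
    mean_r = sum(raw) / n
    mean_y = sum(ref) / n
    cov = sum((r - mean_r) * (y - mean_y) for r, y in zip(raw, ref))
    var = sum((r - mean_r) ** 2 for r in raw)
    gain = cov / var
    offset = mean_y - gain * mean_r
    return gain, offset

def apply_calibration(raw_value, gain, offset):
    """Convert a raw sensor reading to calibrated physical units."""
    return gain * raw_value + offset

# Example: hypothetical force-sensor ADC counts vs. reference forces (N).
raw_counts = [102, 205, 310, 412, 518]
ref_newtons = [1.0, 2.0, 3.0, 4.0, 5.0]
gain, offset = fit_linear_calibration(raw_counts, ref_newtons)
print(round(apply_calibration(300, gain, offset), 2))
```

The same fit would be repeated per sensor channel, since each sensor has its own gain and offset.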
Enhancing Grasp Pose Computation in Gripper Workspace Spheres
In this paper, an enhancement to a novel grasp planning algorithm based on gripper workspace spheres is presented. Our development requires a registered point cloud of the target from different views, assuming no prior knowledge of the object or any of its properties. This work features a new set of metrics for evaluating grasp pose candidates, as well as an exploration of the impact of high object sampling on grasp success rates. In addition to gripper position sampling, we now perform orientation sampling about the x, y and z axes; hence the grasping algorithm no longer requires object orientation estimation. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high degree-of-freedom (DoF) hand (Allegro hand) as proof of its versatility. Higher grasp success rates of 76% and 85.5%, respectively, have been reported in real-world experiments.
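The orientation sampling mentioned in the abstract can be sketched as enumerating candidate rotations about the x, y and z axes. The step size and the composition order below are illustrative assumptions, not the paper's parameters:

```python
# Sketch of gripper orientation sampling about the x, y and z axes.
# Each candidate is a 3x3 rotation matrix R = Rz * Ry * Rx.
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def sample_orientations(step_deg=90):
    """Enumerate candidate gripper rotations on a regular angular grid."""
    angles = [math.radians(a) for a in range(0, 360, step_deg)]
    candidates = []
    for ax in angles:
        for ay in angles:
            for az in angles:
                candidates.append(matmul(rot_z(az), matmul(rot_y(ay), rot_x(ax))))
    return candidates

print(len(sample_orientations(90)))  # 4 * 4 * 4 = 64 candidates
```

Each candidate orientation would then be combined with the sampled gripper positions and scored by the grasp metrics.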
Dexterous grasping of novel objects from a single view
In this thesis, a novel generative-evaluative method is proposed to solve the problem of dexterous grasping of novel objects from a single view. The generative model is learned from human demonstration. The grasps generated by the generative model are used to train the evaluative model, a deep evaluative network trained in simulation; two novel evaluative network architectures are proposed. The generative-evaluative method is tested on a real grasp data set with 49 previously unseen, challenging objects. It achieves a success rate of 78%, outperforming the purely generative method, which has a success rate of 57%. The thesis provides insights into the strengths and weaknesses of the generative-evaluative method by comparing different deep network architectures.
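The generative-evaluative pattern reduces to: propose grasp candidates, score them with a learned evaluator, execute the best. The sketch below uses stand-in stubs for both models; the grasp parameterisation and scoring rule are illustrative, not the thesis's networks:

```python
# Generate-then-evaluate grasping loop with stub models.
import random

def generate_grasps(n, rng):
    """Stub generative model: random (x, y, z, roll) grasp candidates."""
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1),
             rng.uniform(0, 0.5), rng.uniform(-3.14, 3.14)) for _ in range(n)]

def evaluate_grasp(grasp):
    """Stub evaluative model: prefer grasps near the object centre (0, 0)."""
    x, y, z, roll = grasp
    return -(x * x + y * y)

def best_grasp(n=100, seed=0):
    """Rank the generated candidates and return the highest-scoring one."""
    rng = random.Random(seed)
    candidates = generate_grasps(n, rng)
    return max(candidates, key=evaluate_grasp)

print(best_grasp())
```

In the thesis, the generator is learned from human demonstration and the evaluator is a deep network trained in simulation; only the overall control flow is shown here.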
Comparing Piezoresistive Substrates for Tactile Sensing in Dexterous Hands
While tactile skins have been shown to be useful for detecting collisions between a robotic arm and its environment, they have not been extensively used for improving robotic grasping and in-hand manipulation. We propose a novel sensor design for covering existing multi-fingered robot hands. We analyze the performance of four different piezoresistive materials using both fabric and anti-static foam substrates in benchtop experiments. We find that although the piezoresistive foam was designed as packing material and not for use as a sensing substrate, it performs comparably with fabrics specifically designed for this purpose. While these results demonstrate the potential of piezoresistive foams for tactile sensing applications, they do not fully characterize the efficacy of these sensors for use in robot manipulation. As such, we use a high-density foam substrate to develop a scalable tactile skin that can be attached to the palm of a robotic hand. We demonstrate several robotic manipulation tasks using this sensor to show its ability to reliably detect and localize contact, as well as to analyze contact patterns during grasping and transport tasks.
Comment: 10 figures, 8 pages, submitted to ICRA 202
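A piezoresistive taxel is typically read through a voltage divider: pressure lowers the sensor's resistance, which raises the voltage across a fixed reference resistor. The sketch below shows that conversion; the supply voltage, reference resistor and ADC range are illustrative assumptions, not the paper's hardware:

```python
# Invert a voltage divider to recover a piezoresistive sensor's
# resistance from an ADC reading: Vout = Vcc * R_ref / (R_ref + R_sensor).

V_CC = 3.3        # supply voltage (V), illustrative
R_REF = 10_000.0  # fixed divider resistor (ohms), illustrative
ADC_MAX = 1023    # 10-bit ADC, illustrative

def adc_to_sensor_resistance(adc_counts):
    """Convert ADC counts across R_ref to the sensor resistance in ohms."""
    v_out = V_CC * adc_counts / ADC_MAX
    return R_REF * (V_CC - v_out) / v_out

# A press lowers R_sensor, so ADC counts rise:
print(adc_to_sensor_resistance(200) > adc_to_sensor_resistance(800))  # True
```

Contact detection then reduces to thresholding the resistance (or its change) per taxel, with localization given by which taxels in the skin cross the threshold.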
A Robust Controller for Stable 3D Pinching using Tactile Sensing
This paper proposes a controller for stable grasping of unknown-shaped objects by two robotic fingers with tactile fingertips. The grasp is stabilised by rolling the fingertips on the contact surface and applying a desired grasping force to reach an equilibrium state. The validation is both in simulation and on a fully-actuated robot hand (the Shadow Modular Grasper) fitted with custom-built optical tactile sensors (based on the BRL TacTip). The controller requires the orientations of the contact surfaces, which are estimated by regressing a deep convolutional neural network over the tactile images. Overall, the grasp system is demonstrated to achieve stable equilibrium poses on various objects ranging in shape and softness, with the system being robust to perturbations and measurement errors. This approach also has promise to extend beyond grasping to stable in-hand object manipulation with multiple fingers.
Comment: 8 pages, 10 figures, 1 appendix. Accepted for publication in IEEE Robotics and Automation Letters and in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021). Supplemental video: https://youtu.be/rfQesw3FDA
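The equilibrium condition behind such a pinch grasp can be sketched as a force-balance check: the two contact forces must (approximately) cancel, and each must lie inside its friction cone. The geometry and friction coefficient below are illustrative assumptions; the paper estimates the contact orientations from tactile images rather than assuming them:

```python
# Static equilibrium check for a two-finger pinch grasp.
import math

def in_friction_cone(force, normal, mu):
    """Check that `force` deviates from the unit `normal` by <= atan(mu)."""
    dot = sum(f * n for f, n in zip(force, normal))
    norm_f = math.sqrt(sum(f * f for f in force))
    return dot / norm_f >= math.cos(math.atan(mu))

def stable_pinch(f1, f2, n1, n2, mu=0.5, tol=1e-6):
    """True if the contact forces cancel and both lie in their cones."""
    net = [a + b for a, b in zip(f1, f2)]
    balanced = math.sqrt(sum(c * c for c in net)) < tol
    return balanced and in_friction_cone(f1, n1, mu) and in_friction_cone(f2, n2, mu)

# Opposing 1 N forces along the (inward) contact normals: stable.
print(stable_pinch([1, 0, 0], [-1, 0, 0], [1, 0, 0], [-1, 0, 0]))  # True
```

Rolling the fingertips, as the controller does, effectively steers the contact normals until such a balanced configuration is reachable with the desired grip force.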
Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation
Interacting with the environment using hands is one of the distinctive abilities of humans with respect to other species. This aptitude is reflected in the crucial role played by object manipulation in the world that we have shaped for ourselves. With a view to bringing robots outside industry to support people during everyday life, the ability to manipulate objects autonomously and in unstructured environments is therefore one of the basic skills they need. Autonomous manipulation is characterized by great complexity, especially regarding the processing of sensor information to perceive the surrounding environment. Humans rely on vision for wide-ranging three-dimensional information, proprioception for awareness of the relative position of their own body in space, and the sense of touch for local information when physical interaction with objects happens. The study of autonomous manipulation in robotics aims at transferring similar perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve similar performance in manipulating objects. The great complexity of this task makes autonomous manipulation one of the open problems in robotics, one that has drawn increasing research attention in recent years.
In this thesis, we propose possible solutions to some key components of autonomous manipulation, focusing in particular on the perception problem and testing the developed approaches on the humanoid robotic platform iCub. When available, vision is the first source of information to be processed for inferring how to interact with objects. The object modeling and grasping pipeline based on superquadric functions that we designed meets this need, since it reconstructs the object's 3D model from a partial point cloud and computes a suitable hand pose for grasping the object.
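Superquadrics are compact implicit shape models: the standard inside-outside function evaluates to less than 1 inside the shape, 1 on its surface, and more than 1 outside, which is what makes them convenient for fitting partial point clouds. The parameter values below are illustrative:

```python
# Superquadric inside-outside function with semi-axes (a1, a2, a3)
# and shape exponents (e1, e2). F < 1 inside, F = 1 on the surface,
# F > 1 outside.
def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    term_xy = (abs(x / a1) ** (2 / e2) + abs(y / a2) ** (2 / e2)) ** (e2 / e1)
    return term_xy + abs(z / a3) ** (2 / e1)

# A unit sphere is the superquadric with a1 = a2 = a3 = 1, e1 = e2 = 1:
print(superquadric_F(1, 0, 0, 1, 1, 1, 1, 1))             # 1.0 (on the surface)
print(superquadric_F(0.2, 0.2, 0.2, 1, 1, 1, 1, 1) < 1)   # True (inside)
```

Fitting then amounts to finding the parameters (and pose) that bring F as close to 1 as possible over the observed points, after which candidate hand poses can be computed on the recovered surface.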
Retrieving object information with touch sensors only is a relevant skill that becomes crucial when vision is occluded, as happens for instance during physical interaction with the object. We addressed this problem with the design of a novel tactile localization algorithm, named the Memory Unscented Particle Filter, capable of localizing and recognizing objects relying solely on 3D contact points collected on the object surface. Another key aspect of autonomous manipulation we report on in this thesis is bi-manual coordination: the execution of more advanced manipulation tasks may in fact require the use and coordination of two arms. Tool use, for instance, often requires a proper in-hand object pose that can be obtained via dual-arm re-grasping. In pick-and-place tasks, the initial and target positions of the object sometimes do not belong to the same arm's workspace, requiring one hand to lift the object and the other to place it in the new position. In this regard, we implemented a pipeline for executing the handover task, i.e. the sequence of actions for autonomously passing an object from one robot hand to the other.
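The core idea of localizing from contact points alone can be shown with a toy example: candidate object poses are weighted by how close the measured contacts lie to the hypothesised surface. The sphere model, radius, noise level and candidate set below are all illustrative; the Memory Unscented Particle Filter in the thesis is far more sophisticated:

```python
# Toy tactile localization: score candidate centres of a sphere of
# known radius by the likelihood of the contact points' surface
# residuals under Gaussian noise.
import math

RADIUS = 0.05  # known object radius (m), illustrative

def contact_likelihood(centre, contacts, sigma=0.005):
    """Product of Gaussian likelihoods of (distance - radius) residuals."""
    w = 1.0
    for p in contacts:
        dist = math.sqrt(sum((pi - ci) ** 2 for pi, ci in zip(p, centre)))
        residual = dist - RADIUS
        w *= math.exp(-0.5 * (residual / sigma) ** 2)
    return w

# Contacts gathered on a sphere centred at (0.1, 0, 0):
contacts = [(0.15, 0.0, 0.0), (0.1, 0.05, 0.0), (0.1, 0.0, -0.05)]
candidates = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
best = max(candidates, key=lambda c: contact_likelihood(c, contacts))
print(best)  # (0.1, 0.0, 0.0)
```

A particle filter generalises this by maintaining many weighted pose hypotheses and resampling them as new contacts arrive, rather than scoring a fixed candidate list.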
The contributions described thus far address specific subproblems of the more complex task of autonomous manipulation. This differs from what humans actually do, in that humans develop their manipulation skills by learning through experience and trial-and-error strategies. A proper mathematical formulation for encoding this learning approach is given by Deep Reinforcement Learning, which has recently proved successful in many robotics applications. For this reason, in this thesis we also report on the six-month experience carried out at the Berkeley Artificial Intelligence Research laboratory with the goal of studying Deep Reinforcement Learning and its application to autonomous manipulation.