Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
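The arbitration between human input and autonomous control described above is often realized as a linear blend between the two command streams, with an adjustable assistance level. The following is a minimal sketch of that idea; the function name `arbitrate` and the fixed blending parameter `alpha` are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of linear arbitration between a noisy user command
# and an autonomous controller's command (shared control). The names and
# the simple convex blend are illustrative, not the paper's method.

def arbitrate(user_cmd, auto_cmd, alpha):
    """Blend a low-dimensional user velocity command with an autonomous
    command. alpha in [0, 1] is the assistance level:
    0 = pure user control, 1 = fully autonomous."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return [(1.0 - alpha) * u + alpha * a
            for u, a in zip(user_cmd, auto_cmd)]

# Example: a noisy 3-D user velocity nudged halfway toward the
# autonomous planner's output.
blended = arbitrate([0.2, -0.1, 0.0], [0.5, 0.0, 0.1], alpha=0.5)
```

In practice the assistance level would be adapted online from inferred user intent and task difficulty rather than held fixed.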
Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations.
Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
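The sample-and-rank pattern common to the surveyed data-driven methods can be sketched as follows. The `Grasp` features and the weighted scoring heuristic are invented for illustration; real systems learn the quality function from data.

```python
# Illustrative sketch of data-driven grasp synthesis as "sample
# candidates, score each, rank": the features and weights below are
# hypothetical stand-ins for a learned quality model.

from dataclasses import dataclass

@dataclass
class Grasp:
    approach_alignment: float  # cosine of angle to surface normal, in [0, 1]
    finger_contact: float      # fraction of fingers making contact, in [0, 1]

def score(g):
    # Hypothetical quality function: favour approaches aligned with the
    # surface normal and grasps with full finger contact.
    return 0.6 * g.approach_alignment + 0.4 * g.finger_contact

def rank(candidates):
    # Highest-scoring candidate grasp first.
    return sorted(candidates, key=score, reverse=True)

best = rank([Grasp(0.9, 0.5), Grasp(0.7, 1.0), Grasp(0.3, 0.9)])[0]
```

For known objects the score would typically come from a precomputed grasp database after pose estimation; for unknown objects it would be predicted directly from perceptual features.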
Simultaneous Tactile Exploration and Grasp Refinement for Unknown Objects
This paper addresses the problem of simultaneously exploring an unknown
object to model its shape, using tactile sensors on robotic fingers, while also
improving finger placement to optimise grasp stability. In many situations, a
robot will have only a partial camera view of the near side of an observed
object, for which the far side remains occluded. We show how an initial grasp
attempt, based on an initial guess of the overall object shape, yields tactile
glances of the far side of the object which enable the shape estimate and
consequently the successive grasps to be improved. We propose a grasp
exploration approach using a probabilistic representation of shape, based on
Gaussian Process Implicit Surfaces. This representation enables initial partial
vision data to be augmented with additional data from successive tactile
glances. This is combined with a probabilistic estimate of grasp quality to
refine grasp configurations. When choosing the next set of finger placements, a
bi-objective optimisation method is used to mutually maximise grasp quality and
improve shape representation during successive grasp attempts. Experimental
results show that the proposed approach yields stable grasp configurations more
efficiently than a baseline method, while also yielding an improved shape
estimate of the grasped object.
Comment: IEEE Robotics and Automation Letters. Preprint version. Accepted February 202
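A Gaussian Process Implicit Surface represents the object as the zero level set of a GP-regressed function, so each tactile glance both refines the shape estimate (posterior mean) and reduces uncertainty (posterior variance) that can guide the next touch. The following is a minimal one-dimensional sketch of that idea; the RBF kernel, length scale, and noise level are illustrative assumptions, not the paper's settings.

```python
# Minimal 1-D sketch of a Gaussian Process Implicit Surface: training
# points carry implicit-function targets (0 on the surface, positive
# outside), and the GP posterior yields both a shape estimate (mean)
# and an uncertainty (variance) that a planner could use to pick the
# next tactile glance. Kernel and noise values are assumptions.

import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Observations: two surface contacts (value 0) and one exterior point (+1).
X = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.0])
noise = 1e-6

K = rbf(X, X) + noise * np.eye(len(X))
Xq = np.array([0.5, 3.0])               # query locations
Ks = rbf(Xq, X)
weights = np.linalg.solve(K, y)
mean = Ks @ weights                      # predicted implicit value
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)

# The distant query (x = 3.0) has much higher posterior variance than
# the interpolated one (x = 0.5), so a bi-objective planner would weigh
# touching there to improve the shape representation.
```

The bi-objective optimisation in the paper would trade this variance reduction against a probabilistic grasp-quality estimate when selecting finger placements.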
Visuo-Haptic Grasping of Unknown Objects through Exploration and Learning on Humanoid Robots
This thesis addresses the grasping of unknown objects by humanoid robots. Visual information is combined with haptic exploration to generate grasp hypotheses. In addition, a grasp metric is learned from simulated training data; it scores the grasp hypotheses by their probability of success and selects the one with the highest estimate. The selected hypothesis is then executed with a reactive control strategy to grasp the object. The two core contributions of this work are the haptic exploration of unknown objects and the grasping of unknown objects using a novel data-driven grasp metric.
Review on human-like robot manipulation using dexterous hands
In recent years, robotic hands modelled on the human hand, known as dexterous hands, have gained attention for their capability to handle soft materials compared with traditional grippers. In the early days, developing a hand model close to the human hand was infeasible, but with advances in technology, dexterous hands with three, four, or five fingers have been developed to mimic the human hand. Human-like manipulation with dexterous hands, however, remains a challenge to this date. This review therefore focuses on (a) the history and motivation behind the development of dexterous hands, (b) a brief overview of the available multi-fingered hands, and (c) learning-based methods, both traditional and data-driven, for manipulating dexterous hands. It also discusses the open challenges in the manipulation of multi-fingered or dexterous hands.