Robot eye-hand coordination learning by watching human demonstrations: a task function approximation approach
We present a robot eye-hand coordination learning method that can directly learn a visual task specification by watching human demonstrations. The task specification is represented as a task function, which is learned using inverse reinforcement learning (IRL) by inferring differential rewards between state changes. The learned task function is then used as a continuous feedback signal in an uncalibrated visual servoing (UVS) controller designed for the execution phase. Our proposed method can learn directly from raw videos, which removes the need for hand-engineered task specification. It also provides task interpretability by directly approximating the task function. Moreover, because it builds on a traditional UVS controller, the training process is efficient and the learned policy is independent of any particular robot platform. Various experiments show that, for a task of a given DOF, our method can adapt to task/environment variations in target positions, backgrounds, illumination, and occlusions without retraining.
Comment: Accepted in ICRA 201
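The execution phase described above amounts to an uncalibrated visual servoing loop: the image Jacobian is estimated online (here with a Broyden rank-1 update, a common choice in UVS) and the task-function error drives the joint motion. The following minimal sketch is illustrative; function names, gains, and the Broyden choice are assumptions, not the paper's implementation.

```python
import numpy as np

def broyden_update(J, dq, de, alpha=0.1):
    """Rank-1 Broyden update of the estimated image Jacobian.
    dq: joint motion just executed; de: observed change in task-function error."""
    denom = dq @ dq
    if denom > 1e-9:
        J = J + alpha * np.outer(de - J @ dq, dq) / denom
    return J

def uvs_step(J, e, lam=0.5):
    """One servoing step: joint velocity that reduces the task-function error e."""
    return -lam * np.linalg.pinv(J) @ e
```

In each control cycle, the robot applies `uvs_step`, observes the new task-function value, and refines `J` with `broyden_update`, so no camera-robot calibration is needed.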
Geometry-aware Manipulability Learning, Tracking and Transfer
Body posture influences human and robot performance in manipulation tasks, as appropriate poses facilitate motion or force exertion along different axes. In robotics, manipulability ellipsoids arise as a powerful descriptor to analyze, control, and design robot dexterity as a function of the articulatory joint configuration. This descriptor can be shaped according to different task requirements, such as tracking a desired position or applying a specific force. In this context, this paper presents a novel \emph{manipulability transfer} framework, a method that allows robots to learn and reproduce manipulability ellipsoids from expert demonstrations. The proposed learning scheme is built on a tensor-based formulation of a Gaussian mixture model that takes into account that manipulability ellipsoids lie on the manifold of symmetric positive definite matrices. Learning is coupled with a geometry-aware tracking controller that allows robots to follow a desired profile of manipulability ellipsoids. Extensive evaluations in simulation with redundant manipulators, a robotic hand, and humanoid agents, as well as an experiment with two real dual-arm systems, validate the feasibility of the approach.
Comment: Accepted for publication in the Intl. Journal of Robotics Research (IJRR). Website: https://sites.google.com/view/manipulability. Code: https://github.com/NoemieJaquier/Manipulability. 24 pages, 20 figures, 3 tables, 4 appendices
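For background on the descriptor above: the velocity manipulability ellipsoid of a robot Jacobian J is M = J Jᵀ, and comparing two ellipsoids in a geometry-aware way means measuring distance on the SPD manifold, for example with the affine-invariant metric. The sketch below shows only these two building blocks; the paper's tensor-based GMM and tracking controller are not reproduced, and all names are illustrative.

```python
import numpy as np

def manipulability_ellipsoid(J):
    """Velocity manipulability ellipsoid M = J J^T for a robot Jacobian J."""
    return J @ J.T

def _sym_fun(M, f):
    """Apply a scalar function f to a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(f(w)) @ V.T

def spd_distance(A, B):
    """Affine-invariant distance between SPD matrices:
    || log(A^{-1/2} B A^{-1/2}) ||_F, the metric commonly used when
    treating manipulability ellipsoids as points on the SPD manifold."""
    A_inv_sqrt = _sym_fun(A, lambda w: 1.0 / np.sqrt(w))
    return np.linalg.norm(_sym_fun(A_inv_sqrt @ B @ A_inv_sqrt, np.log), "fro")
```

Working with this metric rather than the Euclidean one is what "geometry-aware" refers to: interpolation and tracking errors then respect the SPD structure of the ellipsoids.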
A simple 5-DOF walking robot for space station application
Robots on the NASA space station have a potential range of applications, from assisting astronauts during EVA (extravehicular activity) to replacing astronauts in the performance of simple, dangerous, and tedious tasks, and performing routine tasks such as inspections of structures and utilities. To provide a vehicle for demonstrating the pertinent technologies, a simple robot is being developed for locomotion and basic manipulation on the proposed space station. In addition to the robot, an experimental testbed was developed, including a 1/3-scale (1.67 meter modules) truss and a gravity compensation system to simulate a zero-gravity environment. The robot comprises two flexible links connected by a rotary joint, with a 2-degree-of-freedom wrist joint and gripper at each end. The grippers screw into threaded holes in the nodes of the space station truss and enable the robot to walk by alternately shifting the base of support from one foot (gripper) to the other. Present efforts are focused on mechanical design, application of sensors, and development of control algorithms for lightweight, flexible structures. Long-range research will emphasize development of human interfaces to permit a range of control modes from teleoperated to semiautonomous, and coordination of robot/astronaut and multiple-robot teams.
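The alternating-gripper gait described above can be sketched as a small state machine: at each step the free gripper unscrews, swings past the supporting gripper to the next truss node, screws in, and the support roles swap. The node indexing and one-node stride below are assumptions for illustration.

```python
def walk(n_steps):
    """Inchworm-style gait: the free gripper swings one truss node past the
    supporting gripper, then the two grippers swap support roles."""
    pos = {"A": 0, "B": 1}   # the two grippers start on adjacent truss nodes
    base, free = "B", "A"    # the leading gripper is the current base of support
    for _ in range(n_steps):
        pos[free] = pos[base] + 1  # swing the free gripper one node ahead
        base, free = free, base    # swap support roles
    return pos
```

Each iteration corresponds to one unscrew-swing-screw cycle, so after two steps the robot has advanced two truss nodes.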
Tele-operated high speed anthropomorphic dextrous hands with object shape and texture identification
This paper reports on the development of two tele-operated high-speed anthropomorphic dextrous robotic hands. The aim of developing these hands was to achieve a system that seamlessly interfaces between humans and robots. To provide sensory feedback to a remote operator, tactile sensors were developed to be mounted on the robotic hands. Two sensing systems were developed: the first is a skin sensor capable of shape reconstruction, placed on the palm of the hand to feed back the shape of grasped objects; the second is a highly sensitive tactile array for surface texture identification.
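Texture identification from a tactile array is commonly done by extracting spectral features from the vibration signal as the sensor slides over a surface: rough textures concentrate energy at higher frequencies than smooth ones. The sketch below shows this generic approach, not the paper's specific pipeline; the function name and band count are assumptions.

```python
import numpy as np

def texture_features(signal, n_bands=4):
    """Band energies of a tactile vibration signal, usable as a simple
    texture descriptor (illustrative, not the paper's method)."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    bands = np.array_split(spectrum, n_bands)
    return np.array([np.sum(b ** 2) for b in bands])
```

A classifier (e.g. nearest-neighbor over these descriptors) can then distinguish surfaces, since smooth textures peak in the low bands and rough ones in the higher bands.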
Semantic Robot Programming for Goal-Directed Manipulation in Cluttered Scenes
We present the Semantic Robot Programming (SRP) paradigm as a convergence of robot programming by demonstration and semantic mapping. In SRP, a user can directly program a robot manipulator by demonstrating a snapshot of their intended goal scene in the workspace. The robot then parses this goal as a scene graph comprising object poses and inter-object relations, assuming known object geometries. Task and motion planning is then used to realize the user's goal from an arbitrary initial scene configuration. Even when faced with different initial scene configurations, SRP enables the robot to seamlessly adapt to reach the user's demonstrated goal. For scene perception, we propose the Discriminatively-Informed Generative Estimation of Scenes and Transforms (DIGEST) method to infer the initial and goal states of the world from RGBD images. The efficacy of SRP with DIGEST perception is demonstrated for the task of tray-setting with a Michigan Progress Fetch robot. Scene perception and task execution are evaluated with a public household occlusion dataset and our cluttered scene dataset.
Comment: Published in ICRA 201
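The scene-graph goal representation above can be sketched as object poses plus relations derived from them, for instance an "on" relation when one object sits directly above another. The object names, tolerance, and single relation type below are hypothetical illustrations, not SRP's actual relation set.

```python
def scene_graph(poses, xy_tol=0.05):
    """Build a toy scene graph from object poses.
    poses: {name: (x, y, z)}. An object is 'on' another when it is higher
    and horizontally aligned within xy_tol (illustrative heuristic)."""
    relations = []
    for a, (ax, ay, az) in poses.items():
        for b, (bx, by, bz) in poses.items():
            if a != b and az > bz and abs(ax - bx) < xy_tol and abs(ay - by) < xy_tol:
                relations.append((a, "on", b))
    return {"objects": poses, "relations": relations}
```

Because the goal is stored as relations rather than absolute poses, the same graph can be realized from any initial arrangement, which is what lets SRP adapt across initial scene configurations.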