Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque, and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control, and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques, and applications developed by Spanish researchers to implement these mono-sensor controllers and multi-sensor controllers which combine several sensors.
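As a concrete illustration of the first strategy the survey covers, classical image-based visual servoing drives an image-feature error to zero through the interaction matrix, commanding a camera velocity v = -λ L⁺ (s - s*). The sketch below uses this standard textbook formulation; all variable names are illustrative and not taken from any surveyed system.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classical image-based visual servoing law: v = -gain * pinv(L) @ (s - s*).

    s      -- current image-feature vector (e.g. stacked pixel coordinates)
    s_star -- desired feature vector at the goal pose
    L      -- interaction (image Jacobian) matrix relating feature motion
              to the camera's 6-DoF spatial velocity
    Returns the commanded camera velocity screw (vx, vy, vz, wx, wy, wz).
    """
    error = s - s_star
    # The pseudo-inverse handles redundant or deficient feature sets.
    return -gain * np.linalg.pinv(L) @ error
```

Force and tactile controllers follow the same closed-loop pattern, substituting a wrench or contact-signal error for the image-feature error.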
NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis
Expert demonstrations are a rich source of supervision for training visual
robotic manipulation policies, but imitation learning methods often require
either a large number of demonstrations or expensive online expert supervision
to learn reactive closed-loop behaviors. In this work, we introduce SPARTN
(Synthetic Perturbations for Augmenting Robot Trajectories via NeRF): a
fully-offline data augmentation scheme for improving robot policies that use
eye-in-hand cameras. Our approach leverages neural radiance fields (NeRFs) to
synthetically inject corrective noise into visual demonstrations, using NeRFs
to generate perturbed viewpoints while simultaneously calculating the
corrective actions. This requires no additional expert supervision or
environment interaction, and distills the geometric information in NeRFs into a
real-time reactive RGB-only policy. In a simulated 6-DoF visual grasping
benchmark, SPARTN improves success rates by 2.8× over imitation learning
without the corrective augmentations and even outperforms some methods that use
online supervision. It additionally closes the gap between RGB-only and RGB-D
success rates, eliminating the previous need for depth sensors. In real-world
6-DoF robotic grasping experiments from limited human demonstrations, our
method improves absolute success rates by 22.5% on average, including
objects that are traditionally challenging for depth-based methods. See video
results at https://bland.website/spartn.
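A minimal sketch of the augmentation loop the abstract describes, assuming hypothetical helpers for NeRF rendering and pose-noise sampling (`nerf_render` and `sample_perturbation` are placeholders, not SPARTN's API, and actions are treated as relative 4x4 transforms for illustration): each demonstration frame's camera pose is perturbed, a NeRF renders the perturbed view, and the new action label first undoes the perturbation before executing the original action.

```python
import numpy as np

def augment_demo(frames, nerf_render, sample_perturbation, noise_scale=0.01):
    """Schematic SPARTN-style offline augmentation (hypothetical helpers).

    frames: list of (camera_pose, expert_action) pairs from a demonstration,
            with poses and relative actions as 4x4 homogeneous transforms.
    nerf_render(pose) -> RGB image synthesized from a NeRF fit to the demo.
    sample_perturbation(scale) -> small random 4x4 pose offset.
    """
    augmented = []
    for pose, action in frames:
        delta = sample_perturbation(noise_scale)    # inject pose noise
        noisy_pose = pose @ delta                   # perturbed eye-in-hand view
        image = nerf_render(noisy_pose)             # novel-view synthesis
        corrective = np.linalg.inv(delta) @ action  # undo the noise, then act
        augmented.append((image, corrective))
    return augmented
```

Because both the rendering and the label arithmetic happen offline, no extra expert time or environment interaction is needed, which is the point the abstract emphasizes.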
In-home and remote use of robotic body surrogates by people with profound motor deficits
By controlling robots comparable to the human body, people with profound
motor deficits could potentially perform a variety of physical tasks for
themselves, improving their quality of life. The extent to which this is
achievable has been unclear due to the lack of suitable interfaces by which to
control robotic body surrogates and a dearth of studies involving substantial
numbers of people with profound motor deficits. We developed a novel, web-based
augmented reality interface that enables people with profound motor deficits to
remotely control a PR2 mobile manipulator from Willow Garage, which is a
human-scale, wheeled robot with two arms. We then conducted two studies to
investigate the use of robotic body surrogates. In the first study, 15 novice
users with profound motor deficits from across the United States controlled a
PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a
simulated self-care task. Participants achieved clinically meaningful
improvements on the ARAT and 12 of 15 participants (80%) successfully completed
the simulated self-care task. Participants agreed that the robotic system was
easy to use, was useful, and would provide a meaningful improvement in their
lives. In the second study, one expert user with profound motor deficits had
free use of a PR2 in his home for seven days. He performed a variety of
self-care and household tasks, and also used the robot in novel ways. Taking
both studies together, our results suggest that people with profound motor
deficits can improve their quality of life using robotic body surrogates, and
that they can gain benefit with only low-level robot autonomy and without
invasive interfaces. However, methods to reduce the rate of errors and increase
operational speed merit further investigation.
Toward Image-Guided Automated Suture Grasping Under Complex Environments: A Learning-Enabled and Optimization-Based Holistic Framework
To realize a higher level of autonomy in surgical knot tying for minimally invasive surgery (MIS), automated suture grasping, which bridges the suture stitching and looping procedures, is an important yet challenging task that needs to be achieved. This paper presents a holistic framework with image-guided and automation techniques to robotize this operation even under complex environments. The whole task is initialized by suture segmentation, in which we propose a novel semi-supervised learning architecture featuring a suture-aware loss to pertinently learn the suture's slender structure using both annotated and unannotated data. With successful segmentation in the stereo camera views, we develop a Sampling-based Sliding Pairing (SSP) algorithm to optimize the suture's 3D shape online. By jointly studying the robotic configuration and the suture's spatial characteristics, a target function is introduced to find the optimal grasping pose of the surgical tool under Remote Center of Motion (RCM) constraints. To compensate for inherent errors and practical uncertainties, a unified grasping strategy with a novel vision-based mechanism is introduced to accomplish the grasping task autonomously. Our framework is extensively evaluated, from learning-based segmentation and 3D reconstruction to image-guided grasping, on the da Vinci Research Kit (dVRK) platform, where we achieve high performance and success rates in perception and robotic manipulation. These results prove the feasibility of our approach in automating the suture grasping task; this work fills the gap between automated surgical stitching and looping, stepping towards a higher level of task autonomy in surgical knot tying.
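One way to picture the RCM-constrained pose search is as a small nonlinear program: reach the reconstructed 3D grasp point on the suture while keeping the tool shaft on a line through the trocar. The sketch below uses a soft penalty and illustrative cost terms and weights; it is not the paper's target function.

```python
import numpy as np
from scipy.optimize import minimize

def rcm_grasp_pose(grasp_point, trocar, x0, penalty=100.0):
    """Minimal sketch of an RCM-constrained grasp-pose search.

    Decision variables x = [tool tip (3), shaft direction (3)]. The cost
    rewards reaching the suture grasp point and penalizes any shaft line
    that does not pass through the trocar (the Remote Center of Motion).
    """
    def cost(x):
        tip, shaft = x[:3], x[3:]
        shaft = shaft / (np.linalg.norm(shaft) + 1e-9)     # keep unit length
        reach = np.linalg.norm(tip - grasp_point)          # reach the suture
        # Distance from the trocar to the line through tip with direction shaft:
        to_trocar = trocar - tip
        rcm_err = np.linalg.norm(to_trocar - (to_trocar @ shaft) * shaft)
        return reach + penalty * rcm_err                   # soft RCM constraint
    return minimize(cost, x0, method="Nelder-Mead")
```

A hard-constrained solver or a kinematics-aware parameterization would be the more rigorous choice; the penalty form simply keeps the idea visible in a few lines.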
Deep Visual Foresight for Planning Robot Motion
A key challenge in scaling up robot learning to many skills and environments
is removing the need for human supervision, so that robots can collect their
own data and improve their own performance without being limited by the cost of
requesting human feedback. Model-based reinforcement learning holds the promise
of enabling an agent to learn to predict the effects of its actions, which
could provide flexible predictive models for a wide range of tasks and
environments, without detailed human supervision. We develop a method for
combining deep action-conditioned video prediction models with model-predictive
control that uses entirely unlabeled training data. Our approach does not
require a calibrated camera, an instrumented training set-up, nor precise
sensing and actuation. Our results show that our method enables a real robot to
perform nonprehensile manipulation -- pushing objects -- and can handle novel
objects not seen during training. (ICRA 2017; supplementary video:
https://sites.google.com/site/robotforesight)
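The planning loop pairs the learned video predictor with sampling-based model-predictive control. Below is a minimal random-shooting sketch under stated assumptions: `predict_pixels` is a placeholder for the action-conditioned prediction model, and the cost (distance of user-designated pixels from their goal positions in the final predicted frame) is one illustrative objective.

```python
import numpy as np

def visual_mpc_step(obs, goal_pixels, predict_pixels,
                    horizon=10, n_samples=200, action_dim=4):
    """Random-shooting MPC sketch (illustrative, not the authors' exact code).

    predict_pixels(obs, actions) -> (horizon, n_pixels, 2) array: where the
    video prediction model expects designated pixels to move under a
    candidate action sequence (assumed placeholder API).
    goal_pixels: (n_pixels, 2) desired final positions of those pixels.
    """
    best_cost, best_plan = np.inf, None
    for _ in range(n_samples):
        actions = np.random.uniform(-1.0, 1.0, (horizon, action_dim))
        predicted = predict_pixels(obs, actions)            # roll model forward
        cost = np.linalg.norm(predicted[-1] - goal_pixels)  # final pixel error
        if cost < best_cost:
            best_cost, best_plan = cost, actions
    return best_plan[0]  # receding horizon: execute one action, then replan
```

Replanning after every executed action is what makes the pushing behavior reactive despite the model's imperfect long-horizon predictions.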
ToF cameras for active vision in robotics
ToF cameras are now a mature technology that is being widely adopted to provide sensory input to robotic applications. Depending on the nature of the objects to be perceived and the viewing distance, we distinguish two groups of applications: those requiring capture of the whole scene and those centered on an object. We demonstrate that it is in this last group of applications, in which the robot has to locate and possibly manipulate an object, that the distinctive characteristics of ToF cameras can be better exploited. After presenting the physical sensor features and the calibration requirements of such cameras, we review some representative works, highlighting for each one which of the distinctive ToF characteristics have been most essential. Even at low resolution, the acquisition of 3D images at frame rate is one of the most important features, as it enables quick background/foreground segmentation. A common use is in combination with classical color cameras. We present three developed applications, using a mobile robot and a robotic arm, to exemplify with real images some of the stated advantages.
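To make the frame-rate advantage concrete, here is a toy depth-band segmentation of the kind that per-pixel ToF range data permits without any color model; the range thresholds and the zero-means-invalid convention are assumptions for illustration.

```python
import numpy as np

def foreground_mask(depth, min_range=0.3, max_range=1.5):
    """Toy sketch of the fast background/foreground split enabled by
    frame-rate ToF depth: keep pixels inside a working-distance band
    around the object of interest.

    depth: HxW array of range values in meters, with 0 marking invalid
    ToF returns (an assumed convention).
    """
    valid = depth > 0
    return valid & (depth >= min_range) & (depth <= max_range)
```

In an object-centered application, such a mask can gate a registered color image so that subsequent recognition or grasp planning only sees the candidate object region.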