Learning To Grasp
Providing robots with the ability to grasp objects has, despite decades of research, remained a challenging problem. The problem is approachable in constrained environments where there is ample prior knowledge of the scene and the objects that will be manipulated. The challenge is in building systems that scale beyond specific situational instances and gracefully operate in novel conditions. In the past, heuristic and simple rule-based strategies were used to accomplish tasks such as scene segmentation or reasoning about occlusion. These heuristic strategies work in constrained environments where a roboticist can make simplifying assumptions about everything from the geometries of the objects to be manipulated to the level of clutter, camera position, lighting, and a myriad of other relevant variables. With these assumptions in place, it becomes tractable for a roboticist to hardcode desired behavior and build a robotic system capable of completing repetitive tasks. These hardcoded behaviors will quickly fail if the assumptions about the environment are invalidated. In this thesis we will demonstrate how a robust grasping system can be built that is capable of operating under a more variable set of conditions without requiring significant engineering of behavior by a roboticist.
This robustness is enabled by a newfound ability to empower novel machine learning techniques with massive amounts of synthetic training data. The ability of simulators to create realistic sensory data enables the generation of massive corpora of labeled training data for various grasping-related tasks. The use of simulation allows for the creation of a wide variety of environments and experiences, exposing the robotic system to a large number of scenarios before it ever operates in the real world. This thesis demonstrates that it is now possible to build systems that work in the real world trained using deep learning on synthetic data. The sheer volume of data that can be produced via simulation enables the use of powerful deep learning techniques whose performance scales with the amount of data available. This thesis will explore how deep learning and other techniques can be used to encode these massive datasets for efficient runtime use. The ability to train and test on synthetic data allows for quick iterative development of new perception, planning, and grasp execution algorithms that work in a large number of environments. Creative applications of machine learning and massive synthetic datasets are allowing robotic systems to learn skills and move beyond repetitive hardcoded tasks.
Intuitive Hand Teleoperation by Novice Operators Using a Continuous Teleoperation Subspace
Human-in-the-loop manipulation is useful when autonomous grasping is not
able to deal sufficiently well with corner cases or cannot operate fast enough.
Using the teleoperator's hand as an input device can provide an intuitive
control method but requires mapping between pose spaces which may not be
similar. We propose a low-dimensional and continuous teleoperation subspace
which can be used as an intermediary for mapping between different hand pose
spaces. We present an algorithm to project between pose space and teleoperation
subspace. We use a non-anthropomorphic robot to experimentally prove that it is
possible for teleoperation subspaces to effectively and intuitively enable
teleoperation. In experiments, novice users completed pick and place tasks
significantly faster using teleoperation subspace mapping than they did using
state-of-the-art teleoperation methods.
Comment: ICRA 2018, 7 pages, 7 figures, 2 tables
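The mapping described above can be illustrated with a minimal sketch: a human hand pose is projected into a shared low-dimensional subspace, and a point in that subspace is lifted into the robot hand's pose space. The dimensions and the random linear projections below are assumptions for illustration only; the paper's actual subspace parameterization and projection algorithm may differ.

```python
import numpy as np

# Hypothetical sizes: a 20-DOF human hand, a 4-DOF robot hand, and a
# 3-D teleoperation subspace (all assumed, not taken from the paper).
HUMAN_DOF, ROBOT_DOF, SUBSPACE_DIM = 20, 4, 3

rng = np.random.default_rng(0)
# Per-hand linear maps into and out of the shared subspace.
P_human = rng.standard_normal((SUBSPACE_DIM, HUMAN_DOF))  # human pose -> subspace
P_robot = rng.standard_normal((ROBOT_DOF, SUBSPACE_DIM))  # subspace -> robot pose

def human_to_subspace(human_pose: np.ndarray) -> np.ndarray:
    """Project a human hand pose into the low-dimensional subspace."""
    return P_human @ human_pose

def subspace_to_robot(z: np.ndarray) -> np.ndarray:
    """Lift a subspace point into the robot hand's pose space."""
    return P_robot @ z

# Teleoperation step: human pose in, robot pose out, via the subspace.
human_pose = rng.standard_normal(HUMAN_DOF)
robot_pose = subspace_to_robot(human_to_subspace(human_pose))
print(robot_pose.shape)  # (4,)
```

Because the subspace is continuous and hand-agnostic, the same intermediary can connect any human hand to any robot hand, including non-anthropomorphic ones, by swapping the per-hand projections.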
Ground Robotic Hand Applications for the Space Program study (GRASP)
This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating any ground operations performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or augmenting human operators performing physical tasks, include reduced costs, enhanced safety and reliability, and reduced processing turnaround time.
TransSC: Transformer-based Shape Completion for Grasp Evaluation
Currently, robotic grasping methods based on sparse partial point clouds have
attained great grasping performance on various objects, but they often
generate incorrect grasp candidates due to the lack of geometric information
about the object. In this work, we propose a novel and robust shape
completion model (TransSC). This model has a transformer-based encoder to
explore more point-wise features and a manifold-based decoder to exploit
more object details, taking a partial point cloud as input.
Quantitative experiments verify the effectiveness of the proposed shape
completion network and demonstrate it outperforms existing methods. Besides,
TransSC is integrated into a grasp evaluation network to generate a set of
grasp candidates. The simulation experiment shows that TransSC improves the
grasping generation result compared to the existing shape completion baselines.
Furthermore, our robotic experiment shows that with TransSC the robot is more
successful in grasping objects that are randomly placed on a support surface.
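The core of a transformer-based point cloud encoder is self-attention over per-point features, which lets every point aggregate geometric context from the whole partial cloud. The sketch below is not TransSC's actual architecture; it is a minimal single-head self-attention layer in NumPy, with the cloud size, feature width, and random weights all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N_POINTS, FEAT = 128, 16  # partial cloud size and feature width (assumed)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over point features (rows of X)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])        # scaled dot-product scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ V                             # context-mixed features

# Embed raw xyz points into a feature space, then attend across points.
points = rng.standard_normal((N_POINTS, 3))        # stand-in partial cloud
W_embed = rng.standard_normal((3, FEAT))
Wq, Wk, Wv = (rng.standard_normal((FEAT, FEAT)) for _ in range(3))
features = self_attention(points @ W_embed, Wq, Wk, Wv)
print(features.shape)  # (128, 16)
```

A decoder (manifold-based in TransSC) would then map these context-aware features back to a dense, completed point set, which downstream grasp evaluation consumes in place of the raw partial cloud.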
Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
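The arbitration idea above is often realized as a blend between the user's command and an autonomous policy, weighted by an adjustable assistance level. The paper's actual arbitration scheme is likely more sophisticated; the linear blend below is only an illustrative assumption showing how assistance can smooth a noisy low-dimensional BCI command.

```python
def arbitrate(u_human, u_auto, assistance):
    """Blend a human command with an autonomous command, element-wise.

    `assistance` in [0, 1]: 0 gives full user control, 1 full autonomy.
    (A hypothetical linear policy, not the framework's actual scheme.)
    """
    a = max(0.0, min(1.0, assistance))
    return [(1 - a) * h + a * r for h, r in zip(u_human, u_auto)]

# Example: a noisy 3-DOF decoded velocity pulled toward the planner's goal.
u_human = [0.9, -0.2, 0.1]  # erratic BCI-decoded command
u_auto = [0.5, 0.0, 0.0]    # autonomous reach toward the inferred target
blended = arbitrate(u_human, u_auto, assistance=0.5)
print(blended)  # approximately [0.7, -0.1, 0.05]
```

Raising `assistance` for harder tasks (or noisier decoding) trades user authority for reliability, which is the balance the adjustable-assistance framework tunes per operator and task.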