Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations.
Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
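The survey's three-way taxonomy could be sketched as a simple dispatcher: known objects get grasps retrieved via recognition, familiar objects get grasps transferred by similarity matching, and unknown objects fall back to grasp-indicative features. This is a toy illustration, not code from the survey; all names, databases, and the single "width" feature are hypothetical.

```python
# Toy sketch of the known / familiar / unknown taxonomy for data-driven
# grasp synthesis. The object representation (a single width feature) and
# all database contents are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Obj:
    id: str
    width: float  # toy perceptual feature used for similarity and heuristics

def most_similar(obj, familiar_db, tol=0.05):
    """Similarity matching against previously encountered objects."""
    best = min(familiar_db,
               key=lambda k: abs(familiar_db[k]["width"] - obj.width),
               default=None)
    if best is not None and abs(familiar_db[best]["width"] - obj.width) <= tol:
        return best
    return None

def grasps_from_features(obj):
    """Unknown objects: extract features indicative of good grasps."""
    return [("top_grasp", obj.width / 2)]

def synthesize_grasp(obj, known_db, familiar_db):
    if obj.id in known_db:                    # known: recognition + pose, reuse stored grasps
        return known_db[obj.id]["grasps"]
    match = most_similar(obj, familiar_db)
    if match is not None:                     # familiar: transfer by similarity
        return familiar_db[match]["grasps"]
    return grasps_from_features(obj)          # unknown: feature-based heuristic

known = {"mug_01": {"grasps": [("handle_grasp", 0.02)]}}
familiar = {"cup": {"width": 0.08, "grasps": [("rim_grasp", 0.04)]}}

print(synthesize_grasp(Obj("mug_01", 0.09), known, familiar))   # known path
print(synthesize_grasp(Obj("tumbler", 0.08), known, familiar))  # familiar path
print(synthesize_grasp(Obj("rock", 0.30), known, familiar))     # unknown path
```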
Behavior-specific proprioception models for robotic force estimation: a machine learning approach
Robots that support humans in physically demanding tasks require accurate force sensing capabilities. A common way to achieve this is by monitoring the interaction with the environment directly with dedicated force sensors. Major drawbacks of such special-purpose sensors are the increased costs and the reduced payload of the robot platform. Instead, this thesis investigates how the functionality of such sensors can be approximated by utilizing force estimation approaches. Most of today's robots are equipped with rich proprioceptive sensing capabilities, where even a robotic arm, e.g., the UR5, provides access to more than a hundred sensor readings. Following this trend, it is becoming feasible to utilize a wide variety of sensors for force estimation purposes. Human proprioception allows estimating forces, such as the weight of an object, from prior experience with sensory-motor patterns. Applying a similar approach to robots enables them to learn from previous demonstrations without the need for dedicated force sensors.
This thesis introduces Behavior-Specific Proprioception Models (BSPMs), a novel concept for enhancing robotic behavior with estimates of the expected proprioceptive feedback. A main methodological contribution is the operationalization of the BSPM approach using data-driven machine learning techniques. During a training phase, the behavior is continuously executed while recording proprioceptive sensor readings. The training data acquired from these demonstrations represents ground truth about behavior-specific sensory-motor experiences, i.e., the influence of performed actions and environmental conditions on the proprioceptive feedback. This data acquisition procedure does not require expert knowledge about the particular robot platform, e.g., kinematic chains or mass distribution, which is a major advantage over analytical approaches. The training data is then used to learn BSPMs, e.g., using lazy learning techniques or artificial neural networks. At runtime, the BSPMs provide estimates of the proprioceptive feedback that can be compared to actual sensations. The BSPM approach thus extends classical programming-by-demonstration methods, where only movement data is learned, and enables robots to accurately estimate forces during behavior execution.
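The train-then-compare loop described above could be sketched with a minimal lazy-learning model (nearest-neighbor lookup is one instance of the lazy learning techniques the abstract mentions). This is a hypothetical illustration, not the thesis code; the one-dimensional motor state and the recorded readings are made-up toy data.

```python
# Hypothetical sketch of a Behavior-Specific Proprioception Model (BSPM):
# a lazy-learning (1-nearest-neighbor) model that predicts the expected
# proprioceptive reading for a motor state, so that deviations at runtime
# can indicate external forces. All data here is illustrative.

class BSPM:
    def __init__(self):
        self.demos = []  # recorded (motor_state, proprioceptive_reading) pairs

    def train(self, motor_states, readings):
        """Record ground-truth sensory-motor pairs from demonstrations."""
        self.demos = list(zip(motor_states, readings))

    def expected_reading(self, motor_state):
        """Lazy learning: return the reading of the nearest recorded state."""
        nearest = min(self.demos, key=lambda d: abs(d[0] - motor_state))
        return nearest[1]

    def force_residual(self, motor_state, actual_reading):
        """Deviation from expected feedback, indicative of external force."""
        return actual_reading - self.expected_reading(motor_state)

# Training phase: execute the behavior while logging proprioception.
model = BSPM()
model.train(motor_states=[0.0, 0.5, 1.0], readings=[0.1, 0.4, 0.9])

# Runtime: an unexpectedly high reading suggests an external load.
print(model.force_residual(0.5, actual_reading=1.0))  # residual of 0.6
```

A real BSPM would map high-dimensional motor commands to many sensor channels, but the structure (record demonstrations, predict expected feedback, compare at runtime) is the same.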
Jointly Optimizing Placement and Inference for Beacon-based Localization
The ability of robots to estimate their location is crucial for a wide
variety of autonomous operations. In settings where GPS is unavailable,
measurements of transmissions from fixed beacons provide an effective means of
estimating a robot's location as it navigates. The accuracy of such a
beacon-based localization system depends both on how beacons are distributed in
the environment, and how the robot's location is inferred based on noisy and
potentially ambiguous measurements. We propose an approach for making these
design decisions automatically and without expert supervision, by explicitly
searching for the placement and inference strategies that, together, are
optimal for a given environment. Since this search is computationally
expensive, our approach encodes beacon placement as a differentiable neural layer
that interfaces with a neural network for inference. This formulation allows us
to employ standard techniques for training neural networks to carry out the
joint optimization. We evaluate this approach on a variety of environments and
settings, and find that it is able to discover designs that enable high
localization accuracy.
Comment: Appeared at the 2017 International Conference on Intelligent Robots and Systems (IROS).
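The core idea, making placement a differentiable parameter so standard gradient descent can optimize it jointly with the inference model, can be illustrated on a deliberately tiny problem. The following is a toy sketch, not the paper's architecture: one beacon in a 1-D world, a noiseless signed-offset measurement model, a linear "inference network", and hand-derived gradients.

```python
# Toy joint optimization of beacon placement (b) and inference weights (w, c)
# by gradient descent. Measurement model and all parameters are simplifying
# assumptions for illustration; the paper uses neural networks and realistic
# signal models.

def train(positions, steps=2000, lr=0.1):
    b, w, c = 0.9, 0.1, 0.0  # beacon placement and linear inference weights
    n = len(positions)
    for _ in range(steps):
        gb = gw = gc = 0.0
        for x in positions:
            m = x - b                  # differentiable measurement model
            e = (w * m + c) - x        # localization error for this position
            gw += 2 * e * m / n        # d(loss)/dw
            gb += -2 * e * w / n       # d(loss)/db: gradient flows into placement
            gc += 2 * e / n            # d(loss)/dc
        # joint gradient step on placement and inference together
        b, w, c = b - lr * gb, w - lr * gw, c - lr * gc
    return b, w, c

robot_positions = [i / 10 for i in range(11)]  # sampled environment locations
b, w, c = train(robot_positions)

def estimate(x):
    return w * (x - b) + c  # infer location from the beacon measurement

print(abs(estimate(0.35) - 0.35) < 1e-3)  # should print True once converged
```

The point of the toy is that the same loss trains both the "where to put the beacon" parameter and the "how to decode measurements" parameters, which is the joint design search the abstract describes.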