Learning shape placements by example
We present a method to learn and propagate shape placements in 2D polygonal scenes from a few examples provided by a user. The placement of a shape is modeled as an oriented bounding box. Simple geometric relationships between this bounding box and nearby scene polygons define a feature set for the placement. The feature sets of all example placements are then used to learn a probabilistic model over all possible placements and scenes. With this model, we can generate a new set of placements with similar geometric relationships in any given scene. We introduce extensions that enable propagation and generation of shapes in 3D scenes, as well as the application of a learned modeling session to large scenes without additional user interaction. These concepts allow us to generate complex scenes containing thousands of objects with relatively little user interaction.
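The pipeline described in this abstract (geometric features of a bounding-box placement, a probabilistic model fitted to example features, likelihood scoring of candidate placements) can be sketched as follows. The specific features (distance and relative angle to the nearest scene edge) and the single-Gaussian model are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: learn a density over placement features from a few
# examples, then score new candidate placements by likelihood.
import numpy as np

def placement_features(box_center, box_angle, segments):
    # segments: array of (x1, y1, x2, y2) scene polygon edges.
    pts = segments.reshape(-1, 2, 2)
    mids = pts.mean(axis=1)
    d = np.linalg.norm(mids - box_center, axis=1)
    i = d.argmin()  # nearest scene edge
    edge = pts[i, 1] - pts[i, 0]
    edge_angle = np.arctan2(edge[1], edge[0])
    # Feature: distance to nearest edge, alignment with that edge.
    return np.array([d[i], np.cos(box_angle - edge_angle)])

def fit_model(feature_rows):
    # Fit a Gaussian over the example feature vectors (ridge for stability).
    X = np.asarray(feature_rows)
    return X.mean(axis=0), np.cov(X.T) + 1e-6 * np.eye(X.shape[1])

def score(features, mean, cov):
    # Unnormalized Gaussian likelihood of a candidate placement.
    diff = features - mean
    return float(np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)))
```

Candidate placements across a scene would then be ranked by `score`, keeping the highest-likelihood ones.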
Learning to Place New Objects
The ability to place objects in the environment is an important skill for a
personal robot. An object should not only be placed stably, but should also be
placed in its preferred location and orientation. For instance, a plate
should be inserted vertically into the slot of a dish rack rather than
laid horizontally in it. Unstructured environments such as homes contain
a large variety of object types as well as placing areas. Therefore, our
algorithms should be able to handle placing new object types and new placing
areas. These reasons make placing a challenging manipulation task. In this
work, we propose a supervised learning algorithm for finding good placements
given the point-clouds of the object and the placing area. It learns to combine
the features that capture support, stability and preferred placements using a
shared sparsity structure in the parameters. Even when neither the object nor
the placing area is seen previously in the training set, our algorithm predicts
good placements. In extensive experiments, our method enables the robot to
stably place several new objects in several new placing areas with a 98%
success rate, and it placed the objects in their preferred placements in
92% of the cases.
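As a hedged sketch of the general idea (not the paper's trained model), one can score a candidate placement by combining point-cloud features that capture support and stability with learned weights. The particular features and weights below are illustrative assumptions.

```python
# Sketch only: linear scoring of a candidate placement from simple
# point-cloud features for support and stability.
import numpy as np

def placement_score(obj_pts, area_pts, weights, support_eps=0.02):
    # obj_pts, area_pts: (N, 3) point clouds of object and placing area.
    # Support: fraction of object points resting just above an area point.
    dists = np.linalg.norm(
        obj_pts[:, None, :2] - area_pts[None, :, :2], axis=2)
    nearest_z = area_pts[dists.argmin(axis=1), 2]
    gap = obj_pts[:, 2] - nearest_z
    support = np.mean((gap >= 0) & (gap < support_eps))
    # Stability proxy: a lower centre of mass is more stable.
    com_height = obj_pts[:, 2].mean()
    features = np.array([support, -com_height])
    return float(weights @ features)
```

In the paper the weights are learned jointly across object and area categories with a shared sparsity structure; here they are simply given.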
Pick and Place Without Geometric Object Models
We propose a novel formulation of robotic pick and place as a deep
reinforcement learning (RL) problem. Whereas most deep RL approaches to robotic
manipulation frame the problem in terms of low level states and actions, we
propose a more abstract formulation. In this formulation, actions are target
reach poses for the hand and states are a history of such reaches. We show this
approach can solve a challenging class of pick-place and regrasping problems
where the exact geometry of the objects to be handled is unknown. The only
information our method requires is: 1) the sensor perception available to the
robot at test time; 2) prior knowledge of the general class of objects for
which the system was trained. We evaluate our method using objects belonging to
two different categories, mugs and bottles, both in simulation and on real
hardware. Results show a major improvement relative to a shape-primitives
baseline.
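The abstract formulation (actions are target reach poses for the hand; the state is the history of reaches taken so far) can be illustrated with a minimal episode loop. The pose set, environment interface, and ε-greedy selection here are placeholder assumptions, not the paper's system.

```python
# Sketch only: states are histories of reach poses, actions are the
# next target reach pose, chosen epsilon-greedily from a value table.
import random

REACH_POSES = [("pregrasp", i) for i in range(4)] + \
              [("place", i) for i in range(4)]

def episode(q_table, env_step, max_steps=4, eps=0.1):
    state = ()  # history of reach poses taken so far
    total = 0.0
    for _ in range(max_steps):
        if random.random() < eps:
            action = random.choice(REACH_POSES)
        else:
            # Greedy: pick the reach pose with the highest estimated value.
            action = max(REACH_POSES,
                         key=lambda a: q_table.get((state, a), 0.0))
        reward, done = env_step(state, action)
        total += reward
        state = state + (action,)  # the new state is the extended history
        if done:
            break
    return total, state
```

`env_step` stands in for the robot's perception and execution loop; no object geometry appears anywhere in the state or action spaces.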
Learning Manipulation under Physics Constraints with Visual Perception
Understanding physical phenomena is a key competence that enables humans and
animals to act and interact under uncertain perception in previously unseen
environments containing novel objects and their configurations. In this work,
we consider the problem of autonomous block stacking and explore solutions to
learning manipulation under physics constraints with visual perception inherent
to the task. Inspired by the intuitive physics in humans, we first present an
end-to-end learning-based approach to predict stability directly from
appearance, contrasting a more traditional model-based approach with explicit
3D representations and physical simulation. We study the model's behavior
together with an accompanying human-subject test. The model is then
integrated into a real-world robotic system to guide the placement of a
single wood block into the scene without collapsing the existing tower
structure. To further automate the stacking of consecutive blocks, we
present an alternative approach
where the model learns the physics constraint through the interaction with the
environment, bypassing the dedicated physics learning as in the former part of
this work. In particular, we are interested in the type of tasks that
require the agent to reach a given goal state that may differ for every
new trial. To this end, we propose a deep reinforcement learning framework
that learns policies for stacking tasks parametrized by a target
structure.

Comment: arXiv admin note: substantial text overlap with arXiv:1609.04861,
arXiv:1711.00267, arXiv:1604.0006
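A policy parametrized by a target structure can be sketched with a toy goal-conditioned tabular Q-learner: the value table is indexed by (state, goal, action), so the same learner serves a different target structure on every trial. The toy environment, discretization, and hyperparameters are all illustrative assumptions, far removed from the paper's deep network and real blocks.

```python
# Sketch only: goal-conditioned Q-learning for a toy stacking task.
# States and goals are tuples of column heights.
import random

COLUMNS = 3

def step(state, action, goal):
    heights = list(state)
    heights[action] += 1  # drop one block on the chosen column
    nxt = tuple(heights)
    done = nxt == goal or any(h > g for h, g in zip(nxt, goal))
    reward = 1.0 if nxt == goal else 0.0
    return nxt, reward, done

def train(goal, episodes=500, eps=0.3, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        state, done = (0,) * COLUMNS, False
        while not done:
            if rng.random() < eps:
                action = rng.randrange(COLUMNS)
            else:
                action = max(range(COLUMNS),
                             key=lambda a: q.get((state, goal, a), 0.0))
            nxt, reward, done = step(state, action, goal)
            best = 0.0 if done else max(
                q.get((nxt, goal, a), 0.0) for a in range(COLUMNS))
            old = q.get((state, goal, action), 0.0)
            # Standard Q-update, conditioned on the goal structure.
            q[(state, goal, action)] = old + alpha * (
                reward + gamma * best - old)
            state = nxt
    return q
```

Because the goal is part of the key (or, in the deep version, part of the network input), a single learned policy can be queried with whatever target structure the current trial demands.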