Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.
Comment: 10 pages, 19 figures
Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning
We present a developmental framework based on long-term memory and
reasoning mechanisms (Visual Similarity and Bayesian Optimisation). This
architecture allows a robot to autonomously optimize hyper-parameters that
need to be tuned for any action and/or vision module, treated as a black-box. The
learning can take advantage of past experiences (stored in the episodic and
procedural memories) in order to warm-start the exploration using a set of
hyper-parameters previously optimized from objects similar to the new unknown
one (stored in a semantic memory). As an example, the system has been used to
optimize 9 continuous hyper-parameters of a professional software package (Kamido),
both in simulation and with a real robot (industrial robotic arm Fanuc) with a
total of 13 different objects. The robot is able to find a good object-specific
optimization in 68 (simulation) or 40 (real) trials. In simulation, we
demonstrate the benefit of the transfer learning based on visual similarity, as
opposed to an amnesic learning (i.e. learning from scratch all the time).
Moreover, with the real robot, we show that the method consistently outperforms
manual optimization by an expert, requiring less than 2 hours of training time
to achieve a success rate above 88%.
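The warm-start mechanism described in this abstract can be sketched compactly. The toy below is purely illustrative: the object names, visual features, and hyper-parameter values are invented, a cosine similarity stands in for the paper's vision-similarity module, and simple random local search stands in for the actual Bayesian optimiser.

```python
import random

# Hypothetical semantic memory: hyper-parameters previously optimised
# for known objects, keyed by object name (all values invented).
semantic_memory = {
    "cube":     {"params": [0.8, 0.2], "visual_feature": [1.0, 0.0]},
    "cylinder": {"params": [0.3, 0.7], "visual_feature": [0.0, 1.0]},
}

def similarity(a, b):
    # Cosine similarity: a crude placeholder for the vision module.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def warm_start(new_feature):
    # Reuse the hyper-parameters stored for the most visually
    # similar known object, instead of starting from scratch.
    best = max(semantic_memory.values(),
               key=lambda m: similarity(m["visual_feature"], new_feature))
    return list(best["params"])

def optimise(objective, new_feature, trials=40, seed=0):
    # Random local search around the warm-started point; in the paper
    # this role is played by Bayesian optimisation of the black-box.
    rng = random.Random(seed)
    best_x = warm_start(new_feature)
    best_y = objective(best_x)
    for _ in range(trials):
        cand = [min(1.0, max(0.0, x + rng.gauss(0, 0.1))) for x in best_x]
        y = objective(cand)
        if y > best_y:
            best_x, best_y = cand, y
    return best_x, best_y
```

In this sketch `objective` would be the measured success rate of the black-box module on the new object; warm-starting simply means the search begins at a previously good configuration rather than a random one, which is what lets the system avoid "amnesic" learning from scratch.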
Deep Object-Centric Representations for Generalizable Robot Learning
Robotic manipulation in complex open-world scenarios requires both reliable
physical manipulation skills and effective and generalizable perception. In
this paper, we propose a method where general purpose pretrained visual models
serve as an object-centric prior for the perception system of a learned policy.
We devise an object-level attentional mechanism that can be used to determine
relevant objects from a few trajectories or demonstrations, and then
immediately incorporate those objects into a learned policy. A task-independent
meta-attention locates possible objects in the scene, and a task-specific
attention identifies which objects are predictive of the trajectories. The
scope of the task-specific attention is easily adjusted by showing
demonstrations with distractor objects or with diverse relevant objects. Our
results indicate that this approach exhibits good generalization across object
instances using very few samples, and can be used to learn a variety of
manipulation tasks using reinforcement learning.
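The split between a task-independent meta-attention (which proposes objects) and a task-specific attention (which keeps the predictive ones) can be illustrated with a minimal toy. Everything below is an assumption for illustration: the function names, the per-object tracks, and the use of correlation with demonstrated actions as a crude stand-in for the paper's learned attention.

```python
def mean(xs):
    return sum(xs) / len(xs)

def correlation(xs, ys):
    # Pearson correlation between an object's track and the actions.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def task_attention(object_tracks, actions, threshold=0.5):
    # object_tracks: {object_name: position per timestep}, as produced by
    # a task-independent detector (the "meta-attention" here).
    # actions: the demonstrated action at each timestep.
    # Keep only objects whose motion is predictive of the trajectory.
    return [name for name, track in object_tracks.items()
            if abs(correlation(track, actions)) >= threshold]
```

A distractor object that sits still while the demonstrated actions vary gets a near-zero score and is dropped, which mirrors how showing demonstrations with distractors narrows the scope of the task-specific attention.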