67 research outputs found
A Deep Probabilistic Framework for Heterogeneous Self-Supervised Learning of Affordances
The perception of affordances provides an action-centered parametric representation of the environment. By perceiving an object's visual features in terms of the actions they afford, novel behavior opportunities can be inferred for previously unseen objects. In this paper, a flexible deep probabilistic framework is proposed that allows an explorative agent to learn tool-object affordances in continuous space. To this end, we use a deep variational auto-encoder with heterogeneous probability distributions to infer the most probable action that achieves a desired effect, or to predict a parametric probability distribution over action consequences, i.e., effects. Our experiments show that the method generalizes to unseen objects and tools, and we analyze the influence of different design choices. Our framework goes beyond other proposals by incorporating probability distributions tailored to each individual modality and by eliminating the need for any pre-processing of the data.
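The "heterogeneous probability distributions" mentioned above can be illustrated with a minimal sketch: a decoder that scores each effect modality under its own likelihood (Gaussian for a continuous effect, Bernoulli for a binary one) and sums the per-modality log-likelihoods. The modality names and numeric values here are invented for illustration and are not taken from the paper.

```python
import numpy as np

def gaussian_logpdf(x, mu, log_var):
    # continuous modality, e.g. a push distance in metres (hypothetical)
    return -0.5 * (np.log(2 * np.pi) + log_var
                   + (x - mu) ** 2 / np.exp(log_var))

def bernoulli_logpmf(x, p):
    # binary modality, e.g. an "object toppled" flag (hypothetical)
    return x * np.log(p) + (1 - x) * np.log(1 - p)

# joint log-likelihood of one observed effect: each modality uses the
# distribution suited to its type, then the terms are summed
effect_ll = gaussian_logpdf(0.32, mu=0.30, log_var=np.log(0.01)) \
          + bernoulli_logpmf(1, p=0.9)
```

In a full variational auto-encoder these per-modality terms would form the reconstruction part of the evidence lower bound; the point of the sketch is only that mixing distribution families lets each modality be modeled on its natural scale.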
Learning How a Tool Affords by Simulating 3D Models from the Web
Thanks to: UoAs ABVenture Zone, N. Petkov, K. Georgiev, B. Nougier, S. Fichtl, S. Ramamoorthy, M. Beetz, A. Haidu, J. Alexander, M. Schoeler, N. Pugeault, D. Cruickshank, M. Chung and N. Khan. Paulo Abelha is on a PhD studentship supported by the Brazilian agency CAPES through the program Science without Borders. Frank Guerin received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. Published in: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). DOI: 10.1109/IROS.2017.8206372. Date of Conference: 24-28 Sept. 2017. Conference Location: Vancouver, BC, Canada. Postprint.
Invariant Feature Mappings for Generalizing Affordance Understanding Using Regularized Metric Learning
This paper presents an approach for learning invariant features for object affordance understanding. One of the major problems for a robotic agent acquiring a deeper understanding of affordances is finding sensory-grounded semantics. Being able to understand what in the representation of an object makes it afford an action opens up possibilities for more efficient manipulation, interchange of objects that are not visually similar, transfer learning, and robot-to-human communication. Our approach uses a metric learning algorithm that learns a feature transform encouraging objects that afford the same action to be close in the feature space. We regularize the learning such that irrelevant features are penalized, allowing the agent to link which parts of the sensory input caused the object to afford the action. From this, we show how the agent can abstract the affordance and reason about the similarity between different affordances.
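The core idea of this abstract, a regularized metric that pulls same-affordance objects together while pruning irrelevant features, can be sketched in a few lines. This toy uses a diagonal Mahalanobis weight vector with an L1 penalty and invented data (feature 0 is affordance-relevant, features 1-2 are noise); it is an illustration of the general technique, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy objects: feature 0 encodes the affordance-relevant property,
# features 1-2 are irrelevant noise (all values invented)
X_pos = np.c_[rng.normal(1.0, 0.1, 20), rng.normal(0, 1, (20, 2))]
X_neg = np.c_[rng.normal(-1.0, 0.1, 20), rng.normal(0, 1, (20, 2))]

def pair_sq(A, B):
    # squared per-feature differences for all cross pairs
    return ((A[:, None, :] - B[None, :, :]) ** 2).reshape(-1, A.shape[1])

same = np.vstack([pair_sq(X_pos, X_pos), pair_sq(X_neg, X_neg)])
diff = pair_sq(X_pos, X_neg)

w = np.ones(3)                 # diagonal metric: d_w(x,y) = sum_k w_k (x_k - y_k)^2
lam, lr, margin = 0.05, 0.01, 4.0
for _ in range(500):
    g = same.mean(0)           # pull same-affordance pairs together
    viol = diff[diff @ w < margin]
    if len(viol):
        g -= viol.mean(0)      # push violating different-affordance pairs apart
    g += lam * np.sign(w)      # L1 penalty prunes irrelevant features
    w = np.clip(w - lr * g, 0, None)
```

After training, the weights on the noise features shrink toward zero while the relevant feature keeps a large weight, which is the sense in which the learned metric "links what in the sensory input caused the object to afford the action".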
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
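The "de facto standard formulation" referred to above is maximum-a-posteriori estimation over a factor graph, usually solved as (nonlinear) least squares. A minimal 1-D sketch, with invented odometry and loop-closure measurements and no real SLAM library, shows the shape of the computation; in 1-D the problem is linear, so one solve suffices.

```python
import numpy as np

# each factor: (i, j, measured displacement x_j - x_i, information weight)
# values are a made-up toy trajectory
factors = [(0, 1, 1.1, 1.0),   # odometry x0 -> x1
           (1, 2, 1.0, 1.0),   # odometry x1 -> x2
           (0, 2, 2.0, 2.0)]   # loop closure x0 -> x2 (more confident)

n = 3
H = np.zeros((n, n))           # information matrix J^T W J
b = np.zeros(n)                # J^T W z
for i, j, z, w in factors:
    J = np.zeros(n)
    J[i], J[j] = -1.0, 1.0     # Jacobian of the residual x_j - x_i - z
    H += w * np.outer(J, J)
    b += w * z * J
H[0, 0] += 1e6                 # gauge prior: anchor x0 at the origin
x = np.linalg.solve(H, b)      # MAP estimate of the three poses
```

The loop closure redistributes the odometry error across the chain, which is exactly the effect that distinguishes SLAM from dead reckoning; in 2-D/3-D the same structure recurs inside an iterative Gauss-Newton or Levenberg-Marquardt loop.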