Deep Affordance-grounded Sensorimotor Object Recognition
It is well-established by cognitive neuroscience that human perception of
objects constitutes a complex process, where object appearance information is
combined with evidence about the so-called object "affordances", namely the
types of actions that humans typically perform when interacting with them. This
fact has recently motivated the "sensorimotor" approach to the challenging task
of automatic object recognition, where both information sources are fused to
improve robustness. In this work, the aforementioned paradigm is adopted,
surpassing current limitations of sensorimotor object recognition research.
Specifically, the deep learning paradigm is introduced to the problem for the
first time, developing a number of novel neuro-biologically and
neuro-physiologically inspired architectures that utilize state-of-the-art
neural networks for fusing the available information sources in multiple ways.
The proposed methods are evaluated using a large RGB-D corpus, which is
specifically collected for the task of sensorimotor object recognition and is
made publicly available. Experimental results demonstrate the utility of
affordance information for object recognition, achieving up to a 29% relative
error reduction through its inclusion.
Comment: 9 pages, 7 figures, dataset link included, accepted to CVPR 201
Affordances, context and sociality
Affordances, i.e. the opportunities for action offered by the environment, are one of the central research topics for the theoretical perspectives that view cognition as emerging from the interaction between the environment and the body. Sitting at the bridge between perception and action, affordances help to question a dichotomous view of perception and action. While Gibson’s view of affordances is mainly externalist, many contemporary approaches define affordances (and micro-affordances) as the product of long-term visuomotor associations in the brain. These studies have emphasized the fact that affordances are activated automatically, independently of the context and of any prior intention to act: for example, affordances related to objects’ size would emerge even if the task does not require focusing on size. This emphasis on the automaticity of affordances has led researchers to overlook their flexibility and context-dependency. In this contribution I outline and discuss recent perspectives and evidence that reveal the flexibility and context-dependency of affordances, clarifying how they are modulated by the physical, cultural and social context. I focus specifically on social affordances, i.e. on how the perception of affordances might be influenced by the presence of multiple actors having different goals.
Learning grasp affordance reasoning through semantic relations
Reasoning about object affordances allows an autonomous agent to perform
generalised manipulation tasks among object instances. While current approaches
to grasp affordance estimation are effective, they are limited to a single
hypothesis. We present an approach for detection and extraction of multiple
grasp affordances on an object via visual input. We define semantics as a
combination of multiple attributes, which yields benefits in terms of
generalisation for grasp affordance prediction. We use Markov Logic Networks to
build a knowledge base graph representation to obtain a probability
distribution of grasp affordances for an object. To harvest the knowledge base,
we collect and make available a novel dataset that relates different semantic
attributes. We achieve reliable mappings of the predicted grasp affordances on
the object by learning prototypical grasping patches from several examples. We
show our method's generalisation capabilities on grasp affordance prediction
for novel instances and compare with similar methods in the literature.
Moreover, using a robotic platform, in simulated and real scenarios, we
evaluate the success of the grasping task when conditioned on the grasp
affordance prediction.
Comment: Accepted in IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS) 201
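The core idea above, relating an object's semantic attributes to a probability distribution over grasp affordances, can be illustrated with a minimal sketch. This is not the paper's Markov Logic Network; it is a crude counting-based stand-in, and every attribute name, affordance label, and observation below is invented for illustration:

```python
from collections import Counter

# Hypothetical knowledge base: observed (attribute set -> grasp affordance) pairs.
# All names and counts here are made up for illustration only.
OBSERVATIONS = [
    ({"rigid", "handle"}, "handle-grasp"),
    ({"rigid", "handle"}, "handle-grasp"),
    ({"rigid", "handle"}, "top-grasp"),
    ({"deformable"}, "pinch-grasp"),
    ({"rigid", "flat"}, "top-grasp"),
]

def affordance_distribution(attributes):
    """Return an estimate of P(grasp affordance | attributes) by counting
    observations whose attribute sets overlap the query, weighting each
    observation by the size of the overlap."""
    counts = Counter()
    for attrs, affordance in OBSERVATIONS:
        overlap = len(attrs & attributes)
        if overlap:
            counts[affordance] += overlap
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()} if total else {}

# Query a novel object described only by its semantic attributes.
dist = affordance_distribution({"rigid", "handle"})
```

The point of the sketch is only that multiple grasp hypotheses come out ranked by probability, rather than a single hard prediction; the actual paper obtains this distribution by probabilistic inference over a learned relational knowledge base.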
Context-Independent Task Knowledge for Neurosymbolic Reasoning in Cognitive Robotics
One of the main current goals of artificial intelligence and robotics research is the creation of an artificial assistant with flexible, human-like behavior for accomplishing everyday tasks. Much of what is, to a human, context-independent task knowledge is what enables this flexibility at multiple levels of cognition. In this scope the author analyzes how to acquire, represent and disambiguate symbolic knowledge representing context-independent task knowledge abstracted from multiple instances: this thesis elaborates the problems encountered, implementation constraints, current state-of-the-art practices and, ultimately, the solutions newly introduced in this scope. The author specifically discusses the acquisition of context-independent task knowledge from large amounts of human-written text and its reusability in the robotics domain; the acquisition of knowledge on human musculoskeletal dependencies constraining motion, which allows a better higher-level representation of observed trajectories; and the means of verbalizing partial contextual and instruction knowledge, increasing interaction possibilities with the human as well as contextual adaptation. All the aforementioned points are supported by evaluation in heterogeneous setups, to give a view on how to make optimal use of statistical and symbolic approaches (i.e. neurosymbolic reasoning) in cognitive robotics. This work has been performed to enable context-adaptable artificial assistants by bringing together knowledge on what is usually regarded as context-independent task knowledge.
Action-Related Representations
Theories of grounded cognition state that there is a meaningful connection between action and cognition. Although these claims are widely accepted, the nature and structure of this connection are far from clear and still a matter of controversy. This book argues for a type of cognitive representation that essentially combines cognition and action, and which is foundational for higher-order cognitive capacities.
Affordances in Psychology, Neuroscience, and Robotics: A Survey
The concept of affordances appeared in psychology during the late 1960s as an alternative perspective on the visual perception of the environment. Its revolutionary intuition was that the way living beings perceive the world is deeply influenced by the actions they are able to perform. Over the last 40 years, it has influenced many applied fields, e.g., design, human-computer interaction, computer vision, and robotics. In this paper, we offer a multidisciplinary perspective on the notion of affordances. We first discuss the main definitions and formalizations of the affordance theory, then we report the most significant evidence in psychology and neuroscience that supports it, and finally we review the most relevant applications of this concept in robotics.