TOWARDS THE GROUNDING OF ABSTRACT CATEGORIES IN COGNITIVE ROBOTS
The grounding of language in humanoid robots is a fundamental problem, especially
in social scenarios that involve the interaction of robots with human beings. Indeed,
natural language is the most intuitive interface for humans to interact
and exchange information about concrete entities such as KNIFE and HAMMER and abstract
concepts such as MAKE and USE. This research domain is important not
only for the advances it can produce in the design of human-robot communication
systems, but also for the implications it can have for cognitive science.
Abstract words are used in daily conversations among people to describe events and
situations that occur in the environment. Many scholars have suggested that the
distinction between concrete and abstract words lies on a continuum along which
all entities vary in their degree of abstractness.
The work presented herein aimed to ground abstract concepts, like concrete
ones, in perception and action systems. This made it possible to investigate how different
behavioural and cognitive capabilities can be integrated in a humanoid robot in
order to bootstrap the development of higher-order skills such as the acquisition of
abstract words. To this end, three neuro-robotics models were implemented.
The first neuro-robotics experiment consisted of training a humanoid robot to perform
a set of motor primitives (e.g. PUSH, PULL) that, hierarchically combined,
led to the acquisition of higher-order words (e.g. ACCEPT, REJECT). The
implementation of this model, based on feed-forward artificial neural networks,
permitted the assessment of the training methodology adopted for the grounding of
language in humanoid robots.
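As a rough illustration of this first setup, a minimal feed-forward network can learn to map combinations of motor primitives to higher-order word labels. The sketch below is plain NumPy; the primitive set, the ACCEPT/REJECT pairings and all hyperparameters are invented for illustration and do not reproduce the thesis model.

```python
import numpy as np

# Hypothetical sketch: a tiny feed-forward network mapping a multi-hot
# code of motor primitives to a higher-order word. Data are invented.
rng = np.random.default_rng(0)
PRIMITIVES = ["PUSH", "PULL", "GRASP", "RELEASE"]
WORDS = ["ACCEPT", "REJECT"]

def encode(primitives):
    """Multi-hot encoding of the primitives active in an action."""
    x = np.zeros(len(PRIMITIVES))
    for p in primitives:
        x[PRIMITIVES.index(p)] = 1.0
    return x

# Toy dataset: ACCEPT = pull + grasp, REJECT = push + release.
X = np.array([encode(["PULL", "GRASP"]), encode(["PUSH", "RELEASE"])])
Y = np.array([[1.0, 0.0], [0.0, 1.0]])  # one-hot word labels

# One hidden layer of logistic units, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    d2 = (y - Y) * y * (1 - y)       # output delta (squared-error loss)
    d1 = (d2 @ W2.T) * h * (1 - h)   # backpropagated hidden delta
    W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

out = sigmoid(sigmoid(encode(["PULL", "GRASP"]) @ W1 + b1) @ W2 + b2)
pred = WORDS[int(np.argmax(out))]
```

After training, querying the network with the PULL + GRASP combination recovers the ACCEPT label, which is the kind of primitive-to-word mapping the experiment describes.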
In the second experiment, the architecture used for the first study
was reimplemented using recurrent artificial neural networks, which enabled the
temporal specification of the action primitives to be executed by the robot. This
increased the number of action combinations that could be taught to the robot
for the generation of more complex movements.
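A minimal sketch of why recurrence matters here: an Elman-style network carries a hidden state across time steps, so the same primitives executed in a different order produce different internal states, letting downstream layers distinguish temporally composed actions. The primitive set, sizes and weights below are illustrative assumptions, not the thesis architecture.

```python
import numpy as np

# Illustrative Elman-style recurrence over a sequence of one-hot
# motor primitives; all weights here are random, untrained values.
rng = np.random.default_rng(1)
PRIMITIVES = ["PUSH", "PULL", "LIFT"]
one_hot = lambda p: np.eye(len(PRIMITIVES))[PRIMITIVES.index(p)]

W_in = rng.normal(size=(3, 5))   # input -> hidden
W_rec = rng.normal(size=(5, 5))  # hidden -> hidden (the recurrent loop)

def run(sequence):
    """Fold a primitive sequence into a final hidden state."""
    h = np.zeros(5)
    for p in sequence:
        h = np.tanh(one_hot(p) @ W_in + h @ W_rec)
    return h

# The same primitives in a different order yield different final states.
h1 = run(["PUSH", "PULL", "LIFT"])
h2 = run(["LIFT", "PULL", "PUSH"])
order_sensitive = not np.allclose(h1, h2)
```

A purely feed-forward encoding of the same three primitives would be identical for both orderings, which is exactly the limitation the recurrent reimplementation removes.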
For the third experiment, a model based on recurrent neural networks that integrated
multi-modal inputs (i.e. language, vision and proprioception) was implemented for
the grounding of abstract action words (e.g. USE, MAKE). The abstract representations
of actions ("one-hot" encoding) used in the other two experiments were replaced
with the joint values recorded from the iCub robot's sensors.
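The representational shift described above can be sketched as follows: with one-hot codes every pair of actions is equally dissimilar, whereas proprioceptive joint-angle vectors make kinematically similar actions close in input space. The joint names and angle values are invented and do not correspond to the actual iCub configuration used in the thesis.

```python
import numpy as np

ACTIONS = ["PUSH", "PULL", "GRASP"]

def one_hot(action):
    """Abstract, arbitrary code: all actions are equally (dis)similar."""
    v = np.zeros(len(ACTIONS))
    v[ACTIONS.index(action)] = 1.0
    return v

# Hypothetical proprioceptive encoding: shoulder, elbow, wrist angles.
joint_angles = {
    "PUSH":  np.array([40.0, 95.0, 10.0]),
    "PULL":  np.array([35.0, 100.0, 12.0]),  # kinematically close to PUSH
    "GRASP": np.array([10.0, 60.0, 80.0]),
}

dist = lambda a, b: float(np.linalg.norm(a - b))
# One-hot codes: PUSH is exactly as far from PULL as from GRASP.
d_symbolic = (dist(one_hot("PUSH"), one_hot("PULL")),
              dist(one_hot("PUSH"), one_hot("GRASP")))
# Joint-space codes: similar movements yield similar vectors.
d_grounded = (dist(joint_angles["PUSH"], joint_angles["PULL"]),
              dist(joint_angles["PUSH"], joint_angles["GRASP"]))
```

The symbolic distances come out identical while the joint-space distances separate PUSH/PULL from PUSH/GRASP, which is the grounding advantage of sensor-derived representations.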
Experimental results showed that motor primitives have different activation patterns
according to the action sequence in which they are embedded. Furthermore, the
simulations suggested that acquiring concepts related to abstract
action words requires the reactivation of internal representations similar to those
activated during the acquisition of the basic concepts, directly grounded in
perceptual and sensorimotor knowledge, that are contained in the hierarchical
structure of the words used to ground the abstract action words.

This study was financed by the EU project RobotDoC (235065) from the Seventh
Framework Programme (FP7), Marie Curie Actions Initial Training Network
Development of Cognitive Capabilities in Humanoid Robots

Building intelligent systems with a human level of competence is the ultimate
grand challenge for science and technology in general, and especially for the
computational intelligence community. Recent theories in autonomous cognitive
systems have focused on the close integration (grounding) of communication with
perception, categorisation and action. Cognitive systems are essential for
integrated multi-platform systems that are capable of sensing and communicating.
This thesis presents a cognitive system for a humanoid robot that integrates
abilities such as object detection and recognition, which are merged with natural
language understanding and refined motor control. The work includes three
studies: (1) generic object manipulation using the NMFT algorithm,
successfully testing the extension of the NMFT to the control of robot behaviour; (2) the
development of a robotic simulator; (3) robotic simulation experiments
showing that a humanoid robot is able to acquire complex behavioural, cognitive,
and linguistic skills through individual and social learning. The robot is able to
learn to handle and manipulate objects autonomously, to cooperate with human
users, and to adapt its abilities to changes in internal and environmental conditions.
The model and the experimental results reported in this thesis emphasise the
importance of embodied cognition, i.e. the physical interaction between the
humanoid robot's body and the environment.
Affordances in Psychology, Neuroscience, and Robotics: A Survey
The concept of affordances appeared in psychology during the late 1960s as an alternative perspective on the visual perception of the environment. It was revolutionary in its intuition that the way living beings perceive the world is deeply influenced by the actions they are able to perform. Over the last 40 years, it has influenced many applied fields, e.g., design, human-computer interaction, computer vision, and robotics. In this paper, we offer a multidisciplinary perspective on the notion of affordances. We first discuss the main definitions and formalizations of the affordance theory, then we report the most significant evidence in psychology and neuroscience that supports it, and finally we review the most relevant applications of this concept in robotics.
Detecting Object Affordances with Convolutional Neural Networks
We present a novel and real-time method to detect
object affordances from RGB-D images. Our method trains
a deep Convolutional Neural Network (CNN) to learn deep
features from the input data in an end-to-end manner. The CNN
has an encoder-decoder architecture in order to obtain smooth
label predictions. The input data are represented as multiple
modalities to let the network learn the features more effectively.
Our method sets a new benchmark on detecting object affordances, improving the accuracy by 20% in comparison with
the state-of-the-art methods that use hand-designed geometric
features. Furthermore, we apply our detection method on a
full-size humanoid robot (WALK-MAN) to demonstrate that
the robot is able to perform grasps after efficiently detecting
the object affordances.
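A shape-level sketch, in NumPy with random values standing in for learned filters, of the encoder-decoder contract this abstract describes: the encoder downsamples the input, the decoder upsamples back, so the predicted affordance labels align pixel-for-pixel with the image. Sizes and the thresholding step are illustrative assumptions, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(2)

def pool2(x):
    """2x2 max pooling (one encoder downsampling step)."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour upsampling (one decoder step)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# A 64x64 depth map stands in for one RGB-D input modality.
depth = rng.random((64, 64))
encoded = pool2(pool2(depth))            # 64x64 -> 16x16 bottleneck
decoded = upsample2(upsample2(encoded))  # 16x16 -> back to 64x64 scores
affordance_mask = decoded > 0.5          # per-pixel label decision
```

The key property is that `decoded` has the same spatial shape as the input, which is what makes smooth dense (per-pixel) affordance labelling possible.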
Nanotechnology for Humans and Humanoids: A vision of the use of nanotechnology in future robotics
Humanoids will soon co-exist with humans, helping us at home and at work, assisting elderly people, replacing us in dangerous environments and somewhat adding to our personal communication devices the capability to actuate motion. In order for humanoids to be compatible with our everyday tools and our lifestyle, it is however mandatory to reproduce (at least partially) the body-mind nexus that makes humans so superior to machines. This requires a totally new approach to humanoid technologies, combining new responsive and soft materials, bioinspired sensors, high-efficiency power sources and cognition/intelligence of low computational cost: in other words, an unprecedented merger of nanotechnology, cognition and mechatronics.
Making sense of words: a robotic model for language abstraction
Building robots capable of acting independently in unstructured environments is still a challenging task for roboticists. The capability to comprehend and produce language in a 'human-like' manner represents a powerful tool for the autonomous interaction of robots with human beings, for better understanding situations and exchanging information during the execution of tasks that require cooperation. In this work, we present a robotic model for grounding abstract action words (e.g. USE, MAKE) through the hierarchical organization of terms directly linked to the perceptual and motor skills of a humanoid robot. Experimental results have shown that the robot, in response to linguistic commands, is capable of performing the appropriate behaviours on objects. Results obtained in cases of inconsistency between the perceptual and linguistic inputs have shown that the robot executes the actions elicited by the seen object.