2,864 research outputs found
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction and a motivation for fluid
human-robot communication, ten desiderata are proposed, which provide an
organizational axis for recent as well as future research on human-robot
communication. The ten desiderata are then examined in detail, culminating in
a unifying discussion and a forward-looking conclusion.
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions, and developing a robot that can
smoothly communicate with human users in the long term, requires an
understanding of the dynamics of symbol systems and is therefore crucially important. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, which enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: submitted to Advanced Robotics
Humanoid Theory Grounding
In this paper we consider the importance of using a humanoid physical form for a certain proposed kind of robotics, that of theory grounding. Theory grounding involves grounding the theory skills and knowledge of an embodied artificially intelligent (AI) system by developing theory skills and knowledge from the bottom up. Theory grounding can potentially occur in a variety of domains, and the particular domain considered here is that of language. Language is taken to be another “problem space” in which a system can explore and discover solutions. We argue that because theory grounding necessitates robots experiencing domain information, certain behavioral-form aspects, such as abilities to socially smile, point, follow gaze, and generate manual gestures, are necessary for robots grounding a humanoid theory of language.
The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling
Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the 'experimenter', and Mary, the 'computational modeller'. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.
Considering the anchoring problem in robotic intelligent bin picking
Random Bin Picking means the selection by a robot of a particular item from a container (or bin) in which there are many items randomly distributed. Generalist robots and the Anchoring Problem should be considered if we want to provide a more general solution, since users want it to work with different types of items that are not known a priori. Therefore, we are working on an approach in which robot learning and human-robot interaction are used to anchor control primitives and robot skills to objects and action symbols while the robot system is running, but we are limiting the scope to the packaging domain. In this paper we explain how to use our system to do anchoring in Robotic Bin Picking.
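At its core, anchoring means maintaining a live correspondence between a symbol and the percept it refers to. As a minimal, hypothetical sketch (the two-feature signatures, symbol names, and distance threshold below are invented for illustration, not the paper's system):

```python
import math

# Hypothetical anchor table: symbol -> perceptual signature (e.g. hue, size).
anchors = {}

def anchor(symbol, signature):
    """Bind an object/action symbol to the percept it currently refers to."""
    anchors[symbol] = signature

def resolve(percept, tol=0.2):
    """Return the anchored symbol, if any, that the current percept matches."""
    best, best_d = None, tol
    for symbol, signature in anchors.items():
        d = math.dist(signature, percept)
        if d < best_d:
            best, best_d = symbol, d
    return best

anchor("small_box", (0.1, 0.3))
anchor("tube",      (0.8, 0.5))
```

A new observation close to a stored signature resolves to its symbol (e.g. `resolve((0.12, 0.28))` yields `"small_box"`), while an unfamiliar percept resolves to `None`, signalling that the robot should acquire a new anchor, for instance through human-robot interaction as the abstract suggests.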
A Survey of Brain Inspired Technologies for Engineering
Cognitive engineering is a multi-disciplinary field and hence it is difficult
to find a review article consolidating the leading developments in the field.
The incredible pace at which technology is advancing pushes the boundaries of
what is achievable in cognitive engineering. There are also differing
approaches to cognitive engineering brought about from the multi-disciplinary
nature of the field and the vastness of possible applications. Thus research
communities require more frequent reviews to keep up to date with the latest
trends. In this paper we shall discuss some of the approaches to cognitive
engineering holistically to clarify the reasoning behind the different
approaches and to highlight their strengths and weaknesses. We shall then show
how developments from seemingly disjointed views could be integrated to achieve
the same goal of creating cognitive machines. By reviewing the major
contributions in the different fields and showing the potential for a combined
approach, this work intends to assist the research community in devising more
unified methods and techniques for developing cognitive machines.
The implications of embodiment for behavior and cognition: animal and robotic case studies
In this paper, we will argue that if we want to understand the function of
the brain (or the control in the case of robots), we must understand how the
brain is embedded into the physical system, and how the organism interacts with
the real world. While embodiment has often been used in its trivial meaning,
i.e. 'intelligence requires a body', the concept has deeper and more important
implications, concerned with the relation between physical and information
(neural, control) processes. A number of case studies are presented to
illustrate the concept. These involve animals and robots and are concentrated
around locomotion, grasping, and visual perception. A theoretical scheme that
can be used to embed the diverse case studies will be presented. Finally, we
will establish a link between the low-level sensory-motor processes and
cognition. We will present an embodied view on categorization, and propose the
concepts of 'body schema' and 'forward models' as a natural extension of the
embodied approach toward first representations.
Comment: Book chapter in W. Tschacher & C. Bergomi, ed., 'The Implications of
Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5
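The 'forward model' concept invoked at the end of this abstract can be stated in one line: predict the sensory consequence of a motor command and compare the prediction with what is actually sensed. A minimal, hypothetical sketch (the 1-D reaching setup, the linear dynamics, and the noise level are all illustrative assumptions):

```python
import numpy as np

def forward_model(position, velocity_command, dt=0.1):
    """Predict the next sensed hand position from the current motor command."""
    return position + velocity_command * dt

rng = np.random.default_rng(0)
position, command = 0.0, 2.0

predicted = forward_model(position, command)                  # expected sensation
observed = position + command * 0.1 + rng.normal(0.0, 0.01)   # noisy actual sensation
prediction_error = observed - predicted                       # the learning signal
```

The prediction error is what makes the forward model useful as a 'first representation': it is available to the agent itself, without any external teacher, and can drive both model improvement and the embodied categorization the chapter discusses.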
URBANO: A Tour-Guide Robot Learning to Make Better Speeches
Thanks to the numerous attempts that are being made to develop autonomous robots, increasingly intelligent and cognitive skills are possible. This paper proposes an automatic presentation generator for a robot guide, which is considered one more cognitive skill. The presentations are made up of groups of paragraphs. The selection of the best paragraphs is based on a semantic understanding of the characteristics of the paragraphs, on the restrictions defined for the presentation, and on the quality criteria appropriate for a public presentation. This work is part of the ROBONAUTA project of the Intelligent Control Research Group at the Universidad Politécnica de Madrid to create "awareness" in a robot guide. The software developed in the project has been verified on the tour-guide robot Urbano. The most important aspect of this proposal is that the design uses learning as the means to optimize the quality of the presentations. To achieve this goal, the system has to perform optimized decision making in different phases. The modeling of the quality index of the presentation is done using fuzzy logic, and it represents the beliefs of the robot about what is good, bad, or indifferent about a presentation. This fuzzy system is used to select the most appropriate group of paragraphs for a presentation. The beliefs of the robot continue to evolve in order to coincide with the opinions of the public, using a genetic algorithm for the evolution of the rules. With this tool, the tour-guide robot gives a presentation that satisfies the objectives and restrictions, and automatically identifies the best paragraphs in order to find the most suitable set of contents for every public profile.
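The select-then-evolve loop described above can be caricatured in a few lines. This is a deliberately simplified, hypothetical sketch: a crisp weighted sum stands in for the paper's fuzzy quality index, and the paragraph features, names, and audience model are invented for illustration.

```python
import random

# Hypothetical paragraph features: (clarity, humor, detail), each in [0, 1].
paragraphs = [
    ("intro",   (0.9, 0.2, 0.3)),
    ("history", (0.6, 0.1, 0.8)),
    ("joke",    (0.7, 0.9, 0.1)),
    ("specs",   (0.5, 0.0, 0.9)),
]

def quality(features, weights):
    """Crisp stand-in for the fuzzy quality index: a weighted sum of features."""
    return sum(w * f for w, f in zip(weights, features))

def select(weights, n=2):
    """Pick the n paragraphs the robot's current 'beliefs' (weights) rate highest."""
    return sorted(paragraphs, key=lambda p: -quality(p[1], weights))[:n]

def evolve(weights, audience_score, pop=20, gens=40, sigma=0.3):
    """Toy genetic algorithm: mutate the belief weights, keeping the
    candidates whose selected presentation the audience rates best."""
    population = [weights]
    for _ in range(gens):
        parents = [random.choice(population) for _ in range(pop)]
        population += [tuple(max(0.0, w + random.gauss(0, sigma)) for w in p)
                       for p in parents]
        population.sort(key=audience_score, reverse=True)
        population = population[:pop]
    return population[0]

# An invented audience that mostly wants technical detail, standing in for
# the public opinions collected in the paper.
def audience_score(weights):
    return sum(features[2] for _, features in select(weights))

random.seed(0)
initial = (1.0, 0.0, 0.0)            # beliefs that only value clarity
learned = evolve(initial, audience_score)
```

The learned weights can only match or improve the audience's rating of the selected paragraphs, which mirrors the paper's point: the robot's beliefs about what makes a good presentation drift toward the public's opinions rather than being fixed at design time.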
Proceedings of the 1st Standardized Knowledge Representation and Ontologies for Robotics and Automation Workshop
Welcome to the IEEE-ORA (Ontologies for Robotics and Automation) IROS workshop. This
is the 1st edition of the workshop on Standardized Knowledge Representation and
Ontologies for Robotics and Automation. The IEEE-ORA 2014 workshop was held on
the 18th of September, 2014 in Chicago, Illinois, USA.
In the IEEE-ORA IROS workshop, 10 contributions were presented from 7 countries in
North and South America, Asia and Europe. The presentations took place in the
afternoon, from 1:30 PM to 5:00 PM. The first session was dedicated to “Standards for
Knowledge Representation in Robotics”, where presentations were made from the
IEEE working group standards for robotics and automation, and also from the ISO TC
184/SC2/WH7. The second session was dedicated to “Core and Application
Ontologies”, where presentations were made for core robotics ontologies, and also for
industrial and robot-assisted surgery ontologies. Three posters were presented on
emergent applications of ontologies in robotics.
We would like to express our thanks to all participants. First of all to the authors,
whose quality work is the essence of this workshop. Next, to all the members of the
international program committee, who helped us with their expertise and valuable
time. We would also like to deeply thank the IEEE-IROS 2014 organizers for hosting
this workshop.
Our deep gratitude goes to the IEEE Robotics and Automation Society, which sponsors
the IEEE-ORA group activities, and also to the scientific organizations that kindly
agreed to sponsor all the workshop authors' work.