Using humanoid robots to study human behavior
Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control, and interactive behaviors. They are programming robotic behavior based on how we humans "program" behavior in, or train, each other.
Challenging the Computational Metaphor: Implications for How We Think
This paper explores the role of the traditional computational metaphor in our thinking as computer scientists, its influence on our epistemological styles, and its implications for our understanding of cognition. It proposes replacing the conventional metaphor, a sequence of steps, with the notion of a community of interacting entities, and examines the ramifications of such a shift for these various ways in which we think.
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction, and motivation towards fluid
human-robot communication, ten desiderata are proposed, which provide an
organizational axis both of recent as well as of future research on human-robot
communication. Then, the ten desiderata are examined in detail, culminating to
a unifying discussion, and a forward-looking conclusion
Stepwise Acquisition of Dialogue Act Through Human-Robot Interaction
A dialogue act (DA) represents the meaning of an utterance at the
illocutionary force level (Austin 1962), such as a question, a request, or a
greeting. Since DAs carry the most fundamental part of communication,
we believe that elucidating the DA learning mechanism is important for
cognitive science and artificial intelligence. The purpose of this study is to
verify that scaffolding takes place when a human teaches a robot, and to let a
robot learn, step by step, to estimate DAs and respond to them, utilizing the
scaffolding provided by the human. To realize this, the robot must detect
changes in the partner's utterances and rewards, and
continue learning accordingly. Experimental results demonstrated that
participants who continued the interaction for a sufficiently long time often
provided scaffolding for the robot. Although the number of experiments is still
insufficient to draw a definite conclusion, we observed that 1) the robot
quickly learned to respond to DAs in most cases if the participants only spoke
utterances that matched the situation, 2) for participants who built
scaffolding differently from what we assumed, learning did not proceed quickly,
and 3) the robot could learn to estimate DAs almost exactly if the participants
continued the interaction for a sufficiently long time, even if the scaffolding
was unexpected.
Comment: Published as a conference paper at IJCNN 201
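The stepwise, reward-driven learning the abstract describes can be sketched roughly as follows. This is a minimal illustration under invented assumptions (a keyword-count classifier, three DA labels, scalar rewards from the human partner); it is not the paper's actual learning model.

```python
# Minimal sketch of stepwise dialogue-act (DA) learning from rewarded
# interaction. DA labels, features, and the reward scheme are illustrative
# assumptions, not the paper's method.
from collections import defaultdict

class StepwiseDAEstimator:
    """Keyword-count classifier updated online from partner rewards."""

    def __init__(self, das):
        # counts[da][word]: evidence that `word` signals dialogue act `da`
        self.counts = {da: defaultdict(float) for da in das}

    def estimate(self, utterance):
        words = utterance.lower().split()
        scores = {da: sum(c[w] for w in words) for da, c in self.counts.items()}
        return max(scores, key=scores.get)

    def learn(self, utterance, da, reward):
        # Positive reward reinforces the word-to-DA association; a negative
        # reward (a correction from the partner) weakens it.
        for w in utterance.lower().split():
            self.counts[da][w] += reward

est = StepwiseDAEstimator(["question", "request", "greeting"])
est.learn("what is that", "question", 1.0)
est.learn("please pick it up", "request", 1.0)
est.learn("hello there", "greeting", 1.0)
print(est.estimate("what colour is that"))  # -> question
```

Scaffolding, in this toy setting, would correspond to the human starting with utterances that match the situation and rewarding early responses generously, so the association table stabilizes before harder input arrives.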
Internet of robotic things: converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT platforms
The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), the Autonomous Internet of Things (A-IoT), the Autonomous System of Things (ASoT), the Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C), and the Internet of Robotic Things (IoRT), all of which are advancing by using IoT technology. The IoT influence presents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability, and many others. The IoRT presents new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, along with their coordination, configuration, exchange of information, security, safety, and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating the intelligent "devices", collaborative robots (COBOTs), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration, and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new "cognitive devices" become active participants in IoT applications. This chapter aims to provide an overview of the IoRT concept, technologies, architectures, and applications, and to give comprehensive coverage of future challenges, developments, and applications.
Ontology-based Fuzzy Markup Language Agent for Student and Robot Co-Learning
An intelligent robot agent based on domain ontology, machine learning
mechanism, and Fuzzy Markup Language (FML) for students and robot co-learning
is presented in this paper. The machine-human co-learning model is established
to help various students learn mathematical concepts based on their
learning ability and performance. Meanwhile, the robot acts as a teacher's
assistant to co-learn with children in the class. The FML-based knowledge base
and rule base are embedded in the robot so that the teachers can get feedback
from the robot on whether students are making progress. Next, we infer
students' learning performance based on the learning content's difficulty and
the students' ability, concentration level, and teamwork spirit in the class.
Experimental results show that learning with the robot is helpful for
disadvantaged and below-basic children. Moreover, the accuracy of the
intelligent FML-based agent for student learning is increased after applying
the machine learning mechanism.
Comment: This paper is submitted to the IEEE WCCI 2018 Conference for review
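The kind of fuzzy inference the abstract describes, inferring learning performance from content difficulty and student state, can be sketched in plain Python. The membership functions, rule base, and numeric outputs below are invented for illustration; the paper's actual FML knowledge base and rule base are richer and are expressed in Fuzzy Markup Language rather than code.

```python
# A minimal fuzzy-inference sketch: infer a student's learning performance
# from content difficulty and concentration level. All membership functions
# and rules are illustrative assumptions, not the paper's FML rule base.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_performance(difficulty, concentration):
    """Inputs in [0, 1]; returns a crisp performance score in [0, 1]."""
    # Fuzzify the inputs into "easy/hard" and "low/high" memberships
    easy, hard = tri(difficulty, -1, 0, 1), tri(difficulty, 0, 1, 2)
    low, high = tri(concentration, -1, 0, 1), tri(concentration, 0, 1, 2)
    # Rule base: (firing strength = min of antecedents, consequent value)
    rules = [
        (min(easy, high), 0.9),  # easy content + focused student -> high
        (min(easy, low), 0.5),   # easy content + distracted -> medium
        (min(hard, high), 0.6),  # hard content + focused -> medium-high
        (min(hard, low), 0.2),   # hard content + distracted -> low
    ]
    # Weighted-average defuzzification
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(round(infer_performance(0.2, 0.9), 2))  # -> 0.76
```

A "machine learning mechanism" as mentioned in the abstract would then tune the membership-function parameters or rule consequents from observed student outcomes, rather than leaving them hand-set as here.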
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions, and developing a robot that can
smoothly communicate with human users over the long term, crucially requires an
understanding of the dynamics of symbol systems. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and a
double articulation analysis, that enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: Submitted to Advanced Robotics
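Multimodal categorization, one of the SER topics listed above, can be illustrated with a toy clustering example: feature vectors from several modalities are concatenated and clustered so that object categories emerge without labels. Plain k-means is used here as a stand-in for the nonparametric Bayesian models common in this literature, and the data and features are invented for illustration.

```python
# Sketch of unsupervised multimodal categorization: cluster concatenated
# (visual, haptic) feature vectors so categories emerge without labels.
# k-means stands in for the richer probabilistic models used in SER work.

def kmeans(points, k, iters=20):
    # Deterministic initialization: spread initial centers across the data
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])),
            )
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points
        centers = [
            tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers

# Each observation concatenates a visual and a haptic feature in [0, 1]
soft_red = [(0.9, 0.1), (0.8, 0.2), (0.95, 0.15)]
hard_blue = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.95)]
centers = kmeans(soft_red + hard_blue, k=2)
print(centers)
```

In a full SER system, the categories found this way would then be grounded to words discovered from speech, closing the loop between physical interaction and semiotic communication.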