Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge for the realization of intelligent robots is to supply them
with cognitive abilities that allow ordinary users to program them easily and
intuitively. One way of such programming is to teach work tasks by interactive
demonstration. To make this effective and convenient for the user, the machine
must be capable of establishing a common focus of attention and be able to use
and integrate spoken instructions, visual perceptions, and non-verbal cues
such as gestural commands. We report progress in building a hybrid
architecture that combines statistical methods, neural networks, and finite
state machines into an integrated system for instructing grasping tasks by
man-machine interaction. The system combines the GRAVIS robot for visual
attention and gestural instruction with an intelligent interface for speech
recognition and linguistic interpretation, and a modality fusion module to
allow multi-modal, task-oriented man-machine communication with respect to
dextrous robot manipulation of objects.
Comment: 7 pages, 8 figures
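The finite-state part of such an architecture can be illustrated with a minimal sketch: a machine that waits until both a spoken command and a gesture-derived target are present before emitting a fused instruction. All names here (`FusionFSM`, the states, the example commands) are hypothetical illustrations, not taken from the GRAVIS system.

```python
class FusionFSM:
    """Toy finite state machine fusing speech and gesture modalities."""

    def __init__(self):
        self.state = "idle"     # idle -> awaiting_* -> ready
        self.command = None     # parsed speech, e.g. "grasp"
        self.target = None      # object picked out by a pointing gesture

    def on_speech(self, command):
        # A spoken command arrives; stay pending until a target is known.
        self.command = command
        self.state = "ready" if self.target else "awaiting_target"

    def on_gesture(self, target):
        # A gestural reference arrives; stay pending until a command is known.
        self.target = target
        self.state = "ready" if self.command else "awaiting_command"

    def fused_instruction(self):
        # Only in the "ready" state do both modalities combine into one task.
        if self.state == "ready":
            return (self.command, self.target)
        return None


fsm = FusionFSM()
fsm.on_speech("grasp")
fsm.on_gesture("red_cube")
print(fsm.fused_instruction())  # ('grasp', 'red_cube')
```

The point of the state machine is that neither modality alone triggers an action; the robot acts only once the two channels have been integrated into a single grounded instruction.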
Bayesian Inference of Self-intention Attributed by Observer
Most agents that learn policies for tasks with reinforcement learning (RL)
lack the ability to communicate with people, which makes human-agent
collaboration challenging. We believe that, for RL agents to comprehend
utterances from human colleagues, RL agents must infer the mental states that
people attribute to them, because people sometimes infer an interlocutor's
mental states and communicate on the basis of this mental inference. This
paper proposes the PublicSelf model, a model of a person who infers how the
person's own behavior appears to their colleagues. We implemented the
PublicSelf model for an RL agent in a simulated environment and examined the
model's inferences by comparing them with people's judgments. The results
showed that the model correctly inferred the intention that people attributed
to the agent's movement in scenes where people could find certain
intentionality in the agent's behavior.
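The Bayesian core of such an observer model can be sketched in a few lines: the attributed intention is the one maximizing P(intention | movement) ∝ P(movement | intention) · P(intention). The intentions, movements, and probability tables below are illustrative assumptions, not values from the paper.

```python
def infer_intention(movement, likelihood, prior):
    """Return the normalized posterior P(intention | movement)."""
    unnorm = {i: likelihood[i].get(movement, 0.0) * prior[i] for i in prior}
    z = sum(unnorm.values())
    return {i: p / z for i, p in unnorm.items()}


# Hypothetical priors over what the agent might be trying to do,
# and how likely each intention is to produce each observed movement.
prior = {"fetch_key": 0.5, "open_door": 0.5}
likelihood = {
    "fetch_key": {"move_to_key": 0.8, "move_to_door": 0.2},
    "open_door": {"move_to_key": 0.1, "move_to_door": 0.9},
}

posterior = infer_intention("move_to_key", likelihood, prior)
print(max(posterior, key=posterior.get))  # fetch_key
```

An observer watching the agent head toward the key concludes it intends to fetch the key; a PublicSelf-style agent runs the same inference over its own behavior to predict what observers will attribute to it.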
Neuro-fuzzy knowledge processing in intelligent learning environments for improved student diagnosis
In this paper, a neural network implementation for a fuzzy logic-based model of the diagnostic process is proposed as a means to achieve accurate student diagnosis and updates of the student model in Intelligent Learning Environments. The neuro-fuzzy synergy allows the diagnostic model to some extent "imitate" teachers in diagnosing students' characteristics, and equips the intelligent learning environment with reasoning capabilities that can be further used to drive pedagogical decisions depending on the student's learning style. The neuro-fuzzy implementation helps to encode both structured and non-structured teachers' knowledge: when teachers' reasoning is available and well defined, it can be encoded in the form of fuzzy rules; when teachers' reasoning is not well defined but is available through practical examples illustrating their experience, then the networks can be trained to represent this experience. The proposed approach has been tested in diagnosing aspects of students' learning style in a discovery-learning environment that aims to help students construct the concepts of vectors in physics and mathematics. The diagnosis outcomes of the model have been compared against the recommendations of a group of five experienced teachers, and against the results produced by two alternative soft computing methods. The results of our pilot study show that the neuro-fuzzy model successfully manages the inherent uncertainty of the diagnostic process, especially for marginal cases, i.e. where it is very difficult, even for human tutors, to diagnose and accurately evaluate students by directly synthesizing subjective and, sometimes, conflicting judgments.
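The "structured knowledge as fuzzy rules" idea above can be sketched minimally: membership functions grade how well a student's observable features match a teacher's linguistic terms, and a rule combines them with a fuzzy AND (min). The features, thresholds, and rule below are hypothetical, not from the paper's diagnostic model.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 at a and c, peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def rule_needs_hint(accuracy, response_time):
    # IF accuracy is low AND response_time is high THEN student needs a hint.
    low_accuracy = triangular(accuracy, 0.0, 0.2, 0.5)    # fraction correct
    slow = triangular(response_time, 20, 60, 100)         # seconds
    return min(low_accuracy, slow)                        # fuzzy AND


degree = rule_needs_hint(accuracy=0.3, response_time=50)
print(round(degree, 3))  # 0.667
```

The rule fires to a *degree* rather than a hard yes/no, which is what lets the model handle the marginal cases the abstract highlights; the neural part would tune such memberships from teachers' worked examples when no explicit rule is available.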
Common reasoning in games: a Lewisian analysis of common knowledge of rationality
The game-theoretic assumption of 'common knowledge of rationality' leads to paradoxes when rationality is represented in a Bayesian framework as cautious expected utility maximisation with independent beliefs (ICEU). We diagnose and resolve these paradoxes by presenting a new class of formal models of players' reasoning, inspired by David Lewis's account of common knowledge, in which the analogue of common knowledge is derivability in common reason. We show that such models can consistently incorporate any of a wide range of standards of decision-theoretic practical rationality. We investigate the implications arising when the standard of decision-theoretic rationality so assumed is ICEU.
Keywords: common reasoning; common knowledge; common knowledge of rationality; David Lewis; Bayesian models of games