Talking About Task Progress: Towards Integrating Task Planning and Dialog for Assistive Robotic Services
The use of service robots to assist ageing people in their own homes has the potential to allow people to maintain their independence, increasing their health and quality of life. In many assistive applications, robots perform tasks on people's behalf that they are unable or unwilling to monitor directly. It is important that users be given useful and appropriate information about task progress. People being assisted in homes and other real-world environments are likely to be engaged in other activities while they wait for a service, so information should also be presented in an appropriate, non-intrusive manner. This paper presents a human-robot interaction experiment investigating what type of feedback people prefer in verbal updates by a service robot about distributed assistive services. People found feedback about time until task completion more useful than feedback about events in task progress or no feedback. We also discuss future research directions that involve giving non-expert users more input into the task planning process when delays or failures occur that necessitate replanning or modifying goals.
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
To understand human social interactions and to develop a robot that can
smoothly communicate with human users in the long term, an understanding of
the dynamics of symbol systems is crucially important. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, which enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: submitted to Advanced Robotics
Creating Interaction Scenarios With a New Graphical User Interface
The field of human-centered computing has made major progress in recent
years. It is widely accepted that this field is multidisciplinary and that the
human is at the core of the system. This highlights two matters of concern:
multidisciplinarity and the human. The first implies that each discipline
plays an important role in the overall research and that collaboration
between all of them is needed. The second reflects that a growing number of
studies aims to increase the degree of human involvement by giving people a
decisive role in human-machine interaction. This paper addresses both
concerns and presents MICE (Machines Interaction Control in their
Environment), a system in which the human is the one who makes the
decisions to manage the interaction with the machines. In an ambient context,
the human can decide on the actions of objects by creating interaction
scenarios with a new visual programming language: scenL.
Comment: 5th International Workshop on Intelligent Interfaces for
Human-Computer Interaction, Palermo, Italy (2012)
Spatial context-aware person-following for a domestic robot
Domestic robots are a focus of research as
service providers in households and even as robotic
companions that share the living space with humans. A major
capability of mobile domestic robots is the joint exploration
of space. One challenge in this task is how we can
let robots move through space in reasonable, socially acceptable
ways so that their motion supports interaction and communication
as part of the joint exploration. As a step towards this
challenge, we have developed a context-aware following behavior
that considers these social aspects and combined it
with a multi-modal person-tracking method to switch between
three basic following approaches, namely direction-following,
path-following, and parallel-following. These are derived from
observations of human-human following schemes and are
activated depending on the current spatial context (e.g., free
space) and the relative position of the interacting human.
A combination of the elementary behaviors is performed in
real time with our mobile robot in different environments.
First experimental results are provided to demonstrate the
practicability of the proposed approach.
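The switching scheme described in this abstract can be sketched as a simple mode selector. This is a minimal, hypothetical illustration only: the mode names follow the abstract, but the context features, thresholds, and selection rule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the behavior-switching logic described in the
# abstract: the robot selects one of three elementary following modes
# based on the spatial context (available free space) and the person's
# relative position. Thresholds and features are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class FollowMode(Enum):
    DIRECTION = "direction-following"  # steer toward the person's current position
    PATH = "path-following"            # retrace the person's trajectory
    PARALLEL = "parallel-following"    # move side by side with the person


@dataclass
class SpatialContext:
    free_width_m: float        # lateral free space next to the person (assumed feature)
    person_bearing_deg: float  # person's bearing relative to the robot heading


def select_mode(ctx: SpatialContext) -> FollowMode:
    """Pick a following behavior from the spatial context (assumed rule)."""
    if ctx.free_width_m >= 2.0 and abs(ctx.person_bearing_deg) < 45:
        # Enough lateral space and the person is roughly ahead:
        # walk side by side, as humans often do in open areas.
        return FollowMode.PARALLEL
    if ctx.free_width_m < 1.0:
        # Narrow passage: retracing the person's exact path is safest.
        return FollowMode.PATH
    # Default: simply head toward the person's current position.
    return FollowMode.DIRECTION
```

In a real system such a selector would run inside the tracking loop, re-evaluating the context on every update so the active behavior changes smoothly as the environment opens up or narrows.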
Service robotics: do you know your new companion? Framing an interdisciplinary technology assessment
Service robotics, mainly defined as "non-industrial robotics", is identified as the next economic success story to be expected after robots have been ubiquitously implemented into industrial production lines. Under the heading of service robotics, we find a wide range of applications, reaching from robotics in agriculture and in the public transportation system to service robots applied in private homes. For our interdisciplinary perspective of technology assessment, we propose to take the human user/worker as the common focus. In some cases, the user/worker is the effective subject acting by means of and in cooperation with a service robot; in other cases, the user/worker might become a pure object of the respective robotic system, for example, as a patient in a hospital. In this paper, we present a comprehensive interdisciplinary framework that allows us to scrutinize some of the most relevant applications of service robotics; we propose to combine technical, economic, legal, philosophical/ethical, and psychological perspectives in order to design a thorough and comprehensive expert-based technology assessment. This allows us to understand the potentials as well as the limits and even the threats connected with the ongoing and planned implementation of service robots into the human lifeworld, particularly of those technical systems displaying increasing degrees of autonomy.
Laser Graphics in Augmented Reality Applications for Real- World Robot Deployment
Lasers are powerful light sources. With their thin shafts of bright light and colour, laser beams can provide a dazzling display matching that of outdoor fireworks. With computer assistance, animated laser graphics can generate eye-catching images against a dark sky. Due to technology constraints, laser images are outlines without any interior fill or detail. On a more functional note, lasers assist in the alignment of components during installation.
Attention-controlled acquisition of a qualitative scene model for mobile robots
Haasch A. Attention-controlled acquisition of a qualitative scene model for mobile robots. Bielefeld (Germany): Bielefeld University; 2007. Robots used to support humans in dangerous environments, e.g., in manufacturing facilities, have been established for decades. Now, a new generation of service robots is the focus of current research and about to be introduced. These intelligent service robots are intended to support humans in everyday life. To achieve comfortable human-robot interaction with non-expert users, it is thus imperative for the acceptance of such robots to provide interaction interfaces that humans are accustomed to from human-human communication. Consequently, intuitive modalities like gestures or spontaneous speech are needed to teach the robot previously unknown objects and locations. The robot can then be entrusted with tasks like fetch-and-carry orders even without extensive training of the user. In this context, this dissertation introduces the multimodal Object Attention System, which offers a flexible integration of common interaction modalities in combination with state-of-the-art image and speech processing techniques from other research projects. To prove the feasibility of the approach, the presented Object Attention System has successfully been integrated into different robotic hardware, in particular the mobile robot BIRON and the anthropomorphic robot BARTHOC of the Applied Computer Science Group at Bielefeld University. In conclusion, the aim of this work, to acquire a qualitative scene model by means of a modular component offering object attention mechanisms, has been successfully achieved, as demonstrated on numerous occasions such as reviews for the EU Integrated Project COGNIRON and public demos.