A multi-modal person perception framework for socially interactive mobile service robots
In order to meet the increasing demands of mobile service robot applications, a dedicated perception module is an essential requirement for interaction with users in real-world scenarios. In particular, multi-sensor fusion and human re-identification are recognized as active research fronts. In this paper we contribute to this topic and present a modular detection and tracking system that models the position and additional properties of persons in the surroundings of a mobile robot. The proposed system introduces a probability-based data association method that, besides position, can incorporate face and color-based appearance features in order to re-identify persons when tracking is interrupted. The system combines the results of various state-of-the-art image-based detection systems for person recognition, person identification and attribute estimation. This allows a stable estimate of a mobile robot's user, even in complex, cluttered environments with long-lasting occlusions. In our benchmark, we introduce a new measure for tracking consistency and show the improvement when face- and appearance-based re-identification are combined. The tracking system was applied in a real-world application with a mobile rehabilitation assistant robot in a public hospital. The estimated states of persons are used for user-centered navigation behaviors, e.g., guiding or approaching a person, but also for realizing socially acceptable navigation in public environments.
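The abstract does not spell out the association method, but the general idea of probability-based data association with appearance-based re-identification can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the Gaussian position likelihood, cosine appearance similarity, and the weights `w_pos`/`w_app` and `threshold` are assumptions.

```python
import math

def position_likelihood(track_pos, det_pos, sigma=0.5):
    """Gaussian likelihood of a detection given a track's predicted position."""
    d2 = sum((t - d) ** 2 for t, d in zip(track_pos, det_pos))
    return math.exp(-d2 / (2 * sigma ** 2))

def appearance_similarity(feat_a, feat_b):
    """Cosine similarity between two appearance feature vectors."""
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    na = math.sqrt(sum(a * a for a in feat_a))
    nb = math.sqrt(sum(b * b for b in feat_b))
    return dot / (na * nb) if na and nb else 0.0

def associate(tracks, detection, w_pos=0.6, w_app=0.4, threshold=0.3):
    """Score each track against a detection by a weighted combination of
    position likelihood and appearance similarity; return the best track id,
    or None if no track scores above the threshold (new person)."""
    best_id, best_score = None, threshold
    for tid, track in tracks.items():
        score = (w_pos * position_likelihood(track["pos"], detection["pos"])
                 + w_app * appearance_similarity(track["feat"], detection["feat"]))
        if score > best_score:
            best_id, best_score = tid, score
    return best_id
```

The appearance term is what enables re-identification in this sketch: after a long occlusion the position likelihood alone may be near zero, but a matching face or color descriptor can still lift the combined score above the threshold.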
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR),
37 pages
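Surveys in this area commonly measure learned predictors against a constant-velocity baseline. The sketch below is such a baseline, not anything from the survey itself; the function name, the 2-D setting, and the fixed time step `dt` are assumptions for illustration.

```python
def predict_constant_velocity(trajectory, horizon, dt=1.0):
    """Extrapolate future 2-D positions by assuming the agent keeps moving
    with the velocity estimated from its last two observed positions.

    trajectory: list of (x, y) observations, oldest first (needs >= 2 points)
    horizon:    number of future steps to predict
    """
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k)
            for k in range(1, horizon + 1)]
```

Despite its simplicity, this kind of physics-based extrapolation remains a surprisingly strong baseline over short horizons, which is why prediction benchmarks typically include it.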
Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase the independence of
elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly
people and of advances in technology which will make new uses possible, and provides suggestions for some of these new
applications. The paper also considers the design and other conditions to be met for user acceptance. It also discusses
the complementarity of assistive service robots and personal assistance and considers the types of applications and
users for which service robots are and are not suitable.
COACHES Cooperative Autonomous Robots in Complex and Human Populated Environments
Public spaces in large cities are increasingly becoming complex and unwelcoming environments. They progressively become more hostile and unpleasant to use because of overcrowding and the complex information on signboards. It is in the interest of cities to make their public spaces easier to use, friendlier to visitors and safer for an increasing elderly population and for citizens with disabilities. Meanwhile, the last decade has seen tremendous progress in the development of robots for dynamic, complex and uncertain environments. The new challenge for the near future is to deploy a network of robots in public spaces to accomplish services that can help humans. Inspired by these challenges, the COACHES project addresses fundamental issues related to the design of a robust system of self-directed autonomous robots with high-level skills of environment modelling and scene understanding, distributed autonomous decision-making, short-term interaction with humans, and robust and safe navigation in overcrowded spaces. To this end, COACHES will provide an integrated solution to new challenges in: (1) knowledge-based representation of the environment; (2) estimation of human activities and needs using Markov and Bayesian techniques; (3) distributed decision-making under uncertainty to collectively plan activities of assistance, guidance and delivery tasks using Decentralized Partially Observable Markov Decision Processes, with efficient algorithms to improve their scalability; and (4) multi-modal and short-term human-robot interaction to exchange information and requests. The COACHES project will provide a modular architecture to be integrated in real robots. We deploy COACHES in the city of Caen, in a mall called “Rive de l’orne”. COACHES is a cooperative system consisting of fixed cameras and mobile robots. The fixed cameras perform object detection, tracking and abnormal event detection (objects or behaviour).
The robots combine this information with what they perceive via their own sensors to provide information through their multi-modal interfaces, guide people to their destinations, show tramway stations, transport goods for elderly people, and so on. The COACHES robots will use different modalities (speech and displayed information) to interact with mall visitors, shopkeepers and mall managers. The project has enlisted an important end-user (Caen la mer) providing the scenarios where the COACHES robots and systems will be deployed, and gathers together universities with complementary competences in cognitive systems (SU), robust image/video processing (VUB, UNICAEN), semantic scene analysis and understanding (VUB), collective decision-making using decentralized partially observable Markov Decision Processes and multi-agent planning (UNICAEN, Sapienza), and multi-modal and short-term human-robot interaction (Sapienza, UNICAEN).
Artificial Cognition for Social Human-Robot Interaction: An Implementation
© 2017 The Authors. Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; and human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we eventually show how explicit knowledge management, both symbolic and geometric, proves to be instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.
Temporal Patterns in Multi-modal Social Interaction between Elderly Users and Service Robot
Social interaction, especially for older people living
alone, is a challenge currently facing human-robot interaction
(HRI). User interfaces to manage service robots in home environments need to be tailored for older people. Multi-modal
interfaces providing users with more than one communication
option seem promising. There has been little research on user
preference towards HRI interfaces; most studies have focused
on utility and functionality of the interface. In this paper, we
took both objective observations and participants’ opinions into
account in studying older users with a robot partner. Our study
was under the framework of the EU FP7 Robot-Era Project.
The developed dual-modal robot interface offered older users
options of speech or touch screen to perform tasks. Fifteen people,
aged from 70 to 89 years, participated. We analyzed the
spontaneous actions of the participants, including their attentional activities (eye contact) and conversational activities, the
temporal characteristics (timestamps, duration of events, event
transitions) of these social behaviours, as well as questionnaires.
This combination of data distinguishes it from other studies that
focused on questionnaire ratings only. There were three main
findings. First, the design of the Robot-Era interface was highly
acceptable to older users. Secondly, most older people used both
speech and tablet to perform the food delivery service, with no
difference in their preferences towards either. Thirdly, these older
people had frequent and long-duration eye contact with the robot
during their conversations, showing patience when expecting
the robot to respond. They enjoyed the service. Overall, social
engagement with the robot demonstrated by older people was no
different from what might be expected towards a human partner.
This study is an early attempt to reveal the social connections
between human beings and a personal robot in real life. Our
observations and findings should inspire new insights in HRI
research and eventually contribute to next-generation intelligent
robot development.
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions and developing a robot that can
smoothly communicate with human users in the long term require an
understanding of the dynamics of symbol systems, and are crucially important. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, which enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: Submitted to Advanced Robotics
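As a rough illustration of the multimodal categorization idea (and far cruder than the cited unsupervised methods), one can L2-normalize each modality's feature vector, concatenate them, and cluster the fused vectors without labels. Everything here, including the use of plain k-means, is an assumption for illustration, not the approach of the surveyed work.

```python
import math
import random

def normalize(vec):
    """L2-normalize one modality's feature vector."""
    n = math.sqrt(sum(v * v for v in vec))
    return [v / n for v in vec] if n else list(vec)

def fuse(modalities):
    """Concatenate per-modality vectors after normalizing each one,
    so no single modality dominates the distance metric."""
    fused = []
    for vec in modalities:
        fused.extend(normalize(vec))
    return fused

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means clustering; returns a cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centers; keep the old center for an empty cluster.
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return [min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            for p in points]
```

In this toy setup, each "observation" is a list of modality vectors (e.g., a visual descriptor and a haptic descriptor); objects whose fused vectors land close together end up in the same category without any supervision.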
A Review on Usability and User Experience of Assistive Social Robots for Older Persons
With the advancement of human-robot interaction technology, assistive social robots have been recognized as one of the technologies with the potential to provide physical and cognitive support in the care of older persons. However, a major challenge faced by designers is to develop an assistive social robot with high usability and a good user experience for older persons, who are known to have physical and cognitive limitations. A considerable number of published studies report on the technological design process of assistive social robots, but only a small amount of attention has been paid to reviewing the usability and user experience of these robots. The objective of this paper is to provide an overview of established research in the literature concerning the usability and user experience issues faced by older persons when interacting with assistive social robots. The authors searched for relevant articles in academic databases such as Google Scholar, Scopus and Web of Science, as well as via Google search, for the publication period 2000 to 2021. Several search keywords were used, such as ‘older persons’, ‘elderly’, ‘senior citizens’, ‘assistive social robots’, ‘companion robots’, ‘personal robots’, ‘usability’ and ‘user experience’. This online search found a total of 215 articles related to assistive social robots in elderly care, of which 54 were identified as significant references and examined thoroughly to prepare the main content of this paper. The paper reveals usability issues for 28 assistive social robots, and user-experience feedback for 41 assistive social robots. Based on the research articles scrutinized, the authors conclude that the key elements in the design and development of assistive social robots that improve acceptance by older persons are determined by three factors: functionality, usability and user experience. Functionality refers to the ability of a robot to serve older persons.
Usability is the ease of use of a robot; it is an indicator of how successful the interaction between the robot and the user is. To improve usability, robot designers should consider the limitations of older persons, such as vision, hearing and cognitive capabilities, when interacting with robots. User experience reflects the perceptions, preferences and behaviors of users before, during and after use of a robot. The combination of superior functionality and usability leads to a good user experience, which in the end achieves satisfaction among older persons.