Socially aware robot navigation system in human-populated and interactive environments based on an adaptive spatial density function and space affordances
Traditionally, robots have been known to society mostly through the wide use of manipulators, which are generally placed in controlled environments such as factories. However, with the advances in the area of mobile robotics, robots are increasingly inserted into social contexts, i.e., in the presence of people. The adoption of socially acceptable behaviours demands a trade-off between social comfort and other metrics of efficiency. In navigation tasks, for example, humans must be differentiated from other ordinary objects in the scene. In this work, we propose a novel human-aware navigation strategy built upon an adaptive spatial density function that efficiently clusters groups of people according to their spatial arrangement. Space affordances are also used to define potential activity spaces around the objects in the scene. The proposed function defines regions where navigation is either discouraged or forbidden. To implement socially acceptable navigation, the navigation architecture combines a probabilistic roadmap and rapidly-exploring random tree path planners with an adaptation of the elastic band algorithm. Trials carried out in real and simulated environments demonstrate that the use of the clustering algorithm and social rules in the navigation architecture does not hinder navigation performance.
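The core idea in the abstract above — cluster people by spatial arrangement, then derive a density function whose high-cost regions a planner avoids — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the fixed distance threshold `eps`, and the Gaussian cost around group centroids are all assumptions for the sake of the example.

```python
import math

def cluster_people(positions, eps=1.2):
    """Group people whose pairwise distance is below `eps` metres,
    via union-find (a stand-in for the paper's adaptive density-based
    clustering)."""
    parent = list(range(len(positions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= eps:
                parent[find(i)] = find(j)

    groups = {}
    for i, p in enumerate(positions):
        groups.setdefault(find(i), []).append(p)
    return list(groups.values())

def social_cost(point, groups, sigma=0.8):
    """Gaussian density around each group centroid, weighted by group
    size; a planner would discourage high-cost regions and forbid the
    highest ones."""
    cost = 0.0
    for g in groups:
        cx = sum(p[0] for p in g) / len(g)
        cy = sum(p[1] for p in g) / len(g)
        d2 = (point[0] - cx) ** 2 + (point[1] - cy) ** 2
        cost += len(g) * math.exp(-d2 / (2 * sigma ** 2))
    return cost
```

A path planner would then evaluate `social_cost` at each candidate waypoint and reject or penalize those exceeding a threshold.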
SOCRATES: Text-based Human Search and Approach using a Robot Dog
In this paper, we propose a SOCratic model for Robots Approaching humans
based on TExt System (SOCRATES) focusing on the human search and approach based
on free-form textual description; the robot first searches for the target user,
then the robot proceeds to approach in a human-friendly manner. In particular,
textual descriptions are composed of appearance (e.g., wearing white shirts
with black hair) and location clues (e.g., is a student who works with robots).
We initially present a Human Search Socratic Model that connects large
pre-trained models in the language domain to solve the downstream task, which
is searching for the target person based on textual descriptions. Then, we
propose a hybrid learning-based framework for generating target-cordial robotic
motion to approach a person, consisting of a learning-from-demonstration module
and a knowledge distillation module. We validate the proposed searching module
via simulation using a virtual mobile robot as well as through real-world
experiments involving participants and the Boston Dynamics Spot robot.
Furthermore, we analyze the properties of the proposed approaching framework
with human participants based on the Robotic Social Attributes Scale (RoSAS).
Comment: Project page: https://socratesrobotdog.github.io
Modelling Social Interaction between Humans and Service Robots in Large Public Spaces
With the advent of service robots in public places (e.g., in airports and shopping malls), understanding socio-psychological interactions between humans and robots is of paramount importance. On the one hand, traditional robotic navigation systems consider humans and robots as moving obstacles and focus on the problem of real-time collision avoidance in Human-Robot Interaction (HRI) using mathematical models. On the other hand, the behavior of a robot has been determined with respect to a human: parameters estimated for human-human interaction have been assumed and applied to interactions involving robots. One major limitation is the lack of sufficient data for calibration and validation procedures. This paper models, calibrates and validates the socio-psychological interaction of the human in HRIs among crowds. The mathematical model is an extension of the Social Force Model for crowd modelling. The proposed model is calibrated and validated using open-source datasets (including uninstructed human trajectories) from the Asia and Pacific Trade Center shopping mall in Osaka (Japan). In summary, the results of the calibration and validation on the multiple HRIs encountered in the datasets show that humans react to a service robot to a greater extent, and from a larger distance, than in interactions with another human. This microscopic model, calibration and validation framework can be used to simulate HRI between service robots and humans, predict humans' behavior, conduct comparative studies, and gain insights into safe and comfortable human-robot relationships from the human's perspective.
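The Social Force Model that the abstract above extends represents each neighbour as a repulsive force decaying exponentially with distance; the paper's finding amounts to calibrating larger interaction parameters for robots than for humans. The sketch below illustrates that structure only — the parameter values and the `PARAMS` table are invented for illustration, not the calibrated values from the paper.

```python
import math

# Illustrative parameters only: the robot entry uses a larger strength A
# and range B, mirroring the finding that humans react to a service
# robot more strongly and from a larger distance than to another human.
PARAMS = {"human": {"A": 2.0, "B": 0.3}, "robot": {"A": 2.5, "B": 0.5}}

def repulsive_force(pos, other_pos, kind="human", radius_sum=0.6):
    """Helbing-style social repulsion f = A * exp((r - d) / B) * n,
    where d is the centre distance, r the sum of body radii, and n the
    unit vector pointing from the neighbour toward the pedestrian."""
    A, B = PARAMS[kind]["A"], PARAMS[kind]["B"]
    dx, dy = pos[0] - other_pos[0], pos[1] - other_pos[1]
    d = math.hypot(dx, dy)
    mag = A * math.exp((radius_sum - d) / B)
    return (mag * dx / d, mag * dy / d)
```

Summing such forces (plus a driving force toward the goal) over all neighbours yields the pedestrian's acceleration at each simulation step.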
Learning a Group-Aware Policy for Robot Navigation
Human-aware robot navigation promises a range of applications in which mobile
robots bring versatile assistance to people in common human environments. While
prior research has mostly focused on modeling pedestrians as independent,
intentional individuals, people move in groups; consequently, it is imperative
for mobile robots to respect human groups when navigating around people. This
paper explores learning group-aware navigation policies based on dynamic group
formation using deep reinforcement learning. Through simulation experiments, we
show that group-aware policies, compared to baseline policies that neglect
human groups, achieve greater robot navigation performance (e.g., fewer
collisions), minimize violation of social norms and discomfort, and reduce the
robot's movement impact on pedestrians. Our results contribute to the
development of social navigation and the integration of mobile robots into
human environments.
Comment: 8 pages, 4 figures
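In a deep-RL setup like the one described above, group awareness typically enters through the reward: the agent is penalized not only for collisions but for intruding on a group's shared interaction space. The sketch below shows one simple way to shape such a reward; the disc approximation of a group's space, the margin, and all weights are assumptions for illustration, not the paper's formulation.

```python
import math

def group_intrusion_penalty(robot, group, margin=0.5):
    """Penalty for entering a group's interaction space, approximated
    as a disc around the centroid reaching the farthest member plus a
    comfort margin."""
    cx = sum(p[0] for p in group) / len(group)
    cy = sum(p[1] for p in group) / len(group)
    radius = max(math.hypot(p[0] - cx, p[1] - cy) for p in group) + margin
    d = math.hypot(robot[0] - cx, robot[1] - cy)
    return -1.0 if d < radius else 0.0

def step_reward(robot, goal, groups, collided):
    # Shaped reward: distance-to-goal cost, heavy collision cost,
    # and a soft cost for intruding on any detected group.
    r = -0.01 * math.hypot(robot[0] - goal[0], robot[1] - goal[1])
    if collided:
        r -= 10.0
    r += sum(group_intrusion_penalty(robot, g) for g in groups)
    return r
```

A policy trained against this reward learns to detour around group formations rather than cut between their members.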
“Hey robot, please step back!” - exploration of a spatial threshold of comfort for human-mechanoid spatial interaction in a hallway scenario
Within the scope of the current research, the goal was to develop an autonomous transport assistant for hospitals. As a kind of social robot, such assistants need to fulfill two main requirements with respect to their interactive behavior with humans: (1) a high level of safety and (2) a behavior that is perceived as socially proper. One important element is the characteristics of movement. However, state-of-the-art hospital robots rather focus on safe but not smart maneuvering. Vital motion parameters in everyday human environments are personal space and velocity. The relevance of these parameters has also been reported in existing human-robot interaction research. However, to date, no minimal accepted frontal and lateral distances for human-mechanoid proxemics have been explored. The present work attempts to gain insights into a potential threshold of comfort and, additionally, aims to explore a potential interaction between this threshold and the mechanoid's velocity. Therefore, a user study putting the users in control of the mechanoid was conducted in a laboratory hallway-like setting. Findings align with previously reported personal space zones in human-robot interaction research. Minimal accepted frontal and lateral distances were obtained. Furthermore, insights into a potential categorization of the lateral personal space area around a human are discussed for human-robot interaction.
Social navigation of autonomous robots in populated environments
Programa de Doctorado en Biotecnología, Ingeniería y Tecnología Química. Línea de Investigación: Ingeniería Informática. Clave Programa: DBI. Código Línea: 19.
Today, more and more mobile robots are coexisting with us in our daily lives. As a result, the behavior of robots that share space with humans in dynamic environments is a subject of intense investigation in robotics. Robots must respect human social conventions, guarantee the comfort of surrounding people, and maintain legibility so that humans can understand the robot's intentions. Robots that move in humans' vicinity should navigate in a socially compliant way; this is called human-aware navigation. These social behaviors are not easy to frame in mathematical expressions. Consequently, motion planners with pre-programmed constraints and hard-coded functions can fail to acquire proper behaviors related to human-awareness. All in all, it is easier to demonstrate socially acceptable behaviors than to define them mathematically. Therefore, learning these social behaviors from data seems a more principled approach.
This thesis aims at endowing mobile robots with new social skills for autonomous navigation in spaces populated with humans. This work makes use of learning from demonstration (LfD) approaches to solve the problem of human-aware navigation. Different techniques and algorithms are explored and developed in order to transfer social navigation behaviors to a robot by using demonstrations of human experts performing the proposed tasks.
The contributions of this thesis are in the field of Learning from Demonstration applied to human-aware navigation tasks. First, an LfD technique based on Inverse Reinforcement Learning (IRL) is employed to learn a policy for 'social' local motion planning. Then, a novel learning algorithm combining LfD concepts and sampling-based path planners is presented. Finally, other novel approaches combining different LfD techniques, like deep learning among others, and path planners are investigated. The proposed methods are compared against state-of-the-art approaches and tested in different experiments with the real robots employed in the European projects FROG and TERESA.
Universidad Pablo de Olavide de Sevilla. Departamento de Deporte e Informática. Postprint
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
Advances in Robot Navigation
Robot navigation includes different interrelated activities: perception - obtaining and interpreting sensory information; exploration - the strategy that guides the robot to select the next direction to go; mapping - the construction of a spatial representation using the sensory information perceived; localization - the strategy to estimate the robot's position within the spatial map; path planning - the strategy to find a path towards a goal location, whether optimal or not; and path execution, where motor actions are determined and adapted to environmental changes. This book integrates results from the research work of authors all over the world, addressing the above-mentioned activities and analyzing the critical implications of dealing with dynamic environments. Different solutions providing adaptive navigation are taken from nature's inspiration, and diverse applications are described in the context of an important field of study: social robotics.
- …