Learning a Group-Aware Policy for Robot Navigation
Human-aware robot navigation promises a range of applications in which mobile
robots bring versatile assistance to people in common human environments. While
prior research has mostly focused on modeling pedestrians as independent,
intentional individuals, people move in groups; consequently, it is imperative
for mobile robots to respect human groups when navigating around people. This
paper explores learning group-aware navigation policies based on dynamic group
formation using deep reinforcement learning. Through simulation experiments, we
show that group-aware policies, compared to baseline policies that neglect
human groups, achieve greater robot navigation performance (e.g., fewer
collisions), minimize violation of social norms and discomfort, and reduce the
robot's movement impact on pedestrians. Our results contribute to the
development of social navigation and the integration of mobile robots into
human environments.
Comment: 8 pages, 4 figures
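The group-aware shaping described in the abstract can be sketched as a per-step reward combining progress toward the goal, a hard collision penalty, and a soft penalty for intruding on a pedestrian group. The circle representation of groups and all weights and radii here are illustrative assumptions, not details taken from the paper.

```python
import math

def group_aware_reward(robot_pos, goal_pos, prev_dist, pedestrians, groups,
                       collision_radius=0.3, group_margin=0.5):
    """Sketch of a shaped reward for group-aware navigation.

    `groups` is a list of (center, radius) circles fitted around detected
    pedestrian clusters; weights are illustrative, not from the paper.
    Returns (reward, current goal distance) so the caller can feed the
    distance back in as `prev_dist` on the next step.
    """
    dist = math.dist(robot_pos, goal_pos)
    reward = prev_dist - dist  # progress toward the goal

    for p in pedestrians:  # hard penalty for colliding with any pedestrian
        if math.dist(robot_pos, p) < collision_radius:
            reward -= 10.0

    for center, radius in groups:  # soft penalty for cutting through a group
        if math.dist(robot_pos, center) < radius + group_margin:
            reward -= 1.0

    return reward, dist
```

In a DRL training loop this would be evaluated once per simulation step, with the group circles re-fitted as the pedestrian clusters change dynamically.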
Human aware robot navigation
Abstract. Human-aware robot navigation refers to navigating a robot in an environment shared with humans in such a way that the humans feel comfortable and natural in the presence of the robot. In addition, the robot's navigation should comply with the social norms of the environment. The robot can interact with humans in the environment, for example by avoiding them, approaching them, or following them. In this thesis, we focus specifically on the approach behavior of the robot, keeping the other use cases in mind. Studying and analyzing how humans move around other humans gives us an idea of the kind of navigation behaviors we expect robots to exhibit. Most previous research does not focus on understanding such behavioral aspects while approaching people. Moreover, straightforward mathematical modeling of complex human behaviors is very difficult. In this thesis, we therefore propose an Inverse Reinforcement Learning (IRL) framework based on Guided Cost Learning (GCL) to learn these behaviors from demonstration. After analyzing the CongreG8 dataset, we found that the incoming human tends to form an O-space (circle) with the rest of the group, and that the approaching velocity slows down as the approaching human gets closer to the group. We utilized these findings in our framework, which learns the optimal reward and policy from the example demonstrations and imitates similar human motion.
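The two empirical findings cited in the abstract (newcomers complete the group's O-space circle, and slow down on approach) can be sketched as simple geometry; the circle fit, gap-finding heuristic, and speed profile below are hypothetical illustrations, not the thesis's learned policy.

```python
import math

def o_space_approach_target(members):
    """Sketch: place a newcomer on the group's O-space circle.

    `members` are (x, y) positions of existing group members. The circle
    is fitted as centroid plus mean radius, and the target sits in the
    widest angular gap between members (the "open" side of the group).
    """
    cx = sum(x for x, _ in members) / len(members)
    cy = sum(y for _, y in members) / len(members)
    radius = sum(math.dist((cx, cy), m) for m in members) / len(members)

    angles = sorted(math.atan2(y - cy, x - cx) for x, y in members)
    # find the widest gap between consecutive member angles (wrapping around)
    best_gap, best_mid = -1.0, 0.0
    for i, a in enumerate(angles):
        nxt = angles[(i + 1) % len(angles)] + (2 * math.pi if i == len(angles) - 1 else 0)
        if nxt - a > best_gap:
            best_gap, best_mid = nxt - a, (a + nxt) / 2
    return cx + radius * math.cos(best_mid), cy + radius * math.sin(best_mid)

def approach_speed(dist_to_group, v_max=1.0, slow_radius=2.0):
    """Slow down linearly once closer than `slow_radius` to the group."""
    return v_max * min(1.0, dist_to_group / slow_radius)
```

An IRL-learned reward would replace these hand-coded rules, but they capture the qualitative behavior the CongreG8 analysis reports.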
Automatic Assessment and Learning of Robot Social Abilities
One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires designing robots with enhanced social abilities that go past monolithic behaviours and introduce in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].
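The closing point (perceive the humans' state and feed it back into action selection over time) can be sketched as a multi-armed bandit over candidate behaviors; the engagement score, epsilon-greedy rule, and incremental-mean update here are illustrative assumptions, not the method of the paper.

```python
import random

def select_behavior(values, epsilon=0.1):
    """Epsilon-greedy: explore a random behavior with probability epsilon,
    otherwise exploit the one with the highest estimated engagement."""
    if random.random() < epsilon:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda i: values[i])

def update_estimate(values, counts, behavior, engagement):
    """Incremental mean update of one behavior's engagement estimate,
    using the perceived engagement score as the reward signal."""
    counts[behavior] += 1
    values[behavior] += (engagement - values[behavior]) / counts[behavior]
```

Each interaction round, the robot would select a behavior, measure the users' engagement, and update that behavior's estimate, adapting in situ to the specific users.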
Affective Communication for Socially Assistive Robots (SARs) for Children with Autism Spectrum Disorder: A Systematic Review
Research on affective communication for socially assistive robots has been conducted to
enable physical robots to perceive, express, and respond emotionally. However, the use of affective
computing in social robots has been limited, especially when social robots are designed for children,
and especially those with autism spectrum disorder (ASD). Social robots are based on cognitive-affective models, which allow them to communicate with people following social behaviors and
rules. However, interactions between a child and a robot may change or be different compared to
those with an adult or when the child has an emotional deficit. In this study, we systematically
reviewed studies related to computational models of emotions for children with ASD. We used the
Scopus, WoS, Springer, and IEEE-Xplore databases to answer different research questions related to
the definition, interaction, and design of computational models supported by theoretical psychology
approaches from 1997 to 2021. Our review found 46 articles; not all the studies considered children
or those with ASD. This research was funded by VRIEA-PUCV, grant number 039.358/202
- …