2 research outputs found

    Enabling Socially Competent navigation through incorporating HRI

    Over the last few years, social robots have been deployed in public environments, making evident the need for human-aware navigation capabilities. In this regard, the robotics community has made efforts to include proxemics and social conventions within navigation approaches. Nevertheless, few works have tackled the problem of treating humans as interactive agents when they block the robot's motion trajectory: current state-of-the-art navigation planners will either propose an alternative path or freeze the motion until the path is free. We present the first prototype of a framework designed to enhance the social competence of robots navigating in indoor environments. The implementation builds on open-source navigation and object-detection software: the Robot Operating System (ROS) navigation stack for navigation, and OpenCV with Caffe deep learning models and the MobileNet Single Shot Detector (SSD) for object detection. Comment: HRI '19: ACM Workshop on Social Human-Robot Interaction of Human-care Service Robots, March 11-14, 2019, Daegu, Korea
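    The abstract only names the components, but the described detection stage (OpenCV's DNN module running a Caffe MobileNet-SSD to spot people blocking the path planned by the ROS navigation stack) can be sketched roughly as below. The model file names, confidence threshold, and class index are assumptions based on the commonly distributed MobileNet-SSD Caffe release, not details taken from the paper.

    import cv2
    import numpy as np

    # Assumed file names for the publicly available MobileNet-SSD Caffe release.
    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                                   "MobileNetSSD_deploy.caffemodel")

    PERSON_CLASS_ID = 15   # "person" in the 20-class PASCAL VOC label map used by MobileNet-SSD
    CONF_THRESHOLD = 0.5   # illustrative threshold, not from the paper

    def detect_blocking_people(frame):
        """Return bounding boxes of people detected in a camera frame."""
        h, w = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()   # shape: (1, 1, N, 7)

        people = []
        for i in range(detections.shape[2]):
            class_id = int(detections[0, 0, i, 1])
            confidence = float(detections[0, 0, i, 2])
            if class_id == PERSON_CLASS_ID and confidence > CONF_THRESHOLD:
                box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
                people.append(box.astype(int))
        return people

    if __name__ == "__main__":
        frame = cv2.imread("camera_frame.png")   # placeholder input image
        for box in detect_blocking_people(frame):
            print("person detected at", box)

    In the paper's framework such detections would then be used to label the obstacle as an interactive agent rather than letting the planner replan or freeze.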

    Target Reaching Behaviour for Unfreezing the Robot in a Semi-Static and Crowded Environment

    Robot navigation in semi-static and crowded human environments can lead to the freezing problem, where the robot cannot move because humans are standing on its path and no alternative path is available. Classical approaches to robot navigation do not provide a solution to this problem. In such situations, the robot could interact with the humans to clear its path instead of treating them as inanimate obstacles. In this work, we propose a behavior for a wheeled humanoid robot that complies with social norms for clearing its path when the robot is frozen by the presence of humans. The behavior consists of two modules: 1) a detection module, which uses the YOLOv3 algorithm trained to detect human hands and arms, and 2) a gesture module, which uses a policy trained in simulation with the Proximal Policy Optimization (PPO) algorithm. The two modules are orchestrated using the ROS framework.
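    As a rough illustration of how the two modules might be orchestrated in ROS, the sketch below gates a gesture-policy trigger on the detector's hand/arm output and on the planner reporting a frozen state, using a plain rospy node. All topic names and message layouts here are hypothetical; the paper only states that ROS is used for orchestration.

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import String, Bool

    class GestureOrchestrator(object):
        def __init__(self):
            # Assumed topics: the YOLOv3-based detector publishes the detected
            # body part as a label, and the planner reports whether the base is frozen.
            rospy.Subscriber("/detector/target_part", String, self.on_detection)
            rospy.Subscriber("/planner/frozen", Bool, self.on_frozen)
            self.gesture_pub = rospy.Publisher("/gesture_policy/trigger", String, queue_size=1)
            self.frozen = False

        def on_frozen(self, msg):
            self.frozen = msg.data

        def on_detection(self, msg):
            # Trigger the PPO-trained gesture policy only when the robot is frozen
            # and a human hand or arm has been detected in front of it.
            if self.frozen and msg.data in ("hand", "arm"):
                self.gesture_pub.publish(msg.data)

    if __name__ == "__main__":
        rospy.init_node("gesture_orchestrator")
        GestureOrchestrator()
        rospy.spin()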