5 research outputs found

    Towards A Multidimensional Perspective on Shared Autonomy

    Schilling M, Kopp S, Wachsmuth S, et al. Towards A Multidimensional Perspective on Shared Autonomy. In: Proceedings of the AAAI Fall Symposium Series, Stanford, CA, USA, 2016.

    Explainable robotics in human-robot interactions

    This paper introduces a new research area called Explainable Robotics, which studies explainability in the context of human-robot interactions. The focus is on developing novel computational models, methods, and algorithms for generating explanations that allow robots to operate at different levels of autonomy and to communicate with humans in a trustworthy and human-friendly way. Individuals may need explanations during human-robot interactions for different reasons, which depend heavily on the context and the human users involved. The research challenge is therefore to identify what needs to be explained at each level of autonomy and how it should be explained to different individuals. The paper presents the case for Explainable Robotics using a scenario of technology-assisted medical care for elderly patients with dementia, and it highlights the main research challenges of the field. The first challenge is the need for new algorithms for generating explanations that use past experiences, analogies, and real-time data to adapt to particular audiences and purposes. The second is developing novel computational models of situational and learned trust, along with new algorithms for real-time sensing of trust. Finally, more research is needed to understand whether trust can be used as a control variable in Explainable Robotics.

    Zusammenwirken von natürlicher und künstlicher Intelligenz (Interplay of natural and artificial intelligence)
