    Autonomous Generation of Robust and Focused Explanations for Robot Policies

    Transparency of robot behaviors increases the efficiency and quality of interactions with humans. To increase the transparency of robot policies, we propose a method for generating robust and focused explanations that express why a robot chose a particular action. The proposed method examines the policy with respect to the state space in which the action was chosen and describes it in natural language. The method generates focused explanations by leaving out irrelevant state dimensions, and it avoids explanations that are sensitive to small perturbations or that rely on ambiguous natural language concepts. Furthermore, the method is agnostic to the policy representation and only requires that the policy can be evaluated at different samples of the state space. We conducted a user study with 18 participants to investigate the usability of the proposed method compared to a comprehensive method that generates explanations using all dimensions. We observed how focused explanations helped the subjects more reliably detect the irrelevant dimensions of the explained system, and how preferences regarding explanation styles and their expected characteristics differ greatly among the participants.

    Peer reviewed
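
    Since the method only requires evaluating the policy at sampled states, the core idea of dropping irrelevant dimensions can be illustrated with a minimal sketch. This is not the authors' implementation: the perturbation-based relevance probe, the `toy_policy`, and all names here are hypothetical assumptions chosen to mirror the abstract's description.

    ```python
    import random

    def relevant_dimensions(policy, state, n_samples=200, scale=0.5, seed=0):
        """Estimate which state dimensions influence the chosen action by
        perturbing one dimension at a time and checking whether the action
        changes (a simple black-box sensitivity probe; the policy is only
        ever evaluated, never inspected)."""
        rng = random.Random(seed)
        base_action = policy(state)
        relevant = []
        for dim in range(len(state)):
            for _ in range(n_samples):
                perturbed = list(state)
                perturbed[dim] += rng.uniform(-scale, scale)
                if policy(perturbed) != base_action:
                    relevant.append(dim)
                    break
        return relevant

    def focused_explanation(policy, state, dim_names):
        """Describe the chosen action using only the relevant dimensions,
        leaving irrelevant ones out of the natural-language explanation."""
        dims = relevant_dimensions(policy, state)
        action = policy(state)
        parts = [f"{dim_names[d]} = {state[d]:.2f}" for d in dims]
        return f"Action '{action}' was chosen because " + " and ".join(parts)

    # Toy policy: brake when the obstacle is close; speed (dim 1) is irrelevant.
    def toy_policy(s):
        return "brake" if s[0] < 1.0 else "cruise"

    print(focused_explanation(toy_policy, [0.8, 5.0], ["distance", "speed"]))
    ```

    In this toy example the probe finds that only `distance` affects the action near the queried state, so the explanation omits `speed` entirely, which is the behavior the abstract contrasts against a comprehensive method that cites all dimensions.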