
    A Study of Automatic Metrics for the Evaluation of Natural Language Explanations

    As transparency becomes key for robotics and AI, it will be necessary to evaluate the methods through which transparency is provided, including automatically generated natural language (NL) explanations. Here, we explore parallels between the generation of such explanations and the much-studied field of evaluation of Natural Language Generation (NLG). Specifically, we investigate which of the NLG evaluation measures map well to explanations. We present the ExBAN corpus: a crowd-sourced corpus of NL explanations for Bayesian Networks. We compute correlations between human subjective ratings and automatic NLG measures. We find that embedding-based automatic NLG evaluation methods, such as BERTScore and BLEURT, have a higher correlation with human ratings than word-overlap metrics, such as BLEU and ROUGE. This work has implications for Explainable AI and transparent robotic and autonomous systems. Comment: Accepted at EACL 2021
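
    The kind of analysis the abstract describes — correlating automatic metric scores with human ratings of explanations — can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's code): the example texts and ratings are invented, and it assumes the sacrebleu, bert-score, and scipy packages are available.

    ```python
    # Minimal sketch: correlate automatic NLG metric scores with human ratings.
    # Data below is toy/hypothetical; a real study uses many rated explanations.
    from sacrebleu import sentence_bleu
    from bert_score import score as bert_score
    from scipy.stats import spearmanr

    # Hypothetical system explanations, reference explanations, and
    # crowd-sourced human ratings (e.g., clarity on a 1-5 scale).
    candidates = ["The alarm rings because a burglary is likely.",
                  "Rain makes the grass wet.",
                  "Smoking raises the chance of cancer in this network."]
    references = ["The alarm is likely to ring when a burglary occurs.",
                  "The grass is wet because it rained.",
                  "In this network, smoking increases the probability of cancer."]
    human_ratings = [4.2, 3.8, 4.6]

    # Word-overlap metric: sentence-level BLEU for each explanation.
    bleu_scores = [sentence_bleu(c, [r]).score
                   for c, r in zip(candidates, references)]

    # Embedding-based metric: BERTScore F1 for each explanation.
    _, _, f1 = bert_score(candidates, references, lang="en")
    bert_scores = f1.tolist()

    # Spearman rank correlation of each metric with the human ratings.
    for name, scores in [("BLEU", bleu_scores), ("BERTScore", bert_scores)]:
        rho, p = spearmanr(scores, human_ratings)
        print(f"{name}: rho={rho:.3f} (p={p:.3f})")
    ```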

    Planning and Explanations with a Learned Spatial Model

    This paper reports on a robot controller that learns and applies a cognitively-based spatial model as it travels in challenging, real-world indoor spaces. The model not only describes indoor space, but also supports robust, model-based planning. Together with the spatial model, the controller's reasoning framework allows it to explain and defend its decisions in accessible natural language. The novel contributions of this paper are an enhanced cognitive spatial model that facilitates successful reasoning and planning, and the ability to explain navigation choices for a complex environment. Empirical evidence is provided by simulation of a commercial robot in a large, complex, realistic world.
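
    To make the idea of planning over a learned spatial model and then explaining the chosen route concrete, here is a small, hypothetical sketch. It is not the paper's system: the places, costs, shortest-path planner, and explanation template are all illustrative assumptions.

    ```python
    # Toy sketch (not the paper's controller): a learned spatial model as a
    # weighted graph, a shortest-path planner over it, and a natural-language
    # explanation of the chosen route. All names and costs are invented.
    import heapq

    # Spatial model: place -> {neighboring place: learned traversal cost}.
    spatial_model = {
        "lobby":   {"hallway": 1.0, "atrium": 2.5},
        "hallway": {"lobby": 1.0, "lab": 2.0},
        "atrium":  {"lobby": 2.5, "lab": 1.0},
        "lab":     {"hallway": 2.0, "atrium": 1.0},
    }

    def plan(start, goal):
        """Dijkstra search over the spatial model; returns (path, cost)."""
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, cost
            if node in visited:
                continue
            visited.add(node)
            for nbr, step in spatial_model[node].items():
                if nbr not in visited:
                    heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
        return None, float("inf")

    def explain(path, cost):
        """Render the chosen plan as an accessible natural-language explanation."""
        via = ", then ".join(path[1:-1]) if len(path) > 2 else "a direct route"
        return (f"I chose to go via {via} because that route to {path[-1]} "
                f"has the lowest learned cost ({cost:.1f}).")

    path, cost = plan("lobby", "lab")
    print(" -> ".join(path))
    print(explain(path, cost))
    ```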