
    Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps

    With advances in reinforcement learning (RL), agents are now being developed in high-stakes application domains such as healthcare and transportation. Explaining the behavior of these agents is challenging, as the environments in which they act have large state spaces, and their decision-making can be affected by delayed rewards, making it difficult to analyze their behavior. To address this problem, several approaches have been developed. Some approaches attempt to convey the global behavior of the agent, describing the actions it takes in different states. Other approaches devised local explanations which provide information regarding the agent's decision-making in a particular state. In this paper, we combine global and local explanation methods, and evaluate their joint and separate contributions, providing (to the best of our knowledge) the first user study of combined local and global explanations for RL agents. Specifically, we augment strategy summaries, which extract important trajectories of states from simulations of the agent, with saliency maps, which show what information the agent attends to. Our results show that the choice of which states to include in the summary (global information) strongly affects people's understanding of agents: participants shown summaries that included important states significantly outperformed participants who were presented with agent behavior in a randomly chosen set of world-states. We find mixed results with respect to augmenting demonstrations with saliency maps (local information), as the addition of saliency maps did not significantly improve performance in most cases. However, we do find some evidence that saliency maps can help users better understand what information the agent relies on in its decision making, suggesting avenues for future work that can further improve explanations of RL agents.
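
    As a concrete illustration of the global component, the sketch below selects "important" states from a simulated trajectory using the gap between the best and worst action values, a HIGHLIGHTS-style importance heuristic in the spirit of the strategy summaries described above; the tabular Q-function, the helper names, and the toy numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def state_importance(q_values):
    """Importance of a state: the gap between its best and worst action values.

    A state matters for the summary when the choice of action has large
    consequences; q_values is a 1-D array of Q(s, a) over actions.
    """
    return np.max(q_values) - np.min(q_values)

def summarize(trajectory, q_table, k=5):
    """Pick the k most important states visited during a rollout.

    trajectory: list of state indices from a simulation of the agent.
    q_table:    array of shape (n_states, n_actions).
    Returns the selected states in the order they were visited.
    """
    scored = [(state_importance(q_table[s]), i, s)
              for i, s in enumerate(trajectory)]
    top = sorted(scored, reverse=True)[:k]          # highest importance first
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]

# Toy usage: 4 states, 2 actions; only state 2 is a high-stakes decision.
q = np.array([[0.10, 0.10],
              [0.20, 0.25],
              [1.00, -1.00],   # large action-value gap -> important
              [0.00, 0.05]])
print(summarize([0, 1, 2, 3, 2, 1], q, k=2))  # -> [2, 2]: both visits to the critical state
```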

    Trends and Challenges of Text-to-Image Generation: Sustainability Perspective

    Text-to-image generation is a rapidly growing field that aims to generate images from textual descriptions. This paper provides a comprehensive overview of the latest trends and developments, highlighting their importance and relevance in various domains, such as art, photography, marketing, and learning. The paper describes and compares various text-to-image models and discusses the challenges and limitations of this field. The findings of this paper demonstrate that recent advancements in deep learning and computer vision have led to significant progress in text-to-image models, enabling them to generate high-quality images from textual descriptions. However, challenges such as the legality and ethical implications of the products generated by these models still need to be addressed. This paper provides insights into these challenges and suggests future directions for the field. In addition, this study emphasises the need for a sustainability-oriented approach in the text-to-image domain. As text-to-image models advance, it is crucial to conscientiously assess their impact on ecological, cultural, and societal dimensions. Prioritising the ethical use of these models, while remaining mindful of their carbon footprint and potential effects on human creativity, is crucial for sustainable progress.

    Towards temporally uncertain explainable AI planning

    Automated planning is able to handle increasingly complex applications, but can produce unsatisfactory results when the goal and metric provided in its model do not match the actual expectations and preferences of those using the tool. This can be ameliorated by including methods for explainable planning (XAIP), to reveal the reasons for the automated planner's decisions and to provide more in-depth interaction with the planner. In this paper we describe at a high level two recent pieces of work in XAIP. First, plan exploration through model restriction, in which contrastive questions are used to build a tree of solutions to a planning problem. Through a dialogue with the system, the user better understands the underlying problem and the choices made by the automated planner. Second, strong controllability analysis of probabilistic temporal networks through solving a joint chance-constrained optimisation problem. The result of the analysis is a Pareto-optimal front that illustrates the trade-offs between cost and risk for a given plan. We also present a short discussion on the limitations of these methods and how they might usefully be combined.
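
    To make the second contribution concrete, the sketch below filters a set of candidate solutions, each carrying an assumed (cost, risk) pair, down to the Pareto-optimal front such an analysis would present; the `pareto_front` helper and the numbers are hypothetical, standing in for the output of the chance-constrained optimisation rather than reproducing it.

```python
def pareto_front(points):
    """Keep the non-dominated (cost, risk) pairs: no other point is at
    least as good on both axes and strictly better on one.

    points: iterable of (cost, risk) tuples for candidate schedules.
    Returns the Pareto-optimal subset, sorted by ascending cost.
    """
    pts = sorted(set(points))            # ascending cost, then ascending risk
    front, best_risk = [], float("inf")
    for cost, risk in pts:
        if risk < best_risk:             # strictly less risky than every cheaper point
            front.append((cost, risk))
            best_risk = risk
    return front

# Hypothetical candidates: paying more (e.g. extra slack or resources)
# tends to buy a lower probability of violating the temporal constraints.
candidates = [(10, 0.30), (12, 0.12), (12, 0.20), (15, 0.05), (18, 0.05)]
print(pareto_front(candidates))  # -> [(10, 0.3), (12, 0.12), (15, 0.05)]
```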

    Building Trust in Human-Machine Partnerships

    Artificial Intelligence (AI) is bringing radical change to our lives. Fostering trust in this technology requires the technology to be transparent, and one route to transparency is to make the decisions reached by AIs explainable to the humans who interact with them. This paper lays out an exploratory approach to developing explainability and trust, describing the specific technologies we are adopting, the social and organizational context in which we are working, and some of the challenges we are addressing.