2 research outputs found

    Optimizing Deep Q-Learning Experience Replay with SHAP Explanations: Exploring Minimum Experience Replay Buffer Sizes in Reinforcement Learning

    Explainable Reinforcement Learning (xRL) faces challenges in debugging and interpreting Deep Reinforcement Learning (DRL) models. A lack of understanding of internal components like Experience Replay, which samples and stores data from the environment, risks wasting resources. This paper presents an xRL-based Deep Q-Learning (DQL) system using SHAP (SHapley Additive exPlanations) to explain input feature contributions. Data is sampled from Experience Replay, creating SHAP heatmaps that show how it influences the actions of the neural-network Q-value approximator. The xRL-based system aids in determining the smallest Experience Replay size for 23 simulations of varying complexities. It contributes an xRL optimization method, alongside traditional approaches, for tuning the Experience Replay size hyperparameter. This visual approach achieves over a 40% reduction in Experience Replay size for 18 of the 23 tested simulations, smaller than the commonly used sizes of 1 million transitions or 90% of total environment transitions.
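    The pipeline the abstract describes — sample transitions from Experience Replay, then attribute the Q-network's output to its input features with SHAP — can be sketched with a minimal permutation-sampling Shapley estimator. This is a hedged illustration only: the function names, the toy linear stand-in for a trained Q-network, and the baseline state are all hypothetical; the paper itself uses the SHAP library against a real DQN.

```python
import random

# Hypothetical toy Q-value approximator for a single action.
# A linear function stands in for a trained neural network so the
# exact Shapley values are known (they equal w_i * (s_i - b_i)).
def q_value(state):
    weights = [2.0, -1.0, 0.5, 0.0]
    return sum(w * s for w, s in zip(weights, state))

def shapley_contributions(model, state, baseline, n_samples=2000, seed=0):
    """Estimate per-feature Shapley values by sampling feature orderings.

    Each feature's contribution is the average change in model output
    when that feature is flipped from its baseline value to its actual
    value, averaged over random orderings of the remaining features.
    """
    rng = random.Random(seed)
    n = len(state)
    contrib = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        current = list(baseline)
        prev = model(current)
        for i in order:
            current[i] = state[i]      # reveal feature i
            now = model(current)
            contrib[i] += now - prev   # marginal contribution of i
            prev = now
    return [c / n_samples for c in contrib]

# A "transition state" as it might be drawn from Experience Replay,
# explained against an all-zeros baseline (both values illustrative).
state = [1.0, 1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0, 0.0]
phi = shapley_contributions(q_value, state, baseline)

# Shapley additivity: contributions sum to Q(state) - Q(baseline).
assert abs(sum(phi) - (q_value(state) - q_value(baseline))) < 1e-9
```

    Repeating this over a batch of replay-buffer samples and stacking the per-feature vectors row by row yields exactly the kind of SHAP heatmap the abstract refers to.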

    Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions

    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black-box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. Our goal is to put forward a comprehensive proposal for advancing XAI. To achieve this goal, we present a manifesto of 27 open problems grouped into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders.