
    Explaining Deep Learning Models Through Rule-Based Approximation and Visualization

    This paper describes a novel approach to the problem of developing explainable machine learning models. We consider a Deep Reinforcement Learning (DRL) model representing a highway path planning policy for autonomous highway driving. The model constitutes a mapping from the continuous multidimensional state space characterizing vehicle positions and velocities to a discrete set of actions in the longitudinal and lateral directions. It is obtained by applying a customized version of the Double Deep Q-Network (DDQN) learning algorithm. The main idea is to approximate the DRL model with a set of IF…THEN rules that provide an alternative interpretable model, which is further enhanced by visualizing the rules. This concept is rationalized by the universal approximation properties of rule-based models with fuzzy predicates. The proposed approach includes a learning engine composed of 0-order fuzzy rules, which generalize locally around the prototypes using multivariate function models. Adjacent prototypes (in the data space) that correspond to the same action are further grouped and merged into so-called "MegaClouds", significantly reducing the number of fuzzy rules. The input selection method is based on ranking the density of the individual inputs. Experimental results show that the specific DRL agent can be interpreted by approximating it with families of rules of different granularity. The method is computationally efficient and can potentially be extended to address the explainability of the broader class of fully connected deep neural network models.
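    A minimal sketch of the prototype-based, 0-order rule idea described above (hypothetical names and a simple nearest-prototype inference; the paper's MegaCloud merging and density-based input ranking are not reproduced here):

```python
# Hypothetical sketch: approximate a trained DRL policy with 0-order
# prototype rules of the form "IF state is near prototype p THEN action a".
import numpy as np
from sklearn.cluster import KMeans

def extract_rules(states, actions, prototypes_per_action=5):
    """Cluster the states visited by the trained agent, separately per
    action, and keep the cluster centres as rule prototypes."""
    rules = []  # list of (prototype_vector, action)
    for a in np.unique(actions):
        s_a = states[actions == a]
        k = min(prototypes_per_action, len(s_a))
        km = KMeans(n_clusters=k, n_init=10).fit(s_a)
        rules.extend((c, int(a)) for c in km.cluster_centers_)
    return rules

def rule_policy(state, rules):
    """0-order inference: fire the rule whose prototype is closest
    (highest local membership) and return its action."""
    dists = [np.linalg.norm(state - p) for p, _ in rules]
    return rules[int(np.argmin(dists))][1]

# Usage (states/actions logged while running the trained DDQN agent):
# rules = extract_rules(logged_states, logged_actions)
# a = rule_policy(current_state, rules)
```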

    Driver Attention based on Deep Learning for a Smart Vehicle to Driver (V2D) Interaction

    Driver attention is an interesting topic within the intelligent vehicle field for tasks ranging from driver monitoring to autonomous driving. This thesis addresses the topic using deep learning algorithms to achieve intelligent interaction between the vehicle and the driver. Driver monitoring requires an accurate estimation of the driver's gaze in a 3D environment to determine their attention state. This thesis addresses the problem using a single camera, so that the system can be used in real applications without high cost and without disturbing the driver. The developed tool has been evaluated on a public dataset (DADA2000), obtaining results similar to those obtained with an expensive eye tracker that cannot be used in a real vehicle. It has also been used in an application that evaluates driver attention during the transition from autonomous to manual mode in simulation, proposing a novel metric to assess the driver's situational awareness based on their attention to the different objects in the scene. In addition, a driver attention estimation algorithm has been proposed using recent deep learning techniques, namely conditional Generative Adversarial Networks (cGANs) and Multi-Head Self-Attention, which makes it possible to emphasize certain areas of the scene as a human would. The model has been trained and validated on two public datasets (BDD-A and DADA2000), outperforming other state-of-the-art proposals and achieving inference times that allow its use in real applications. Finally, a model has been developed that leverages our driver attention algorithm to understand a traffic scene, producing the decision made by the vehicle and its explanation from images taken by a camera mounted at the front of the vehicle. It has been trained on a public dataset (BDD-OIA), proposing a model that understands the temporal sequence of events using a Transformer Encoder and outperforming other state-of-the-art proposals. In addition to its validation on the dataset, it has been implemented in an application that interacts with the driver, advising on the decisions to be made and their explanations across different use cases in a simulated environment. This thesis explores and demonstrates the benefits of driver attention for the intelligent vehicle field, achieving vehicle-to-driver interaction through the latest deep learning techniques.
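    For illustration only, a small sketch of the multi-head self-attention ingredient mentioned above, applied to CNN feature-map positions to produce a driver-attention (saliency) map; this is a hypothetical stand-in, not the thesis's cGAN architecture:

```python
# Hypothetical sketch: self-attention over feature-map tokens -> saliency map.
import torch
import torch.nn as nn

class DriverAttentionSketch(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.backbone = nn.Sequential(          # tiny stand-in CNN encoder
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.head = nn.Conv2d(channels, 1, 1)   # 1-channel attention map

    def forward(self, img):                     # img: (B, 3, H, W)
        f = self.backbone(img)                  # (B, C, h, w)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)   # (B, h*w, C) spatial tokens
        attended, _ = self.attn(tokens, tokens, tokens)
        f = attended.transpose(1, 2).reshape(b, c, h, w)
        return torch.sigmoid(self.head(f))      # per-pixel attention in [0, 1]

# sal = DriverAttentionSketch()(torch.rand(1, 3, 224, 224))  # -> (1, 1, 56, 56)
```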

    A Survey of Explainable AI and Proposal for a Discipline of Explanation Engineering

    In this survey paper, we take a deep dive into the field of Explainable Artificial Intelligence (XAI). After introducing the scope of the paper, we begin by discussing what an "explanation" really is. We then discuss some of the existing approaches to XAI and build a taxonomy of the most popular methods. Next, we look at applications of these and other XAI techniques in four primary domains: finance, autonomous driving, healthcare, and manufacturing. We end by introducing a promising discipline, "Explanation Engineering," which encompasses a systematic approach to designing explainability into AI systems.

    Acquiring Qualitative Explainable Graphs for Automated Driving Scene Interpretation

    The future of automated driving (AD) is rooted in the development of robust, fair and explainable artificial intelligence methods. Upon request, automated vehicles must be able to explain their decisions to the driver and the car passengers, to pedestrians and other vulnerable road users, and potentially to external auditors in case of accidents. However, most explainable methods today still rely on quantitative analysis of AD scene representations captured by multiple sensors. This paper proposes a novel representation of AD scenes, called the Qualitative eXplainable Graph (QXG), dedicated to qualitative spatiotemporal reasoning over long-term scenes. The construction of this graph exploits the recent Qualitative Constraint Acquisition paradigm. Our experimental results on NuScenes, an open real-world multi-modal dataset, show that the qualitative explainable graph of an AD scene composed of 40 frames can be computed in real time and is light in storage, which makes it a potentially interesting tool for improved and more trustworthy perception and control processes in AD.
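    A toy sketch of the graph-building idea (hypothetical left/right and ahead/behind relations; the paper's Qualitative Constraint Acquisition procedure is not reproduced): nodes are tracked objects and each edge accumulates one qualitative relation per frame.

```python
# Hypothetical sketch: build a qualitative spatiotemporal graph from object tracks.
from collections import defaultdict

def qualitative_relation(a, b):
    """a, b: (x, y) positions in the ego frame; x is lateral, y is longitudinal."""
    lateral = "left_of" if a[0] < b[0] else "right_of"
    longitudinal = "behind" if a[1] < b[1] else "ahead_of"
    return (lateral, longitudinal)

def build_qxg(frames):
    """frames: list of {object_id: (x, y)} dicts, one per time step.
    Returns {(obj_i, obj_j): [relation per frame]}."""
    graph = defaultdict(list)
    for positions in frames:
        ids = sorted(positions)
        for i, oi in enumerate(ids):
            for oj in ids[i + 1:]:
                graph[(oi, oj)].append(
                    qualitative_relation(positions[oi], positions[oj]))
    return graph

# qxg = build_qxg(frames_from_nuscenes_tracks)  # e.g. a 40-frame scene
```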

    Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations

    Reinforcement Learning (RL) has shown promise in optimizing complex control and decision-making processes, but Deep Reinforcement Learning (DRL) lacks interpretability, limiting its adoption in regulated sectors like manufacturing, finance, and healthcare. Difficulties arise from DRL's opaque decision-making, hindering efficiency and resource use, and the issue is amplified with every advancement. While many seek to move from Experience Replay to A3C, the latter demands more resources. Despite efforts to improve Experience Replay selection strategies, there is a tendency to keep capacity high. This dissertation investigates training a Deep Convolutional Q-learning agent across 20 Atari games, a control task, a physics task, and a simulated addition task, while intentionally reducing Experience Replay capacity from 1×10^6 to 5×10^2. It was found that a reduction of more than 40% in Experience Replay size is possible for 18 of the 23 simulations tested, offering a practical path to resource-efficient DRL. To illuminate agent decisions and align them with game mechanics, a novel method is employed: visualizing Experience Replay via the Deep SHAP Explainer. This approach fosters comprehension and transparent, interpretable explanations, though any capacity reduction must be cautious to avoid overfitting. The study demonstrates the feasibility of reducing Experience Replay and advocates for transparent, interpretable decision explanations using the Deep SHAP Explainer to promote resource efficiency in Experience Replay.
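    A minimal sketch of attributing a Q-network's outputs to replayed state features with the shap library's DeepExplainer; the network, feature count, and batch sizes below are illustrative assumptions, not the dissertation's exact setup.

```python
# Hypothetical sketch: Deep SHAP attributions for states sampled from a
# (reduced) Experience Replay buffer.
import torch
import torch.nn as nn
import shap

q_net = nn.Sequential(            # stand-in Q-network: 8 state features -> 4 actions
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 4),
)

# States drawn from the replay buffer: a background set for the explainer's
# expectations and a smaller batch of transitions to explain.
background = torch.rand(100, 8)
to_explain = torch.rand(10, 8)

explainer = shap.DeepExplainer(q_net, background)
shap_values = explainer.shap_values(to_explain)  # one attribution array per action

# shap_values[a][i, j]: contribution of feature j of replayed state i to the
# Q-value of action a, which can then be visualized alongside the game frames.
```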

    Cell fault management using machine learning techniques

    This paper surveys the literature relating to the application of machine learning to fault management in cellular networks from an operational perspective. We summarise the main issues as 5G networks evolve, and their implications for fault management. We describe the relevant machine learning techniques through to deep learning, and survey the progress which has been made in their application, based on the building blocks of a typical fault management system. We review recent work to develop the abilities of deep learning systems to explain and justify their recommendations to network operators. We discuss forthcoming changes in network architecture which are likely to impact fault management and offer a vision of how fault management systems can exploit deep learning in the future. We identify a series of research topics for further study in order to achieve this.