
    Who needs XAI in the Energy Sector? A Framework to Upgrade Black Box Explainability

    Artificial Intelligence (AI)-based methods in the energy sector challenge companies, organizations, and societies. Organizational issues include traceability, certifiability, explainability, responsibility, and efficiency. Societal challenges include ethical norms, bias, discrimination, privacy, and information security. Explainable Artificial Intelligence (XAI) can address these issues in various application areas of the energy sector, e.g., power generation forecasting, load management, and network security operations. Through Design Science Research (DSR), we derive Key Topics (KTs) and Design Requirements (DRs) and develop Design Principles (DPs) for efficient XAI applications. Using text mining and topic modeling, we analyze 179 scientific articles to identify 8 KTs for XAI implementation. Based on the KTs, we derive 15 DRs and develop 18 DPs. We then discuss and evaluate our results and findings through expert surveys. We develop a Three-Forces Model as a framework for implementing efficient XAI solutions, and we provide recommendations and an agenda for further research.
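    A minimal sketch of the text-mining step described above, assuming LDA topic modeling via scikit-learn; the tiny corpus, the preprocessing, and the library choice are illustrative assumptions, since the abstract does not name the exact algorithm (it reports 8 KTs derived from 179 articles, so 8 topics are requested here):

        # Hedged sketch: topic modeling over article abstracts with LDA
        # (scikit-learn). Corpus, preprocessing, and algorithm choice are
        # illustrative assumptions, not the paper's exact pipeline.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        abstracts = [
            "explainable artificial intelligence for power generation forecasting",
            "load management and network security operations in smart grids",
            "privacy, bias, and information security of AI in the energy sector",
            # ... the paper analyzes 179 scientific articles
        ]

        # Bag-of-words representation; stop-word removal keeps topics readable.
        vectorizer = CountVectorizer(stop_words="english")
        doc_term = vectorizer.fit_transform(abstracts)

        # Fit LDA; the paper derives 8 Key Topics, so request 8 components.
        lda = LatentDirichletAllocation(n_components=8, random_state=0)
        lda.fit(doc_term)

        # Top words per topic serve as candidate Key Topic labels.
        terms = vectorizer.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = [terms[i] for i in weights.argsort()[-5:][::-1]]
            print(f"Topic {k}: {', '.join(top)}")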

    Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions

    In recent years, AI safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice drawing on concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms, which we refer to for simplicity as artificial stupidity (AS) and eternal creativity (EC). While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling the relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.

    Contributions to energy informatics, data protection, AI-driven cybersecurity, and explainable AI

    This cumulative dissertation comprises eleven papers dealing with energy informatics, privacy, artificial intelligence-enabled cybersecurity, explainable artificial intelligence, ethical artificial intelligence, and decision support. In addressing real-world challenges, the dissertation provides practical guidance, reduces complexity, presents insights from empirical data, and supports decision-making. The interdisciplinary research methods include morphological analysis, taxonomies, decision trees, and literature reviews. The resulting design artifacts, such as design principles, critical success factors, taxonomies, archetypes, and decision trees, can benefit practitioners, including energy utilities, data-intensive artificial intelligence service providers, cybersecurity consultants, managers, policymakers, regulators, decision-makers, and end users, enabling them to make informed and efficient decisions.

    Adversarial attack and defense in reinforcement learning-from AI security view

    Reinforcement learning (RL) is a core technology for modern artificial intelligence and has become a workhorse for AI applications ranging from Atari games to Connected and Automated Vehicle Systems (CAV). A reliable RL system is therefore the foundation of security-critical AI applications, and its robustness has attracted more attention than ever. However, recent studies have shown that adversarial attacks can also be effective against neural network policies in the context of reinforcement learning, which has inspired innovative research in this direction. Hence, in this paper, we make the first attempt to conduct a comprehensive survey of adversarial attacks in reinforcement learning from the perspective of AI security. We also briefly introduce the most representative defense technologies against existing adversarial attacks.
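    A minimal sketch of the attack family this survey covers, assuming an FGSM-style perturbation of a policy network's input, implemented with PyTorch; the toy policy, the state shape, and epsilon are illustrative placeholders rather than any specific paper's setup:

        # Hedged sketch: FGSM-style adversarial perturbation against an RL
        # policy network (PyTorch). Policy, state, and epsilon are toy
        # placeholders, not a setup from the surveyed literature.
        import torch
        import torch.nn as nn

        policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

        state = torch.randn(1, 4, requires_grad=True)  # observed state
        logits = policy(state)
        action = logits.argmax(dim=1)                  # agent's clean action

        # Gradient of the chosen action's log-probability w.r.t. the state.
        log_prob = torch.log_softmax(logits, dim=1)[0, action]
        log_prob.backward()

        # FGSM: step against the gradient so the clean action becomes
        # less likely under a small input perturbation.
        epsilon = 0.05
        adv_state = state - epsilon * state.grad.sign()

        with torch.no_grad():
            print("clean action:    ", action.item())
            print("perturbed action:", policy(adv_state).argmax(dim=1).item())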