1,920 research outputs found

    Trusted Artificial Intelligence in Manufacturing

    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety, and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact with each other and with humans. To guarantee the safe and reliable operation of AI systems on the shopfloor, many challenges must be addressed in the scope of complex, heterogeneous, dynamic, and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency, and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the challenges listed above, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 programme (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques, and systems for trusted, ethical, and secure AI in manufacturing. The chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, and automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open access publication, making it broadly and freely available to the AI and smart manufacturing communities.

    Explainable agents adapt to human behaviour

    When integrating artificial agents into physical or digital environments shared with humans, agents are often equipped with opaque Machine Learning methods to enable adapting their behaviour to dynamic human needs and environments. This results in agents that are themselves opaque and therefore hard to explain. In previous work, we showed that we can reduce an opaque agent into an explainable Policy Graph (PG) which works accurately in multi-agent environments. Policy Graphs are based on a discretisation of the world into propositional logic to identify states, and the choice of which discretiser to apply is key to the performance of the reduced agent. In this work, we explore this further by 1) reducing a single agent into an explainable PG, and 2) enforcing collaboration between this agent and an agent trained from human behaviour. The human agent is computed using GAIL from a series of human-played episodes and is kept unchanged. By comparing the reward obtained by the agent and by its PG, we show that an opaque agent created and trained to collaborate with the human agent can be reduced to an explainable, non-opaque PG, so long as predicates regarding collaboration are included in the state representation. Code is available at https://github.com/HPAI-BSC/explainable-agents-with-humans. This work has been partially supported by EU Horizon 2020 Project StairwAI (grant agreement No. 101017142).
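
    To make the reduction concrete, the following is a minimal, hypothetical sketch of the PG extraction step described above: discretising observations into propositional predicates (including one about collaboration) and recording per-state action frequencies. The environment interface, the predicate names, and opaque_policy are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

    # Minimal sketch of reducing an opaque policy to a Policy Graph (PG),
    # assuming an old-style Gym environment and a trained opaque_policy(obs)
    # function. Predicates and observation keys are hypothetical.
    from collections import Counter, defaultdict

    def discretise(obs):
        """Map a raw observation to a tuple of propositional predicates."""
        return (
            f"agent_near_goal={obs['dist_to_goal'] < 1.0}",
            f"human_nearby={obs['dist_to_human'] < 2.0}",  # collaboration predicate
            f"carrying_item={obs['carrying']}",
        )

    def build_policy_graph(env, opaque_policy, episodes=500):
        """Roll out the opaque agent and count actions per discrete state."""
        counts = defaultdict(Counter)
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                action = opaque_policy(obs)
                counts[discretise(obs)][action] += 1
                obs, _, done, _ = env.step(action)
        # The PG acts, in each discrete state, with the most frequent action.
        return {state: ctr.most_common(1)[0][0] for state, ctr in counts.items()}

    def pg_policy(pg, obs, default_action=0):
        """Act from the explainable PG; fall back to a default in unseen states."""
        return pg.get(discretise(obs), default_action)

    Comparing the average episode reward earned by pg_policy against that of the original opaque agent then quantifies how faithful the explainable reduction is, which is the comparison the abstract reports.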

    How to Enable Sovereign Human-AI Interactions at Work? Concepts of Graspable Testbeds Empowering People to Understand and Competently Use AI-Systems

    Artificial intelligence (AI) strategies are exhibiting a shift of perspective towards a more human-centric view. New conceptualizations of AI literacy (AIL) are being presented, summarizing the competencies human users need to interact successfully with AI-based systems. However, these conceptualizations lack practical relevance. In view of the rapid pace of technological development, this contribution addresses the urgent need to bridge the gap between theoretical concepts of AIL and the practical requirements of working environments. It transfers current conceptualizations and new principles of a more human-centered perspective on AI into professional working environments. From a psychological perspective, the project focuses on emotional-motivational, eudaimonic, and social aspects. Methodologically, the project develops AI testbeds in virtual reality to realize literally graspable interactions with AI-based technologies in the actual work environment. Overall, the project aims to increase both the competencies and the willingness to successfully master the challenges of the digitalized world of work.

    Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research

    The final search for the Systematic Literature Review (SLR) was conducted on 15 July 2022. Initially, we extracted 1707 journal and conference articles from the Scopus and Web of Science databases. After applying inclusion and exclusion criteria, 58 articles were selected for the SLR. The findings show four dimensions that shape an AI explanation: format (how the explanation is represented), completeness (the explanation should contain all required information, including supplementary information), accuracy (information regarding the accuracy of the explanation), and currency (the explanation should contain recent information). Moreover, beyond the automatically presented explanation, users can request additional information if needed. We also found five dimensions of XAI effects: trust, transparency, understandability, usability, and fairness. In addition, we drew on the selected articles to problematize future research agendas, framed as research questions with possible research paths. Consequently, a comprehensive framework of XAI and its possible effects on user behavior has been developed.

    Artificial intelligence, blockchain, and extended reality: emerging digital technologies to turn the tide on illegal logging and illegal wood trade

    Illegal logging, which often results in forest degradation and sometimes in deforestation, remains ubiquitous in many places around the globe. Managing illegal logging and illegal wood trade constitutes a global priority over the next few decades. Scientific, technological, and research communities are committed to responding rapidly, evaluating the opportunities to capitalize on emerging digital technologies to address this formidable challenge. Here we investigate the innovative potential of these emerging digital technologies for tackling illegal-logging-related challenges. We propose a novel system, WoodchAInX, combining explainable artificial intelligence (X-AI), next-generation blockchain, and extended reality (XR). Our findings on the most effective means of leveraging each technology's potential, and on the convergence of the three technologies, suggest a vast promise for digital technology in this field. Yet we argue that, overall, digital transformations will not deliver fundamental, responsible, and sustainable benefits without a revolutionary realignment.

    Setting the relationship between human-centered approaches and users' digital well-being: A review

    With the advancement of technology and the advent of the new digital era, society is increasingly exposed to novel technologies, digital platforms, and smart devices. This reality opens a wide range of questions about the benefits and challenges of technology and its impact on humans. In this context, the present study investigates the relationship between human-centered approaches and their application to achieve users' digital well-being, and explores whether the marketing and business industries sufficiently consider human-centered approaches when implementing practices that care for users' digital well-being. To this end, we conduct a systematic literature review. The exploratory results confirm that implementing human-centered approaches makes it possible to achieve greater user well-being in the marketing and management sector. Additionally, we identify and discuss seven further relevant areas. Our review concludes with a discussion of the theoretical and practical implications of our findings for further research on the use of human-centric and digital well-being concepts.

    XAIR: A Framework of Explainable AI in Augmented Reality

    Explainable AI (XAI) has established itself as an important component of AI-driven interactive systems. With Augmented Reality (AR) becoming more integrated into daily life, the role of XAI becomes essential in AR as well, because end-users will frequently interact with intelligent services. However, it is unclear how to design effective XAI experiences for AR. We propose XAIR, a design framework that addresses "when", "what", and "how" to provide explanations of AI output in AR. The framework is based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing the preferences of 500+ end-users for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR. XAIR's utility and effectiveness were verified via a study with 10 designers and another study with 12 end-users. XAIR can provide guidelines for designers, inspiring them to identify new design opportunities and achieve effective XAI designs in AR. Published in the Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.

    Human-centric explanation facilities
