    What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research

    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these stakeholders' desiderata) in a variety of contexts. However, the literature on XAI is vast, spread across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve researchers from the variety of disciplines involved in XAI as a common ground. It emphasizes where there is interdisciplinary potential in the evaluation and development of explainability approaches. Comment: 57 pages, 2 figures, 1 table; to be published in Artificial Intelligence. Markus Langer, Daniel Oster, and Timo Speith share first authorship of this paper.
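    To make the shape of such a conceptual model concrete, here is a rough, purely illustrative sketch in code. All class names, fields, and the naive matching rule below are assumptions of this sketch, not the authors' formalization; in the paper, the link between an approach and a desideratum runs through the stakeholder's understanding.

```python
from dataclasses import dataclass, field

@dataclass
class Desideratum:
    """An interest, goal, expectation, need, or demand regarding a system."""
    name: str                 # e.g. "trust", "verifiability", "recourse"
    satisfied_by: str         # the kind of explanation that would address it

@dataclass
class Stakeholder:
    """A class of people calling for explainability (users, regulators, ...)."""
    role: str
    desiderata: list[Desideratum] = field(default_factory=list)

@dataclass
class ExplainabilityApproach:
    """An XAI method, to be evaluated against stakeholders' desiderata."""
    name: str
    explanation_kind: str     # e.g. "feature attribution", "counterfactual"

def satisfies(approach: ExplainabilityApproach,
              stakeholder: Stakeholder) -> list[Desideratum]:
    """Toy check: which desiderata does an approach plausibly address?
    A naive keyword match stands in for the paper's richer relation."""
    return [d for d in stakeholder.desiderata
            if d.satisfied_by in approach.explanation_kind]

# Example: a regulator asking for counterfactual-style explanations.
regulator = Stakeholder("regulator", [
    Desideratum("recourse", satisfied_by="counterfactual"),
])
cf = ExplainabilityApproach("counterfactual explainer", "counterfactual")
print([d.name for d in satisfies(cf, regulator)])  # ['recourse']
```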

    Software engineering for AI-based systems: A survey

    AI-based systems are software systems with functionality enabled by at least one AI component (e.g., for image recognition, speech recognition, or autonomous driving). AI-based systems are becoming pervasive in society due to advances in AI. However, there is limited synthesized knowledge on Software Engineering (SE) approaches for building, operating, and maintaining AI-based systems. To collect and analyze state-of-the-art knowledge about SE for AI-based systems, we conducted a systematic mapping study. We considered 248 studies published between January 2010 and March 2020. SE for AI-based systems is an emerging research area, in which more than two thirds of the studies have been published since 2018. The most studied properties of AI-based systems are dependability and safety. We identified multiple SE approaches for AI-based systems, which we classified according to the SWEBOK areas. Studies related to software testing and software quality are very prevalent, while areas like software maintenance seem neglected. Data-related issues are the most recurrent challenges. Our results are valuable for researchers, to quickly understand the state of the art and learn which topics need more research; practitioners, to learn about the approaches and challenges that SE entails for AI-based systems; and educators, to bridge the gap between SE and AI in their curricula. This work has been partially funded by the “Beatriz Galindo” Spanish Program BEAGAL18/00064 and by the DOGO4ML Spanish research project (ref. PID2020-117191RB-I00).

    Assuring safe and efficient operation of UAV using explainable machine learning

    The accurate estimation of airspace capacity in unmanned traffic management (UTM) operations is critical for a safe, efficient, and equitable allocation of airspace system resources. While conventional approaches for assessing airspace complexity certainly exist, these methods fail to capture true airspace capacity because they neglect several important variables (such as weather). Meanwhile, existing AI-based decision-support systems are opaque and inexplicable, which restricts their practical application. With these challenges in mind, the authors propose a solution tailored to the needs of demand and capacity management (DCM) services. By deploying a synthesized fuzzy rule-based model together with deep learning, this solution addresses the trade-off between explicability and performance, yielding an intelligent system that is explicable and reasonably comprehensible. The results show that this advisory system is able to indicate the most appropriate regions for unmanned aerial vehicle (UAV) operation and increases UTM airspace availability by more than 23%. Moreover, the proposed system demonstrates a maximum capacity gain of 65% and a minimum safety gain of 35%, while possessing an explainability attribute of 70%. This will assist UTM authorities through more effective airspace capacity estimation and the formulation of new operational regulations and performance requirements.
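    To gesture at how such a fuzzy/deep-learning hybrid can work, here is a minimal sketch of a fuzzy rule-based capacity layer blended with a learned score. Every membership function, rule, output level, and the 0.7/0.3 blend weight below is an assumption of this sketch, not the paper's actual model.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_capacity(traffic_density: float, weather_severity: float) -> float:
    """Mamdani-style rules mapping inputs in [0, 1] to a capacity score."""
    low_traffic  = tri(traffic_density, -0.5, 0.0, 0.6)
    high_traffic = tri(traffic_density,  0.4, 1.0, 1.5)
    good_weather = tri(weather_severity, -0.5, 0.0, 0.6)
    bad_weather  = tri(weather_severity,  0.4, 1.0, 1.5)

    # Rule firing strengths (min as AND), each tied to an output level.
    rules = [
        (min(low_traffic,  good_weather), 1.0),   # capacity high
        (min(high_traffic, good_weather), 0.5),   # capacity medium
        (min(low_traffic,  bad_weather),  0.4),
        (min(high_traffic, bad_weather),  0.1),   # capacity low
    ]
    num = sum(w * level for w, level in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den          # weighted-average defuzzification

def advisory_score(traffic: float, weather: float, dl_score: float) -> float:
    """Blend the explicable fuzzy estimate with an opaque learned score.
    The fixed weighting stands in for the explicability/performance trade-off."""
    return 0.7 * fuzzy_capacity(traffic, weather) + 0.3 * dl_score

# Light traffic, calm weather, optimistic learned score -> favourable region.
print(round(advisory_score(0.2, 0.1, dl_score=0.8), 3))
```

    The fuzzy rules carry the explanation (each fired rule is a human-readable reason), while the learned score contributes predictive performance.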

    Delictual liability for damage caused by a self-driving vehicle: the example of Estonian law

    The electronic version of the dissertation does not contain the publications. Self-driving vehicles can be divided into six levels, where Level 0 means no automation and Level 5 means full automation, whereby the vehicle copes with all conditions of the traffic environment and a human does not participate in the driving in any way. No Level 5 vehicles have yet been put into circulation, but full automation is expected to be achieved in the near future. Self-driving vehicles should help, among other things, to make traffic safer and to increase mobility, but it cannot be precluded that a self-driving vehicle will harm someone. This leads to the question of who is required to remedy the damage caused by a self-driving vehicle. Unlawfully caused damage is regulated by the Law of Obligations Act, which provides for general fault-based tortious liability, liability for damage caused by a source of greater danger (strict liability), and liability for damage caused by a defective product (product liability). Although there are currently no self-driving vehicles in circulation, they can hypothetically be placed in the existing legal space in order to assess what problems this raises in the field of liability for unlawful damage. This dissertation examines, inter alia, whether, on what grounds, and against whom an injured person can bring a claim for damages under fault-based tortious liability, strict liability, and product liability in a situation where the damage is caused by a self-driving vehicle. It also analyses the situations in which an error in the software of a self-driving vehicle, or in the digital services it uses, could be deemed a defect of the vehicle, and whether and to what extent it is justified to discharge manufacturers of self-driving vehicles from liability on the basis of the development risk defence. The dissertation further discusses how to assess the magnitude of the risk of operating such vehicles and how to divide liability where mutual damage has been caused with the involvement of a self-driving vehicle, given that in the latter case the driver's conduct cannot be taken into account when reducing compensation.
    https://www.ester.ee/record=b538487

    Maintaining Structured Experiences for Robots via Human Demonstrations: An Architecture To Convey Long-Term Robot's Beliefs

    This PhD thesis presents an architecture for structuring experiences, learned through demonstrations, in a robot memory. To test our architecture, we consider a specific application where a robot learns how objects are spatially arranged in a tabletop scenario. We use this application as a means to present a few software development guidelines for building architectures for similar scenarios, in which a robot is able to interact with a user through qualitative shared knowledge stored in its memory. In particular, the thesis proposes a novel technique for deploying ontologies in a robotic architecture based on semantic interfaces. To better support those interfaces, it also presents general-purpose tools especially designed for an iterative development process, which is suitable for Human-Robot Interaction scenarios. We considered ourselves to be at the beginning of the first iteration of the design process, and our objective was to build a flexible architecture through which to evaluate different heuristics during further development iterations. Our architecture is based on a novel algorithm performing one-shot structured learning based on a logic formalism. We used a fuzzy ontology for dealing with uncertain environments, and we integrated the algorithm into the architecture through a specific semantic interface. The algorithm is used for building experience graphs, encoded in the robot's memory, that can be used for recognising and associating situations after a knowledge bootstrapping phase. During this phase, a user is supposed to teach and supervise the beliefs of the robot through multimodal, non-physical interactions. We used the algorithm to implement a cognitive-like memory involving encoding, storing, retrieving, consolidating, and forgetting behaviours, and we showed that our flexible design pattern can be used for building architectures in which contextualised memories are managed for different purposes, i.e., they contain representations of the same experience encoded with different semantics. The proposed architecture has the main purpose of generating and maintaining knowledge in memory, but it can be directly interfaced with perceiving and acting components if they provide, or require, symbolic knowledge. To show the type of data considered as inputs and outputs in our tests, this thesis also presents components to evaluate point clouds, engage in dialogues, perform late data fusion, and simulate the search for a target position. Nevertheless, our design pattern is not meant to be coupled only with those components, which indeed leave large room for improvement.
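    As a toy illustration of the encode/store/retrieve/consolidate/forget cycle described above: the real architecture uses a fuzzy ontology and semantic interfaces, whereas the set-overlap similarity, decay factor, and forgetting threshold below are all assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    relations: frozenset            # e.g. {("cup", "rightOf", "book")}
    score: float = 1.0              # consolidation level

class ExperienceMemory:
    def __init__(self, forget_threshold: float = 0.5):
        self.store: list[Experience] = []
        self.forget_threshold = forget_threshold

    def encode(self, relations) -> Experience:
        return Experience(frozenset(relations))

    def retrieve(self, exp: Experience):
        """Return the best-matching stored experience, if any (Jaccard overlap)."""
        def overlap(e):
            union = len(e.relations | exp.relations)
            return len(e.relations & exp.relations) / max(union, 1)
        return max(self.store, key=overlap, default=None)

    def learn(self, relations):
        """One demonstration: consolidate a recognised situation or store a
        new one, then decay and forget weak memories."""
        exp = self.encode(relations)
        match = self.retrieve(exp)
        if match and match.relations == exp.relations:
            match.score += 1.0                      # consolidating
        else:
            self.store.append(exp)                  # storing a new situation
        for e in self.store:
            e.score *= 0.9                          # gradual decay
        self.store = [e for e in self.store if e.score >= self.forget_threshold]

memory = ExperienceMemory()
memory.learn({("cup", "rightOf", "book")})
memory.learn({("cup", "rightOf", "book")})          # recognised and reinforced
print(len(memory.store), round(memory.store[0].score, 2))  # 1 1.71
```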

    Building bridges for better machines: from machine ethics to machine explainability and back

    Be it nursing robots in Japan, self-driving buses in Germany, or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and machine explainability prove to be particularly effective only in symbiosis. In this context, this thesis demonstrates how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we outline what, exactly, machine explainability research aims to accomplish. Finally, we use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework against these criteria shows that it is a promising approach, one that stands to outperform many other explainability approaches developed so far. DFG: CRC 248, Center for Perspicuous Computing; VolkswagenStiftung: Explainable Intelligent Systems.
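    To gesture at how an argumentation-based approach can ground explanations, here is a heavily simplified sketch. The arguments, the attack relation, and the bounded "undefeated" semantics are all assumptions of this illustration, not the thesis's framework; the key pattern is that the argument that wins the decision doubles as the material for the explanation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    claim: str          # the action the argument supports
    reason: str         # premise offered as the explanation

arguments = {
    "A1": Argument("brake", "a pedestrian is crossing"),
    "A2": Argument("continue", "the light is green"),
    "A3": Argument("brake", "safety constraints override right of way"),
}
attacks = {("A1", "A2"), ("A3", "A2")}   # safety arguments defeat A2

def undefeated(name: str, depth: int = 3) -> bool:
    """An argument stands if every attacker is itself defeated (bounded recursion)."""
    if depth == 0:
        return True
    return all(not undefeated(att, depth - 1)
               for att, target in attacks if target == name)

def decide():
    winners = [a for n, a in arguments.items() if undefeated(n)]
    action = winners[0].claim
    explanation = "; ".join(a.reason for a in winners if a.claim == action)
    return action, explanation

action, why = decide()
print(f"decision: {action} (because {why})")
```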