
    Visual Representation of Explainable Artificial Intelligence Methods: Design and Empirical Studies

    Explainability is increasingly considered a critical component of artificial intelligence (AI) systems, especially in high-stakes domains where AI systems’ decisions can significantly impact individuals. As a result, there has been a surge of interest in explainable artificial intelligence (XAI), which aims to increase the transparency of AI systems by explaining their decisions to end-users. In particular, extensive research has focused on developing “local model-agnostic” methods that generate explanations of individual predictions for any predictive model. While these explanations can support end-users through increased transparency, three significant challenges have hindered their design, implementation, and large-scale adoption in real applications.

    First, there is a lack of understanding of how end-users evaluate explanations. Critics argue that explanations are based on researchers’ intuition rather than end-users’ needs, and there is insufficient evidence on whether end-users understand these explanations or trust XAI systems. Second, it is unclear what effect explanations have on trust when they disclose biases in AI systems’ decisions. Prior research on biased decisions has found conflicting evidence: explanations can either increase trust through perceived transparency or decrease trust as end-users perceive the system as biased, and it is unclear how contingency factors influence these opposing effects. Third, most XAI methods deliver static explanations that offer end-users limited information, resulting in an insufficient understanding of how AI systems make decisions and, in turn, lower trust. End-users also perceive static explanations as not transparent enough, as these do not allow them to investigate the factors that influence a given decision.

    This dissertation addresses these challenges across three studies, guided by the overarching research question of how to design visual representations of local model-agnostic XAI methods to increase end-users’ understanding and trust. The first challenge is addressed through an iterative design process that refines the representations of explanations from four well-established model-agnostic XAI methods, followed by an evaluation with end-users using eye-tracking technology and interviews. The second challenge is addressed by a study that takes a psychological contract violation (PCV) theory and social identity theory perspective to investigate the contingency factors behind the opposing effects of explanations on end-users’ trust. Specifically, this study examines how end-users evaluate explanations of a gender-biased AI system while controlling for their awareness of gender discrimination in society. The third challenge is addressed through a design science research project that designs an interactive XAI system to increase end-users’ understanding and trust.

    This dissertation makes several contributions to ongoing research on improving the transparency of AI systems by explicitly emphasizing the end-user perspective on XAI. First, it contributes to practice by providing insights that help improve the design of explanations of AI systems’ decisions. Second, it makes significant theoretical contributions by contextualizing PCV theory to gender-biased XAI systems and identifying the contingency factors that determine whether end-users experience a PCV; it also provides insights into how end-users cognitively evaluate explanations and extends the current understanding of the impact of explanations on trust. Finally, this dissertation contributes to the design knowledge of XAI systems by proposing guidelines for designing interactive XAI systems that give end-users more control over the information they receive, helping them better understand how AI systems make decisions.
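
    As a concrete illustration of what “local model-agnostic” means in practice, the sketch below builds an explanation from scratch in the spirit of LIME: it perturbs a single instance, queries a black-box model, and fits a distance-weighted linear surrogate whose coefficients explain that one prediction. This is a minimal sketch for intuition only; the model, sampling scheme, and all parameters are illustrative assumptions, not the specific methods evaluated in the dissertation.

```python
# From-scratch sketch of a local model-agnostic explanation (LIME-style).
# All names and parameters are illustrative, not from the dissertation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(instance, predict_proba, n_samples=1000, width=0.75):
    """Fit a weighted linear surrogate around one instance."""
    # Perturb the instance with Gaussian noise scaled to the data.
    noise = rng.normal(scale=X.std(axis=0), size=(n_samples, X.shape[1]))
    neighbors = instance + noise
    # Query the black-box model on the perturbed neighborhood.
    targets = predict_proba(neighbors)[:, 1]
    # Weight neighbors by proximity to the instance (RBF kernel).
    dists = np.linalg.norm(noise, axis=1)
    weights = np.exp(-(dists ** 2) / (width ** 2))
    # The surrogate's coefficients are the local feature attributions.
    surrogate = Ridge(alpha=1.0).fit(neighbors, targets, sample_weight=weights)
    return surrogate.coef_

attributions = explain_locally(X[0], black_box.predict_proba)
for i, a in enumerate(attributions):
    print(f"feature_{i}: {a:+.3f}")
```

    The surrogate is only valid near the explained instance, which is exactly the "local" property the dissertation's visual representations are meant to convey to end-users.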

    Designing Interactive Explainable AI Systems for Lay Users

    Explainability, considered a critical component of trustworthy artificial intelligence (AI) systems, has been proposed to address AI systems’ lack of transparency by revealing the reasons behind their decisions to lay users. However, most explainability methods developed so far provide static explanations that limit the information conveyed to lay users, resulting in an insufficient understanding of how AI systems make decisions. To address this challenge and support efforts to improve the transparency of AI systems, we conducted a design science research project to design an interactive explainable artificial intelligence (XAI) system that helps lay users understand AI systems’ decisions. We relied on existing knowledge in the XAI literature to propose design principles and instantiate them in an initial prototype. We then evaluated the prototype and conducted interviews with lay users. Our research contributes design knowledge for interactive XAI systems and provides practical guidelines for practitioners.
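
    To make the contrast with static explanations concrete, the sketch below shows one simple form such interactivity can take: a what-if loop in which a lay user edits a feature value and immediately sees the updated prediction. The feature names and the console interface are illustrative assumptions, not the prototype described in the paper.

```python
# Minimal what-if loop: the user edits one feature and sees the new prediction.
# Feature names and the loan scenario are hypothetical placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "loan_amount", "age", "tenure"]  # hypothetical
X, y = make_classification(n_samples=300, n_features=4, random_state=1)
model = LogisticRegression().fit(X, y)

instance = X[0].copy()
while True:
    proba = model.predict_proba(instance.reshape(1, -1))[0, 1]
    print(f"\nPredicted approval probability: {proba:.2f}")
    for name, value in zip(feature_names, instance):
        print(f"  {name} = {value:+.2f}")
    choice = input("Feature to change (blank to quit): ").strip()
    if choice not in feature_names:
        break
    instance[feature_names.index(choice)] = float(input(f"New value for {choice}: "))
```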

    Towards an Integrative Theoretical Framework of Interactive Machine Learning Systems

    Interactive machine learning (IML) is a learning process in which a user interacts with a system to iteratively define and optimise a model. Although recent years have seen a proliferation of IML systems in Human-Computer Interaction (HCI), Information Systems (IS), and Computer Science (CS), current research results are scattered, leading to a lack of integration of existing work on IML. Furthermore, because IML systems can serve diverging functionalities and purposes, there is uncertainty regarding the distinct capabilities that constitute this class of systems. By reviewing the extensive IML literature, this paper proposes an integrative theoretical framework for IML systems to address these impediments. Reviewing 2,879 studies published in leading journals and conferences between 1966 and 2018, we found an extensive range of application areas that have implemented IML systems, as well as the necessity to standardise the evaluation of those systems. Our framework offers an essential step towards a theoretical foundation that integrates concepts and findings across different fields of research. The main contribution of this paper is organising and structuring the body of knowledge in IML to advance the field. Furthermore, we suggest three opportunities for future IML research. From a practical point of view, our integrative theoretical framework can serve as a reference guide to inform the design and implementation of IML systems.
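
    The iterative define-and-optimise loop at the core of IML can be made concrete with a small sketch. The example below uses uncertainty sampling: the system proposes the instance its model is least certain about, a user labels it, and the model retrains. The sampling strategy and the simulated oracle standing in for the human are illustrative assumptions, not part of the framework itself.

```python
# Minimal IML loop (active-learning style): system proposes, user labels,
# model retrains. An oracle function simulates the human labeller here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=400, n_features=10, random_state=2)
# Seed with a few labeled examples from each class.
labeled = list(np.where(y_true == 0)[0][:5]) + list(np.where(y_true == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
for round_ in range(20):            # each round is one user interaction
    model.fit(X[labeled], y_true[labeled])
    # System side: pick the instance the model is least certain about.
    proba = model.predict_proba(X[unlabeled])[:, 1]
    idx = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]
    # User side: in a real IML system a human labels it; here an oracle does.
    labeled.append(idx)
    unlabeled.remove(idx)

print(f"Accuracy after {len(labeled)} labels: "
      f"{model.score(X[unlabeled], y_true[unlabeled]):.2f}")
```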

    Is This System Biased? – How Users React to Gender Bias in an Explainable AI System

    Biases in Artificial Intelligence (AI) can reinforce social inequality. Increasing the transparency of AI systems through explanations can help to avoid the negative consequences of those biases. However, little is known about how users evaluate explanations of biased AI systems. We therefore apply Psychological Contract Violation Theory to investigate the implications of a gender-biased AI system for user trust. We allocated 339 participants to three experimental groups, each with a different version of a loan-forecasting AI system: explainable gender-biased, explainable neutral, and non-explainable. We demonstrate that only users with moderate to high general awareness of gender stereotypes in society, i.e., stigma consciousness, perceive the gender-biased AI system as untrustworthy. Users with low stigma consciousness, however, perceive the gender-biased AI system as trustworthy because it is more transparent than a system without explanations. Our findings show that AI biases can reinforce social inequality when they match human stereotypes.
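
    The reported pattern is a moderation effect: stigma consciousness moderates how the biased versus neutral explainable system affects trust. Below is a minimal sketch of the corresponding analysis, with simulated data and illustrative variable names rather than the study’s actual dataset.

```python
# Sketch of a moderation analysis: does stigma consciousness moderate the
# effect of system condition on trust? Data and variable names are simulated
# to mirror the reported pattern, not the study's records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 339
df = pd.DataFrame({
    "condition": rng.choice(["biased", "neutral", "no_explanation"], size=n),
    "stigma": rng.uniform(1, 7, size=n),   # stigma consciousness scale
})
# Simulated pattern: trust drops in the biased condition, but only for
# participants high in stigma consciousness.
df["trust"] = (4.5 - 0.5 * (df.condition == "biased") * (df.stigma - 4)
               + rng.normal(scale=1.0, size=n))

# The condition x stigma interaction term tests the moderation.
model = smf.ols("trust ~ C(condition) * stigma", data=df).fit()
print(model.summary().tables[1])
```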

    In Vitro Embryo Production in Beef Cattle Under Dry Tropical Conditions: Breed Effect

    In vitro embryo production (IVEP) allows faster genetic gain and productivity improvement in beef herds. Data records from the Laboratorio FIV of the UGRT, Cd. Victoria, Tamps., for the year 2019 were used to assess donor breed effects (Red Brangus, RB; Beefmaster, BM; Brahman, BH; and others) on IVEP. Individual donor records for total (TO), viable (VO), and degenerated (DO) oocytes and for blastocyst production (BP) were analysed with a general linear model in two analyses. The first analysis used records of 50 donors of each breed (RB, BM, and BH), whereas the second analysis used a total of 462 oocyte collection records: 105 RB, 204 BM, 102 BH, and 49 from several other beef breeds. Donor breed did not affect TO, VO, or DO, but it did affect BP. In the first analysis, BH donors produced more blastocysts (6.5) than BM (4.0) or RB (4.1) donors, whereas in the second analysis, RB donors produced more blastocysts (8.3) than BM (5.8), BH (6.2), or other-breed (3.9) donors. The differences in BP between the two analyses may be due to the data curation applied in the first analysis and the larger number of observations in the second. It is concluded that breed affects BP in an IVEP system, although it is necessary to include factors such as body condition score, physiological status, and season in the analysis in order to make a better assessment and a more objective interpretation of the results; a better selection of donors prior to oocyte collection is also suggested.
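
    A minimal sketch of the general linear model analysis described above, assuming a simple one-way ANOVA of blastocyst production by donor breed; the data are simulated to mirror the reported first-analysis group means, not the laboratory’s actual records.

```python
# Sketch of the breed-effect test: one-way ANOVA of blastocyst production.
# Group sizes and means mimic the first analysis (50 donors per breed);
# the simulated counts are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
groups = {"RB": (50, 4.1), "BM": (50, 4.0), "BH": (50, 6.5)}
df = pd.DataFrame([
    {"breed": breed, "blastocysts": max(0.0, rng.normal(mean, 2.5))}
    for breed, (n, mean) in groups.items()
    for _ in range(n)
])

model = smf.ols("blastocysts ~ C(breed)", data=df).fit()
print(anova_lm(model))               # F-test for the breed effect
print(df.groupby("breed")["blastocysts"].mean())
```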

    New genus, two new species and new records of subterranean freshwater snails (Caenogastropoda; Cochliopidae and Lithoglyphidae) from Coahuila and Durango, Northern Mexico

    This paper describes a new genus, two new species, and new records of subterranean gastropods from the Sabinas and Álamos Rivers, Coahuila, and the Nazas River, Durango, in northern Mexico. Phreatomascogos gregoi gen. n. et sp. n. from the Don Martín Basin, Coahuila, is described based on shells and opercula that show some morphological similarities with shells of Phreatodrobia Hershler & Longley, 1986 (Lithoglyphidae), a subterranean genus from a neighboring area in Texas, United States. Conchologically, the new genus can be distinguished from Phreatodrobia and all other subterranean genera by a unique combination of characteristic shell morphology and opercular apomorphies. Balconorbis sabinasense sp. n. (Cochliopidae) is the second species of this genus, which was previously known only from caves and associated subterranean habitats in Texas. The new record of Coahuilix parrasense Czaja, Estrada-Rodríguez, Romero-Méndez, Ávila-Rodríguez, Meza-Sánchez & Covich, 2017 (Cochliopidae) from Durango and Coahuila is the first record of an extant member of this genus outside its hitherto known habitat in the Cuatro Ciénegas basin, Coahuila. These records are remarkable because C. parrasense had recently been described as a fossil species. The shell morphologies of the new subterranean snails could be interpreted as possible evolutionary adaptations to different hydrodynamic and other specific conditions in their habitats.