
    The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a prior grasp of what it means to say that an agent understands a model or a decision, explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve objectual understanding of a machine learning model, but are also a necessary condition for achieving post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems predominant in the most recent literature.
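    As a concrete illustration of the interpretative or approximation models discussed above, the following minimal sketch fits a global surrogate: a shallow decision tree trained to imitate a black-box classifier's predictions, with "fidelity" measuring how closely it approximates the black box. The dataset, model choices, and fidelity measure are illustrative assumptions, not the paper's own proposal.

```python
# Hedged sketch of an approximation (surrogate) model for post hoc
# interpretability; dataset and models are placeholders, not the paper's method.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# The opaque model whose decisions we want to understand.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate is trained on the black box's predictions, not the true labels,
# so it approximates the model rather than the underlying data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the interpretable surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```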

    Explaining Random Forest Predictions with Association Rules

    Random forests frequently achieve state-of-the-art predictive performance. However, the logic behind their predictions cannot be easily understood, since they are the result of averaging often hundreds or thousands of possibly conflicting individual predictions. Instead of presenting all the individual predictions, an alternative approach is proposed, in which predictions are explained using association rules generated from itemsets representing paths in the trees of the forest. An empirical investigation is presented in which alternative ways of generating the association rules are compared with respect to explainability, as measured by the fraction of predictions for which there is no applicable rule and by the fraction of predictions for which there is at least one applicable rule that conflicts with the forest prediction. For the considered datasets, most predictions can be explained by the discovered association rules, which have a high level of agreement with the underlying forest. The results do not single out a clear winner among the considered alternatives in terms of unexplained and disagreement rates, but show that the alternatives are associated with substantial differences in computational cost.
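    The following sketch illustrates the general idea under stated assumptions: paths taken through the forest's trees are converted into itemsets of coarsened split conditions plus the tree's predicted class, and association rules mined from these itemsets are kept when their consequent is a class label. The dataset, the coarsening of split conditions to above/below the feature median, and the use of scikit-learn and mlxtend are my assumptions, not the authors' exact procedure.

```python
# Hedged sketch: explain random forest predictions with association rules
# mined from itemsets that represent tree paths. Coarsening conditions to
# "low/high vs. the feature median" is an illustrative simplification.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from mlxtend.frequent_patterns import apriori, association_rules

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
forest = RandomForestClassifier(n_estimators=50, max_depth=3, random_state=0).fit(X, y)
medians = X.median().values

# One transaction per (example, tree): the coarsened conditions on the path
# the example follows, plus an item for that tree's predicted class.
transactions = []
for tree in forest.estimators_:
    paths = tree.decision_path(X.values)
    preds = tree.predict(X.values)
    for i in range(len(X)):
        items = set()
        for node in paths.indices[paths.indptr[i]:paths.indptr[i + 1]]:
            f = tree.tree_.feature[node]
            if f < 0:  # leaf node: no split condition
                continue
            side = "high" if X.values[i, f] > medians[f] else "low"
            items.add(f"{X.columns[f]}={side}")
        items.add(f"class={int(preds[i])}")
        transactions.append(items)

# One-hot encode the transactions and mine rules whose consequent is a class.
all_items = sorted(set().union(*transactions))
onehot = pd.DataFrame([[it in tr for it in all_items] for tr in transactions],
                      columns=all_items)
itemsets = apriori(onehot, min_support=0.05, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
rules = rules[rules["consequents"].apply(
    lambda c: any(str(x).startswith("class=") for x in c))]
print(rules[["antecedents", "consequents", "support", "confidence"]].head())
```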

    Approximate and Situated Causality in Deep Learning

    Other funding: ICREA Academia 2019, and the "AppPhil: Applied Philosophy for the Value-Design of Social Networks Apps" project, funded by Caixabank in Recercaixa2017. Causality is the most important topic in the history of western science, and since the beginning of the statistical paradigm its meaning has been reconceptualized many times. Causality entered the realm of multi-causal and statistical scenarios some centuries ago. Despite widespread criticism, today's advances in deep learning and machine learning are not weakening causality but are creating a new way of finding correlations between indirect factors. This process makes it possible for us to talk about approximate causality, as well as about situated causality.

    Exploring Qualitative Research Using LLMs

    The advent of AI-driven large language models (LLMs) has stirred discussions about their role in qualitative research. Some view them as tools to enrich human understanding, while others perceive them as threats to the core values of the discipline. This study aimed to compare and contrast the comprehension capabilities of humans and LLMs. We conducted an experiment with a small sample of Alexa app reviews, initially classified by a human analyst. LLMs were then asked to classify these reviews and to provide the reasoning behind each classification. We compared the results with the human classification and reasoning. The research indicated a significant alignment between human and ChatGPT 3.5 classifications in one third of cases, and a slightly lower alignment with GPT-4 in over a quarter of cases. The two AI models showed a higher alignment with each other, observed in more than half of the instances. However, a consensus across all three methods was seen in only about one fifth of the classifications. In the comparison of human and LLM reasoning, it appears that human analysts lean heavily on their individual experiences. As expected, LLMs, on the other hand, base their reasoning on the specific word choices found in the app reviews and on the functional components of the app itself. Our results highlight the potential for effective human-LLM collaboration, suggesting a synergistic rather than competitive relationship. Researchers must continuously evaluate the role of LLMs in their work, thereby fostering a future where AI and humans jointly enrich qualitative research.
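    A minimal sketch of the agreement comparison described above, assuming each review receives a single category label from the human analyst and from each model. The categories and label lists below are invented placeholders rather than the study's data, and Cohen's kappa is just one possible agreement measure alongside the raw alignment fractions.

```python
# Hedged sketch of comparing human and LLM classifications of app reviews.
# The label lists are invented placeholders; only the comparison logic is shown.
from sklearn.metrics import cohen_kappa_score

human = ["bug", "praise", "feature request", "bug", "privacy", "praise"]
gpt35 = ["bug", "praise", "bug",             "bug", "privacy", "usability"]
gpt4  = ["bug", "praise", "feature request", "bug", "usability", "usability"]

def agreement(a, b):
    """Fraction of reviews on which two raters assign the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print("human vs. GPT-3.5  :", agreement(human, gpt35))
print("human vs. GPT-4    :", agreement(human, gpt4))
print("GPT-3.5 vs. GPT-4  :", agreement(gpt35, gpt4))
print("three-way consensus:",
      sum(h == a == b for h, a, b in zip(human, gpt35, gpt4)) / len(human))
# Chance-corrected agreement between the human analyst and GPT-4.
print("Cohen's kappa (human vs. GPT-4):", cohen_kappa_score(human, gpt4))
```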

    Conceptual Foundations on Debiasing for Machine Learning-Based Software

    The deployment of machine learning (ML)-based software has raised serious concerns about the pervasive and harmful consequences it inflicts on users, business, and society through bias. While approaches to address bias are increasingly recognized and developed, our understanding of debiasing remains nascent. Research has yet to provide comprehensive coverage of this vast and growing field, much of which is not embedded in theoretical understanding. Conceptualizing and structuring the nature, effect, and implementation of debiasing instruments could provide necessary guidance for practitioners investing in debiasing efforts. We develop a taxonomy that classifies debiasing-instrument characteristics into seven key dimensions. We evaluate and refine our taxonomy with nine experts and apply it to three actual debiasing instruments, drawing lessons for the design and choice of appropriate instruments. Bridging the gaps between our conceptual understanding of debiasing for ML-based software and its organizational implementation, we discuss contributions and future research.

    Legal personality of artificial intelligence under international law

    To offer a deeper understanding of the topic, this work will first examine the concept of legal personality, its meaning, and its application in the legal framework of international law over the years. Without claiming advanced technological knowledge in scientific areas like robotics and engineering, the paper will then present a basic overview of the latest developments concerning Artificial Intelligence, such as quantum computing and emotional intelligence. Some suggestions about possibilities for connecting these two topics will then be made. The questions introduced will engage with the nature and different forms of legal personhood and its connection to intelligence, autonomy, and/or consciousness. This paper aims to develop a practical rather than a general, hypothetical idea of how an AI agent could be granted international legal personality and what the possible effects of that could be (for example, rights and obligations). For this purpose it will focus on the recognised subjects of international law and use their example to examine an AI agent as a possible future actor in international legal relationships. The frame of reference will be international law and recent developments in EU law, such as the European Parliament initiative to regulate Artificial Intelligence, as well as some regulations and “visions” of national legislation, for example in Estonia and China. The dangers of granting legal personhood to AI agents will then be presented and discussed. The arguments against the creation of a “technical veil” will be examined closely. The work will then turn to possible advantages and positive aspects of an AI’s legal personhood under international law. In the final chapter a conclusion and some recommendations will be made.

    Sustainable AI: An inventory of the state of knowledge of ethical, social, and legal challenges related to artificial intelligence

    This report is an inventory of the state of knowledge of ethical, social, and legal challenges related to artificial intelligence, conducted within the Swedish Vinnova-funded project “Hållbar AI – AI Ethics and Sustainability”, led by Anna Felländer. Based on a review and mapping of reports and studies, a quantitative and bibliometric analysis, and in-depth analyses of the healthcare sector, the telecom sector, and digital platforms, the report proposes three recommendations. Sustainable AI requires: 1. a broad focus on AI governance and regulation issues, 2. promoting multi-disciplinary collaboration, and 3. building trust in AI applications and applied machine learning, which is a matter of key importance and requires further study of the relationship between transparency and accountability.

    Exposing implicit biases and stereotypes in human and artificial intelligence: state of the art and challenges with a focus on gender

    Biases in cognition are ubiquitous. Social psychologists have suggested that biases and stereotypes serve a multifarious set of cognitive goals, while at the same time stressing their potential harmfulness. Recently, biases and stereotypes have also become the subject of heated debates in the machine learning community. Researchers and developers are increasingly aware that some biases, like gender and race biases, are entrenched in the algorithms some AI applications rely upon. Here, taking into account several existing approaches that address the problem of implicit biases and stereotypes, we propose that a strategy for coping with this phenomenon is to unmask the biases found in AI systems by understanding their cognitive dimension, rather than simply trying to correct algorithms. To this end, we present a discussion bridging findings from cognitive science and insights from machine learning that can be integrated into a state-of-the-art semantic network. Remarkably, this resource can assist scholars (e.g., cognitive and computer scientists) while at the same time contributing to refining AI regulations affecting social life. We show how only through a thorough understanding of the cognitive processes leading to biases, and through an interdisciplinary effort, can we make the best of AI technology.