102 research outputs found

    What Does Explainable AI Really Mean? A New Conceptualization of Perspectives

    We characterize three notions of explainable AI that cut across research fields: opaque systems that offer no insight into their algorithmic mechanisms; interpretable systems whose algorithmic mechanisms users can mathematically analyze; and comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached. The paper is motivated by a corpus analysis of NIPS, ACL, COGSCI, and ICCV/ECCV paper titles showing differences in how work on explainable AI is positioned in various fields. We close by introducing a fourth notion: truly explainable systems, where automated reasoning is central to producing crafted explanations without requiring human post-processing as the final step of the generative process.

    Ethical Concerns And Responsible Use Of Generative Artificial Intelligence In Engineering Education

    The use of educational technologies that incorporate elements of machine learning (ML) and artificial intelligence (AI) is becoming common across the engineering education terrain. With the wide adoption of generative AI-based applications, this trend is only going to grow. Not only will the use of these technologies affect teaching; engineering education research practices are likely to be affected as well. From data generation and analysis to writing and presentation, all aspects of research will potentially be shaped. In this practice paper we discuss the ethical implications of the use of generative AI technologies for engineering teaching and engineering education research. We present a discussion of present and potential future concerns raised by the use of these technologies. We bring to the fore larger organizational and institutional issues and the need for a framework for responsible use of technology within engineering education. Finally, we engage with the current literature and popular writing on the topic to build an understanding of the issues, with the potential to apply them in teaching and research practices.

    Learning Faces to Predict Matching Probability in an Online Matching Platform

    With the increasing use of online matching platforms, predicting matching probability between users is crucial for efficient market design. Although previous studies have constructed various visual features to predict matching probability, facial features, which are important in online matching, have not been widely used. We find that deep learning-enabled facial features can significantly enhance the prediction accuracy of a user's partner preferences in an individual rating prediction analysis in an online dating market. We also build prediction models for each gender and use prior theories to explain the different contributing factors of the models. Furthermore, we propose a novel method to visually interpret facial features using a generative adversarial network (GAN). Our work contributes to the literature by providing a framework to develop and interpret facial features for investigating underlying mechanisms in online matching markets. Moreover, matching platforms can predict matching probability more accurately for better market design and recommender systems.
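    The abstract above describes fitting prediction models on deep-learning facial features. As a minimal sketch of the general idea (not the paper's actual method), the snippet below trains a logistic-regression head on precomputed face-embedding vectors to estimate a match probability. All names and data here are hypothetical: it assumes embeddings have already been extracted by some pretrained face encoder, and uses synthetic vectors in their place.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_match_predictor(embeddings, labels, lr=0.1, epochs=500):
        """Fit a logistic-regression head on face embeddings.

        embeddings: (n_pairs, d) array, e.g. concatenated embeddings of two users
        labels:     (n_pairs,) array of 0/1 match outcomes
        """
        n, d = embeddings.shape
        w = np.zeros(d)
        b = 0.0
        for _ in range(epochs):
            p = sigmoid(embeddings @ w + b)
            grad = p - labels                      # gradient of cross-entropy w.r.t. logits
            w -= lr * (embeddings.T @ grad) / n
            b -= lr * grad.mean()
        return w, b

    def match_probability(embedding, w, b):
        """Predicted probability that a pair with this embedding matches."""
        return sigmoid(embedding @ w + b)

    # Toy demonstration with synthetic "embeddings" standing in for CNN features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    true_w = rng.normal(size=8)
    y = (X @ true_w > 0.0).astype(float)          # synthetic, linearly separable labels
    w, b = train_match_predictor(X, y)
    acc = ((match_probability(X, w, b) > 0.5) == y).mean()
    ```

    The paper's models are presumably far richer (per-gender models, learned visual features, GAN-based interpretation); this only illustrates the final step of mapping fixed embeddings to a matching probability.
    
    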

    Explainable Artificial Intelligence and Machine Learning: A reality rooted perspective

    We are used to the availability of big data generated in nearly all fields of science as a consequence of technological progress. However, the analysis of such data poses vast challenges. One of these relates to the explainability of artificial intelligence (AI) or machine learning methods. Currently, many such methods are non-transparent with respect to their working mechanism and for this reason are called black-box models, most notably deep learning methods. It has been realized that this poses severe problems for a number of fields, including the health sciences and criminal justice, and arguments have been brought forward in favor of an explainable AI. In this paper, we do not take the usual perspective of presenting explainable AI as it should be; rather, we provide a discussion of what explainable AI can be. The difference is that we do not present wishful thinking but reality-grounded properties in relation to a scientific theory beyond physics.