
    Classification of Explainable Artificial Intelligence Methods through Their Output Formats

    Machine and deep learning have proven their utility in generating data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output format. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords "explainable artificial intelligence", "explainable machine learning", and "interpretable machine learning". A subsequent iterative search was carried out by checking the bibliographies of these articles. The addition of the explanation-format dimension makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet requirements that differ considerably between the users, problems, and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting; hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields, and the new regulation

    Interactive Visual Analysis of Translations

    This thesis is the result of a collaboration with the College of Arts and Humanities at Swansea University. The goal of this collaboration is to design novel visualization techniques that enable digital humanities scholars to explore and analyze parallel translations. To this end, Chapter 2 introduces the first survey of surveys on text visualization, which reviews all of the surveys and state-of-the-art reports on text visualization techniques, classifies them, provides recommendations, and discusses reported challenges. Following this, we present three visual interactive designs that support the typical digital humanities scholars' workflow. In Chapter 4, we present VNLP, a visual, interactive design that enables users to explicitly observe the NLP pipeline processes and update the parameters at each processing stage. Chapter 5 presents AlignVis, a visual tool that provides a semi-automatic alignment framework for building a correspondence between multiple translations. It presents the results of applying text similarity measurements and enables the user to create, verify, and edit alignments using a novel visual interface. Chapter 6 introduces TransVis, a novel visual design that supports comparison of multiple parallel translations. It incorporates customized mechanisms for rapid, interactive filtering and selection of a large number of German translations of Shakespeare's Othello. All of the visual designs are evaluated using examples, detailed observations, case studies, and/or domain expert feedback from a specialist in modern and contemporary German literature and culture. Chapter 7 reports our collaborative experience and proposes a methodological workflow to guide such interdisciplinary research projects. This chapter also includes a summary of outcomes and lessons learned from our collaboration with the domain expert. Finally, Chapter 8 presents a summary of the thesis and future work directions.
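    The semi-automatic alignment step described above, pairing sentences across translations by a text similarity score, can be sketched roughly as follows. This is a minimal illustration, not the AlignVis implementation: it assumes a simple character-based similarity (Python's `difflib.SequenceMatcher`) and a greedy best-match strategy with a hypothetical `threshold` parameter below which sentences are left unaligned for the user to resolve manually.

    ```python
    from difflib import SequenceMatcher

    def align_translations(source_sents, target_sents, threshold=0.3):
        """Greedily pair each source sentence with its most similar
        target sentence; pairs scoring below the threshold are left
        unaligned (best_j = None) for manual review."""
        alignments = []
        for i, src in enumerate(source_sents):
            best_j, best_score = None, 0.0
            for j, tgt in enumerate(target_sents):
                score = SequenceMatcher(None, src.lower(), tgt.lower()).ratio()
                if score > best_score:
                    best_j, best_score = j, score
            if best_score < threshold:
                best_j = None  # no candidate is similar enough
            alignments.append((i, best_j, round(best_score, 2)))
        return alignments

    # Identical sentences align perfectly; unrelated ones stay unaligned.
    print(align_translations(["to be or not to be"], ["to be or not to be"]))
    print(align_translations(["abc"], ["xyz"]))
    ```

    A real alignment framework would use stronger measures (e.g. token- or embedding-based similarity) and enforce monotonic ordering across the texts; the point here is only the create-then-verify loop, where automatic scores propose alignments and the visual interface lets the user confirm or edit them.
    
    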