
    Explainable Interfaces for Rapid Gaze-Based Interactions in Mixed Reality

    Gaze-based interactions offer a potential way for users to naturally engage with mixed reality (XR) interfaces. Black-box machine learning models have enabled higher accuracy for gaze-based interactions. However, due to the black-box nature of these models, users may not be able to understand and effectively adapt their gaze behavior to achieve high-quality interaction. We posit that explainable AI (XAI) techniques can facilitate understanding of, and interaction with, gaze-based model-driven systems in XR. To study this, we built a real-time, multi-level XAI interface for gaze-based interaction using a deep learning model and evaluated it during a visual search task in XR. A between-subjects study revealed that participants who interacted with the XAI system made more accurate selections than those who did not (an F1-score increase of 10.8%). Additionally, participants who used the XAI system adapted their gaze behavior over time to make more effective selections. These findings suggest that XAI can help users collaborate more effectively with model-driven interactions in XR.
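The abstract reports selection accuracy as an F1 score. As a quick reference, here is a minimal sketch of how F1 is computed for selection events; the counts in the example are hypothetical, since the paper's evaluation pipeline is not described in the abstract.

```python
# Minimal sketch: F1 score for gaze-based target selections.
# The example counts are hypothetical illustrations.

def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """F1 is the harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Example: 43 correct selections, 7 wrong targets selected, 5 targets missed.
print(f"F1 = {f1_score(43, 7, 5):.3f}")  # F1 = 0.878
```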

    EMaP: Explainable AI with Manifold-based Perturbations

    In the last few years, many explanation methods based on perturbations of the input data have been introduced to improve our understanding of decisions made by black-box models. The goal of this work is to introduce a novel perturbation scheme so that more faithful and robust explanations can be obtained. Our study focuses on the impact of perturbation directions on the data topology. We show that perturbing along directions orthogonal to the input manifold better preserves the data topology, both in the worst-case analysis of the discrete Gromov-Hausdorff distance and in the average-case analysis via persistent homology. From these results, we introduce the EMaP algorithm, which realizes the orthogonal perturbation scheme. Our experiments show that EMaP not only improves the explainers' performance but also helps them overcome a recently developed attack against perturbation-based methods.
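To make the orthogonal-perturbation idea concrete, here is a minimal sketch (not the authors' implementation) that estimates a local tangent space via SVD and samples perturbations in the orthogonal (normal) directions; the manifold dimension `d`, step size `eps`, and sample count are assumed parameters.

```python
# Sketch of perturbing a point along directions orthogonal to an estimated
# data manifold, in the spirit of EMaP; not the authors' code.
import numpy as np

def orthogonal_perturbations(x, neighbors, d=2, eps=0.1, n_samples=10, seed=0):
    """Estimate the tangent space at x from nearby points via SVD, then
    sample small perturbations in the orthogonal (normal) space."""
    rng = np.random.default_rng(seed)
    centered = neighbors - neighbors.mean(axis=0)
    # Right singular vectors: the first d span the tangent-space estimate,
    # the remaining D - d span directions orthogonal to the manifold.
    _, _, vt = np.linalg.svd(centered, full_matrices=True)
    normal_basis = vt[d:]                                  # (D - d, D)
    coeffs = rng.normal(size=(n_samples, normal_basis.shape[0]))
    coeffs /= np.linalg.norm(coeffs, axis=1, keepdims=True)
    return x + eps * coeffs @ normal_basis                 # (n_samples, D)

# Example: points lying near a 2-D plane embedded in 5-D space.
pts = np.random.default_rng(1).normal(size=(50, 5))
pts[:, 2:] *= 0.01                                         # ~2-D manifold
print(orthogonal_perturbations(pts[0], pts).shape)         # (10, 5)
```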

    A User-Centric Approach to Explainable AI in Corporate Performance Management

    Machine learning (ML) applications have surged in popularity in industry; however, the lack of transparency of ML models often impedes their usability in practice. Especially in the corporate performance management (CPM) domain, transparency is crucial to support corporate decision-making processes. To address this challenge, explainable artificial intelligence (XAI) approaches offer ways to reduce the opacity of ML-based systems. This design science study builds on prior user experience (UX) and user interface (UI) focused XAI research to develop a user-centric approach to XAI for the CPM field. As key results, we identify design principles across three decomposition layers, including ten explainability UI elements that we developed and evaluated through seven interviews. These results extend prior research to the CPM domain and provide practitioners with concrete guidelines to foster ML adoption in the field.

    Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses

    The metaverse is expected to emerge as a new paradigm for the next-generation Internet, providing fully immersive and personalised experiences for socializing, working, and playing in self-sustaining and hyper-spatio-temporal virtual world(s). Advancements in technologies such as augmented reality, virtual reality, extended reality (XR), artificial intelligence (AI), and 5G/6G communication will be the key enablers behind the realization of AI-XR metaverse applications. While AI itself has many potential applications in the aforementioned technologies (e.g., avatar generation, network optimization), ensuring the security of AI in critical applications like AI-XR metaverse applications is profoundly important: undesirable actions could undermine users' privacy and safety, consequently putting their lives in danger. To this end, we analyze the security, privacy, and trustworthiness aspects associated with the use of various AI techniques in AI-XR metaverse applications. Specifically, we discuss numerous such challenges and present a taxonomy of potential solutions that could be leveraged to develop secure, private, robust, and trustworthy AI-XR applications. To highlight the real implications of AI-associated adversarial threats, we designed a metaverse-specific case study and analyzed it through the adversarial lens. Finally, we elaborate on various open issues that require further research attention from the community.
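For readers unfamiliar with the adversarial threats the survey analyzes, here is a minimal sketch of the fast gradient sign method (FGSM), a standard attack on image classifiers; it assumes a PyTorch classifier `model` and is illustrative only, not the paper's metaverse case study.

```python
# Minimal FGSM sketch: perturb an input in the gradient-sign direction to
# raise the classifier's loss. `model` is an assumed PyTorch classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()       # one-step, L-infinity bounded
    return x_adv.clamp(0, 1).detach()     # keep pixel values valid
```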

    Synthetic-Neuroscore: Using A Neuro-AI Interface for Evaluating Generative Adversarial Networks

    Generative adversarial networks (GANs) are attracting increasing attention in computer vision, natural language processing, speech synthesis, and similar domains, with arguably the most striking results in image synthesis. However, evaluating the performance of GANs remains an open and challenging problem. Existing evaluation metrics primarily measure the dissimilarity between real and generated images using automated statistical methods; they often require large sample sizes and do not directly reflect human perception of image quality. In this work, we describe an evaluation metric, Neuroscore, that more directly reflects psychoperceptual image quality by leveraging brain signals. Our results show that Neuroscore outperforms current evaluation metrics in that: (1) it is more consistent with human judgment; (2) the evaluation process requires far fewer samples; and (3) it is able to rank image quality on a per-GAN basis. We further propose a convolutional neural network (CNN) based neuro-AI interface to predict Neuroscore from GAN-generated images directly, without the need for neural responses. Importantly, we show that including neural responses during the training phase of this network significantly improves its prediction capability. Materials related to this work are provided at https://github.com/villawang/Neuro-AI-Interface.
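As an illustration of a CNN that maps images to a scalar score, here is a minimal PyTorch sketch; the actual architecture and training details of the neuro-AI interface are not specified in the abstract, so every layer choice below is an assumption.

```python
# Minimal sketch of a CNN regressing a scalar quality score from an image,
# in the spirit of the proposed neuro-AI interface (architecture assumed).
import torch
import torch.nn as nn

class ScorePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # scalar, Neuroscore-style output

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Example: score a batch of four 64x64 generated images.
scores = ScorePredictor()(torch.rand(4, 3, 64, 64))
print(scores.shape)  # torch.Size([4, 1])
```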

    Artificial intelligence, blockchain, and extended reality: emerging digital technologies to turn the tide on illegal logging and illegal wood trade

    Illegal logging, which often results in forest degradation and sometimes in deforestation, remains ubiquitous in many places around the globe. Managing illegal logging and the illegal wood trade constitutes a global priority over the next few decades. Scientific, technological, and research communities are committed to responding rapidly, evaluating opportunities to capitalize on emerging digital technologies to address this formidable challenge. Here we investigate the potential of these emerging digital technologies to tackle illegal-logging-related challenges. We propose a novel system, WoodchAInX, combining explainable artificial intelligence (X-AI), next-generation blockchain, and extended reality (XR). Our findings on the most effective means of leveraging each technology, and on the convergence of the three, suggest vast promise for digital technology in this field. Yet we argue that, overall, digital transformations will not deliver fundamental, responsible, and sustainable benefits without revolutionary realignment.
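To illustrate the kind of tamper-evident traceability a blockchain layer could bring to a wood supply chain, here is a toy hash-chained custody ledger; it is a conceptual sketch only, not the WoodchAInX design, and all field names are hypothetical.

```python
# Toy hash-chained custody ledger: each record commits to the previous
# record's hash, so tampering upstream invalidates the chain.
import hashlib
import json

def add_record(chain, event: dict) -> list:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return chain + [{**body, "hash": digest}]

chain = add_record([], {"action": "harvest", "lot": "A-17", "permit": "P-2031"})
chain = add_record(chain, {"action": "transport", "lot": "A-17", "to": "mill-3"})
print(chain[-1]["hash"][:16])  # altering any earlier record breaks the chain
```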