
    Immersing the artist and designer in the needs of the clinician: evolving the brief for distraction and stress reduction in a new Child Protection Unit.

    Engaging clinicians in the design of new, less stressful spaces in healthcare is an interdisciplinary challenge for artists and designers. The design brief is the primary means of ensuring shared understanding and success criteria for creative projects (Press and Cooper 2003), and it highlights the project's ambitions and constraints. Conventionally, the brief is prepared by the client and issued to the artist or designer. This assumes that the client knows at the outset how to articulate needs and is able to envisage the outcome. Alternative processes emerging through co-design and interdisciplinary working assume the brief is developed or evolved jointly as part of the process and is focused on the experience of the user. This paper focuses on the evolution of a meaningful brief for a Child Protection Unit in NHS Greater Glasgow & Clyde's new Royal Hospital for Children. Development of the brief was driven by the art and design team and aimed at opening up mutual understanding with the clinicians. The visual mapping of dialogue between artist, interactive designer and clinicians provides a novel approach to understanding this key stage of the process. Fremantle co-ordinated the paper. Hepburn undertook the fieldwork and provided the analysis. Fremantle structured the paper and co-ordinated reviews with Hamilton and Sands.

    bLIMEy: Surrogate Prediction Explanations Beyond LIME

    Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any type of data (images, text and tabular), are model-agnostic and are post-hoc (i.e., can be retrofitted). The Local Interpretable Model-agnostic Explanations (LIME) algorithm is often mistakenly conflated with the more general framework of surrogate explainers, which may lead to a belief that it is the definitive solution to surrogate explainability. In this paper we empower the community to "build LIME yourself" (bLIMEy) by proposing a principled algorithmic framework for building custom local surrogate explainers of black-box model predictions, including LIME itself. To this end, we demonstrate how to decompose the family of surrogate explainers into algorithmically independent and interoperable modules and discuss the influence of these component choices on the functional capabilities of the resulting explainer, using the example of LIME. Comment: 2019 Workshop on Human-Centric Machine Learning (HCML 2019); 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
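    The modular decomposition described in the abstract can be sketched as three interchangeable components: a sampler around the query, a proximity-weighting kernel, and an interpretable (here, weighted linear) explanation model. The code below is an illustrative toy, not the paper's reference implementation; the black box, parameter values and function names are all assumptions.

```python
# Toy sketch of a LIME-style local surrogate explainer split into modules:
# (1) neighbourhood sampling, (2) proximity weighting, (3) surrogate fitting.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Toy non-linear model whose predictions we want to explain."""
    return X[:, 0] ** 2 + np.sin(X[:, 1])

def sample_neighbourhood(x, n=500, scale=0.5):
    """Module 1: draw perturbed instances around the query point."""
    return x + rng.normal(0.0, scale, size=(n, x.shape[0]))

def weight_samples(X, x, kernel_width=1.0):
    """Module 2: weight samples by proximity to the query (RBF kernel)."""
    d2 = ((X - x) ** 2).sum(axis=1)
    return np.exp(-d2 / kernel_width ** 2)

def fit_surrogate(X, y, w):
    """Module 3: weighted least-squares linear model; its coefficients
    serve as local feature-importance estimates."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add intercept column
    coef = np.linalg.solve(Xb.T @ (w[:, None] * Xb), Xb.T @ (w * y))
    return coef[:-1]  # drop the intercept

x = np.array([1.0, 0.0])          # query instance to explain
X = sample_neighbourhood(x)
importance = fit_surrogate(X, black_box(X), weight_samples(X, x))
# Locally, d/dx0 (x0^2) = 2 and d/dx1 sin(x1) = cos(0) = 1 at the query,
# so the two coefficients should land near [2, 1].
```

    Swapping any one module (e.g. a different sampler or a sparse linear model) yields a different member of the surrogate-explainer family, which is the point of the decomposition.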

    Sampling Based On Natural Image Statistics Improves Local Surrogate Explainers

    Many problems in computer vision have recently been tackled using models whose predictions cannot be easily interpreted, most commonly deep neural networks. Surrogate explainers are a popular post-hoc interpretability method to further understand how a model arrives at a particular prediction. By training a simple, more interpretable model to locally approximate the decision boundary of a non-interpretable system, we can estimate the relative importance of the input features on the prediction. Focusing on images, surrogate explainers, e.g., LIME, generate a local neighbourhood around a query image by sampling in an interpretable domain. However, these interpretable domains have traditionally been derived exclusively from the intrinsic features of the query image, not taking into consideration the manifold of the data the non-interpretable model has been exposed to in training (or, more generally, the manifold of real images). This leads to suboptimal surrogates trained on potentially low-probability images. We address this limitation by aligning the local neighbourhood on which the surrogate is trained with the original training data distribution, even when this distribution is not accessible. We propose two approaches to do so, namely (1) altering the method for sampling the local neighbourhood and (2) using perceptual metrics to convey some of the properties of the distribution of natural images. Comment: 12 pages, 7 figures.
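    The first proposal above, altering the neighbourhood sampler, can be illustrated on toy tabular data: instead of perturbing only the query, samples are pulled back toward the spread of the training data. This is a hedged sketch under strong simplifying assumptions (a Gaussian approximation of the data, no images or perceptual metrics); all names and parameters are illustrative.

```python
# Compare an intrinsic (query-only) sampler with a distribution-aware one
# that respects the anisotropic spread of the training data.
import numpy as np

rng = np.random.default_rng(1)
# Toy "data manifold": wide in feature 0, narrow in feature 1.
train = rng.normal([0.0, 0.0], [1.0, 0.2], size=(2000, 2))

def intrinsic_sampler(x, n=1000, scale=0.5):
    """Baseline: isotropic perturbations of the query, ignoring the data."""
    return x + rng.normal(0.0, scale, size=(n, 2))

def distribution_aware_sampler(x, n=1000, scale=0.5):
    """Shrink perturbations toward the data mean along any feature whose
    training spread is narrower than the perturbation scale."""
    mu, sigma = train.mean(axis=0), train.std(axis=0)
    local = x + rng.normal(0.0, scale, size=(n, 2))
    return mu + (local - mu) * np.minimum(1.0, sigma / scale)

x = np.array([0.5, 0.1])
naive = intrinsic_sampler(x)
aware = distribution_aware_sampler(x)
# The aware sampler keeps feature 1 within the data's narrow spread,
# so the surrogate is trained on higher-probability neighbours.
```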

    What You Hear Is What You See: Audio Quality Metrics From Image Quality Metrics

    In this study, we investigate the feasibility of utilizing state-of-the-art image perceptual metrics for evaluating audio signals by representing them as spectrograms. The encouraging outcome of the proposed approach is based on the similarity between the neural mechanisms in the auditory and visual pathways. Furthermore, we customise one of the metrics, which has a psychoacoustically plausible architecture, to account for the peculiarities of sound signals. We evaluate the effectiveness of our proposed metric and several baseline metrics using a music dataset, with promising results in terms of the correlation between the metrics and the perceived quality of audio as rated by human evaluators.
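    The pipeline described above can be sketched end to end: render two audio signals as log-magnitude spectrograms and score them with an image quality metric. PSNR stands in here for the learned perceptual metrics used in the study; the STFT parameters and signals are illustrative assumptions.

```python
# Audio quality via image metrics: spectrogram rendering + PSNR scoring.
import numpy as np

def spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude STFT computed with a Hann window."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    mag = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log1p(mag)

def psnr(ref, test):
    """Peak signal-to-noise ratio, treating the spectrograms as images."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

sr = 8000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)
slightly_noisy = clean + 0.01 * np.random.default_rng(0).normal(size=sr)
very_noisy = clean + 0.30 * np.random.default_rng(0).normal(size=sr)

ref = spectrogram(clean)
# Higher PSNR -> spectrogram closer to the reference -> better audio quality.
score_good = psnr(ref, spectrogram(slightly_noisy))
score_bad = psnr(ref, spectrogram(very_noisy))
```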

    PerceptNet: A Human Visual System Inspired Neural Network for Estimating Perceptual Distance

    Traditionally, the vision community has devised algorithms to estimate the distance between an original image and images that have been subject to perturbations. Inspiration was usually taken from the human visual system and how it processes different perturbations, since this processing determines our ability to judge image quality. While recent works have presented deep neural networks trained to predict human perceptual quality, very few borrow any intuitions from the human visual system. To address this, we present PerceptNet, a convolutional neural network whose architecture has been chosen to reflect the structure and various stages of the human visual system. We evaluate PerceptNet on various traditional perception datasets and note strong performance on a number of them compared with traditional image quality metrics. We also show that including a nonlinearity inspired by the human visual system in classical deep neural network architectures can increase their ability to judge perceptual similarity. Compared to similar deep learning methods, the performance is similar, although our network has several orders of magnitude fewer parameters.
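    One well-known example of the kind of visually inspired nonlinearity mentioned above is divisive normalization, where each response is rescaled by the pooled activity of its neighbours. The sketch below is a generic illustration of that mechanism, not PerceptNet's actual layer; the parameter values are assumptions.

```python
# Divisive normalization: a contrast-gain-control nonlinearity in which
# strong joint activity suppresses individual channel responses.
import numpy as np

def divisive_normalization(x, beta=1.0, gamma=0.1):
    """Divide each channel by a constant plus the pooled squared
    activity across channels (simplest shared-pool variant)."""
    pooled = beta + gamma * (x ** 2).sum(axis=-1, keepdims=True)
    return x / np.sqrt(pooled)

responses = np.array([[1.0, 2.0, 3.0]])
normalized = divisive_normalization(responses)
# The pooled denominator exceeds 1, so every response shrinks while the
# relative ordering of channels is preserved.
```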