
    Theory-driven Visual Design to Support Reflective Dietary Practice via mHealth: A Design Science Approach

    Design for reflection in human-computer interaction (HCI) has evolved from treating reflection as an abstract, outcome-driven design subject towards exposing its procedural and structural characteristics. Although HCI research has recognized that an individual's reflection is a long-lasting, multi-layered process that can be supported by meaningful design, researchers have made few efforts to derive insights from a theoretical perspective about an appropriate translation into end-user visual means. We therefore synthesize theoretical knowledge from reflective practice and learning and argue for a differentiation between time contexts of reflection that design needs to address differently. In an interdisciplinary design-science-research project in the mHealth nutrition-promotion context, we developed theory-driven guidelines for “reflection-in-action” and “reflection-on-action”. Our final design guidelines emerged from prior demonstrations and a final utility evaluation with mockup artifacts in a laboratory experiment with 64 users. Our iterative design and the resulting guidelines assist in addressing reflection design by answering reflective practice's respective contextual requirements. Based on our user study, we show that “reflection-in-action” benefits from actionable choice criteria offered in an instant timeframe, while “reflection-on-action” profits from the structured classification of behavior-related criteria over a longer, still memorable timeframe.

    LMFingerprints: Visual Explanations of Language Model Embedding Spaces through Layerwise Contextualization Scores

    Language models, such as BERT, construct multiple, contextualized embeddings for each word occurrence in a corpus. Understanding how the contextualization propagates through the model's layers is crucial for deciding which layers to use for a specific analysis task. Currently, most embedding spaces are explained by probing classifiers; however, some findings remain inconclusive. In this paper, we present LMFingerprints, a novel scoring-based technique for the explanation of contextualized word embeddings. We introduce two categories of scoring functions, which measure (1) the degree of contextualization, i.e., the layerwise changes in the embedding vectors, and (2) the type of contextualization, i.e., the captured context information. We integrate these scores into an interactive explanation workspace. By combining visual and verbal elements, we provide an overview of contextualization in six popular transformer-based language models. We evaluate hypotheses from the domain of computational linguistics, and our results not only confirm findings from related work but also reveal new aspects of the information captured in the embedding spaces. For instance, we show that while numbers are poorly contextualized, stopwords are unexpectedly highly contextualized in the models' upper layers, where their neighborhoods shift from tokens of similar functionality to tokens that contribute to the meaning of the surrounding sentences.
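    The abstract does not spell out the scoring functions themselves; as a minimal sketch (an assumption of ours, not the paper's exact formulation), a degree-of-contextualization score can be read as the average change of token embeddings between consecutive layers:

```python
import numpy as np

def layerwise_contextualization(embeddings):
    # embeddings: array of shape (n_layers, n_tokens, dim), one vector
    # per token per layer (as produced by a transformer's hidden states).
    a = embeddings[:-1]   # layers 0 .. L-2
    b = embeddings[1:]    # layers 1 .. L-1
    num = np.sum(a * b, axis=-1)
    denom = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    cos = num / np.clip(denom, 1e-12, None)
    # Average cosine distance per layer transition, over all tokens:
    # higher values mean the layer changed the embeddings more strongly.
    return 1.0 - cos.mean(axis=-1)

# Synthetic example: 4 layers, 5 tokens, 8 dimensions.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 5, 8))
scores = layerwise_contextualization(emb)
print(scores.shape)  # (3,): one score per consecutive-layer pair
```

    In practice the embeddings would come from a real model's hidden states; the score then lets layers be compared directly on a common scale.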

    Visual Analytics of Co-Occurrences to Discover Subspaces in Structured Data

    We present an approach that shows all relevant subspaces of categorical data condensed in a single picture. We model the categorical values of the attributes as co-occurrences with data partitions generated from structured data using pattern mining. We show that these co-occurrences satisfy an a-priori property, allowing us to greatly reduce the search space and effectively generate the condensed picture where conventional approaches filter out several subspaces as insignificant. The task of identifying interesting subspaces is common but difficult due to exponential search spaces and the curse of dimensionality. One application of such a task might be identifying a cohort of patients, defined by attributes such as gender, age, and diabetes type, that share a common patient history modeled as event sequences. Filtering the data by these attributes is common but cumbersome and often does not allow a comparison of subspaces. We contribute a powerful multi-dimensional pattern exploration approach (MDPE-approach), agnostic to the structured data type, that models multiple attributes and their characteristics as co-occurrences, allowing the user to identify and compare thousands of subspaces of interest in a single picture. In our MDPE-approach, we introduce two methods to dramatically reduce the search space, outputting only its boundaries in the form of two tables. We implement the MDPE-approach in an interactive visual interface (MDPE-vis) that provides a scalable, pixel-based visualization design allowing the identification, comparison, and sense-making of subspaces in structured data. Our case studies using a gold-standard dataset and external domain experts confirm the applicability of our approach and implementation. A third use case sheds light on the scalability of our approach, and a user study with 15 participants underlines its usefulness and power.
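    As an illustration of the underlying idea (a toy sketch with hypothetical attribute names, not the paper's MDPE implementation), modeling categorical values as co-occurrences with data partitions can be as simple as counting how often each attribute=value pair falls into each partition:

```python
from collections import Counter

# Hypothetical toy records: categorical attributes plus a partition label,
# e.g. a cluster of patients with similar event-sequence histories.
records = [
    {"gender": "F", "diabetes": "type1", "partition": "P1"},
    {"gender": "F", "diabetes": "type1", "partition": "P1"},
    {"gender": "M", "diabetes": "type2", "partition": "P2"},
    {"gender": "F", "diabetes": "type2", "partition": "P2"},
]

# Count (attribute=value, partition) co-occurrences; a subspace such as
# gender=F AND diabetes=type1 is interesting when its values concentrate
# in a single partition.
counts = Counter()
for r in records:
    for attr in ("gender", "diabetes"):
        counts[(f"{attr}={r[attr]}", r["partition"])] += 1

print(counts[("gender=F", "P1")])  # 2
```

    The a-priori property then allows pruning: a combination of values can only co-occur with a partition as often as its rarest constituent value does.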

    Effects and challenges of using a nutrition assistance system: results of a long-term mixed-method study

    Healthy nutrition contributes to preventing non-communicable and diet-related diseases. Recommender systems, as an integral part of mHealth technologies, address this task by supporting users with healthy food recommendations. However, knowledge about the effects of the long-term provision of health-aware recommendations in real-life situations is limited. This study investigates the impact of a mobile, personalized recommender system named Nutrilize. Our system offers automated personalized visual feedback and recommendations based on individual dietary behaviour, phenotype, and preferences. Using quantitative and qualitative measures of 34 participants during a study of 2–3 months, we provide a deeper understanding of how our nutrition application affects the users' physique, nutrition behaviour, system interactions, and system perception. Our results show that Nutrilize positively affects nutritional behaviour (conditional R² = .342), measured by the optimal intake of each nutrient. The analysis of different application features shows that reflective visual feedback has a more substantial impact on healthy behaviour than the recommender (conditional R² = .354). Using a qualitative analysis of semi-structured in-depth interviews, we further identify system limitations influencing this result, such as a lack of diversity, mistrust in healthiness and personalization, real-life contexts, and personal user characteristics. Finally, we discuss general knowledge acquired on the design of personalized mobile nutrition recommendations by identifying important factors, such as the users' acceptance of the recommender's taste, health, and personalization.

    Human papillomavirus vaccination of girls in the German model region Saarland: Insurance data-based analysis and identification of starting points for improving vaccination rates

    In Germany, the incidence of cervical cancer, a disease caused by human papillomaviruses (HPV), is higher than in neighboring European countries. HPV vaccination has been recommended for girls since 2007. However, it continues to be significantly less well received than other childhood vaccines, so its potential for cancer prevention is not fully realized. To find new starting points for improving vaccination rates, we analyzed pseudonymized routine billing data from statutory health insurers in the PRÄZIS study (prevention of cervical carcinoma and its precursors in women in Saarland), with the federal state of Saarland serving as a model region. We show that lowering the HPV vaccination age to 9 years led to more completed HPV vaccinations as early as 2015. Since then, HPV vaccination rates and the proportion of 9- to 11-year-old girls among HPV-vaccinated females have steadily increased. However, HPV vaccination rates among 15-year-old girls in Saarland remained well below 50% in 2019. Pediatricians vaccinated the most girls overall, with a particularly high proportion at the recommended vaccination age of 9–14 years, while gynecologists provided more HPV catch-up vaccinations among 15- to 17-year-old girls, and general practitioners compensated for HPV vaccination in Saarland communities with fewer pediatricians or gynecologists. We also provide evidence for a significant association between attendance at the children's medical check-ups “U11” or “J1” and HPV vaccination. In particular, participation in HPV vaccination is high on the day of U11. However, obstacles are that U11 is currently not financed by all statutory health insurers and that there is a lack of invitation procedures for both U11 and J1, resulting in significantly lower participation rates than for the earlier U8 or U9 screenings, which are conducted exclusively with invitations and reminders. Based on our data, we propose restructuring the U11 and J1 screenings in Germany, with mandatory funding for U11 and organized invitations for HPV vaccination at U11 or J1 for both boys and girls.

    Research directions in recommender systems for health and well-being: A Preface to the Special Issue

    Recommender systems have been put to use in the entertainment and e-commerce domains for decades, and in that time they have grown and matured into reliable and ubiquitous components of today's digital landscape. Building on this maturity, the application of recommender systems to health and well-being has seen a rise in recent years, paving the way for tailored and personalized systems that support caretakers, caregivers, and other users in the health domain. In this introduction, we give a brief overview of the stakes, the requirements, and the possibilities that recommender systems for health and well-being bring.

    Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations

    A common criterion for Explainable AI (XAI) is to support users in establishing appropriate trust in the AI: rejecting advice when it is incorrect and accepting advice when it is correct. Previous findings suggest that explanations can cause over-reliance on the AI (overly accepting advice). Explanations that evoke appropriate trust are even more challenging to achieve for decision-making tasks that are difficult for both humans and AI. For this reason, we study decision-making by non-experts in the high-uncertainty domain of stock trading. We compare the effectiveness of three explanation styles (influenced by inductive, abductive, and deductive reasoning) and the role of AI confidence in terms of (a) the users' reliance on the XAI interface elements (charts with indicators, AI prediction, explanation), (b) the correctness of the decision (task performance), and (c) the agreement with the AI's prediction. In contrast to previous work, we look at interactions between different aspects of decision-making, including AI correctness, and the combined effects of AI confidence and explanation styles. Our results show that specific explanation styles (abductive and deductive) improve the users' task performance in the case of high AI confidence compared to inductive explanations. In other words, these explanation styles were able to evoke correct decisions (both positive and negative) when the system was certain. In such a condition, the agreement between the user's decision and the AI prediction confirms this finding, highlighting a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI. Our findings further indicate a need to consider AI confidence as a criterion for including or excluding explanations from AI interfaces. In addition, this paper highlights the importance of carefully selecting an explanation style according to the characteristics of the task and data.

    Effects of AI and Logic-Style Explanations on Users’ Decisions under Different Levels of Uncertainty

    Existing eXplainable Artificial Intelligence (XAI) techniques support people in interpreting AI advice. However, although previous work evaluates the users' understanding of explanations, factors influencing the decision support are largely overlooked in the literature. This article addresses this gap by studying the impact of user uncertainty, AI correctness, and the interaction between AI uncertainty and explanation logic-styles in classification tasks. We conducted two separate studies: one asking participants to recognize handwritten digits and one asking them to classify the sentiment of reviews. To assess decision-making, we analyzed task performance, agreement with the AI suggestion, and the users' reliance on the XAI interface elements. Participants made their decisions relying on three pieces of information in the XAI interface (the image or text instance, the AI prediction, and the explanation). Each participant was shown one explanation style (between-participants design) following one of three styles of logical reasoning (inductive, deductive, and abductive). This allowed us to study how different levels of AI uncertainty influence the effectiveness of different explanation styles. The results show that user uncertainty and AI correctness significantly affected users' classification decisions across the analyzed metrics. In both domains (images and text), users relied mainly on the instance to decide. Users were usually overconfident about their choices, and this was more pronounced for text. Furthermore, inductive explanations led to overreliance on the AI advice in both domains: they were the most persuasive, even when the AI was incorrect. The abductive and deductive styles had complex effects depending on the domain and the AI uncertainty levels.
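    The decision-making metrics named above (task performance, agreement with the AI, and overreliance on incorrect advice) can be sketched from hypothetical trial logs; the field names below are illustrative, not the study's actual data format:

```python
# Hypothetical trial logs: each entry records the ground truth, the AI's
# prediction, and the participant's final decision for one classification task.
trials = [
    {"truth": "pos", "ai": "pos", "user": "pos"},
    {"truth": "neg", "ai": "pos", "user": "pos"},  # user follows an incorrect AI
    {"truth": "neg", "ai": "neg", "user": "pos"},
    {"truth": "pos", "ai": "pos", "user": "pos"},
]

n = len(trials)
# Task performance: share of trials where the user's decision was correct.
task_performance = sum(t["user"] == t["truth"] for t in trials) / n
# Agreement: share of trials where the user followed the AI's prediction.
agreement = sum(t["user"] == t["ai"] for t in trials) / n
# Overreliance: share of trials where the user agreed with an incorrect AI.
overreliance = sum(t["user"] == t["ai"] and t["ai"] != t["truth"] for t in trials) / n

print(task_performance, agreement, overreliance)  # 0.5 0.75 0.25
```

    Comparing these quantities across explanation styles and AI uncertainty levels is what reveals effects such as the persuasiveness of inductive explanations.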