
    Communicating qualitative uncertainty in data visualization

    Qualitative uncertainty refers to the implicit, underlying issues embedded in data, such as the circumstances of its collection, its storage, or even the biases and assumptions of its authors. Although such uncertainty can jeopardize the validity of data analysis, it is often overlooked in visualizations because it is indirect and non-quantifiable. In this paper we present two case studies within the digital humanities in which we examined how to integrate uncertainty into our visualization designs. Using these cases as a starting point, we propose four considerations for data visualization research in relation to indirect, qualitative uncertainty: (1) we suggest that uncertainty in visualization should be examined within its socio-technological context, (2) we propose the use of interaction design patterns to design for it, (3) we argue for more attention to be paid to the data generation process in the humanities, and (4) we call for the further development of participatory activities specifically catered to understanding qualitative uncertainties. While our findings are grounded in the humanities, we believe these considerations can be beneficial for other settings where indirect uncertainty plays an equally prevalent role.

    Communicating uncertain information from deep learning models to users

    “The use of Artificial Intelligence (AI) decision support systems is increasing in high-stakes contexts, such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with domain knowledge. I conducted two human-subject experiments to examine the effects of uncertainty information with AI recommendations. The experimental stimuli are from an existing image recognition deep learning model, one popular approach to AI. In Paper I, I evaluated the effect of the number of AI recommendations and the provision of uncertainty information. For a series of images, participants identified the subject and rated their confidence level. Results suggest that AI recommendations, especially multiple ones, increased accuracy and confidence. However, uncertainty information, which was represented visually with bars, did not significantly improve participants' performance. In Paper II, I tested the effect of AI recommendations in a within-subject comparison and the effect of more salient uncertainty information in a between-subject comparison in the context of varying domain knowledge. The uncertainty information combined both numerical (percent) and visual (color-coded bar) formats to make the information easier to interpret and more noticeable. Consistent with Paper I, results suggest that AI recommendations improved participants' accuracy and confidence. In addition, the more salient uncertainty information significantly increased accuracy, but not confidence. Based on a subjective measure of domain knowledge, participants had higher domain knowledge for animals. In general, AI recommendations and uncertainty information had less of an effect as domain knowledge increased. Results suggest that uncertainty information can improve accuracy and potentially decrease over-confidence”--Abstract, page iv
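The combined numeric-plus-bar format described in Paper II can be illustrated with a minimal sketch. This is a hypothetical text-based rendering, not the thesis's actual stimuli; the function name, label widths, and bar characters are all illustrative assumptions.

```python
# Hypothetical sketch: presenting top-k AI recommendations with a
# combined numeric (percent) and visual (bar) uncertainty cue, in the
# spirit of the salient format Paper II describes. Not the study's code.

def render_recommendations(predictions, width=20):
    """predictions: list of (label, probability) pairs; returns one
    line per recommendation, sorted by descending probability."""
    lines = []
    for label, prob in sorted(predictions, key=lambda p: -p[1]):
        filled = round(prob * width)          # bar length scales with probability
        bar = "#" * filled + "-" * (width - filled)
        lines.append(f"{label:<10} {prob:6.1%} [{bar}]")
    return "\n".join(lines)

print(render_recommendations([("cat", 0.72), ("lynx", 0.18), ("dog", 0.10)]))
```

Pairing the percent value with a bar of proportional length gives users two redundant encodings of the same uncertainty, which is the design rationale the abstract attributes to the more salient format.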

    A systematic exploration of uncertainty in interactive systems

    Uncertainty is an inherent part of our everyday life. Humans have to deal with uncertainty every time they make a decision. The importance of uncertainty additionally increases in the digital world. Machine learning and predictive algorithms introduce statistical uncertainty to digital information. In addition, the rising number of sensors in our surroundings increases the amount of statistically uncertain data, as sensor data is prone to measurement errors. Hence, there is an emergent need for practitioners and researchers in Human-Computer Interaction to explore new concepts and develop interactive systems able to handle uncertainty. Such systems should not only support users in entering uncertainty in their input, but additionally present uncertainty in a comprehensible way. The main contribution of this thesis is the exploration of the role of uncertainty in interactive systems and how novel input and output methods can support researchers and designers to efficiently and clearly communicate uncertainty. By using empirical methods of Human-Computer Interaction and a systematic approach, we present novel input and output methods that support the comprehensive communication of uncertainty in interactive systems. We further integrate our results in a simulation tool for end-users. Based on related work, we create a systematic overview of sources of uncertainty in interactive systems to support the quantification of uncertainty and identify relevant research areas. The overview can help practitioners and researchers to identify uncertainty in interactive systems and either reduce or communicate it. We then introduce new concepts for the input of uncertain data. We enhance standard input controls, develop specific slider controls and tangible input controls, and collect physiological measurements. We also compare different representations for the output of uncertainty to make recommendations for their usage. 
Furthermore, we analyze how humans interpret uncertain data and make suggestions on how to avoid misinterpretation and statistically incorrect judgements. We embed the insights gained from this thesis in an end-user simulation tool to make them available for future research. The tool is intended to be a starting point for future research on uncertainty in interactive systems and to foster communicating uncertainty and building trust in the system. Overall, our work shows that user interfaces can be enhanced to effectively support users with the input and output of statistically uncertain information.
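The idea of input controls that capture uncertainty, such as the enhanced slider controls the abstract mentions, can be sketched as a simple data model: the user supplies a best estimate plus a confidence range instead of a single point value. The class and field names below are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: a data model for an uncertainty-aware slider,
# where the user enters a best estimate together with a confidence
# range rather than one point value. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class UncertainInput:
    estimate: float   # the user's best guess (slider thumb position)
    lower: float      # lower bound of the confidence range
    upper: float      # upper bound of the confidence range

    def __post_init__(self):
        # Reject inconsistent input, e.g. an estimate outside the range.
        if not (self.lower <= self.estimate <= self.upper):
            raise ValueError("estimate must lie within [lower, upper]")

    @property
    def spread(self) -> float:
        """Width of the range, a simple proxy for how uncertain the input is."""
        return self.upper - self.lower

reading = UncertainInput(estimate=21.5, lower=20.0, upper=24.0)
print(reading.spread)  # 4.0
```

Modeling the input this way lets downstream components (e.g. the end-user simulation tool) propagate or visualize the range, rather than silently treating every entry as exact.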