20 research outputs found

    Data visualisation literacy in higher education: an exploratory study of understanding of a learning dashboard tool

    The visualisation of data has become ubiquitous. Visualisations are used to represent data in a way that is easy to understand and useful in our lives. Each data visualisation needs to be suitable for extracting the correct information to complete a task and make an informed decision while minimising the impact of biases. To achieve this, the ability to create and read visualisations has become as important as the ability to read and write. The Information Visualisation community is therefore paying more attention to literacy and decision making in data visualisations. Until recently, researchers lacked valid and reliable test instruments to measure the literacy of users, or a taxonomy to detect biased judgement in data visualisations. A literature review showed there is relatively little research on data visualisations for different user data literacy levels in authentic settings, and a lack of studies providing evidence for the presence of cognitive biases in data visualisations. This exploratory research study was undertaken to develop a method to assess perceived usefulness of, and confidence in, reporting dashboards within higher education by adapting existing research instruments. A survey was designed to test perceived usefulness and perceived skill, with 24 multiple-choice test items covering six data visualisations based on eight tasks. The survey was sent to 157 potential participants, with a response rate of 20.38%. The results showed that data visualisations are useful, but the purpose of some data visualisations is not always understood. We also found a consensus that respondents perceive their own data visualisation literacy to be higher than that of their peers; however, the higher their overconfidence, the lower their actual data visualisation literacy score. Finally, we discuss the benefits, limitations and possible future research areas.
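As a quick sanity check on the reported figures (assuming the 20.38% rate was rounded from a whole number of responses, which the abstract does not state), the raw respondent count can be recovered:

```python
# Recover the likely number of respondents behind the reported 20.38%
# response rate on 157 invitations.
invited = 157
reported_rate = 20.38  # percent, as stated in the abstract

responses = round(invited * reported_rate / 100)        # 31.9966 -> 32
recovered_rate = round(100 * responses / invited, 2)    # 32/157 -> 20.38

print(responses, recovered_rate)
```

So the percentages reported above rest on roughly 32 completed surveys, a small absolute sample for the exploratory claims made.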

    Effect of Geospatial Uncertainty Borderization on Users' Heuristic Reasoning

    Abstract. A set of mental strategies called "heuristics" – logical shortcuts that we use to make decisions under uncertainty – has become the subject of a growing number of studies. However, the process of heuristic reasoning about uncertain geospatial data remains relatively under-researched. With this study, we explored the relation between heuristics-driven decision-making and the visualization of geospatial data in states of uncertainty, with a specific focus on the visualization of borders, here termed "borderization". We therefore tested a set of cartographic techniques for visualizing the boundaries of two types of natural hazards across a series of maps through a user survey. Respondents were asked to assess the safety and desirability of several housing locations potentially affected by air pollution or avalanches. Maps in the survey varied by "borderization" method, background color and type of information about uncertain data (e.g., extrinsic vs. intrinsic). Survey results, analyzed using a mixed quantitative-qualitative approach, confirmed previous suggestions that heuristics play a significant role in affecting users' map experience and subsequent decision-making.

    An extensible framework for provenance in human terrain visual analytics

    We describe and demonstrate an extensible framework that supports data exploration and provenance in the context of Human Terrain Analysis (HTA). Working closely with defence analysts, we extract requirements and a list of features that characterise data analysed at the end of the HTA chain. From these, we select an appropriate non-classified data source with analogous features, and model it as a set of facets. We develop ProveML, an XML-based extension of the Open Provenance Model, using these facets, and augment it with the structures necessary to record the provenance of data, analytical process and interpretations. Through an iterative process, we develop and refine a prototype system for Human Terrain Visual Analytics (HTVA), and demonstrate means of storing, browsing and recalling analytical provenance and process through analytic bookmarks in ProveML. We show how these bookmarks can be combined to form narratives that link back to the live data. Throughout the process, we demonstrate that through structured workshops, rapid prototyping and structured communication with intelligence analysts we are able to establish requirements and design schemas, techniques and tools that meet the needs of the intelligence community. We use the needs and reactions of defence analysts in defining and steering the methods to validate the framework.
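To make the bookmark idea concrete, here is a minimal sketch of how an analytic bookmark might be serialised as XML and chained into a narrative. The element and attribute names below are illustrative assumptions, not the published ProveML schema:

```python
# Hypothetical analytic bookmark in the spirit of ProveML (an XML-based
# extension of the Open Provenance Model). All names here are illustrative.
import xml.etree.ElementTree as ET

bookmark = ET.Element("bookmark", id="bm-001", analyst="analyst-7")
ET.SubElement(bookmark, "timestamp").text = "2013-05-14T10:22:00Z"
# The query captures the analytical state (faceted data exploration).
ET.SubElement(bookmark, "dataQuery").text = "facet:location AND facet:event"
# The interpretation records the analyst's reasoning at this point.
ET.SubElement(bookmark, "interpretation").text = (
    "Cluster of reports suggests increased activity near site A."
)
# Narratives are formed by linking bookmarks in sequence.
ET.SubElement(bookmark, "next", ref="bm-002")

xml_string = ET.tostring(bookmark, encoding="unicode")
print(xml_string)
```

The key design point carried over from the abstract is that a bookmark stores the query (a link back to live data) alongside the interpretation, so replaying a narrative re-executes the exploration rather than showing a static snapshot.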

    Is it time we get real? A systematic review of the potential of data-driven technologies to address teachers' implicit biases

    Data-driven technologies for education, such as artificial intelligence in education (AIEd) systems, learning analytics dashboards, open learner models, and other applications, are often created with an aspiration to help teachers make better, evidence-informed decisions in the classroom. Addressing gender, racial, and other biases inherent to data and algorithms in such applications is seen as a way to increase the responsibility of these systems, and has been the focus of much of the research in the field, including systematic reviews. However, implicit biases can also be held by teachers. To the best of our knowledge, this systematic literature review is the first of its kind to investigate what kinds of teacher biases have been impacted by data-driven technologies, how or if these technologies were designed to challenge these biases, and which strategies were most effective at promoting equitable teaching behaviors and decision making. Following PRISMA guidelines, a search of five databases returned n = 359 records, of which only n = 2 studies by a single research team were identified as relevant. The findings show that there is minimal evidence that data-driven technologies have been evaluated in their capacity for supporting teachers to make less biased decisions or promote equitable teaching behaviors, even though this capacity is often used as one of the core arguments for the use of data-driven technologies in education. By examining these two studies in conjunction with related studies that did not meet the eligibility criteria during the full-text review, we reveal the approaches that could play an effective role in mitigating teachers' biases, as well as ones that may perpetuate biases. We conclude by summarizing directions for future research that should seek to directly confront teachers' biases through explicit design strategies within teacher tools, to ensure that the impact of biases of both technology (including data, algorithms, models, etc.) and teachers is minimized. We propose an extended framework to support future research and design in this area, through motivational, cognitive, and technological debiasing strategies.

    Doctor of Philosophy

    The present study explored data presentation and human cognition with the objective of improving electronic Decision Support Systems (DSS). Computers have been used as tools for decision support for over 60 years, with the intent to supplement or replace human cognition. However, electronic computing has failed to reliably replace human cognition in complex domains. The suboptimal properties of the data and complexities of the domain often require human interpretation and intervention. Human interpretation relies on experience, values, intuition, insight and learning, which can lead to shortcuts or heuristics. Heuristics in the correct context can be economical and effective in solving many problems. When heuristics fail, the results are labeled as cognitive biases or errors. Biases all share the elements of structuring incorrect or inappropriate models or hypotheses and/or insufficient consideration of the data. Most biases can be linked to confirmation bias, which is manifested by searches for and consideration of only confirming data. De-biasing techniques share the concept of shifting cognitive processing from an automatic associative mode to a more deliberate, conscious rule-based mode. This study used a modified Wason 2-4-6 task that combined two methods: (1) increased salience through data visualization, and (2) appealing to the rule-based system through task instructions. The results indicate that neither increased salience nor instructions ensure increased search sufficiency, efficiency or decision accuracy. However, this study provides insight into the perceived value of evidence and four potential limitations related to self-directed searches: (1) the selection of necessary disconfirming evidence cannot be assumed, regardless of the perceived value of disconfirming evidence; (2) the selection of sufficient evidence does not ensure accuracy; however, (3) insufficient selection of disconfirming evidence results in lower accuracy; and (4) ambiguous evidence is considered more valuable than potentially disconfirming evidence. Implications for the design of decision support systems are presented along with limitations and directions for future research.
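The Wason 2-4-6 task mentioned above can be sketched in a few lines. In the classic task the hidden rule is simply "any strictly ascending triple", while the seed example (2, 4, 6) invites narrower hypotheses such as "even numbers increasing by 2"; confirmation bias shows up as testing only triples that fit the narrow hypothesis. The triples below are illustrative:

```python
# Minimal sketch of the classic Wason 2-4-6 rule-discovery task.
def hidden_rule(triple):
    """The experimenter's actual rule: any strictly ascending triple."""
    a, b, c = triple
    return a < b < c

def narrow_hypothesis(triple):
    """A typical over-specific hypothesis suggested by the seed (2, 4, 6)."""
    a, b, c = triple
    return a % 2 == 0 and b == a + 2 and c == b + 2

# A confirming test fits the narrow hypothesis AND the true rule, so a
# "yes" answer cannot falsify the hypothesis.
confirming = (8, 10, 12)
# A disconfirming test violates the hypothesis; if the rule still says
# "yes", the narrow hypothesis is refuted -- this is the informative test.
disconfirming = (1, 2, 3)

print(hidden_rule(confirming), narrow_hypothesis(confirming))
print(hidden_rule(disconfirming), narrow_hypothesis(disconfirming))
```

Limitation (1) above corresponds to participants never generating triples like `(1, 2, 3)` at all, no matter how highly they claim to value disconfirming evidence.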

    THE ROLE OF EMOTION IN VISUALIZATION

    The popular notion that emotion and reason are incompatible is no longer defensible. Recent research in psychology and cognitive science has established emotion as a key element in numerous aspects of perception and cognition, including attention, memory, decision-making, risk perception, and creativity. This dissertation centers around the observation that emotion influences many aspects of perception and cognition that are crucial for effective visualization. First, I demonstrate that emotion influences accuracy in fundamental visualization tasks by combining a classic graphical perception experiment (from Cleveland and McGill) with emotion induction procedures from psychology (chapter 3). Next, I expand on the experiments in the first chapter to explore additional techniques for studying emotion and visualization, resulting in an experiment that shows that performance differences between primed individuals persist even as task difficulty increases (chapter 4). In a separate experiment, I show how certain emotional states (i.e. frustration and engagement) can be inferred from visualization interaction logs using machine learning (chapter 5). I then discuss a model for individual cognitive differences in visualization, which situates emotion into existing individual differences research in visualization (chapter 6). Finally, I propose a preliminary model for emotion in visualization (chapter 7).
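As a toy illustration of the chapter 5 idea (not the dissertation's actual model), behavioural proxies can be computed from an interaction log before any classifier is trained; the event names, weights and threshold below are assumptions for illustration only:

```python
# Toy feature extraction from a visualization interaction log.
# Real work would train a machine-learning classifier on labeled sessions;
# here a hand-set rule stands in for the learned decision boundary.
from collections import Counter

log = ["hover", "click", "undo", "click", "undo", "undo", "reset", "hover"]

counts = Counter(log)
# Repeated undos and resets are plausible behavioural proxies for frustration.
frustration_score = counts["undo"] + 2 * counts["reset"]
engaged = counts["hover"] + counts["click"]

state = "frustrated" if frustration_score > engaged / 2 else "engaged"
print(frustration_score, state)
```

The point of the sketch is only that interaction logs already contain usable signal; the dissertation's contribution is learning that mapping from data rather than hand-tuning it.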

    The Attraction Effect in Information Visualization

    The attraction effect is a well-studied cognitive bias in decision making research, where one's choice between two alternatives is influenced by the presence of an irrelevant (dominated) third alternative. We examine whether this cognitive bias, so far only tested with three alternatives and simple presentation formats such as numerical tables, text and pictures, also appears in visualizations. Since visualizations can be used to support decision making (e.g., when choosing a house to buy or an employee to hire), a systematic bias could have important implications. In a first crowdsourced experiment, we indeed partially replicated the attraction effect with three alternatives presented as a numerical table, and observed similar effects when they were presented as a scatterplot. In a second experiment, we investigated whether the effect extends to larger sets of alternatives, where the number of alternatives is too large for numerical tables to be practical. Our findings indicate that the bias persists for larger sets of alternatives presented as scatterplots. We discuss implications for future research on how to further study and possibly alleviate the attraction effect.
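The decoy setup behind the attraction effect can be made concrete with a small dominance check; the attribute names and values below are made up for illustration:

```python
# Sketch of the attraction-effect stimulus: a "decoy" alternative is
# dominated by one target (worse on every attribute), which empirically
# pulls choices toward that target even though the decoy is irrelevant.
def dominates(a, b):
    """True if alternative a is at least as good as b on every attribute
    and strictly better on at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Attributes: (quality, affordability), both higher-is-better.
target_a = (9, 4)
target_b = (4, 9)
decoy    = (8, 3)   # dominated by target_a, but not by target_b

print(dominates(target_a, decoy))  # the decoy should boost target_a
print(dominates(target_b, decoy))
```

In a scatterplot of the two attributes, the decoy sits below and to the left of one target, which is what lets the experiments above scale the stimulus to many alternatives.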

    A Design Thinking Framework for Human-Centric Explainable Artificial Intelligence in Time-Critical Systems

    Artificial Intelligence (AI) has seen a surge in popularity as increased computing power has made it more viable and useful. The increasing complexity of AI, however, can lead to difficulty in understanding or interpreting the results of AI procedures, which can then lead to incorrect predictions, classifications, or analysis of outcomes. The result of these problems can be over-reliance on AI, under-reliance on AI, or simply confusion as to what the results mean. Additionally, the complexity of AI models can obscure the algorithmic, data and design biases to which all models are subject, which may exacerbate negative outcomes, particularly with respect to minority populations. Explainable AI (XAI) aims to mitigate these problems by providing information on the intent, performance, and reasoning process of the AI. Where time or cognitive resources are limited, the burden of additional information can negatively impact performance. Ensuring XAI information is intuitive and relevant allows the user to quickly calibrate their trust in the AI, in turn improving trust in suggested task alternatives, reducing workload and improving task performance. This study details a structured approach to the development of XAI in time-critical systems based on a design thinking framework that preserves the agile, fast-iterative approach characteristic of design thinking and augments it with practical tools and guides. The framework establishes a focus on shared situational perspective and a deep understanding of both users and the AI in the empathy phase, provides a model with seven XAI levels and corresponding solution themes, and defines objective, physiological metrics for concurrent assessment of trust and workload.