
    Communicating population health statistics through graphs: a randomised controlled trial of graph design interventions

    BACKGROUND: Australian epidemiologists have recognised that lay readers have difficulty understanding statistical graphs in reports on population health. This study aimed to provide evidence for graph design improvements that increase comprehension by non-experts.

    METHODS: This was a double-blind, randomised, controlled trial of graph-design interventions, conducted as a postal survey. Control and intervention participants were randomly selected from telephone directories of health system employees. Eligible participants were on duty at the listed location during the study period. Controls received a booklet of 12 graphs from original publications, and intervention participants received a booklet of the same graphs with design modifications. A questionnaire with 39 interpretation tasks was included with the booklet. Interventions were assessed using the ratio of the prevalence of correct responses given by the intervention group to that given by the control group for each task.

    RESULTS: The response rate from 543 eligible participants (261 intervention and 282 control) was 67%. The prevalence of correct answers in the control group ranged from 13% for a task requiring knowledge of an acronym to 97% for a task identifying the largest category in a pie chart. The interventions producing the greatest improvement in comprehension were: changing a pie chart to a bar graph (3.6-fold increase in correct point reading), changing the y axis of a graph so that the upward direction represented an increase (2.9-fold increase in correct judgement of trend direction), adding a footnote to explain an acronym (2.5-fold increase in knowledge of the acronym), and matching the y axis range of two adjacent graphs (two-fold increase in correct comparison of the relative difference in prevalence between two population subgroups).

    CONCLUSION: Profound population health messages can be lost through the use of overly technical language and unfamiliar statistical measures. In our study, most participants did not understand age standardisation or confidence intervals. Inventive approaches are required to address this problem.

    Ecological IVIS design: using EID to develop a novel in-vehicle information system

    New in-vehicle information systems (IVIS) are emerging which purport to encourage more environmentally friendly or ‘green’ driving. Meanwhile, wider concerns about road safety and in-car distractions remain. The ‘Foot-LITE’ project is an effort to balance these issues, aimed at achieving safer and greener driving through real-time driving information, presented via an in-vehicle interface which facilitates the desired behaviours while avoiding negative consequences. One way of achieving this is to use ecological interface design (EID) techniques. This article presents part of the formative human-centred design process for developing the in-car display through a series of rapid prototyping studies comparing EID against conventional interface design principles. We focus primarily on the visual display, although some development of an ecological auditory display is also presented. Feedback from potential users as well as subject matter experts is discussed with respect to implications for future interface design in this field.

    Slicer

    Explorative data visualization is a widespread tool for gaining insights from datasets. Investigating data in linked visualizations lets users explore potential relationships in their data at will. Furthermore, this type of analysis does not require any technical knowledge, widening the user base from developers to anyone. Implementing explorative data visualizations in web browsers makes data analysis accessible to anyone with a PC. In addition to accessibility, the available types of visualizations and their interactive latency are essential for the utility of data exploration. The available visualizations limit the number of datasets eligible for use in the application, and latency limits how much exploring users are willing to do. Existing solutions often do all the computation involved either in the client application or on a backend server. However, using the client limits performance and data size, since hardware resources in web browsers are scarce and sending large datasets over a network is not feasible, whereas server-based computation often comes with high requirements for server hardware and is limited by network latency and bandwidth on each interaction. This thesis presents Slicer, a framework for creating explorative data visualizations in web browsers. Applications can be created with minimal developer effort, requiring only a description of the visualizations. Slicer implements bar charts and choropleth maps. The visualizations are linked and can be filtered either by brushing or by clicking on single targets. To overcome the hurdles of purely client- and server-reliant solutions, Slicer uses a hybrid approach, where prioritized interactions are handled client-side. Recognizing that different types of interactions have different latency thresholds, we trade the cost of switching views for low latency on filtering.
    To achieve real-time filtering performance, we follow the principle that the chosen resolution of the visualizations, not the data size, should limit interactive scalability. We describe the use of data tiles accommodating more interactions than shown in earlier work, using an approach based on delta differencing, which ensures constant time complexity when filtering. For computing data tiles, we present techniques for efficient computation on consumer hardware. Our results show that Slicer can offer real-time interactivity on latency-sensitive interactions regardless of data size, averaging above 150 Hz on a consumer laptop. For less sensitive interactions, acceptable latency is shown for datasets with tens of millions of records, depending on the resolution of the visualizations.
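The data-tile idea the abstract alludes to can be illustrated with a small sketch. This is one plausible reading, prefix-sum tiles over the filtered dimension in the spirit of Falcon-style systems; the tile layout, bin counts, and function names are our assumptions, not Slicer's actual implementation:

```python
# Sketch of a data-tile scheme for constant-time linked filtering.
# Assumption: tile[i, j] holds the count of records whose filter-dimension
# bin is <= i and whose view-dimension bin is j (a prefix sum over the
# filter axis), so any brush reduces to one subtraction per view bin.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
filter_dim = rng.integers(0, 100, size=n)  # brushed chart, 100 bins
view_dim = rng.integers(0, 20, size=n)     # linked chart, 20 bins

# Built once, offline: O(n) preprocessing, independent of later interactions.
counts = np.zeros((100, 20), dtype=np.int64)
np.add.at(counts, (filter_dim, view_dim), 1)
tile = counts.cumsum(axis=0)

def filtered_view(lo, hi):
    """Counts for the linked chart under brush [lo, hi] on the filter axis.
    Cost depends only on chart resolution (20 bins), never on data size."""
    upper = tile[hi]
    lower = tile[lo - 1] if lo > 0 else 0
    return upper - lower

full = filtered_view(0, 99)  # no filtering: every record counted
```

Because each brush update touches only the 20 view bins, latency stays flat as the dataset grows, which is consistent with the abstract's claim that resolution, not data size, limits interactive scalability.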

    Second CLIPS Conference Proceedings, volume 1

    Topics covered at the 2nd CLIPS Conference, held at the Johnson Space Center, September 23-25, 1991, are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer-aided design, and debugging of expert systems.

    COLAEVA: Visual Analytics and Data Mining Web-Based Tool for Virtual Coaching of Older Adult Populations

    The global population is aging in an unprecedented manner, and improving the lives of older adults is currently a strong priority in both the political and healthcare arenas. In this sense, preventive measures and telemedicine have the potential to play an important role in increasing the number of healthy years older adults may experience, and virtual coaching is a promising research area to support this process. This paper presents COLAEVA, an interactive web application for older adult population clustering and evolution analysis. Its objective is to support caregivers in the design, validation and refinement of coaching plans adapted to specific population groups. COLAEVA enables coaching caregivers to interactively group similar older adults based on preliminary assessment data, using AI features, and to evaluate the influence of coaching plans once the final assessment is carried out for a baseline comparison. To evaluate COLAEVA, a usability test was carried out with 9 test participants, obtaining an average SUS score of 71.1. Moreover, COLAEVA is available online to use and explore. This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 769830.

    Mobile phones as medical devices in mental disorder treatment: an overview

    Mental disorders can have a significant, negative impact on sufferers’ lives, as well as on their friends and family, healthcare systems and other parts of society. Approximately 25% of all people in Europe and the USA experience a mental disorder at least once in their lifetime. Currently, monitoring mental disorders relies on subjective clinical self-reporting rating scales, which were developed more than 50 years ago. In this paper, we discuss how mobile phones can support the treatment of mental disorders by (1) implementing human–computer interfaces to support therapy and (2) collecting relevant data from patients’ daily lives to monitor the current state and development of their mental disorders. Concerning the first point, we review various systems that utilize mobile phones for the treatment of mental disorders. We also evaluate how their core design features and dimensions can be applied in other, similar systems. Concerning the second point, we highlight the feasibility of using mobile phones to collect comprehensive data including voice data, motion and location information. Data mining methods are also reviewed and discussed. Based on the presented studies, we summarize advantages and drawbacks of the most promising mobile phone technologies for detecting mood disorders like depression or bipolar disorder. Finally, we discuss practical implementation details, legal issues and business models for the introduction of mobile phones as medical devices.

    Playing for Success: an evaluation of the second year


    Statistical Graph Quality Analysis of Utah State University Master of Science Thesis Reports

    Graphical software packages have become increasingly popular in our modern world, but there are concerns within the statistical visualization field about the default settings provided by these packages, which can make it challenging to create good quality graphs that align with standard graph principles. In this thesis, we investigate whether the quality of graphs in Utah State University (USU) Plan A Master of Science (MS) thesis reports from the years 1930 to 2019 was affected by the rise of graphical software packages. We collected all data stored on the USU Digital Commons website as of November 2021 to determine the specific group of graphs we wanted to investigate, and developed a sampling process to obtain a sample of 90 graphs evenly distributed over the time range. To judge graph quality consistently, we compiled and condensed good graphic standards from the statistical literature and developed our own set of graph quality criteria, grouped into four distinct categories: Labeling, Clear Understanding, Meaningful, and Scaling and Gridlines. We constructed a scoring system to rate the quality of graphs against these criteria and explored the results by constructing several visualizations and performing various statistical analyses. Our analysis assessed whether the rise of graphical software packages impacted the quality of graphs within the USU Plan A MS thesis reports.
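The abstract does not detail the scoring system itself. A minimal sketch of how graphs might be rated against criteria grouped into the four named categories; the individual criteria, pass/fail scoring, and equal weighting here are hypothetical, not the thesis's actual rubric:

```python
# Hypothetical graph-quality rubric: the four category names come from the
# abstract, but the criteria under each and the scoring scheme are invented
# for illustration.
CRITERIA = {
    "Labeling": ["axes labeled", "units given", "legend present if needed"],
    "Clear Understanding": ["message identifiable", "no distracting clutter"],
    "Meaningful": ["appropriate graph type", "data shown supports the point"],
    "Scaling and Gridlines": ["sensible axis range", "gridlines unobtrusive"],
}

def score_graph(ratings):
    """ratings: {criterion: bool}. Returns (per-category fractions met,
    overall score as the unweighted mean of the category fractions)."""
    per_category = {
        cat: sum(ratings.get(c, False) for c in crits) / len(crits)
        for cat, crits in CRITERIA.items()
    }
    overall = sum(per_category.values()) / len(per_category)
    return per_category, overall
```

Scoring each sampled graph this way yields a number in [0, 1] that can be plotted against publication year to look for trends around the adoption of graphical software.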