Health Figures: An Open Source JavaScript Library for Health Data Visualization
The way we look at data has a great impact on how we can understand it,
particularly when the data is related to health and wellness. Due to the
increased use of self-tracking devices and the ongoing shift towards preventive
medicine, better understanding of our health data is an important part of
improving the general welfare of citizens. Electronic Health Records,
self-tracking devices and mobile applications provide a rich variety of data,
but these data are often difficult to understand. We implemented the hFigures
library, inspired by the hGraph visualization, with additional improvements. The
purpose of the library is to provide a visual representation of the evolution
of health measurements in a complete and useful manner. We researched the
usefulness and usability of the library by building an application for health
data visualization in a health coaching program. We performed a user evaluation
with Heuristic Evaluation, Controlled User Testing and Usability
Questionnaires. In the Heuristic Evaluation the average response was 6.3 out
of 7 points, and the Cognitive Walkthrough conducted by usability experts indicated
no design or mismatch errors. In the CSUQ usability test the system obtained an
average score of 6.13 out of 7, and in the ASQ usability test the overall
satisfaction score was 6.64 out of 7. We developed hFigures, an open source
library for visualizing a complete, accurate and normalized graphical
representation of health data. The idea is based on the concept of the hGraph
but it provides additional key features, including a comparison of multiple
health measurements over time. We conducted a usability evaluation of the
library as a key component of an application for health and wellness
monitoring. The results indicate that the data visualization library was
helpful in assisting users in understanding health data and its evolution over
time.
Comment: BMC Medical Informatics and Decision Making 16.1 (2016)
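To make the "normalized graphical representation" concrete, here is a minimal Python sketch of the hGraph-style normalization idea: each measurement is scaled against its recommended range so that heterogeneous metrics share one axis and can be compared over time. The helper function and the 0.5-1.0 "healthy band" are illustrative assumptions, not the hFigures API.

```python
# Sketch of the normalization idea behind hGraph-style visuals: each
# measurement is scaled against its recommended range so heterogeneous
# metrics (cholesterol, BMI, heart rate, ...) share one radial axis.

def normalize(value, low, high):
    """Map a raw measurement onto a common scale: values inside
    [low, high] land in the 0.5-1.0 band, values outside drift
    beyond it in proportion to how far they exceed the range."""
    span = high - low
    if low <= value <= high:
        return 0.5 + 0.5 * (value - low) / span
    if value > high:
        return 1.0 + (value - high) / span
    return 0.5 - (low - value) / span

# Samples of one metric taken at different times become directly
# comparable, which is the "evolution over time" use case.
readings = [("2015-01-10", 212.0), ("2015-06-22", 187.0)]  # cholesterol, mg/dL
for date, mg_dl in readings:
    print(date, round(normalize(mg_dl, 125.0, 200.0), 2))
```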
Evaluation and measurement of heliostat misalignment in solar power plant using vector model
Heliostat alignment evaluation is among the main issues in solar tower concentration plant operation and maintenance. This paper describes a novel method for evaluating heliostat misalignment, together with its experimental verification. The method provides a different way of visualizing beam centroid pointing errors by generating the complete deviation curve for each axis. This would be useful, for example, for verifying a heliostat's correct alignment from a measurement performed off the receiver target, using these traces to calculate the reflection's expected location given a known misalignment. The measurement could be performed during operation simply by including a reflective element on the heliostat and two detector arrays on the tower's surface. The model has been tested for various types of misalignment at different hours, dates, and heliostat locations. The simulation results have been validated with an experimental setup at a scale of 1:100.
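As a rough illustration of the vector model, the sketch below reflects an incoming sun ray about a nominal and a slightly tilted facet normal, then intersects both reflected rays with a plane on the tower, yielding the kind of centroid deviation the method traces. All geometry, coordinates, and the misalignment angle are illustrative assumptions, not the paper's setup.

```python
# Predict where a (possibly misaligned) heliostat sends the beam centroid:
# reflect the sun ray about the facet normal, then intersect the reflected
# ray with a detector plane on the tower.
import numpy as np

def reflect(incident, normal):
    """Specular reflection: r = d - 2 (d . n) n, with n normalized."""
    n = normal / np.linalg.norm(normal)
    return incident - 2.0 * np.dot(incident, n) * n

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Point where the ray origin + t * direction crosses the plane."""
    t = np.dot(plane_point - origin, plane_normal) / np.dot(direction, plane_normal)
    return origin + t * direction

sun = np.array([0.3, -0.8, -0.52])       # incoming ray, toward the heliostat
ideal_n = np.array([0.1, 0.2, 0.97])     # nominal facet normal
tilt = np.deg2rad(0.2)                   # hypothetical axis misalignment
rot = np.array([[np.cos(tilt), 0.0, np.sin(tilt)],
                [0.0, 1.0, 0.0],
                [-np.sin(tilt), 0.0, np.cos(tilt)]])
actual_n = rot @ ideal_n                 # misaligned normal

heliostat = np.array([0.0, 100.0, 0.0])  # facet position in the field
tower = np.array([0.0, 0.0, 30.0])       # point on the tower surface
tower_n = np.array([0.0, 1.0, 0.0])      # tower plane faces the field

ideal_hit = intersect_plane(heliostat, reflect(sun, ideal_n), tower, tower_n)
real_hit = intersect_plane(heliostat, reflect(sun, actual_n), tower, tower_n)
print("centroid deviation on the tower:", real_hit - ideal_hit)
```

Repeating this over a day of sun positions yields the per-axis deviation curves the method uses to characterize a given misalignment.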
Visualizing Magnitude and Direction in Flow Fields
In weather visualizations, it is common to see vector data represented by glyphs placed on grids. The glyphs either do not encode magnitude in readable steps, or have designs that interfere with the data. The grids form strong but irrelevant patterns. Directional, quantitative glyphs bent along streamlines are more effective for visualizing flow patterns.
With the goal of improving the perception of flow patterns in weather forecasts, we designed and evaluated two variations on a glyph commonly used to encode wind speed and direction in weather visualizations. We tested the ability of subjects to determine wind direction and speed: the results show the new designs are superior to the traditional one. In a second study we designed and evaluated new methods for representing modeled wave data using similar streamline-based designs. We asked subjects to rate the marine weather visualizations: the results revealed a preference for some of the new designs.
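The streamline idea behind the new designs can be sketched generically: seed points are integrated a short distance through the wind field, and the glyph is bent along the resulting path rather than pinned straight to a grid cell. The midpoint (RK2) tracer and the toy field below are illustrations, not the authors' code.

```python
# Trace a short streamline through a 2D wind field so a glyph can be
# bent along it, encoding direction and local curvature of the flow.
import numpy as np

def sample(field_u, field_v, x, y):
    """Nearest-neighbour lookup of the (u, v) wind vector at (x, y)."""
    i = min(max(int(round(y)), 0), field_u.shape[0] - 1)
    j = min(max(int(round(x)), 0), field_u.shape[1] - 1)
    return field_u[i, j], field_v[i, j]

def trace_streamline(field_u, field_v, x0, y0, step=0.5, n_steps=20):
    """Integrate a seed point downstream with midpoint (RK2) steps."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        u1, v1 = sample(field_u, field_v, x, y)
        u2, v2 = sample(field_u, field_v, x + 0.5 * step * u1, y + 0.5 * step * v1)
        x, y = x + step * u2, y + step * v2
        pts.append((x, y))
    return pts

# Toy rotational field on a 32x32 grid as a stand-in for forecast data.
ys, xs = np.mgrid[0:32, 0:32]
u, v = -(ys - 16) / 16.0, (xs - 16) / 16.0
print(trace_streamline(u, v, 24.0, 16.0)[:3])
```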
Visual support for ontology learning: an experience report
Ontology learning methods aim to automate ontology construction. They are complex methods involving several elements such as documents, terms and concepts. During the development of an ontology learning method, as well as during its deployment, several situations occur where understanding the relations between these elements is crucial. Our hypothesis is that visual techniques can be used to aid this understanding. To support this claim, we present a set of such complex situations and describe the visual solutions that we developed to support them.
Visual Integration of Data and Model Space in Ensemble Learning
Ensembles of classifier models typically deliver superior performance and can
outperform single classifier models given a dataset and classification task at
hand. However, the gain in performance comes at the cost of reduced
comprehensibility, making it challenging to understand how each model affects the
classification outputs and where the errors come from. We propose a tight
visual integration of the data and the model space for exploring and combining
classifier models. We introduce a workflow that builds upon the visual
integration and enables the effective exploration of classification outputs and
models. We then present a use case in which we start with an ensemble
automatically selected by a standard ensemble selection algorithm, and show how
we can manipulate models and explore alternative combinations.
Comment: 8 pages, 7 pictures
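As a rough stand-in for the workflow's model-space manipulation, the sketch below starts from a full soft-voting ensemble and scores every leave-one-out combination, the simplest way to see how each member affects the combined output. The library choices and the synthetic data are illustrative assumptions, not the paper's system.

```python
# Compare a full ensemble against every leave-one-out combination.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members = {
    "lr": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=50, random_state=0),
    "nb": GaussianNB(),
}

for dropped in [None] + list(members):
    subset = [(name, m) for name, m in members.items() if name != dropped]
    clf = VotingClassifier(estimators=subset, voting="soft").fit(X_tr, y_tr)
    label = "full ensemble" if dropped is None else f"without {dropped}"
    print(f"{label}: {clf.score(X_te, y_te):.3f}")
```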
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles.

In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior.

In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment: strong and weak alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match the training data.

In Chapter 5, we propose to address this issue by augmenting the training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
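A minimal sketch of the visual-attention stage described above, assuming a PyTorch-style setup: a softmax map over convolutional feature locations weights the features feeding the steering head, and the same map can be rendered as the highlighted-region explanation (before any causal filtering). The architecture and sizes are illustrative, not the dissertation's network.

```python
# Attention-weighted steering: the softmax map doubles as an explanation.
import torch
import torch.nn as nn

class AttentionSteering(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2), nn.ReLU(),
        )
        self.score = nn.Conv2d(channels, 1, 1)   # one logit per location
        self.head = nn.Linear(channels, 1)       # steering angle

    def forward(self, img):
        feats = self.backbone(img)                      # (B, C, H, W)
        b, c, h, w = feats.shape
        attn = torch.softmax(self.score(feats).view(b, -1), dim=1)
        attn = attn.view(b, 1, h, w)                    # sums to 1 per image
        context = (feats * attn).sum(dim=(2, 3))        # attention-weighted pool
        return self.head(context), attn                 # control + explanation

model = AttentionSteering()
steer, attn_map = model(torch.randn(2, 3, 120, 160))
print(steer.shape, attn_map.shape)  # steering angles and the attention map
```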
Flow-based Intrinsic Curiosity Module
In this paper, we focus on a prediction-based novelty estimation strategy
upon the deep reinforcement learning (DRL) framework, and present a flow-based
intrinsic curiosity module (FICM) to exploit the prediction errors from optical
flow estimation as exploration bonuses. We propose the concept of leveraging
motion features captured between consecutive observations to evaluate the
novelty of observations in an environment. FICM encourages a DRL agent to
explore observations with unfamiliar motion features, and requires only two
consecutive frames to obtain sufficient information when estimating the
novelty. We evaluate our method and compare it with a number of existing
methods on multiple benchmark environments, including Atari games, Super Mario
Bros., and ViZDoom. We demonstrate that FICM is well suited to tasks or
environments featuring moving objects, which allow FICM to utilize the motion
features between consecutive observations. We further provide an ablation analysis of the
encoding efficiency of FICM, and discuss its applicable domains
comprehensively.
Comment: The SOLE copyright holder is IJCAI (International Joint Conferences
on Artificial Intelligence), all rights reserved. The link is provided as
follows: https://www.ijcai.org/Proceedings/2020/28
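The flow-based bonus can be sketched as follows, with a tiny network standing in for the FlowNet-style predictor in the paper: the network predicts optical flow between two consecutive frames, the flow warps the first frame toward the second, and the residual warping error becomes the exploration bonus. All shapes and layer sizes are illustrative assumptions.

```python
# Flow-prediction error as an intrinsic reward: unfamiliar motion warps
# poorly, so it earns a larger exploration bonus.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFlowNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),   # (dx, dy) per pixel
        )

    def forward(self, f1, f2):
        return self.net(torch.cat([f1, f2], dim=1))

def warp(frame, flow):
    """Bilinearly sample `frame` at locations displaced by `flow`."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).float().expand(b, h, w, 2)
    grid = base + flow.permute(0, 2, 3, 1)       # displaced pixel coords
    gx = 2.0 * grid[..., 0] / (w - 1) - 1.0      # normalize to [-1, 1]
    gy = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack([gx, gy], dim=-1), align_corners=True)

flownet = TinyFlowNet()
f1, f2 = torch.rand(4, 3, 42, 42), torch.rand(4, 3, 42, 42)  # consecutive frames
flow = flownet(f1, f2)
bonus = ((warp(f1, flow) - f2) ** 2).mean(dim=(1, 2, 3))  # one bonus per transition
print(bonus)
```

Only the two consecutive frames are needed to compute the bonus, which matches the two-frame requirement the abstract highlights.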