GreenVis: Energy-Saving Color Schemes for Sequential Data Visualization on OLED Displays
The organic light-emitting diode (OLED) display has recently become popular in the consumer electronics market. Unlike current LCD technology, an OLED display is an emerging technology whose pixels emit light themselves, so it needs no external backlight as an illumination source. In this paper, we offer an approach to reducing power consumption on OLED displays for sequential data visualization. First, we create a multi-objective optimization approach that finds the most energy-saving color scheme for a given visual perception difference level. Second, we apply the model in two situations: pre-designed color schemes and auto-generated color schemes. Third, our experimental results show that the energy-saving sequential color scheme reduces power consumption by 17.2% for pre-designed color schemes; for auto-generated color schemes, it saves 21.9% of energy compared with the reference color scheme for sequential data.
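The optimization the abstract describes can be illustrated with a minimal sketch. The power model, coefficients, and perceptual-difference proxy below are illustrative assumptions, not the paper's actual model: OLED power is treated as roughly linear in per-channel subpixel intensity, and Euclidean RGB distance stands in for the perceptual difference metric (real work would use a CIELAB-based measure).

```python
# Hypothetical per-channel power weights; blue subpixels are often the
# most power-hungry on OLED panels, but these numbers are illustrative only.
POWER_COEFF = {"r": 0.6, "g": 0.8, "b": 1.0}

def scheme_power(scheme):
    """Relative power of a color scheme: list of (r, g, b) tuples in [0, 1]."""
    return sum(POWER_COEFF["r"] * r + POWER_COEFF["g"] * g + POWER_COEFF["b"] * b
               for r, g, b in scheme)

def min_step_difference(scheme):
    """Crude perceptual-difference proxy: smallest Euclidean RGB distance
    between consecutive colors in the sequential scheme."""
    return min(sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
               for c1, c2 in zip(scheme, scheme[1:]))

def pick_energy_saving(candidates, min_diff):
    """Among schemes meeting the perception-difference constraint,
    pick the one with the lowest estimated power."""
    feasible = [s for s in candidates if min_step_difference(s) >= min_diff]
    return min(feasible, key=scheme_power) if feasible else None
```

This collapses the multi-objective trade-off into a single constraint-then-minimize pass; the paper's approach is more general, but the sketch shows the shape of the problem: perceptual distinguishability bounds how dark (and thus how cheap) a scheme can be.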
We Didn't Think it Could Happen to Us
We Didn't Think it Could Happen to Us by Chris Nort
The Effects of Task, Task Mapping, and Layout Space on User Performance in Information-Rich Virtual Environments
How should abstract information be displayed in Information-Rich Virtual Environments (IRVEs)? A variety of techniques is available, and it is important to determine which ones help foster a user's understanding both within and between abstract and spatial information types. Our evaluation compared two such techniques: Object Space and Display Space. Users strongly prefer Display Space over Object Space, and those who use Display Space may perform better. Display Space was faster and more accurate than Object Space for tasks comparing abstract information, while Object Space was more accurate for comparisons of spatial information. These results suggest that for abstract criteria, visibility is a more important requirement than perceptual coupling by depth and association cues. They also support the value of perceptual coupling for tasks with spatial criteria.
DeepSI: Interactive Deep Learning for Semantic Interaction
In this paper, we design novel interactive deep learning methods to improve semantic interaction in visual analytics applications. The ability of semantic interaction to infer analysts' precise intents during sensemaking depends on the quality of the underlying data representation. We propose DeepSI, a framework that integrates deep learning into the human-in-the-loop interactive sensemaking pipeline, with two important properties. First, deep learning extracts meaningful representations from raw data, which improves semantic interaction inference. Second, semantic interactions are exploited to fine-tune the deep learning representations, which further improves semantic interaction inference. This feedback loop between human interaction and deep learning enables efficient learning of user- and task-specific representations. To evaluate the advantage of embedding deep learning within the semantic interaction loop, we compare DeepSI against a state-of-the-art but more basic use of deep learning as only a feature extractor pre-processed outside of the interactive loop. Results of two complementary studies, a human-centered qualitative case study and an algorithm-centered simulation-based quantitative experiment, show that DeepSI more accurately captures users' complex mental models with fewer interactions.
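The feedback loop the abstract describes can be sketched in miniature. Everything below is a toy stand-in, not the paper's architecture: the "extractor" is the identity, each item embeds to a single number, and an analyst interaction (dragging items i and j together) is modeled as pulling their embeddings toward each other, a proxy for fine-tuning the encoder on that signal.

```python
def extract(raw):
    """Stand-in for a deep feature extractor (identity on 1-D features)."""
    return list(raw)

def apply_interaction(emb, i, j, lr=0.5):
    """The analyst grouped items i and j: pull their embeddings toward
    their midpoint, so the representation absorbs the interaction."""
    mid = (emb[i] + emb[j]) / 2.0
    emb[i] += lr * (mid - emb[i])
    emb[j] += lr * (mid - emb[j])
    return emb

def semantic_interaction_loop(raw, interactions):
    """Alternate: embed -> observe analyst grouping -> adjust -> re-embed."""
    emb = extract(raw)
    for i, j in interactions:
        emb = apply_interaction(emb, i, j)
    return emb
```

The contrast drawn in the abstract corresponds to whether `apply_interaction` runs at all: a fixed pre-processed extractor would return `extract(raw)` unchanged, whereas keeping the representation inside the loop lets each interaction reshape it.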
Scientists in the MIST: Simplifying Interface Design for End Users
We are building a Malleable Interactive Software Toolkit (MIST), a tool set and infrastructure to simplify the design and construction of dynamically reconfigurable (malleable) interactive software. Malleable software offers end users powerful tools to reshape their interactive environment on the fly. We aim to make the construction of such software straightforward, and to make reconfiguration of the resulting systems approachable and manageable for an educated, but non-specialist, user. To do so, we draw on a diverse body of existing research on alternative approaches to user interface (UI) and interactive software construction, including declarative UI languages, constraint-based programming and UI management, reflection and data-driven programming, and visual programming techniques.
Space for Two to Think: Large, High-Resolution Displays for Co-located Collaborative Sensemaking
Large, high-resolution displays carry the potential to enhance single-display groupware collaborative sensemaking for intelligence analysis tasks by providing space for common ground to develop, but it is up to the visual analytics tools to utilize this space effectively. In an exploratory study, we compared two tools (Jigsaw and a document viewer), which were adapted to support multiple input devices, to observe how the large display space was used in establishing and maintaining common ground during an intelligence analysis scenario using 50 textual documents. We discuss the spatial strategies employed by the pairs of participants, which were largely dependent on tool type (data-centric or function-centric), as well as how different visual analytics tools used collaboratively on large, high-resolution displays impact common ground in both process and solution. Using these findings, we suggest design considerations that enable future co-located collaborative sensemaking tools to take advantage of the benefits of collaborating on large, high-resolution displays.