An Affordance-Based Framework for Human Computation and Human-Computer Collaboration
Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces" [70]. The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state of the art. Our analysis has uncovered key patterns of design hinging on human- and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.
Dynamic Prefetching of Data Tiles for Interactive Visualization
In this paper, we present ForeCache, a general-purpose tool for exploratory browsing of large datasets. ForeCache utilizes a client-server architecture, where the user interacts with a lightweight client-side interface to browse datasets, and the data to be browsed is retrieved from a DBMS running on a back-end server. We assume a detail-on-demand browsing paradigm, and optimize the back-end support for this paradigm by inserting a separate middleware layer in front of the DBMS. To improve response times, the middleware layer fetches data ahead of the user as she explores a dataset. We consider two different mechanisms for prefetching: (a) learning what to fetch from the user's recent movements, and (b) using data characteristics (e.g., histograms) to find data similar to what the user has viewed in the past. We incorporate these mechanisms into a single prediction engine that adjusts its prediction strategies over time, based on changes in the user's behavior. We evaluated our prediction engine with a user study, and found that our dynamic prefetching strategy provides: (1) significant improvements in overall latency when compared with non-prefetching systems (430% improvement); and (2) substantial improvements in both prediction accuracy (25% improvement) and latency (88% improvement) relative to existing prefetching techniques.
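For intuition, here is a minimal sketch of the kind of hybrid prefetcher the abstract describes: one candidate list extrapolated from the user's recent movements, another ranked by histogram similarity to previously viewed tiles, with the fetch budget split by each strategy's recent accuracy. The class and method names are illustrative, not ForeCache's actual API.

```python
# Minimal sketch of a hybrid tile prefetcher in the spirit of ForeCache.
# All names are illustrative; the real system's models and APIs differ.
from collections import deque

class HybridPrefetcher:
    def __init__(self, k=3):
        self.history = deque(maxlen=k)   # recently viewed tiles as (x, y)
        self.hits = {"momentum": 1, "signature": 1}  # smoothed hit counts
        self.weights = {"momentum": 0.5, "signature": 0.5}

    def momentum_candidates(self):
        # Extrapolate the user's recent movement direction to the next tile.
        if len(self.history) < 2:
            return []
        (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
        return [(x1 + (x1 - x0), y1 + (y1 - y0))]

    def signature_candidates(self, neighbors, histograms):
        # Rank neighboring tiles by how similar their data histograms are
        # to the histograms of tiles the user has already viewed.
        seen = [histograms[t] for t in self.history if t in histograms]
        if not seen:
            return []
        def similarity(tile):
            h = histograms[tile]
            return -sum(abs(a - b) for ref in seen for a, b in zip(ref, h))
        return sorted(neighbors, key=similarity, reverse=True)

    def predict(self, neighbors, histograms, budget=2):
        # Split the fetch budget between strategies by their learned weights.
        ranked = {"momentum": self.momentum_candidates(),
                  "signature": self.signature_candidates(neighbors, histograms)}
        out = []
        for name in sorted(self.weights, key=self.weights.get, reverse=True):
            n = max(1, round(budget * self.weights[name]))
            out.extend(ranked[name][:n])
        return out[:budget]

    def record(self, tile, predicted_by):
        # Shift weight toward whichever strategy predicted the actual move.
        self.history.append(tile)
        if predicted_by in self.hits:
            self.hits[predicted_by] += 1
        total = sum(self.hits.values())
        self.weights = {k: v / total for k, v in self.hits.items()}
```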
Does Interaction Improve Bayesian Reasoning with Visualization?
Interaction enables users to navigate large amounts of data effectively, supports cognitive processing, and expands the ways data can be represented. However, there have been few attempts to empirically demonstrate whether adding interaction to a static visualization improves its function beyond popular belief. In this paper, we address this gap. We use a classic Bayesian reasoning task as a testbed for evaluating whether allowing users to interact with a static visualization can improve their reasoning. Through two crowdsourced studies, we show that adding interaction to a static Bayesian reasoning visualization does not improve participants' accuracy on a Bayesian reasoning task; in some cases, it can significantly detract from it. Moreover, we demonstrate that the underlying visualization design modulates performance, and that people with high versus low spatial ability respond differently to different interaction techniques and underlying base visualizations. Our work suggests that interaction is not as unambiguously good as we often believe; a well-designed static visualization can be as effective as, if not more effective than, an interactive one.
Comment: 14 pages, 11 figures. To be published in the 2021 ACM CHI Virtual Conference on Human Factors in Computing Systems.
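For readers unfamiliar with the task: a classic Bayesian reasoning problem asks participants to estimate a posterior probability from a base rate, a hit rate, and a false-alarm rate. Below is a minimal worked example using the textbook mammography numbers, not necessarily the exact values used in these studies.

```python
# Worked example of the kind of Bayesian reasoning such tasks probe.
# Numbers follow the textbook mammography problem, purely for illustration.
base_rate   = 0.01   # P(disease)
hit_rate    = 0.80   # P(positive | disease)
false_alarm = 0.096  # P(positive | no disease)

p_positive = hit_rate * base_rate + false_alarm * (1 - base_rate)
posterior  = hit_rate * base_rate / p_positive  # Bayes' theorem

print(f"P(disease | positive) = {posterior:.3f}")  # ~0.078, far below most guesses
```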
Balancing Human and Machine Contributions in Human Computation Systems
Many interesting and successful human computation systems leverage the complementary computational strengths of both humans and machines to solve hard problems. In this chapter, we examine Human Computation as a type of Human-Computer Collaboration: collaboration involving at least one human and at least one computational agent. We discuss recent advances in the open area of function allocation, and explore how to balance the contributions of humans and machines in computational systems. We then explore how human-computer collaborative strategies can be used to solve problems that are difficult or computationally infeasible for computers or humans alone.
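As a rough illustration of function allocation, one common strategy (a sketch only, not the chapter's own method) lets the machine handle inputs it classifies confidently and escalates the rest to human workers:

```python
# Illustrative sketch of confidence-based function allocation: the machine
# answers what it is confident about; humans handle the uncertain remainder.
def allocate(items, classifier, ask_human, threshold=0.9):
    results = {}
    for item in items:
        label, confidence = classifier(item)
        if confidence >= threshold:
            results[item] = label            # machine contribution
        else:
            results[item] = ask_human(item)  # human contribution
    return results
```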
Exploring Agent-Based Simulations in Political Science Using Aggregate Temporal Graphs
Agent-based simulation has become a key technique for modeling and simulating dynamic, complicated behaviors in social and behavioral sciences. Lacking the appropriate tools and support, it is difficult for social scientists to thoroughly analyze the results of these simulations. In this work, we capture the complex relationships between discrete simulation states by visualizing the data as a temporal graph. In collaboration with expert analysts, we identify two graph structures which capture important relationships between pivotal states in the simulation and their inevitable outcomes. Finally, we demonstrate the utility of these structures in the interactive analysis of a large-scale social science simulation of political power in present-day Thailand.
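As a minimal sketch of the general idea (not the paper's implementation), many runs can be aggregated into a temporal graph whose nodes are discrete states and whose edges count observed transitions; states with several distinct successors then become candidates for "pivotal" states. The state names below are invented for the example.

```python
# Minimal sketch: aggregate simulation runs into a temporal graph whose
# edges count observed state transitions, then flag high-fan-out states.
from collections import Counter, defaultdict

def build_aggregate_graph(runs):
    """runs: iterable of state sequences, one per simulation run."""
    edges = Counter()
    for states in runs:
        for a, b in zip(states, states[1:]):
            edges[(a, b)] += 1
    return edges

def pivotal_states(edges, min_fanout=2):
    # A crude proxy for "pivotal" states: several distinct successors.
    successors = defaultdict(set)
    for (a, b) in edges:
        successors[a].add(b)
    return {s for s, nxt in successors.items() if len(nxt) >= min_fanout}

runs = [["peace", "unrest", "coup"],
        ["peace", "unrest", "reform"],
        ["peace", "reform"]]
edges = build_aggregate_graph(runs)
print(pivotal_states(edges))  # {'peace', 'unrest'}
```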
Preliminary Guidelines For Combining Data Integration and Visual Data Analysis
Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter, in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual, file-based ex-situ integration as a separate step from visual analytics operations; and automatic, UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found task completion time and total interactions to be similar across interfaces and tasks, and we observed integration strategies unique to each interface as well as emergent behaviors related to satisficing and cognitive bias. Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.
Comment: Accepted to IEEE TVCG. 13 pages, 5 figures. For a study breakdown video, see https://youtu.be/NzVxHn-OpqQ . The source code, data, and analysis are available at https://github.com/AdamCoscia/Integration-Guidelines-V
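To make the two workflows concrete, here is an illustrative sketch (not the study's actual tooling; the data, column names, and function are invented for the example): ex-situ integration merges everything up front as a separate step, while in-situ integration joins an attribute on demand during analysis.

```python
# Illustrative contrast between ex-situ and in-situ integration workflows.
import pandas as pd

# Attributes distributed across multiple "files", as in the study's tasks.
demographics = pd.DataFrame({"record_id": [1, 2, 3], "age": [34, 51, 29]})
outcomes     = pd.DataFrame({"record_id": [1, 2, 3], "score": [0.7, 0.4, 0.9]})

# Ex-situ: integration happens once, up front, before any analysis begins.
merged_up_front = demographics.merge(outcomes, on="record_id")

# In-situ: an attribute is joined on demand, mid-analysis, e.g. when the
# user drags it from another file onto an active visualization.
def in_situ_join(view, source, attribute, key="record_id"):
    return view.merge(source[[key, attribute]], on=key, how="left")

view = in_situ_join(demographics, outcomes, "score")
print(view)
```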