trackr: A Framework for Enhancing Discoverability and Reproducibility of Data Visualizations and Other Artifacts in R
Research is an incremental, iterative process, with new results relying on and
building upon previous ones. Scientists need to find, retrieve, understand, and
verify results in order to confidently extend them, even when the results are
their own. We present the trackr framework for organizing, automatically
annotating, discovering, and retrieving results. We identify sources of
automatically extractable metadata for computational results, and we define an
extensible system for organizing, annotating, and searching for results based
on these and other metadata. We present an open-source implementation of these
concepts for plots, computational artifacts, and woven dynamic reports
generated in the R statistical computing language.
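The trackr framework itself is implemented in R; as a language-neutral illustration of the core idea, the following Python sketch (all names hypothetical) shows artifacts being recorded with automatically extracted metadata plus user annotations, then discovered by querying that metadata.

```python
# Hypothetical sketch of the trackr idea: record artifacts with automatically
# extractable metadata, then search over that metadata to rediscover them.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any

@dataclass
class Artifact:
    payload: Any                          # e.g. a plot object or woven report
    metadata: dict = field(default_factory=dict)

class Registry:
    def __init__(self):
        self._items = []

    def record(self, payload, **user_tags):
        # Automatically extractable metadata plus user-supplied annotations.
        meta = {"recorded_at": datetime.now().isoformat(),
                "type": type(payload).__name__, **user_tags}
        self._items.append(Artifact(payload, meta))

    def search(self, **query):
        # Return artifacts whose metadata matches every query term.
        return [a for a in self._items
                if all(a.metadata.get(k) == v for k, v in query.items())]

reg = Registry()
reg.record({"kind": "scatter"}, project="trial-A", author="alice")
reg.record({"kind": "report"}, project="trial-B", author="bob")
hits = reg.search(project="trial-A")
print(len(hits))  # → 1
```

An extensible system along these lines would let additional metadata extractors be plugged into `record` per artifact type.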
Semantic Modeling of Analytic-based Relationships with Direct Qualification
Successfully modeling state- and analytics-based semantic relationships of
documents enhances the representation, importance, relevancy, provenance, and
priority of those documents. These attributes are the core elements that form the
machine-based knowledge representation for documents. However, modeling
document relationships that can change over time can be inelegant, limited,
complex or overly burdensome for semantic technologies. In this paper, we
present Direct Qualification (DQ), an approach for modeling any semantically
referenced document, concept, or named graph with results from associated
applied analytics. The proposed approach supplements the traditional
subject-object relationships by providing a third leg to the relationship: the
qualification of how and why the relationship exists. To illustrate, we show a
prototype of an event-based system with a realistic use case for applying DQ to
relevancy analytics of PageRank and Hyperlink-Induced Topic Search (HITS).
Comment: Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015).
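The "third leg" described above can be pictured as a qualification attached to an ordinary subject-predicate-object statement. The following Python sketch is purely illustrative (the class names and example values are assumptions, not the paper's notation):

```python
# Hypothetical sketch of Direct Qualification (DQ): each subject-predicate-object
# statement carries a qualification recording how and why it holds, e.g. the
# applied analytic and its result.
from dataclasses import dataclass

@dataclass(frozen=True)
class Qualification:
    analytic: str    # how the relationship was derived, e.g. "PageRank"
    score: float     # result of the applied analytic
    reason: str      # why the relationship exists

@dataclass(frozen=True)
class QualifiedTriple:
    subject: str
    predicate: str
    obj: str
    qualification: Qualification

t = QualifiedTriple(
    "doc:42", "dq:moreRelevantThan", "doc:17",
    Qualification("PageRank", 0.83, "higher link-based authority"))
print(t.qualification.analytic)  # → PageRank
```

Because the qualification is a first-class value, it can be updated when the analytics are re-run, which is what makes time-varying relationships tractable.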
Supporting Story Synthesis: Bridging the Gap between Visual Analytics and Storytelling
Visual analytics usually deals with complex data and uses sophisticated algorithmic, visual, and interactive techniques. Findings of the analysis often need to be communicated to an audience that lacks visual analytics expertise. This requires analysis outcomes to be presented in simpler ways than those typically used in visual analytics systems. However, it is not only the analytical visualizations that may be too complex for the target audience, but also the information that needs to be presented. Hence, there exists a gap on the path from obtaining analysis findings to communicating them, which involves two aspects: information complexity and display complexity. We propose a general framework in which data analysis and result presentation are linked by story synthesis, in which the analyst creates and organizes story contents. Differently from previous research, where analytic findings are represented by stored display states, we treat findings as data constructs. In story synthesis, findings are selected, assembled, and arranged in views using meaningful layouts that take into account the structure of the information and the inherent properties of its components. We propose a workflow for applying the proposed framework in designing visual analytics systems and demonstrate the generality of the approach by applying it to two domains: social media and movement analysis.
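The key distinction above, findings as data constructs rather than stored display states, can be sketched minimally in Python (the field names and example findings are invented for illustration):

```python
# Hypothetical sketch of story synthesis: findings are kept as data constructs
# (not display snapshots), then selected and ordered into a simpler story
# for a non-expert audience.
from dataclasses import dataclass

@dataclass
class Finding:
    label: str
    data: dict          # the underlying data construct, not a screenshot
    importance: float

findings = [
    Finding("peak activity at noon", {"hour": 12, "count": 840}, 0.9),
    Finding("weekend lull", {"days": ["Sat", "Sun"]}, 0.6),
    Finding("minor outlier", {"id": 17}, 0.2),
]

# Synthesis step: select the most important findings and arrange them.
story = sorted((f for f in findings if f.importance >= 0.5),
               key=lambda f: -f.importance)
print([f.label for f in story])  # → ['peak activity at noon', 'weekend lull']
```

Because each finding retains its data, the presentation layer is free to choose a simpler layout than the analytical view that produced it.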
Towards a Theory of Analytical Behaviour: A Model of Decision-Making in Visual Analytics
This paper introduces a descriptive model of the human-computer processes that lead to decision-making in visual analytics. A survey of nine models from the visual analytics and HCI literature is presented to account for different perspectives such as sense-making, reasoning, and low-level human-computer interactions. The survey examines the people and computers (entities) presented in the models, the divisions of labour between entities (both physical and role-based), the behaviour of both people and machines as constrained by their roles and agency, and finally the elements and processes that define the flow of data both within and between entities. The survey informs the identification of four observations that characterise analytical behaviour, defined as decision-making facilitated by visual analytics: bilateral discourse, divisions of labour, mixed-synchronicity information flows, and bounded behaviour. Based on these principles, a descriptive model is presented as a contribution towards a theory of analytical behaviour. The future intention is to apply prospect theory, an economic model of decision-making under uncertainty, to the study of analytical behaviour. It is our assertion that applying prospect theory first requires a descriptive model of the processes that facilitate decision-making in visual analytics. We conclude that it is necessary to measure the perception of risk in future work in order to apply prospect theory to the study of analytical behaviour using our proposed model.
Cross-Platform Presentation of Interactive Volumetric Imagery
Volume data is useful across many disciplines, not just medicine.
Thus, it is very important that researchers have a simple and
lightweight method of sharing and reproducing such volumetric
data. In this paper, we explore some of the challenges associated
with volume rendering, both from a classical sense and from the
context of Web3D technologies. We describe and evaluate the proposed
X3D Volume Rendering Component and its associated styles
for their suitability in the visualization of several types of image
data. Additionally, we examine the ability of a minimal X3D node
set to capture provenance and semantic information from outside
ontologies in metadata and to integrate it with the scene graph.
LightGuider: Guiding Interactive Lighting Design using Suggestions, Provenance, and Quality Visualization
LightGuider is a novel guidance-based approach to interactive lighting
design, which typically consists of interleaved 3D modeling operations and
light transport simulations. Rather than having designers use a trial-and-error
approach to match their illumination constraints and aesthetic goals,
LightGuider supports the process by simulating potential next modeling steps
that can deliver the most significant improvements. LightGuider takes
predefined quality criteria and the current focus of the designer into account
to visualize suggestions for lighting-design improvements via a specialized
provenance tree. This provenance tree integrates snapshot visualizations of how
well a design meets the given quality criteria weighted by the designer's
preferences. This integration facilitates the analysis of quality improvements
over the course of a modeling workflow as well as the comparison of alternative
design solutions. We evaluate our approach with three lighting designers to
illustrate its usefulness.
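The core of suggestion ranking in a system like the one described is scoring candidate next steps against quality criteria weighted by the designer's preferences. The following Python sketch illustrates that idea only; the criteria names, weights, and candidate steps are invented, not LightGuider's actual model:

```python
# Hypothetical sketch of guidance ranking: candidate next modeling steps are
# scored by quality criteria weighted by the designer's current preferences,
# and the best-scoring suggestion is surfaced.
def weighted_quality(scores, weights):
    # scores/weights: criterion name → value; weights reflect designer focus.
    total = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total

candidates = {
    "raise-lamp": {"uniformity": 0.9, "glare": 0.4},
    "add-spot":   {"uniformity": 0.6, "glare": 0.9},
}
weights = {"uniformity": 2.0, "glare": 1.0}  # designer prefers uniformity
best = max(candidates, key=lambda c: weighted_quality(candidates[c], weights))
print(best)  # → raise-lamp
```

In the actual system each candidate's scores would come from a light transport simulation, and the chosen steps accumulate into the provenance tree described above.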
A provenance task abstraction framework
Visual analytics tools integrate provenance recording to externalize analytic processes or user insights. Provenance can be captured at varying levels of detail, and in turn activities can be characterized at different granularities. However, current approaches do not support inferring activities that can only be characterized across multiple levels of provenance. We propose a task abstraction framework that consists of a three-stage approach, composed of (1) initializing a provenance task hierarchy, (2) parsing the provenance hierarchy using an abstraction mapping mechanism, and (3) leveraging the task hierarchy in an analytical tool. Furthermore, we identify implications to accommodate iterative refinement, context, variability, and uncertainty during all stages of the framework. A use case exemplifies our abstraction framework, demonstrating how context can influence the provenance hierarchy to support analysis. The paper concludes with an agenda, raising and discussing challenges that need to be considered for successfully implementing such a framework.
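The three stages can be sketched in miniature: low-level provenance events are captured, an abstraction mapping lifts them to higher-level activities, and the tool then reasons over activities. The events and mapping below are assumptions chosen for illustration, not the paper's taxonomy:

```python
# Hypothetical sketch of the three-stage framework:
# (1) a provenance record of low-level events,
# (2) an abstraction mapping from events to higher-level activities,
# (3) an activity-level view for the analytical tool to leverage.
from collections import defaultdict

# (1) Low-level provenance events captured by a visual analytics tool.
events = ["pan", "zoom", "zoom", "select", "filter", "select"]

# (2) Abstraction mapping mechanism: low-level actions → activities.
MAPPING = {"pan": "navigate", "zoom": "navigate",
           "select": "inspect", "filter": "inspect"}

def abstract(events):
    hierarchy = defaultdict(list)
    for e in events:
        hierarchy[MAPPING.get(e, "other")].append(e)
    return dict(hierarchy)

# (3) The tool can now reason over activities rather than raw interactions.
tasks = abstract(events)
print(tasks["navigate"])  # → ['pan', 'zoom', 'zoom']
```

Context sensitivity, as discussed in the abstract, would amount to the mapping itself varying with analysis state rather than being a fixed table.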