71 research outputs found

    Provenance and logging for sense making

    Sense making is one of the biggest challenges in data analysis faced by both industry and the research community. It involves understanding the data and uncovering its model, generating hypotheses, selecting analysis methods, creating novel solutions, designing evaluations, and critical thinking and learning wherever needed. Research and development for such sense making tasks lags far behind fast-changing user needs, such as those that emerged recently as a result of so-called “Big Data”. As a result, sense making is often performed manually, and limited human cognitive capacity becomes the bottleneck of sense making in data analysis and decision making. One of the recent advances in sense making research is the capture, visualization, and analysis of provenance information. Provenance is the history and context of sense making, including the data and analyses used and the user’s critical thinking process. It has been shown that provenance can effectively support many sense making tasks. For instance, provenance can provide an overview of what has been examined and reveal gaps such as unexplored information or solution possibilities. Provenance can also support collaborative sense making and communication by sharing the rich context of the sense making process. Beyond data analysis and decision making, provenance has been studied in many other fields, sometimes under different names, for different types of sense making. For example, the Human-Computer Interaction community relies on the analysis of logging to understand user behaviours and intentions; the WWW and database communities work on data lineage to understand uncertainty and trustworthiness; and reproducible science relies heavily on provenance to improve the reliability and efficiency of scientific research. This Dagstuhl Seminar brought together researchers from the diverse fields that relate to provenance and sense making to foster cross-community collaboration. Shared challenges were identified and progress was made towards developing novel solutions.
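    As a rough illustration of what capturing such provenance can look like in practice, the sketch below (all names hypothetical, not taken from the seminar report) records analysis actions together with the analyst's rationale as an append-only, time-stamped log that can later be visualised, queried for gaps, or shared with collaborators.

```python
# Minimal sketch (hypothetical names): capturing analysis provenance as a
# time-stamped action log that can later be visualised or queried for gaps.
import json
import time
from dataclasses import dataclass, field, asdict
from typing import Any, Dict, List


@dataclass
class ProvenanceRecord:
    """One step of the sense-making process: what was done, on what, and why."""
    action: str                      # e.g. "load_data", "filter", "annotate_hypothesis"
    inputs: Dict[str, Any]           # data sets / parameters used
    note: str = ""                   # analyst's rationale (critical-thinking context)
    timestamp: float = field(default_factory=time.time)


class ProvenanceLog:
    """Append-only log of provenance records for a single analysis session."""
    def __init__(self) -> None:
        self.records: List[ProvenanceRecord] = []

    def capture(self, action: str, inputs: Dict[str, Any], note: str = "") -> None:
        self.records.append(ProvenanceRecord(action, inputs, note))

    def to_json(self) -> str:
        return json.dumps([asdict(r) for r in self.records], indent=2)


# Usage: record each step so the history can be reviewed or shared later.
log = ProvenanceLog()
log.capture("load_data", {"source": "sales_2019.csv"})
log.capture("filter", {"column": "region", "value": "EU"},
            note="Hypothesis: anomaly is limited to EU market")
print(log.to_json())
```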

    Provenance analysis for sensemaking. IEEE Computer Graphics and Applications, 39(6), pp. 27-29. ISSN 0272-1716

    The articles in this special section examine the concept of "sensemaking", which refers to how we structure the unknown so as to be able to act in it. In the context of data analysis, it involves understanding the data, generating hypotheses, selecting analysis methods, creating novel solutions, and critical thinking and learning wherever needed. Due to its explorative and creative nature, sensemaking is arguably the most challenging part of any data analysis.

    Toward Visualization for Games: Theory, Design Space, and Patterns

    Electronic games are starting to incorporate in-game telemetry that collects data about player, team, and community performance on a massive scale, and as data begins to accumulate, so does the demand for effectively analyzing it. In this paper, we use examples from both old and new games of different genres to explore the theory and design space of visualization for games. Drawing on these examples, we define a design space for this novel research topic and use it to formulate design patterns for how best to apply visualization technology to games. We then discuss the implications that this new framework will potentially have on the design and development of game and visualization technology in the future.

    Show me how you see: Lessons from studying computer forensics experts for visualization

    As part of an Analyze-Visualize-Validate cycle, we have initiated a domain analysis of email computer forensics to determine where visualization may be beneficial. To this end, we worked with police officers and other forensics professionals. However, the process of designing and executing such a study with real-world experts has been a non-trivial task. This paper presents our efforts in this area and the lessons learned as guidance to other practitioners.

    Vortex Characterization for Engineering Applications

    Realistic engineering simulation data often have features that are not optimally resolved due to practical limitations on mesh resolution. To be useful to application engineers, vortex characterization techniques must be sufficiently robust to handle realistic data with complex vortex topologies. In this paper, we present enhancements to the vortex topology identification component of an existing vortex characterization algorithm. The modified techniques are demonstrated by application to three realistic data sets that illustrate the strengths and weaknesses of our approach.
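    The abstract does not name the identification criterion used, so the sketch below illustrates the general idea of vortex identification with the widely used Q-criterion as a stand-in, which flags regions where rotation dominates strain in the velocity gradient tensor. It is a minimal illustration on synthetic data, not the paper's algorithm.

```python
# Illustrative sketch only: the paper's own identification method is not named
# in the abstract, so this uses the widely known Q-criterion as a stand-in.
# Vortex regions are flagged where Q = 0.5 * (||Omega||^2 - ||S||^2) > 0,
# with S and Omega the symmetric and antisymmetric parts of the velocity gradient.
import numpy as np


def q_criterion(u: np.ndarray, v: np.ndarray, w: np.ndarray,
                spacing: float = 1.0) -> np.ndarray:
    """Compute Q on a uniform 3-D grid of velocity components (shape: nx, ny, nz)."""
    # Velocity gradient tensor via finite differences.
    du = np.gradient(u, spacing)   # [du/dx, du/dy, du/dz]
    dv = np.gradient(v, spacing)
    dw = np.gradient(w, spacing)
    grad = np.stack([np.stack(du, -1), np.stack(dv, -1), np.stack(dw, -1)], axis=-2)

    S = 0.5 * (grad + np.swapaxes(grad, -1, -2))       # strain-rate tensor
    Omega = 0.5 * (grad - np.swapaxes(grad, -1, -2))   # rotation tensor
    return 0.5 * (np.sum(Omega**2, axis=(-2, -1)) - np.sum(S**2, axis=(-2, -1)))


# Toy example on a synthetic rotating flow; cells with Q > 0 indicate a vortex core.
x, y, z = np.meshgrid(*(np.linspace(-1, 1, 32),) * 3, indexing="ij")
u, v, w = -y, x, np.zeros_like(x)          # solid-body rotation about the z-axis
print((q_criterion(u, v, w, spacing=2 / 31) > 0).mean())
```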

    Visual parameter optimisation for biomedical image processing

    Background: Biomedical image processing methods require users to optimise input parameters to ensure high quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results: We present a visualisation method that transforms users’ ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions: The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches.
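    The specific parameters and metrics used in the paper are not given in the abstract; the sketch below only illustrates the general pattern of such an exploration under assumptions: sweep one hypothetical deconvolution parameter (a component of the haematoxylin stain vector in a Ruifrok-Johnston style deconvolution) and record a hypothetical output metric for each setting, producing the input/output pairs a visualisation tool would let the expert inspect instead of trial and error.

```python
# Hedged sketch (hypothetical parameter, metric, and data): a parameter sweep
# over a Ruifrok-Johnston style colour deconvolution, collecting one output
# metric per setting so that input -> output relationships can be visualised.
import numpy as np


def separate_stains(rgb: np.ndarray, stain_matrix: np.ndarray) -> np.ndarray:
    """Estimate stain concentrations from an RGB image in [0, 1]."""
    od = -np.log10(np.clip(rgb, 1e-6, 1.0)).reshape(-1, 3)   # optical density
    return od @ np.linalg.inv(stain_matrix)                  # rows of matrix = stain OD vectors


def implausibility(rgb: np.ndarray, stain_matrix: np.ndarray) -> float:
    """Hypothetical quality metric: mass of negative (physically impossible) concentrations."""
    conc = separate_stains(rgb, stain_matrix)
    return float(np.mean(np.clip(-conc, 0.0, None)))


# Sweep one hypothetical parameter (red component of the haematoxylin vector)
# and collect the metric for each setting.
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))                          # stand-in for a histology tile
eosin = np.array([0.07, 0.99, 0.11])                     # commonly cited reference stain vectors
dab = np.array([0.27, 0.57, 0.78])
results = []
for h_red in np.linspace(0.55, 0.75, 5):
    hema = np.array([h_red, 0.70, 0.29])
    m = np.stack([hema / np.linalg.norm(hema), eosin, dab])
    results.append((h_red, implausibility(image, m)))
print(results)
```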

    Common ground in collaborative intelligence analysis: an empirical study

    This paper reports an empirical exploration of how different configurations of collaboration technology affect people’s ability to construct and maintain common ground while conducting collaborative intelligence analysis work. Prior studies of collaboration technology have typically focused on simpler conversational tasks, or ones that involve physical manipulation, rather than the complex sensemaking and inference involved in intelligence work. The study explores the effects of video communication and a shared visual workspace (SVW) on the negotiation of common ground by distributed teams collaborating in real time on intelligence analysis tasks. The experimental study uses a 2x2 factorial, between-subjects design involving two independent variables: presence or absence of Video and SVW. Two-member teams were randomly assigned to one of the four experimental media conditions and worked to complete several intelligence analysis tasks involving multiple, complex intelligence artefacts. Teams with access to the shared visual workspace could view their teammates’ eWhiteboards. Our results demonstrate a significant effect for the shared visual workspace: the effort of conversational grounding is reduced when the SVW is available. However, there were no main effects for video and no interaction between the two variables. We also found that the conversational grounding effort required tended to decrease over the course of the task.
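    As a hedged illustration of how such a 2x2 between-subjects design is commonly analysed (synthetic data and hypothetical variable names, not the study's own data or analysis pipeline), the sketch below fits a two-way ANOVA testing the two main effects and their interaction.

```python
# Illustrative sketch only (synthetic data, hypothetical column names): a
# two-way ANOVA for a 2x2 between-subjects design with factors Video and SVW.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
conditions = [(v, s) for v in (0, 1) for s in (0, 1)]        # four media conditions
rows = []
for video, svw in conditions:
    for _ in range(10):                                      # 10 teams per cell (made up)
        effort = 5.0 - 1.5 * svw + rng.normal(scale=1.0)     # assume SVW lowers grounding effort
        rows.append({"video": video, "svw": svw, "effort": effort})
df = pd.DataFrame(rows)

model = ols("effort ~ C(video) * C(svw)", data=df).fit()
print(anova_lm(model, typ=2))      # main effects for video and svw, plus their interaction
```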

    Measures in Visualization Space

    Measurement is an integral part of modern science, providing the fundamental means for evaluation, comparison, and prediction. In the context of visualization, several different types of measures have been proposed, ranging from approaches that evaluate particular aspects of visualization techniques and their perceptual characteristics to economic factors. Furthermore, there are approaches that attempt to provide means for measuring general properties of the visualization process as a whole. Measures can be quantitative or qualitative, and one of the primary goals is to provide objective means for reasoning about visualizations and their effectiveness. As such, they play a central role in the development of scientific theories for visualization. In this chapter, we provide an overview of the current state of the art, survey and classify different types of visualization measures, characterize their strengths and drawbacks, and provide an outline of open challenges for future research.