    A Review and Characterization of Progressive Visual Analytics

    Progressive Visual Analytics (PVA) has gained increasing attention in recent years. It brings the user into the loop during otherwise long-running and non-transparent computations by producing intermediate partial results. These partial results can be shown to the user for early and continuous interaction with the emerging end result, even while it is still being computed. Yet as clear-cut as this fundamental idea seems, the existing body of literature puts forth various interpretations and instantiations that have created a research domain of competing terms, varying definitions, and long lists of practical requirements and design guidelines spread across different scientific communities. This makes it ever more difficult to get a succinct understanding of PVA's principal concepts, let alone an overview of this increasingly diverging field. The review and discussion of PVA presented in this paper address these issues and provide (1) a literature collection on this topic, (2) a conceptual characterization of PVA, as well as (3) a consolidated set of practical recommendations for implementing and using PVA-based visual analytics solutions.
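
    As a concrete illustration of the core idea (a minimal sketch, not code from the paper), the following Python generator emits intermediate partial results of a long-running aggregate; this is the pattern a PVA front end would render and let the user interact with while the full computation is still in progress:

    ```python
    import numpy as np

    def progressive_mean(data, chunk_size=1_000):
        """Yield running estimates of the mean so a front end can render
        partial results while the full pass is still in progress."""
        total, count = 0.0, 0
        for start in range(0, len(data), chunk_size):
            chunk = data[start:start + chunk_size]
            total += chunk.sum()
            count += len(chunk)
            # Each yield is an intermediate partial result in the PVA sense.
            yield total / count, count / len(data)

    rng = np.random.default_rng(0)
    values = rng.normal(loc=5.0, scale=2.0, size=100_000)
    for estimate, progress in progressive_mean(values, chunk_size=20_000):
        print(f"{progress:4.0%} processed -> mean estimate {estimate:.4f}")
    ```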

    Virtual reality-based parallel coordinates plots enhanced with explainable AI and data-science analytics for decision-making processes

    We present a refinement of the Immersive Parallel Coordinates Plots (IPCP) system for Virtual Reality (VR). The evolved system provides data-science analytics built around a well-known method for visualization of multidimensional datasets in VR. The data-science analytics enhancements consist of importance analysis and a number of clustering algorithms, including a novel SuMC (Subspace Memory Clustering) solution. These analytical methods were applied to both the main visualizations and the supporting cross-dimensional scatter plots. They automate part of the analytical work that in the previous version of IPCP had to be done by an expert. We test the refined system with two sample datasets that represent the optimum solutions of two different multi-objective optimization studies in turbomachinery. The first one describes 54 data items with 29 dimensions (DS1), and the second 166 data items with 39 dimensions (DS2). We include the details of these methods as well as the reasoning behind selecting some methods over others.
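
    To make the pipeline concrete, here is a minimal, hypothetical Python sketch of the clustering-then-parallel-coordinates step; KMeans stands in for the paper's algorithms (SuMC is novel to the paper and not reproduced here), and the data is synthetic rather than DS1 or DS2:

    ```python
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans
    from pandas.plotting import parallel_coordinates

    # Synthetic stand-in for a multi-objective optimization result set
    # (54 items, as in DS1, but with illustrative dimensions).
    rng = np.random.default_rng(1)
    data = pd.DataFrame(rng.random((54, 5)),
                        columns=[f"dim_{i}" for i in range(5)])

    # KMeans stands in for the clustering stage described in the paper.
    data["cluster"] = KMeans(n_clusters=3, n_init=10,
                             random_state=1).fit_predict(data)

    # Color each polyline in the parallel coordinates plot by cluster.
    parallel_coordinates(data, class_column="cluster", colormap="viridis")
    plt.title("Parallel coordinates colored by cluster")
    plt.show()
    ```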

    Hillview: A trillion-cell spreadsheet for big data

    Hillview is a distributed spreadsheet for browsing very large datasets that cannot be handled by a single machine. As a spreadsheet, Hillview provides a high degree of interactivity that permits data analysts to explore information quickly along many dimensions while switching visualizations on a whim. To provide the required responsiveness, Hillview introduces visualization sketches, or vizketches, as a simple idea to produce compact data visualizations. Vizketches combine algorithmic techniques for data summarization with computer graphics principles for efficient rendering. While simple, vizketches are effective at scaling the spreadsheet by parallelizing computation, reducing communication, providing progressive visualizations, and offering precise accuracy guarantees. Using Hillview running on eight servers, we can navigate and visualize datasets of tens of billions of rows and trillions of cells, far beyond the published capabilities of competing systems.
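
    A rough sketch of the mergeable-summary idea behind vizketches (illustrative only, not Hillview's actual implementation): each worker reduces its partition of the data to a fixed number of histogram bins, and partial summaries merge by elementwise addition, which is what allows parallel computation and progressive rendering as workers report in:

    ```python
    import numpy as np

    BINS = np.linspace(0.0, 1.0, 33)  # 32 fixed-width bins

    def local_sketch(partition):
        """Summarize one partition into a compact, renderable histogram."""
        counts, _ = np.histogram(partition, bins=BINS)
        return counts

    def merge(a, b):
        # Mergeability keeps communication small and makes results
        # renderable at every intermediate step.
        return a + b

    rng = np.random.default_rng(2)
    partitions = [rng.random(250_000) for _ in range(8)]  # 8 "servers"

    merged = np.zeros(len(BINS) - 1, dtype=int)
    for counts in map(local_sketch, partitions):
        merged = merge(merged, counts)  # progressively more complete view
    print(merged.sum(), "rows summarized into", len(merged), "bins")
    ```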

    Cognition-Based Evaluation of Visualisation Frameworks for Exploring Structured Cultural Heritage Data

    It is often claimed that Information Visualisation (InfoVis) tools improve the audience's engagement with the display of cultural heritage (CH) collections, open up CH content to new audiences, and support teaching and learning through interactive experiences. But there is a lack of studies systematically evaluating these claims, particularly from the perspective of modern educational theory. As far as the author is aware, no experimental investigation has been undertaken until now that attempts to measure deeper levels of user engagement and learning with InfoVis tools. This thesis complements InfoVis research by taking a human-centric approach, since little previous research has attempted to incorporate and integrate human cognition as one of the fundamental components of InfoVis. In this thesis, using Bloom's taxonomy of learning objectives as well as individual learning characteristics (i.e. cognitive preferences), I have evaluated the visitor experience of an art collection both with and without InfoVis tools (between-subjects design). Results indicate that whilst InfoVis tools have some positive effect on the lower levels of learning, they are less effective for higher levels. In addition, this thesis shows that InfoVis tools seem to be more effective when they match specific cognitive preferences. These results have implications both for the designers of tools and for CH venues in terms of expectations of effectiveness and exhibition design; the proposed cognition-based evaluation framework and the results of this investigation could provide a valuable baseline for assessing the effectiveness of visitors' interaction with the artifacts of online and physical exhibitions where InfoVis tools such as Timelines and Maps, along with storytelling techniques, are being used.

    Supporting Methodology Transfer in Visualization Research with Literature-Based Discovery and Visual Text Analytics

    The increasing specialization of science is driving the rapid fragmentation of well-established disciplines into interdisciplinary communities. This decomposition can be observed in a type of visualization research known as problem-driven visualization research, in which teams of visualization experts and domain experts collaborate in a specific area of knowledge such as the digital humanities, bioinformatics, computer security, or sports science. This thesis proposes a series of methods inspired by recent advances in automatic text analysis and knowledge representation to promote proper communication and knowledge transfer between these communities. The resulting methods were combined in a visual text analytics interface oriented toward scientific discovery, GlassViz, which was designed with these goals in mind. The tool was first tested in the digital humanities domain to explore a massive corpus of general-purpose visualization articles. GlassViz was adapted in a later study to support different data sources representative of these communities, showing evidence that the proposed approach is also a valid alternative for addressing the problem of fragmentation in visualization research.
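
    As a flavor of the literature-based-discovery idea underlying the approach (a toy sketch with invented titles, not GlassViz's actual method), terms shared between two communities' corpora can hint at transferable methodology:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer

    # Two tiny stand-in corpora: visualization papers and digital
    # humanities papers (titles are invented for illustration).
    vis_papers = ["interactive brushing of parallel coordinates",
                  "graph layout techniques for dense networks"]
    dh_papers = ["network analysis of correspondence archives",
                 "interactive exploration of text corpora"]

    vec = CountVectorizer(stop_words="english")
    vocab_vis = set(vec.fit(vis_papers).get_feature_names_out())
    vocab_dh = set(vec.fit(dh_papers).get_feature_names_out())

    # Shared vocabulary is a crude bridge between the two communities,
    # in the spirit of Swanson-style literature-based discovery.
    print(sorted(vocab_vis & vocab_dh))
    ```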

    On intelligible multimodal visual analysis

    Analyzing data is becoming an important skill in an increasingly digital world. Yet many users face knowledge barriers that prevent them from independently conducting data analysis. To tear down some of these barriers, multimodal interaction for visual analysis has been proposed. Multimodal interaction through speech and touch enables not only experts but also novice users to interact effortlessly with this kind of technology. However, current approaches do not take user differences into account. In fact, whether visual analysis is intelligible ultimately depends on the user. To close this research gap, this dissertation explores how multimodal visual analysis can be personalized. To do so, it takes a holistic view. First, an intelligible task space of visual analysis tasks is defined by considering personalization potentials. This task space provides an initial basis for understanding how effective personalization in visual analysis can be approached. Second, empirical analyses of speech commands in visual analysis, as well as of visualizations used in scientific publications, reveal further patterns and structures. These behavior-based findings help to better understand expectations towards multimodal visual analysis. Third, a technical prototype is designed based on the previous findings. It enriches visual analysis with a persistent dialogue and transparency about the underlying computations; the user studies conducted show not only its advantages but also the relevance of considering the user's characteristics. Finally, both communication channels, visualizations and dialogue, are personalized. Leveraging linguistic theory and reinforcement learning, the results highlight a positive effect of adjusting to the user. Especially when the user's knowledge is exceeded, personalization helps to improve the user experience. Overall, this dissertation not only confirms the importance of considering the user's characteristics in multimodal visual analysis, but also provides insights on how an intelligible analysis can be achieved. By understanding the use of input modalities, a system can focus on the user's needs. By understanding preferences regarding the output modalities, the system can better adapt to the user. Combining both directions improves the user experience and contributes towards an intelligible multimodal visual analysis.

    Digital Forensics Tool Interface Visualization

    Recent trends show digital devices being used with increasing frequency in the commission of crimes. Investigating crimes involving these devices is labor-intensive for the practitioner, as digital forensics tools present possible evidence in tabular lists for manual review. This research investigates how enhanced digital forensics tool interface visualization techniques can improve the investigator's cognitive capacity to discover criminal evidence more efficiently. This paper presents visualization graphs and contrasts their properties with the outputs of The Sleuth Kit (TSK) digital forensic program, exhibiting the textual-based interface and demonstrating the effectiveness of enhanced data presentation. Further demonstrated is the potential of the computer interface to present to the digital forensic practitioner an abstract, graphic view of an entire dataset of computer files. Enhanced interface design of digital forensic tools means more rapidly linking suspicious evidence to a perpetrator. This study introduces a mixed methodology of ethnography and cognitive load measures. Ethnographically defined tasks, developed from interviews with digital forensics subject matter experts (SMEs), shape the context for the cognitive measures. Cognitive load testing of digital forensics first-responders using both a textual-based and a visualized-based application established a quantitative mean of the mental workload during operation of the applications under test. A dependent-samples t-test compared the mean workloads, testing the null hypothesis of no significant difference between the two applications. The results indicate a significant difference, affirming the hypothesis that a visualized application reduces the cognitive workload of the first-responder analyst. With the supported hypothesis, this work contributes to the body of knowledge by validating a method of measurement and by providing empirical evidence that a visualized digital forensics interface enables more efficient performance by the analyst, saving labor costs and compressing the time required for the discovery phase of a digital investigation.
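
    The statistical design can be sketched as follows; the workload scores below are hypothetical placeholders (the study's instrument and data are not reproduced here), but the test is the standard dependent-samples t-test the abstract describes:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical cognitive-load scores for the same first-responders
    # operating the textual and the visualized interface (paired design).
    textual = np.array([72, 65, 80, 77, 69, 74, 71, 78])
    visualized = np.array([58, 54, 66, 61, 57, 60, 55, 63])

    # Dependent-samples (paired) t-test on the two workload measurements.
    t_stat, p_value = stats.ttest_rel(textual, visualized)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A significant result here would support the claim that the
    # visualized interface reduces mean workload for the same operators.
    ```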

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning

    [EN] Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction. Information dashboards offer a software solution for analyzing large volumes of data visually, to identify patterns and relations and to make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools. This Ph.D. thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others. Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards. In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and it has been the backbone of the subsequent generative pipeline for these tools. The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and used as a framework to conceptualize and instantiate information dashboards in different domains. In terms of practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. Programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system. Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model across theoretical and practical dimensions and domains is itself a result. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
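
    A toy reading of the model-driven idea, with illustrative names rather than the thesis's actual meta-model: the dashboard is described as a model instance, and the final product is generated from that instance:

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Visualization:
        kind: str         # e.g., "bar", "line"
        title: str
        data_source: str

    @dataclass
    class Dashboard:
        name: str
        components: List[Visualization] = field(default_factory=list)

    def generate_html(model: Dashboard) -> str:
        """Transform a model instance into a (minimal) functional product."""
        body = "\n".join(
            f'  <div class="viz" data-kind="{v.kind}" '
            f'data-src="{v.data_source}">{v.title}</div>'
            for v in model.components
        )
        return f"<h1>{model.name}</h1>\n{body}"

    # An instance adapted to one context, then transformed into markup.
    phd_dash = Dashboard("Ph.D. Programme", [
        Visualization("bar", "Theses per year", "theses.csv"),
        Visualization("line", "Enrollment trend", "enrollment.csv"),
    ])
    print(generate_html(phd_dash))
    ```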

    Understanding cognitive differences in processing competing visualizations of complex systems

    Node-link diagrams are used to represent systems comprising different elements and the relationships among those elements. Representing systems with visualizations like node-link diagrams provides a cognitive aid that helps individuals understand and effectively manage these systems. Using appropriate visual tools aids task completion by reducing the cognitive load required to understand and solve problems. However, currently developed visualizations lack any evaluation based on cognitive processing. Most of the evaluations (if any) are based on the results of tasks performed using these visualizations. Therefore, the evaluations do not provide any perspective on the cognitive processing required in working with the visualization. This research focuses on understanding the effect of different visualization types and complexities on problem understanding and performance using a visual problem-solving task. Two informationally equivalent but visually different visualizations - geon diagrams based on structural object perception theory and UML diagrams based on object modeling - are investigated to understand the cognitive processes that underlie reasoning with different types of visualizations. Specifically, the two visualizations are used to represent interdependent critical infrastructures, and participants are asked to solve a problem using each visualization. The effectiveness of task completion is measured in terms of the time taken to complete the task and the accuracy of its result. The differences in cognitive processing while using the different visualizations are measured in terms of the search path and the search steps of the individual. The results from this research underscore the difference in the effectiveness of the different diagrams in solving the same problem. The time taken to complete the task is significantly lower with geon diagrams, and the error rate is also significantly lower. The search path for UML diagrams is more node-dominant, whereas for geon diagrams it is distributed across nodes, links, and components (combinations of nodes and links). Evaluation steps dominate the search in geon diagrams, whereas locating steps dominate in UML diagrams. The results also show that the differences in search path and search steps between visualizations increase as the complexity of the diagrams increases. This study helps establish the importance of a cognitive-level understanding of the use of diagrammatic representations of information for visual problem solving. The results also highlight that measures of the effectiveness of any visualization should include measuring the cognitive processes of individuals while they perform the visual task, in addition to measuring the time and accuracy of its result.
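
    For readers unfamiliar with the representation, here is a minimal node-link rendering of a toy interdependent-infrastructure model (the labels are illustrative, not the study's materials):

    ```python
    import matplotlib.pyplot as plt
    import networkx as nx

    # Nodes are system elements; directed edges are dependencies.
    G = nx.DiGraph()
    G.add_edges_from([
        ("power plant", "substation"),
        ("substation", "water pump"),
        ("substation", "cell tower"),
        ("water pump", "hospital"),
        ("cell tower", "911 dispatch"),
    ])

    nx.draw_networkx(G, pos=nx.spring_layout(G, seed=3),
                     node_color="lightsteelblue", arrows=True)
    plt.axis("off")
    plt.show()
    ```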