10,951 research outputs found

    Evaluation methodology for visual analytics software

    The challenge of Visual Analytics (VA) is to produce visualizations that help users focus on the most relevant or most interesting aspects of the data presented. Today's society faces a rapidly growing volume of data, so information users in every domain end up with more information than they can handle. VA software must support intuitive interactions so that analysts can concentrate on the information they are manipulating rather than on the manipulation technique itself. VA environments should seek to minimize their users' overall cognitive workload: the less we have to think about the interactions themselves, the more time we have to think about the analysis itself. Given the benefits that VA applications can bring and the confusion that still exists in identifying such applications on the market, in this work we propose a new heuristic-based evaluation methodology. Our methodology is intended to evaluate applications through usability tests that consider the functionality and characteristics desirable in VA systems. However, owing to its quantitative nature, it can naturally be used for other purposes, such as comparison between VA applications of the same context when deciding which to adopt. Its criteria can also serve as a source of information for designers and developers when making appropriate choices during the design and development of VA systems.
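    The paper's heuristics are not listed in this abstract, but a minimal sketch of how a quantitative, heuristic-based evaluation could support comparison between VA applications might look as follows; the heuristic names, weights, and ratings are hypothetical illustrations, not the paper's criteria.

```python
# Hypothetical weighted heuristic score for comparing VA applications.
# Heuristic names, weights, and the 0-5 ratings are illustrative only.

HEURISTICS = {
    "intuitive_interaction": 0.30,   # analysts focus on data, not mechanics
    "low_cognitive_load": 0.25,      # minimal workload imposed by the UI
    "relevance_guidance": 0.25,      # helps users find the interesting aspects
    "responsiveness": 0.20,          # views update fast enough for exploration
}

def va_score(ratings):
    """Weighted sum of usability ratings over the heuristic set."""
    return sum(w * ratings[name] for name, w in HEURISTICS.items())

tool_a = {"intuitive_interaction": 4, "low_cognitive_load": 3,
          "relevance_guidance": 5, "responsiveness": 4}
tool_b = {"intuitive_interaction": 3, "low_cognitive_load": 4,
          "relevance_guidance": 3, "responsiveness": 5}

# A single quantitative score makes same-context tools directly comparable.
print(f"Tool A: {va_score(tool_a):.2f}, Tool B: {va_score(tool_b):.2f}")
```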

    Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems

    Many interactive data systems combine visual representations of data with embedded algorithmic support for automation and data exploration. To support transparent and explainable data systems effectively, it is important for researchers and designers to know how users understand the system. We discuss the evaluation of users' mental models of system logic. Mental models are challenging to capture and analyze: while common evaluation methods aim to approximate the user's final mental model after a period of system usage, user understanding continuously evolves as users interact with a system over time. In this paper, we review common mental model measurement techniques, discuss their tradeoffs, and recommend methods for deeper, more meaningful evaluation of mental models in interactive data analysis and visualization systems. We present guidelines for evaluating mental models over time that reveal the evolution of specific model updates and how they may map to particular uses of interface features and data queries. By asking users to describe what they know and how they know it, researchers can collect structured, time-ordered insight into a user's conceptualization process while also helping guide users to their own discoveries. Comment: 10 pages, submitted to the BELIV 2020 Workshop.
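    As a rough illustration of the structured, time-ordered records the guidelines point toward, the sketch below models a "micro-entry" pairing what a user believes with how they came to believe it; the schema and field names are assumptions for illustration, not the authors' instrument.

```python
# Hypothetical "micro-entry" log: each entry pairs a belief about system
# logic with the evidence behind it, timestamped so the evolution of the
# mental model can be reconstructed. Field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MicroEntry:
    belief: str    # "what I know": the user's current claim about the system
    evidence: str  # "how I know it": the interface feature or query used
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

log = [
    MicroEntry("Results seem ranked by recency",
               "Top rows changed after I filtered by date"),
    MicroEntry("Ranking actually mixes recency with relevance",
               "Re-ran the same query; older high-match rows stayed on top"),
]

# Replaying entries in time order exposes specific model updates.
for entry in sorted(log, key=lambda e: e.timestamp):
    print(f"{entry.timestamp.isoformat()}  {entry.belief}")
```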

    Data analytics 2016: proceedings of the fifth international conference on data analytics


    [Title garbled in source; thesis on visualization duplication in multiple coordinated views]

    Many visualization systems provide multiple coordinated views (MCVs) in the belief that MCVs bring benefits during visual analysis. However, if a tool requires tedious or repeated interactions to create a single view, users may find MCV tools difficult to use because of the perceived interaction cost. To reduce such costs, a number of visual tools have begun providing a method called visualization duplication, which allows users to copy an existing visualization with one click. Despite the importance of such an easy view-creation method, very little empirical work exists on measuring its impact. In this work, we investigate the impact of visualization duplication on visual analysis strategies, interaction behaviors, and analysis performance. To that end, we designed a prototype visual tool equipped with the easy view-creation method and conducted a human-subjects study in which 44 participants completed five analytic tasks using the visualization system. Through quantitative and qualitative analysis, we found that visualization duplication is related to the number of views created, the insights generated, and the accuracy of visual analysis. The results also revealed effects of visualization duplication on analytical strategies and interaction patterns.
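    As a hedged sketch of the general mechanism studied here (not the prototype's actual API), one-click visualization duplication amounts to deep-copying a view's full specification so the analyst can branch an analysis without rebuilding the view; ViewSpec and duplicate_view below are hypothetical names.

```python
# Minimal sketch of visualization duplication: copy a view's complete
# specification (chart type, encodings, filters) into an independent view.
import copy
from dataclasses import dataclass, field

@dataclass
class ViewSpec:
    chart_type: str
    encodings: dict
    filters: list = field(default_factory=list)

def duplicate_view(view):
    """Deep-copy so edits to the duplicate never mutate the original."""
    return copy.deepcopy(view)

scatter = ViewSpec("scatter", {"x": "price", "y": "rating"},
                   [("year", "==", 2016)])
branch = duplicate_view(scatter)               # the "one click"
branch.filters.append(("region", "==", "EU"))  # explore a variant cheaply

print(scatter.filters)  # [('year', '==', 2016)] -- original unchanged
print(branch.filters)   # original filter plus the new one
```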

    Visual design recommendations for situation awareness in social media

    The use of online Social Media is increasingly popular amongst emergency services to support Situational Awareness (i.e. accurate, complete, and real-time information about an event). Whilst many software solutions have been developed to monitor and analyse Social Media, little attention has been paid to how to visually design for Situational Awareness in this large-scale data space. We describe an approach in which levels of SA are matched to corresponding visual design recommendations using participatory design techniques with Emergency Responders in the UK. We then present visualisation prototypes developed to satisfy the design recommendations and show how they contribute to Emergency Responders' Situational Awareness in an example scenario. We end by highlighting research issues that emerged during the initial evaluation.

    Analytic Provenance for Software Reverse Engineers

    Reverse engineering is a time-consuming process essential to software-security tasks such as malware analysis and vulnerability discovery. During the process, an engineer follows multiple leads to determine how the software functions. The combination of time pressure and many possible explanations makes it difficult for engineers to maintain the context of their findings within the overall task. Analytic provenance tools have demonstrated value in similarly complex fields that require open-ended exploration and hypothesis vetting, but they have not been explored in the reverse engineering domain. This dissertation presents SensorRE, the first analytic provenance tool designed to support software reverse engineers. A semi-structured interview with experts informed the design and implementation of the system. We describe the visual interfaces and their integration within an existing software analysis tool. SensorRE automatically captures the user's sensemaking actions and provides graph and storyboard views to support further analysis. User study results with both experts and graduate students demonstrate that SensorRE is easy to use and that it improved the participants' exploration process.
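    As a hedged sketch of the underlying technique (not SensorRE's implementation), automatic provenance capture can be modeled as recording each sensemaking action as a node linked to the state it branched from, yielding a graph the analyst can later review; all names below are illustrative.

```python
# Illustrative provenance capture: user actions become graph nodes, edges
# record which earlier state each action branched from.
import itertools
from typing import Optional

class ProvenanceGraph:
    def __init__(self):
        self._ids = itertools.count()
        self.nodes = {}   # node id -> action description
        self.edges = []   # (parent id, child id) pairs

    def record(self, action: str, parent: Optional[int] = None) -> int:
        node_id = next(self._ids)
        self.nodes[node_id] = action
        if parent is not None:
            self.edges.append((parent, node_id))
        return node_id

graph = ProvenanceGraph()
root = graph.record("open binary in analysis tool")
lead = graph.record("rename sub_401000 -> parse_header", parent=root)
graph.record("note: length check on header looks wrong", parent=lead)

# A storyboard view could replay nodes in insertion order; a graph view
# would draw the edges to show branching hypotheses.
print(graph.nodes)
print(graph.edges)
```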

    Immersive Telepresence: A framework for training and rehearsal in a postdigital age
