
    ReVisIt: Enabling Passive Evaluation of Interactive Visualizations

    Visualizations on the web are used to communicate information to a wide range of audiences. However, there exists no widely accepted method for visualization creators to understand how an audience engages with their work. To address this, we propose using passive evaluation, where user interaction with visualizations is logged and later explored by the creator. ReVisIt is a tool that captures user interactions with a visualization, processes them, and displays them in a dashboard. To evaluate ReVisIt, we conducted several interviews with visualization creators. We concluded that a tool like ReVisIt would be valuable to creators, and that there are several opportunities for future work in this area.
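The abstract does not describe ReVisIt's implementation. As a rough illustration of the passive-evaluation idea it proposes, a minimal interaction logger might look like this (all names are hypothetical):

```python
import time
from collections import Counter

class InteractionLogger:
    """Hypothetical sketch of passive interaction logging for a web visualization."""

    def __init__(self):
        self.events = []  # (timestamp, event_type, target)

    def log(self, event_type, target):
        # Record each interaction without interrupting the viewer.
        self.events.append((time.time(), event_type, target))

    def summary(self):
        # Aggregate counts per event type for a creator-facing dashboard.
        return Counter(event_type for _, event_type, _ in self.events)

logger = InteractionLogger()
logger.log("hover", "bar-3")
logger.log("hover", "bar-7")
logger.log("click", "legend")
print(logger.summary())
```

A real deployment would also persist these events server-side so the creator can explore them after the fact.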

    The Effects of Latency on 3D Interactive Data Visualizations

    Interactive data visualizations must respond fluidly to user input to be effective, or so we assume. In fact, it is unknown exactly how fast a visualization must run to present every facet within a dataset. An engineering team with limited resources is left with intuition and estimates to determine whether its application performs sufficiently well. This thesis studies how latency affects users' comprehension of data visualizations, specifically 3D geospatial visualizations with large data sets. Subjects used a climate visualization showing temperatures spanning from the 19th to the 21st century to answer multiple-choice questions. Metrics such as eye movements, time per question, and test score were recorded. Unbeknownst to the participants, the latency was toggled between questions, constraining frame rendering times to intervals between 33 1/3 and 200 milliseconds. Analysis of eye movements, question completion time, and accuracy fails to show that latency has an impact on how users explore the visualization or comprehend the data presented. User fixation times on overlaid 2D visualization tools, however, are affected by latency, although fixation times do not significantly differ over 3D elements. This finding speaks to how resilient users are in navigating and understanding virtual 3D environments --- a conclusion supported by previous studies about video game latency.
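As a rough sketch (not the study's actual apparatus), toggling latency between questions amounts to sampling a frame-time condition per trial from the stated 33 1/3 to 200 ms range; the condition set and function names below are assumptions:

```python
import random

# Hypothetical frame-time conditions (ms per rendered frame), spanning
# the study's reported range of 33 1/3 to 200 milliseconds.
FRAME_TIMES_MS = [100 / 3, 200 / 3, 100.0, 200.0]

def assign_conditions(num_questions, seed=0):
    """Assign one latency condition per question, unbeknownst to the subject."""
    rng = random.Random(seed)  # seeded so a session is reproducible
    return [rng.choice(FRAME_TIMES_MS) for _ in range(num_questions)]

conditions = assign_conditions(12)
print(conditions)
```

Per-question metrics (eye fixations, time, accuracy) would then be analyzed grouped by the assigned condition.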

    AIOps for a Cloud Object Storage Service

    With the growing reliance on the ubiquitous availability of IT systems and services, these systems become more global, scaled, and complex to operate. To maintain business viability, IT service providers must put in place reliable and cost-efficient operations support. Artificial Intelligence for IT Operations (AIOps) is a promising technology for alleviating the operational complexity of IT systems and services. AIOps platforms utilize big data, machine learning, and other advanced analytics technologies to enhance IT operations with proactive, actionable, dynamic insight. In this paper we share our experience applying the AIOps approach to a production cloud object storage service to gain actionable insights into the system's behavior and health. We describe a real-life production cloud-scale service and its operational data, present the AIOps platform we have created, and show how it has helped us resolve operational pain points.
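The paper's platform is not detailed in this abstract; as a toy example of the kind of health check an AIOps pipeline might run, here is a simple trailing-window deviation rule over a metric stream (the rule and thresholds are illustrative assumptions, not the paper's method):

```python
import statistics

def flag_anomalies(values, window=5, threshold=3.0):
    """Flag readings that deviate strongly from a trailing window;
    a hypothetical stand-in for an AIOps-style anomaly detector."""
    flags = []
    for i, v in enumerate(values):
        history = values[max(0, i - window):i]
        if len(history) >= 2:
            mean = statistics.mean(history)
            stdev = statistics.stdev(history)
            # Flag when the reading sits far outside recent behavior.
            flags.append(stdev > 0 and abs(v - mean) > threshold * stdev)
        else:
            flags.append(False)  # not enough history yet
    return flags

latency_ms = [10, 11, 10, 12, 11, 95, 10, 11]
print(flag_anomalies(latency_ms))
```

Production systems would feed such flags into alerting and root-cause workflows rather than printing them.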

    Understanding the bi-directional relationship between analytical processes and interactive visualization systems

    Interactive visualizations leverage the human visual and reasoning systems to increase the scale of information with which we can effectively work, thereby improving our ability to explore and analyze large amounts of data. Interactive visualizations are often designed with target domains in mind, such as analyzing unstructured textual information, which is a main thrust of this dissertation. Since each domain has its own existing procedures for analyzing data, a good start to a well-designed interactive visualization system is to understand the domain experts' workflow and analysis processes. This dissertation emphasizes the importance of understanding domain users' analysis processes and incorporating that understanding into the design of interactive visualization systems. To meet this aim, I first introduce considerations guiding the gathering of general and domain-specific analysis processes in text analytics. Two interactive visualization systems were designed following these considerations. The first system is Parallel-Topics, a visual analytics system supporting analysis of large collections of documents by extracting semantically meaningful topics. Based on lessons learned from Parallel-Topics, this dissertation further presents a general visual text analysis framework, I-Si, which presents meaningful topical summaries and temporal patterns and can handle large-scale textual information. Both systems have been evaluated by expert users and deemed successful in addressing domain analysis needs. The second contribution lies in preserving domain users' analysis processes while they use interactive visualizations. Our research suggests this preservation can serve multiple purposes. On the one hand, it can further improve the current system. On the other hand, users often need help in recalling and revisiting their complex and sometimes iterative analysis process with an interactive visualization system.
This dissertation introduces multiple types of evidence available for capturing a user's analysis process within an interactive visualization and analyzes the cost/benefit ratios of the capturing methods. It concludes that tracking interaction sequences is the least intrusive and most feasible way to capture part of a user's analysis process. To validate this claim, a user study is presented that analyzes the relationship between interactions and problem-solving processes. The results indicate that constraining the way a user interacts with a mathematical puzzle does have an effect on the problem-solving process. As later evidenced in an evaluative study, a fair amount of high-level analysis can be recovered merely by analyzing interaction logs.
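As a minimal illustration of recovering analysis patterns from interaction logs (the dissertation's actual analysis is richer; the event names here are hypothetical), one could start by counting recurring interaction pairs:

```python
from collections import Counter

def frequent_bigrams(interaction_log):
    """Count consecutive interaction pairs in a log; a low-cost,
    hypothetical first step toward recovering an analysis process."""
    pairs = zip(interaction_log, interaction_log[1:])
    return Counter(pairs)

log = ["search", "filter", "inspect", "search", "filter", "annotate"]
print(frequent_bigrams(log).most_common(1))  # [(('search', 'filter'), 2)]
```

A recurring "search then filter" pair, for example, hints at a habitual narrowing step in the user's workflow.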

    BeSocratic: An Intelligent Tutoring System for the Recognition, Evaluation, and Analysis of Free-form Student Input

    This dissertation describes a novel intelligent tutoring system, BeSocratic, which aims to help fill the gap between simple multiple-choice systems and free-response systems. BeSocratic focuses on targeting questions that are free-form in nature yet defined to the point which allows for automatic evaluation and analysis. The system includes a set of modules which provide instructors with tools to assess student performance. Beyond text boxes and multiple-choice questions, BeSocratic contains several modules that recognize, evaluate, provide feedback on, and analyze student-drawn structures, including Euclidean graphs, chemistry molecules, computer science graphs, and simple drawings. Our system uses a visual, rule-based authoring system which enables the creation of activities for use within science, technology, engineering, and mathematics classrooms. BeSocratic records each action that students make within the system. Using a set of post-analysis tools, teachers have the ability to examine both individual and group performances. We accomplish this using hidden Markov model-based clustering techniques and visualizations. These visualizations can help teachers quickly identify common strategies and errors for large groups of students. Furthermore, analysis results can be used directly to improve activities through advanced detection of student errors and refined feedback. BeSocratic activities have been created and tested at several universities. We report specific results from several activities, and discuss how BeSocratic's analysis tools are being used with data from other systems. We specifically detail two chemistry activities and one computer science activity: (1) an activity focused on improving mechanism use, (2) an activity which assesses student understanding of Gibbs energy, and (3) an activity which teaches students the fundamentals of splay trees.
In addition to analyzing data collected from students within BeSocratic, we share our visualizations and results from analyzing data gathered with another educational system, PhET.
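The dissertation's clustering is HMM-based; as a simpler, hypothetical building block for grouping student strategies, one could compare recorded action sequences by edit distance (all action names below are invented for illustration):

```python
def edit_distance(a, b):
    """Levenshtein distance between two action sequences, computed with a
    rolling row; sequences at small distance follow similar strategies."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            # prev holds dp[i-1][j-1]; dp[j] still holds dp[i-1][j].
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

s1 = ["draw", "erase", "draw", "submit"]
s2 = ["draw", "draw", "submit"]
print(edit_distance(s1, s2))  # 1 (one extra "erase" step)
```

Pairwise distances like this can feed any standard clustering routine to surface common strategies and error patterns.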

    Holograph: A Tool for Assessing the Impact of Resource Assignment on Business Process Performance Based on Event Logs

    This thesis aims to identify whether variations in the performance of a business process can be explained by the resource allocation observed in an event log. This aim is pursued by closely analyzing the logs produced by a process-aware information system. The approach addresses common problems in this area, such as overcomplicated, hard-to-understand output, or tools that are not specialized for the task, by building a method that considers factors such as individual performance versus group performance, the moments at which resources are involved, and the process variants in which they take part. Through this method, the goal is to obtain a result that is meaningful from different business points of view and helps answer questions such as: which resources are best suited for a given task? Which groups of resources work together most efficiently? 
In order to evaluate the benefits and usefulness of the approach, a web application called Holograph was implemented following the proposed guidelines. The approach was validated via an experiment involving a group of IT management students with prior knowledge of process mining.
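The abstract does not specify Holograph's computations; as a hypothetical first cut at the kind of resource-performance question it targets, one could aggregate average task durations per resource from a completed-activity event log (field layout assumed):

```python
from collections import defaultdict

def avg_duration_per_resource(event_log):
    """Average task duration per resource from an event log of
    (resource, activity, duration_hours) tuples; layout is assumed."""
    totals = defaultdict(lambda: [0.0, 0])  # resource -> [sum, count]
    for resource, _activity, duration in event_log:
        totals[resource][0] += duration
        totals[resource][1] += 1
    return {r: s / n for r, (s, n) in totals.items()}

log = [
    ("alice", "review", 2.0),
    ("alice", "review", 4.0),
    ("bob", "review", 3.0),
]
print(avg_duration_per_resource(log))  # {'alice': 3.0, 'bob': 3.0}
```

Extending the grouping key to pairs of resources would sketch the "which groups work well together" question in the same way.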

    A two-stage framework for designing visual analytics systems to augment organizational analytical processes

    A perennially interesting research topic in the field of visual analytics is how to effectively develop systems that support organizational knowledge workers' decision-making and reasoning processes. The primary objective of a visual analytics system is to facilitate analytical reasoning and the discovery of insights through interactive visual interfaces. It also enables the transfer of capability and expertise from where it resides to where it is needed, across individuals and organizations as necessary. The problem, however, is that most domain analytical practices vary from organization to organization. This leads to diverse designs of visual analytics systems for incorporating domain analytical processes, making it difficult to generalize success from one domain to another. Exacerbating this problem is the dearth of general models of analytical workflows available to enable such timely and effective designs. To alleviate these problems, this dissertation presents a two-stage framework for informing the design of a visual analytics system. This two-stage design framework builds upon and extends current practices pertaining to analytical workflow and focuses, in particular, on investigating its effect on the design of visual analytics systems for organizational environments. It aims to empower organizations with more systematic and purposeful information analyses by modeling the domain users' reasoning processes. The first stage in this framework is the Observation and Designing stage, in which a visual analytics system is designed and implemented to abstract and encapsulate general organizational analytical processes through extensive collaboration with domain users. The second stage is the User-centric Refinement stage, which aims at interactively enriching and refining the already encapsulated domain analysis process based on understanding users' intentions through analyzing their task behavior. 
To implement this framework in the process of designing a visual analytics system, this dissertation proposes four general design recommendations that, when followed, empower such systems to bring users closer to the center of their analytical processes. This dissertation makes three primary contributions. First, it presents a general characterization of the analytical workflow in organizational environments. This characterization addresses the current lack of such an analytical model and further represents a set of domain analytical tasks that are commonly applicable across organizations. Second, this dissertation describes a two-stage framework for facilitating domain users' workflows by integrating their analytical models into interactive visual analytics systems. Finally, this dissertation presents recommendations and suggestions on enriching and refining domain analysis by capturing and analyzing knowledge workers' analysis processes. To exemplify the generalizability of these design recommendations, this dissertation presents three visual analytics systems developed following the proposed recommendations: Taste for Xerox Corporation, OpsVis for Microsoft, and IRSV for the U.S. Department of Transportation. All of these systems have been deployed to domain knowledge workers and adopted in their analytical practices. Extensive empirical evaluations are further conducted to demonstrate the efficacy of these systems in facilitating domain analytical processes.

    A visual analytics approach for passing strategies analysis in soccer using geometric features

    Passing strategies analysis has always been of interest in soccer research. Since the beginnings of the game, managers have used scouting, video footage, training drills, and data feeds to collect information about tactics and player performance. However, passing strategies are dynamic and complex enough to make the game's dynamics hard to understand. Furthermore, there is a growing demand for pattern detection and passing sequence analysis, popularized by FC Barcelona's tiki-taka. We propose an approach to abstract passing strategies and group them based on the geometry of the ball trajectory. To analyse passing sequences, we introduce an interactive visualization scheme to explore the frequency of usage, spatial location, and time of occurrence of the sequences. The frequency stripes visualization provides an overview of passing group frequency in three pitch regions: defense, middle, and attack. A trajectory heatmap coordinated with a passing timeline allows for the exploration of the most recurrent passing shapes in the temporal and spatial domains. Results show eight common ball trajectories for sequences of three passes, which depend on player positioning and on the angle of the pass. We demonstrate the potential of our approach with data from the Brazilian league in several case studies, and report feedback from a soccer expert.
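The abstract names pass angle as a geometric feature without giving formulas; one plausible sketch of such a feature is the signed turning angle of the ball trajectory at the middle point of a three-pass sequence (coordinates and names below are illustrative):

```python
import math

def turning_angle(p0, p1, p2):
    """Signed turning angle (degrees) of the ball path at p1; a
    hypothetical geometric feature for grouping passing shapes."""
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])  # heading of first pass
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])  # heading of second pass
    deg = math.degrees(a2 - a1)
    return (deg + 180) % 360 - 180  # normalize to [-180, 180)

# Pass endpoints on a pitch (metres): straight ball, then a 90-degree left turn.
print(turning_angle((0, 0), (10, 0), (10, 10)))
```

Sequences with similar turning-angle profiles could then be clustered into the recurring trajectory shapes the abstract reports.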

    Supporting Project Comprehension with Revision Control System Repository Analysis

    Context: Project comprehension is an activity relevant to all aspects of software engineering, from requirements specification to maintenance. The historical, transactional data stored in revision control systems can be mined and analysed to produce a great deal of information about a project. Aims: This research aims to explore how the data-mining, analysis and presentation of revision control systems can be used to augment aspects of project comprehension, including change prediction, maintenance, visualization, management, profiling, sampling and assessment. Method: A series of case studies investigate how transactional data can be used to support project comprehension. A thematic analysis of revision logs is used to explore the development process and developer behaviour. A benchmarking study of a history-based model of change prediction is conducted to assess how successfully such a technique can be used to augment syntax-based models. A visualization tool is developed for managers of student projects with the aim of evaluating what visualizations best support their roles. Finally, a quasi-experiment is conducted to determine how well an algorithmic model can automatically select a representative sample of code entities from a project, in comparison with expert strategies. Results: The thematic analysis case study classified maintenance activities in 22 undergraduate projects and four real-world projects. The change prediction study calculated information retrieval metrics for 34 undergraduate projects and three real-world projects, as well as an in-depth exploration of the model's performance and applications in two selected projects. File samples for seven projects were generated by six experts and three heuristic models and compared to assess agreement rates, both within the experts and between the experts and the models. 
Conclusions: When the results from each study are evaluated together, the evidence strongly shows that the information stored in revision control systems can indeed be used to support a range of project comprehension activities in a manner which complements existing, syntax-based techniques. The case studies also help to develop the empirical foundation of repository analysis in the areas of visualization, maintenance, sampling, profiling, and management; the research also shows that students can be viable substitutes for industrial practitioners in certain areas of software engineering research, which weakens one of the primary obstacles to empirical studies in these areas.
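The thesis's change-prediction model is not specified in this abstract; a common core signal behind history-based change prediction, shown here as a hypothetical sketch, is how often pairs of files change together in the commit history:

```python
from collections import Counter
from itertools import combinations

def co_change_counts(commits):
    """Count how often each file pair appears in the same commit;
    high co-change counts suggest files likely to change together again."""
    pairs = Counter()
    for files in commits:
        # Deduplicate and sort so each unordered pair has one canonical key.
        for pair in combinations(sorted(set(files)), 2):
            pairs[pair] += 1
    return pairs

history = [
    ["a.py", "b.py"],
    ["a.py", "b.py", "c.py"],
    ["c.py"],
]
print(co_change_counts(history).most_common(1))  # [(('a.py', 'b.py'), 2)]
```

Given a file being edited, ranking its partners by co-change count yields a simple history-based prediction that can complement syntax-based models.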