
    Visual Analytics in Software Maintenance: Challenges and Opportunities

    Visual analytics (VA) is an emerging science at the crossroads of data and information visualization, graphics, data mining, and knowledge representation, with many successful applications in engineering, business and finance, security, geosciences, e-governance, and health. Tools using visualization, data mining, and data analysis are also prominently present in a different field: software maintenance. However, an integrated VA approach is relatively new for this field. In this paper, we discuss the specific challenges and particularities of applying VA in software engineering, and highlight the added value of a VA approach, as distilled by us from several large-scale industrial software engineering projects.

    Exploranative Code Quality Documents

    Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into the text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data, and we report lessons learned in a broader scope. Comment: IEEE VIS VAST 2019
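    The abstract mentions template-based natural language generation driven by software metrics. The following is a minimal sketch of that general idea, assuming hypothetical metric names, thresholds, and sentence templates that are not taken from the paper:

    ```python
    # Minimal sketch of template-based text generation from code metrics.
    # Metric names, thresholds, and templates are illustrative assumptions,
    # not the authors' actual templates or metric set.

    METRIC_TEMPLATES = {
        "cyclomatic_complexity": (
            "The method '{name}' has a cyclomatic complexity of {value}, "
            "which is {judgement} the recommended threshold of {threshold}."
        ),
        "lines_of_code": (
            "'{name}' contains {value} lines of code ({judgement} the "
            "suggested limit of {threshold})."
        ),
    }

    THRESHOLDS = {"cyclomatic_complexity": 10, "lines_of_code": 60}


    def explain(metric: str, name: str, value: float) -> str:
        """Fill a sentence template for one metric measurement."""
        threshold = THRESHOLDS[metric]
        judgement = "above" if value > threshold else "within"
        return METRIC_TEMPLATES[metric].format(
            name=name, value=value, judgement=judgement, threshold=threshold
        )


    if __name__ == "__main__":
        print(explain("cyclomatic_complexity", "parseConfig", 14))
        print(explain("lines_of_code", "parseConfig", 42))
    ```

    In the paper's setting, such generated sentences would be embedded in the interactive document and linked to the corresponding visualizations.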

    Interaction design in multidimensional visualization: techniques for multidimensional data visualization, exploration and visual analytics

    University of Technology Sydney, Faculty of Engineering and Information Technology. Interaction is an overloaded term in information visualization. Essentially every software tool is interactive, but mostly through the manipulation of a widget; broadly speaking, a visualization is just a software application. What makes the interactive component of a visualization truly distinctive is how well it supports an arbitrary selection of data directly in the interface in order to facilitate subsequent analytic tasks. This is challenging due to over-plotting and visual clutter in the multidimensional space, a phenomenon commonly known as the curse of dimensionality. Data selection is a frontier of visualization, yet many multidimensional visualizations claiming to be interactive address only changes of view, without explicitly specifying the core technique by which such selections are materialized; often the interactive component is achieved only through traditional widgets. To overcome the complexity of truly interacting with multidimensional data for effective visual analytics, we first propose an interactive framework for better understanding the problem domain. Dynamic data selection is materialized by a novel technique called the Hierarchical Virtual Node, which enables an application to interact with data directly in parallel coordinates in ways that would otherwise be impossible or difficult to achieve with existing methods. It works well even under the curse of dimensionality and offers several advantages over other approaches; for example, a single mouse click suffices to select a set of data items. To support efficient visual analytics, a set of analytic tasks is also developed in each layer of the proposed framework.
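    The abstract does not spell out how the Hierarchical Virtual Node works internally; the sketch below only illustrates the general idea of hierarchical, one-click selection on a parallel-coordinates axis, with class and method names that are assumptions rather than the thesis's actual API:

    ```python
    # Rough sketch of hierarchical selection on a parallel-coordinates axis:
    # data items are grouped into a tree of "virtual nodes" so a single click
    # on an inner node selects every item underneath it. Illustrative only.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class VirtualNode:
        label: str
        children: List["VirtualNode"] = field(default_factory=list)
        item_ids: List[int] = field(default_factory=list)  # items attached here

        def select(self) -> List[int]:
            """Return all data-item ids in this subtree (one-click selection)."""
            ids = list(self.item_ids)
            for child in self.children:
                ids.extend(child.select())
            return ids


    # Example: an axis whose values are bucketed into two coarse groups.
    root = VirtualNode("price", [
        VirtualNode("low", item_ids=[0, 3, 7]),
        VirtualNode("high", [VirtualNode("very high", item_ids=[5, 9])],
                    item_ids=[1, 2]),
    ])

    print(root.children[1].select())  # clicking "high" -> [1, 2, 5, 9]
    ```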

    Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis

    Exploring data requires a fast feedback loop from the analyst to the system, with a latency below about 10 seconds because of human cognitive limitations. When data becomes large or analysis becomes complex, sequential computations can no longer be completed in a few seconds and data exploration is severely hampered. This article describes a novel computation paradigm called Progressive Computation for Data Analysis, or more concisely Progressive Analytics, that provides a low-latency guarantee at the programming-language level by performing computations in a progressive fashion. Moving progressive computation to the language level relieves the programmers of exploratory data analysis systems from implementing the whole analytics pipeline in a progressive way from scratch, streamlining the implementation of scalable exploratory data analysis systems. This article describes the new paradigm through a prototype implementation called ProgressiVis and explains the requirements it implies through examples. Comment: 10 pages
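    To make the paradigm concrete, here is a minimal sketch of progressive computation: process the data in chunks and yield an updated partial result after each chunk, so a caller can refresh the display within a latency budget instead of waiting for the full pass. This illustrates the general idea only and does not reflect ProgressiVis's actual API:

    ```python
    # Progressive (chunked) computation sketch: a running mean refined chunk
    # by chunk, so intermediate estimates are available with low latency.

    from typing import Iterator, Sequence


    def progressive_mean(data: Sequence[float], chunk_size: int = 1000) -> Iterator[float]:
        """Yield the running mean after each chunk of `chunk_size` values."""
        total, count = 0.0, 0
        for start in range(0, len(data), chunk_size):
            chunk = data[start:start + chunk_size]
            total += sum(chunk)
            count += len(chunk)
            yield total / count  # partial, progressively refined estimate


    if __name__ == "__main__":
        values = [float(i) for i in range(10_000)]
        for i, estimate in enumerate(progressive_mean(values, chunk_size=2500)):
            print(f"after chunk {i}: mean ~ {estimate:.1f}")
    ```

    A visualization front end would consume each yielded estimate as it arrives, updating the display while the computation continues in the background.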

    Visual analytics in FCA-based clustering

    Visual analytics is a subdomain of data analysis which combines both human and machine analytical abilities and is applied mostly in decision-making and data mining tasks. Triclustering, based on Formal Concept Analysis (FCA), was developed to detect groups of objects with similar properties under similar conditions. It is used in Social Network Analysis (SNA) and is a basis for certain types of recommender systems. The problem with triclustering algorithms is that they do not always produce meaningful clusters. This article describes a specific triclustering algorithm and a prototype of a visual analytics platform for working with the obtained clusters. This tool is designed as a testing framework and is intended to help an analyst grasp the results of triclustering and recommender algorithms, and to make decisions on the meaningfulness of certain triclusters and recommendations. Comment: 11 pages, 3 figures, 2 algorithms; in Supplementary Proceedings of the 3rd International Conference on Analysis of Images, Social Networks and Texts (AIST 2014), Vol. 1197, CEUR-WS.org, 2014
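    The abstract does not give the algorithm itself; the following is a generic sketch in the spirit of FCA-based triclustering over (object, attribute, condition) triples, where each triple seeds a candidate tricluster from everything co-occurring with two of its components. The article's specific algorithm and any density or meaningfulness checks may differ:

    ```python
    # Naive tricluster candidates from (object, attribute, condition) triples.
    # Illustrative sketch only; not the article's exact algorithm.

    from typing import FrozenSet, Set, Tuple

    Triple = Tuple[str, str, str]  # (object, attribute, condition)


    def triclusters(triples: Set[Triple]) -> Set[Tuple[FrozenSet[str], ...]]:
        result = set()
        for g, m, b in triples:
            objs = frozenset(g2 for (g2, m2, b2) in triples if m2 == m and b2 == b)
            attrs = frozenset(m2 for (g2, m2, b2) in triples if g2 == g and b2 == b)
            conds = frozenset(b2 for (g2, m2, b2) in triples if g2 == g and m2 == m)
            result.add((objs, attrs, conds))
        return result


    data = {
        ("user1", "likes", "movies"),
        ("user1", "rates", "movies"),
        ("user2", "likes", "movies"),
    }
    for cluster in triclusters(data):
        print(cluster)
    ```

    A visual analytics front end, as described in the article, would then let the analyst inspect such candidate triclusters and judge which of them are meaningful.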

    Designing Improved Sediment Transport Visualizations

    Monitoring, or more commonly, modeling of sediment transport in the coastal environment is a critical task with relevance to coastline stability, beach erosion, tracking environmental contaminants, and safety of navigation. The increased intensity and regularity of storms such as Superstorm Sandy heighten the importance of our understanding of sediment transport processes. A weakness of current modeling capabilities is the difficulty of easily visualizing the results in an intuitive manner. Many of the available visualization software packages display only a single variable at once, usually as a two-dimensional, plan-view cross-section. With such limited display capabilities, sophisticated 3D models are undermined in both the interpretation of results and the dissemination of information to the public. Here we explore a subset of existing modeling capabilities (specifically, modeling scour around man-made structures) and visualization solutions, examine their shortcomings, and present a design for a 4D visualization for sediment transport studies that is based on perceptually-focused data visualization research and on recent and ongoing developments in multivariate displays. Vector and scalar fields are co-displayed, yet kept independently identifiable by utilizing human perception's separation of color, texture, and motion. Bathymetry, sediment grain-size distribution, and forcing hydrodynamics are a subset of the variables investigated for simultaneous representation. Direct interaction with field data is tested to support rapid validation of sediment transport model results. Our goal is a tight integration of both simulated data and real-world observations to support analysis and simulation of the impact of major sediment transport events such as hurricanes. We unite modeled results and field observations within a geodatabase designed as an application schema of the Arc Marine Data Model. Our real-world focus is the Redbird Artificial Reef Site, roughly 18 nautical miles offshore of Delaware Bay, Delaware, where repeated surveys have identified active scour and bedform migration in 27 m water depth amongst the more than 900 deliberately sunken subway cars and vessels. Coincidentally collected high-resolution multibeam bathymetry, backscatter, and side-scan sonar data from surface and autonomous underwater vehicle (AUV) systems, along with complementary sub-bottom, grab sample, bottom imagery, and wave and current (via ADCP) datasets, provide the basis for analysis. This site is particularly attractive due to overlap with the Delaware Bay Operational Forecast System (DBOFS), a model that provides historical and forecast oceanographic data that can be tested in hindcast against significant changes observed at the site during Superstorm Sandy and in predicting future changes through small-scale modeling around the individual reef objects.
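    As a simple illustration of the co-display idea described above, the sketch below overlays a scalar field (synthetic bathymetry shown as a colormap) with a vector field (synthetic currents shown as arrows) so the two remain independently readable. The data are made up for illustration and are not the authors' model output, geodatabase, or display design:

    ```python
    # Hedged sketch: scalar field as a colormap with a vector field overlaid
    # as arrows. Synthetic data; purely illustrative.

    import numpy as np
    import matplotlib.pyplot as plt

    x, y = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
    depth = -20 + 2 * np.sin(x) * np.cos(y)      # scalar field: depth (m)
    u, v = 0.5 * np.cos(y), 0.5 * np.sin(x)      # vector field: current (m/s)

    fig, ax = plt.subplots(figsize=(6, 5))
    mesh = ax.pcolormesh(x, y, depth, cmap="viridis", shading="auto")
    ax.quiver(x[::4, ::4], y[::4, ::4], u[::4, ::4], v[::4, ::4], color="white")
    fig.colorbar(mesh, ax=ax, label="depth (m)")
    ax.set_title("Scalar depth field with current vectors overlaid")
    plt.show()
    ```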