
    Exploranative Code Quality Documents

    Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, driven by data from software metrics. The interactive document is enriched with different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
    Comment: IEEE VIS VAST 2019
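    As a rough illustration of the template-based generation step described in the abstract, the following Python sketch turns software metric values into explanatory sentences. The metric names, thresholds, and templates are illustrative assumptions for this example, not the paper's actual rules.

        # Minimal sketch of template-based natural language generation from
        # software metrics. Metric names, thresholds, and templates are
        # illustrative assumptions, not taken from the paper.

        METRIC_TEMPLATES = {
            "cyclomatic_complexity": (
                15,  # assumed threshold
                "Method '{name}' has a cyclomatic complexity of {value}, "
                "which exceeds the threshold of {threshold}; consider splitting it.",
                "Method '{name}' has a cyclomatic complexity of {value}, "
                "which is within the acceptable range.",
            ),
            "lines_of_code": (
                80,  # assumed threshold
                "Method '{name}' spans {value} lines, above the threshold of "
                "{threshold}; long methods tend to be harder to maintain.",
                "Method '{name}' spans {value} lines, a manageable size.",
            ),
        }

        def explain(name, metrics):
            """Fill templates with metric values to produce explanatory text."""
            sentences = []
            for metric, value in metrics.items():
                threshold, above, below = METRIC_TEMPLATES[metric]
                template = above if value > threshold else below
                sentences.append(template.format(name=name, value=value, threshold=threshold))
            return " ".join(sentences)

        print(explain("parseConfig", {"cyclomatic_complexity": 22, "lines_of_code": 64}))

    In the paper's approach, such generated sentences are additionally linked to interactive visualizations; this sketch covers only the text-generation step.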

    See It to Believe It? The Role of Visualisation in Systems Research


    Improving Automated Software Testing while re-engineering legacy systems in the absence of documentation

    Legacy software systems are essential assets that contain an organization's valuable business logic. Because of the outdated technologies and methods used in these systems, they are challenging to maintain and extend. Organizations therefore need to decide whether to redevelop or re-engineer a legacy system. Although re-engineering is in most cases the safer and less expensive choice, it carries risks such as failing to meet the expected quality and delays due to testing blockades. These risks are even more severe when the legacy system lacks adequate documentation. A comprehensive testing strategy, which includes automated tests and reliable test cases, can substantially reduce these risks. To mitigate the hazards associated with re-engineering, this thesis presents three studies that improve the testing process.

    Our first study introduces a new testing model for the re-engineering process and investigates test automation solutions to detect defects in the early re-engineering stages. We implemented this model on the Cold Region Hydrological Model (CRHM) application and discovered bugs that would likely not have been found manually. Although this approach helped us discover a large number of software defects, designing test cases is very time-consuming in the absence of documentation, especially for large systems.

    Therefore, in our second study, we investigated an approach to generate test cases automatically from user footprints. To do this, we extended an existing tool to collect user actions and the legacy system's reactions, including database and file system changes. We then analyzed the data based on the order and timing of user actions and generated human-readable test cases. Our evaluation shows that this approach can detect more bugs than existing tools. Moreover, the test cases generated with this approach contain detailed oracles that make them suitable for both black-box and white-box testing.

    Many scientific legacy systems, such as CRHM, are data-driven: they take large amounts of data as input and produce massive outputs after applying mathematical models. Applying test cases and finding bugs is more demanding when dealing with such volumes of data. Hence, in our third study, we created a comparative visualization tool (ComVis) to compare a legacy system's output after each change. Visualization helps testers find data issues resulting from newly introduced bugs. Twenty participants took part in a user study in which they were asked to find data issues using ComVis and the visualization tool embedded in CRHM. Our user study shows that ComVis can find 51% more data issues than the legacy system's embedded visualization tool. Results from the NASA-TLX assessment and a thematic analysis of open-ended questions about each task also show that users prefer ComVis over the built-in visualization tool. We believe the approaches and systems introduced in this thesis will significantly reduce the risks associated with the re-engineering process.
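    As a rough illustration of the comparison step that a tool like ComVis supports visually, the following Python sketch flags rows where a model's output diverges after a change. The file names, column name, and tolerance are assumptions for the example, not details from the thesis.

        # Toy sketch of comparing a legacy system's output before and after a
        # change, in the spirit of the regression checks ComVis visualizes.
        # File names, column layout, and tolerance are assumptions.

        import csv

        def load_series(path, column):
            """Read one numeric output column from a CSV file."""
            with open(path, newline="") as f:
                return [float(row[column]) for row in csv.DictReader(f)]

        def flag_deviations(before, after, tolerance=1e-6):
            """Return (row, old, new) triples where outputs diverge beyond tolerance."""
            return [
                (i, b, a)
                for i, (b, a) in enumerate(zip(before, after))
                if abs(b - a) > tolerance
            ]

        before = load_series("output_before.csv", "streamflow")
        after = load_series("output_after.csv", "streamflow")
        for row, old, new in flag_deviations(before, after):
            print(f"row {row}: {old} -> {new}")

    ComVis presents such differences graphically so testers can spot anomalies; this sketch shows only the underlying numeric comparison.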