62 research outputs found

    Using evidential reasoning to make qualified predictions of software quality

    Software quality is commonly characterised in a top-down manner. High-level notions such as quality are decomposed into hierarchies of sub-factors, ranging from abstract notions such as maintainability and reliability to lower-level notions such as test coverage or team size. Assessments of abstract factors are derived from relevant sources of information about their respective lower-level sub-factors, by surveying sources such as metrics data and inspection reports. This can be difficult because (1) evidence might not be available, (2) interpretations of the data with respect to certain quality factors may be subject to doubt and intuition, and (3) there is no straightforward means of blending hierarchies of heterogeneous data into a single coherent and quantitative prediction of quality. This paper shows how Evidential Reasoning (ER), a mathematical technique for reasoning about uncertainty and evidence, can address this problem. It enables the quality assessment to proceed in a bottom-up manner, through low-level assessments that make any uncertainty explicit, and by automatically propagating these up to higher-level 'belief functions' that accurately summarise the developer's opinion and make explicit any doubt or ignorance.
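    As a rough illustration of the idea in this abstract, the sketch below combines two hypothetical low-level assessments, each expressed as belief mass over {good, poor} plus explicit ignorance, into a higher-level belief function using Dempster's rule of combination. All factor names and mass values here are invented, and the paper's ER algorithm is more elaborate than this.

```python
# Minimal sketch: combining two uncertain quality assessments with
# Dempster's rule. Frame of discernment is {good, poor}; mass on the
# whole frame ("either") represents explicit ignorance.

from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for masses over frozenset keys."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass landing on contradictory evidence
    # Normalise by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

GOOD, POOR = frozenset({'good'}), frozenset({'poor'})
EITHER = GOOD | POOR  # explicit "don't know"

# Hypothetical low-level assessments: test coverage and an inspection report.
coverage   = {GOOD: 0.6, POOR: 0.1, EITHER: 0.3}
inspection = {GOOD: 0.5, POOR: 0.2, EITHER: 0.3}

belief = combine(coverage, inspection)
for subset, mass in belief.items():
    print(sorted(subset), round(mass, 3))
```

    Note how the residual mass on EITHER keeps the combined verdict honest: the result still reports how much ignorance remains after blending both sources.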

    Model clone detection in practice

    fortiss gGmbH

    Conclave: ontology-driven measurement of semantic relatedness between source code elements and problem domain concepts

    Software maintainers are often challenged with source code changes to improve software systems, or eliminate defects, in unfamiliar programs. To undertake these tasks, a sufficient understanding of the system (or at least a small part of it) is required. One of the most time-consuming tasks in this process is locating which parts of the code are responsible for some key functionality or feature. Feature (or concept) location techniques address this problem. This paper introduces Conclave, an environment for software analysis, and in particular the Conclave-Mapper tool, which provides a feature location facility. The tool explores natural language terms used in programs (e.g. function and variable names) and, using textual analysis and a collection of Natural Language Processing techniques, computes synonymous sets of terms. These sets are used to score relatedness between program elements and search queries or problem domain concepts, producing sorted ranks of program elements that address the search criteria or concepts. An empirical study is also discussed to evaluate the underlying feature location technique.
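    A toy sketch of this style of feature location follows; the synonym sets, identifiers and query are invented (Conclave derives its synonym sets with proper NLP pipelines). Identifiers are split into terms, both sides are expanded with synonyms, and program elements are ranked by term overlap with the query.

```python
# Toy sketch of synonym-based feature location (not Conclave's actual
# implementation): split identifiers into terms, expand terms and query
# with synonym sets, and rank program elements by overlap.

import re

# Hypothetical synonym sets; the paper computes these with NLP techniques.
SYNSETS = [
    {'delete', 'remove', 'erase'},
    {'user', 'account', 'member'},
    {'file', 'document'},
]

def expand(terms):
    """Expand a set of terms with every synonym set it intersects."""
    expanded = set(terms)
    for syn in SYNSETS:
        if expanded & syn:
            expanded |= syn
    return expanded

def split_identifier(name):
    """Split camelCase / snake_case identifiers into lower-case terms."""
    parts = re.sub(r'([a-z])([A-Z])', r'\1 \2', name).replace('_', ' ')
    return {t.lower() for t in parts.split()}

def rank(elements, query):
    """Score each program element by expanded-term overlap with the query."""
    q = expand({t.lower() for t in query.split()})
    scores = {name: len(expand(split_identifier(name)) & q)
              for name in elements}
    return sorted(scores.items(), key=lambda kv: -kv[1])

elements = ['removeAccount', 'openDocument', 'eraseFile', 'renderMenu']
print(rank(elements, 'delete user'))  # removeAccount scores highest
```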

    Tool support for continuous quality controlling

    Over time, software systems suffer gradual quality decay, and costs can therefore rise if organizations fail to take proactive countermeasures. Quality control is the first step to avoiding this cost trap. Continuous quality assessments help users identify quality problems early, when their removal is still inexpensive; they also aid decision making by providing an integrated view of a software system's current status. As a side effect, continuous and timely feedback helps developers and maintenance personnel improve their skills and thereby decreases the likelihood of future quality defects. To make regular quality control feasible, it must be highly automated, and assessment results must be presented in an aggregated manner to avoid overwhelming users with data. This article offers an overview of tools that aim to address these issues. The authors also discuss their own flexible, open-source toolkit, which supports the creation of dashboards for quality control.
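    As a hint of what aggregated presentation can look like, here is a minimal, hypothetical sketch (not the authors' toolkit): leaf metrics are rated red/yellow/green, and each quality factor takes the worst rating found among its children, so a dashboard can show a handful of factor ratings instead of raw data.

```python
# Minimal, hypothetical sketch of rolling metric assessments up into
# dashboard ratings: leaves are rated red/yellow/green, and parents take
# the most severe rating of their children.

ORDER = {'green': 0, 'yellow': 1, 'red': 2}

def rate_coverage(pct):
    """Example leaf rating rule for test coverage (thresholds invented)."""
    return 'green' if pct >= 80 else 'yellow' if pct >= 60 else 'red'

def aggregate(node):
    """Return the worst (most severe) rating found in the subtree."""
    if isinstance(node, str):              # leaf: an already-rated metric
        return node
    children = [aggregate(child) for child in node.values()]
    return max(children, key=lambda r: ORDER[r])

# Hypothetical quality model with pre-rated leaves.
quality = {
    'maintainability': {
        'clone coverage': 'yellow',
        'comment ratio': 'green',
    },
    'reliability': {
        'test coverage': rate_coverage(72),   # -> 'yellow'
        'open defects': 'red',
    },
}

for factor, subtree in quality.items():
    print(factor, '->', aggregate(subtree))
print('overall ->', aggregate(quality))
```

    Worst-of-children is only one possible aggregation rule; weighted averages or trend-based rules fit the same tree structure.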

    From Reality to Programs and (Not Quite) Back Again

    Making explicit the mappings between real-world concepts and the program elements that implement them is an essential step in understanding, using or evaluating the public interface of programs, libraries and other collections of classes that model core domain concepts. Unfortunately, due to the large abstraction gap between the modeled domain and today's programming languages, the mapping is most of the time ambiguous, as concepts and relations from the real world are distorted and diffused in the code. In this paper we present a comprehensive formal framework for describing the many-to-many mappings between domain concepts and program elements, real-world relations and program relations, and real-world concept names and program identifiers. This framework allows us to describe and discuss typical classes of diffusion of domain knowledge in code. Based on our formal framework we describe an algorithm to recover the mappings between entities from an ontology and program elements. We illustrate the framework using examples from the Java standard library.
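    A simplistic sketch of what recovering such a mapping could look like appears below; the matching is naive name similarity and the concept list is invented, while the paper's recovery algorithm is considerably more sophisticated. Class names from the Java standard library stand in for the program elements, and pairings are kept rather than forced to be one-to-one, so the result may be many-to-many.

```python
# Simplistic sketch of recovering many-to-many mappings between ontology
# concepts and program elements via name similarity (hypothetical data;
# not the paper's actual recovery algorithm).

import re
from difflib import SequenceMatcher

def terms(identifier):
    """Lower-cased word set of a camelCase identifier."""
    spaced = re.sub(r'([a-z])([A-Z])', r'\1 \2', identifier)
    return {w.lower() for w in spaced.split()}

def similarity(concept, element):
    """Best pairwise string similarity between concept and identifier terms."""
    return max(SequenceMatcher(None, c, e).ratio()
               for c in terms(concept) for e in terms(element))

concepts = ['Calendar', 'TimeZone', 'Locale']
elements = ['GregorianCalendar', 'SimpleTimeZone', 'Calendar', 'LocaleData']

# Keep every pairing above a threshold: one concept may map to several
# elements and vice versa.
mapping = [(c, e) for c in concepts for e in elements
           if similarity(c, e) >= 0.8]
for concept, element in mapping:
    print(f'{concept} <-> {element}')
```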