
    A Quality Model for Actionable Analytics in Rapid Software Development

    Background: Accessing relevant data on the product, process, and usage perspectives of software, as well as integrating and analyzing such data, is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called the Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards, we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. They also considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that take more time to find manually and adds transparency among the perspectives of system, process, and usage.
    Comment: This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online.
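
    The abstract describes a model that aggregates raw metrics into product and process factors that product owners can act on. As a rough illustration of that idea only, the Python sketch below combines normalized metrics into a factor score with a weighted mean; the metric names, values, and weights are hypothetical and are not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class Metric:
            """A metric already normalized to [0, 1] against its target (hypothetical)."""
            name: str
            value: float

        def factor_score(metrics, weights):
            """Aggregate normalized metrics into a product/process factor via a weighted mean."""
            total = sum(weights[m.name] for m in metrics)
            return sum(weights[m.name] * m.value for m in metrics) / total

        # Hypothetical "code quality" factor built from two illustrative metrics.
        code_quality = factor_score(
            [Metric("test_coverage", 0.72), Metric("duplication", 0.85)],
            weights={"test_coverage": 0.6, "duplication": 0.4},
        )
        print(f"code quality factor: {code_quality:.2f}")  # -> 0.77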

    A research review of quality assessment for software

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored, and methods of evaluating those factors are discussed. The quality factors important to software reuse are correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because documented properties of a software component, such as its efficiency, portability, and development history, constitute a class of factors that are important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures that have been shown to indicate software quality. However, it is believed that many factors that indicate quality have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.

    RePOR: Mimicking humans on refactoring tasks. Are we there yet?

    Refactoring is a maintenance activity that aims to improve design quality while preserving the behavior of a system. Several (semi-)automated approaches have been proposed to support developers in this maintenance activity, based on the correction of anti-patterns, which are 'poor' solutions to recurring design problems. However, little quantitative evidence exists about the impact of automatically refactored code on program comprehension, and about the contexts in which automated refactoring can be as effective as manual refactoring. Leveraging RePOR, an automated refactoring approach based on partial order reduction techniques, we performed an empirical study to investigate whether automatically refactored code structure affects the understandability of systems during comprehension tasks. (1) We surveyed 80 developers, asking them to identify, from a set of 20 refactoring changes, whether each was generated by a developer or by a tool, and to rate the refactoring changes according to their design quality; (2) we asked 30 developers to complete code comprehension tasks on 10 systems that were refactored by either a freelancer or an automated refactoring tool. To make the comparison fair, for a subset of refactoring actions that introduce new code entities, only synthetic identifiers were presented to practitioners. We measured developers' performance using the NASA task load index for their effort, the time that they spent performing the tasks, and their percentage of correct answers. Our findings, despite current technology limitations, show that it is reasonable to expect a refactoring tool to match developer code.
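
    The study compares developer performance on comprehension tasks using NASA TLX effort, completion time, and percentage of correct answers. The sketch below, with purely illustrative numbers, shows how such per-group summaries could be computed; it is not the authors' analysis or data.

        from statistics import mean

        # Purely illustrative per-participant results for comprehension tasks on
        # systems refactored manually (freelancer) or by an automated tool.
        results = [
            {"group": "manual", "tlx": 42.0, "minutes": 18.5, "correct": 0.80},
            {"group": "tool",   "tlx": 45.5, "minutes": 19.0, "correct": 0.78},
            {"group": "manual", "tlx": 39.0, "minutes": 17.0, "correct": 0.90},
            {"group": "tool",   "tlx": 41.0, "minutes": 18.0, "correct": 0.85},
        ]

        for group in ("manual", "tool"):
            rows = [r for r in results if r["group"] == group]
            print(group,
                  f"TLX={mean(r['tlx'] for r in rows):.1f}",
                  f"time={mean(r['minutes'] for r in rows):.1f} min",
                  f"correct={mean(r['correct'] for r in rows):.0%}")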

    Automatically assessing and improving code readability and understandability


    Measuring the understandability of WSDL specifications, web service understanding degree approach and system

    Web Services (WS) are fundamental software artifacts for building service-oriented applications, and they are usually reused by others. Therefore, they must be analyzed and comprehended for maintenance tasks: identification of critical parts, bug fixing, adaptation, and improvement. In this article, WSDLUD, a method aimed at measuring a priori the understanding degree (UD) of WSDL (Web Service Description Language) descriptions, is presented. In order to compute UD, several criteria useful for measuring the comprehension complexity of WSDL descriptions must be defined. These criteria are used by LSP (Logic Scoring of Preference), a multicriteria evaluation method, to produce a Global Preference value that indicates the satisfaction level of the WSDL description with respect to the evaluation focus, in this case the understanding degree. All the criteria information required by LSP is extracted from WSDL descriptions by using static analysis techniques and is processed by specific algorithms that gather semantic information. This process makes it possible to obtain a priori information about comprehension difficulty, which supports our research hypothesis that it is possible to compute the understanding degree of a WSDL description.
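
    LSP combines elementary preference scores in [0, 1] into a Global Preference value using weighted power means, whose exponent controls how conjunctive or disjunctive the aggregation is. The sketch below illustrates that aggregation step only; the criteria names, weights, and exponent are hypothetical, and the WSDL criteria extraction described in the paper is not modeled.

        def lsp_aggregate(preferences, weights, r):
            """Weighted power mean used by LSP to combine elementary preferences in [0, 1].

            r < 1 pushes toward conjunction (all criteria must be satisfied),
            r = 1 is the arithmetic mean, r > 1 pushes toward disjunction.
            """
            assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
            if r == 0:  # limit case: weighted geometric mean
                result = 1.0
                for p, w in zip(preferences, weights):
                    result *= p ** w
                return result
            return sum(w * p ** r for p, w in zip(preferences, weights)) ** (1.0 / r)

        # Hypothetical understandability criteria for a WSDL description.
        prefs = [0.8, 0.6, 0.9]        # e.g. naming, documentation, structure scores
        weights = [0.4, 0.35, 0.25]
        print(f"Global Preference: {lsp_aggregate(prefs, weights, r=-0.7):.2f}")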

    A Metrics-based Framework for Measuring the Reusability of Object-Oriented Software Components

    The critical role played by software in socioeconomic advancement has driven a rapid demand for software, creating a large backlog of affordable, quality software that needs to be written. Although software reuse is capable of addressing this issue, effective reuse is seldom achieved, so the issue remains unresolved. In order to achieve effective reuse, practitioners need to focus on reusability: the property that makes software reusable. Although the Object-Oriented Software Development (OOSD) approach is capable of improving software reusability, a way of ascertaining whether the required degree of reusability is being achieved during the OOSD process is needed. This can be achieved through measurement. The task involved in measuring the reusability of object-oriented (OO) software is to determine the major reusability attributes of reusable components, relate these attributes to the factors that influence them, link each factor to the measurable OO design features that determine it, relate each feature to appropriate metrics, and find out how these metrics collectively determine the reusability of components. A novel framework for achieving this task is proposed in this paper.
    Keywords: Software reuse, Software Reusability, Software Metrics, Software Components
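
    The framework links reusability attributes to OO design features and then to metrics computable from code. As an illustration of that chain only, the sketch below combines a few well-known OO metrics (CBO, LCOM, WMC) into a single reusability indicator; the chosen metrics, normalization thresholds, and weights are assumptions for illustration, not the framework proposed in the paper.

        def normalize_inverse(value, worst):
            """Map a 'lower is better' metric onto [0, 1], where 1 is best (illustrative)."""
            return max(0.0, 1.0 - value / worst)

        def reusability_score(cbo, lcom, wmc):
            """Combine OO design metrics into one reusability indicator (weighted mean).

            cbo: Coupling Between Objects, lcom: Lack of Cohesion (assumed in [0, 1]),
            wmc: Weighted Methods per Class. Thresholds and weights are hypothetical.
            """
            attributes = {
                "low_coupling":   (normalize_inverse(cbo, worst=20.0), 0.40),
                "high_cohesion":  (1.0 - lcom,                         0.35),
                "low_complexity": (normalize_inverse(wmc, worst=50.0), 0.25),
            }
            return sum(score * weight for score, weight in attributes.values())

        print(f"reusability: {reusability_score(cbo=6, lcom=0.3, wmc=15):.2f}")  # -> 0.70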