
    A survey on software coupling relations and tools

    Context: Coupling relations reflect the dependencies between software entities and can be used to assess the quality of a program. For this reason, a vast number of them have been developed, together with tools to compute their related metrics. However, this makes it challenging to find the coupling measures suitable for a given application. Goals: The first objective of this work is to provide a classification of the different kinds of coupling relations, together with the metrics to measure them. The second is to present an overview of the tools proposed so far by the software engineering academic community to extract these metrics. Method: This work constitutes a systematic literature review in software engineering. To retrieve the referenced publications, publicly available scientific research databases were queried using keywords inherent to software coupling. We included publications from the period 2002 to 2017, as well as highly cited earlier publications, and used a snowballing technique to retrieve further related material. Results: Four groups of coupling relations were found: structural, dynamic, semantic, and logical. A fifth set comprises approaches too recent to be considered an independent group, together with measures developed for specific environments. The investigation also retrieved tools that extract the metrics belonging to each coupling group. Conclusion: This study shows the directions followed by research on software coupling, e.g., developing metrics for specific environments. Concerning metric tools, three trends have emerged in recent years: use of visualization techniques, extensibility, and scalability. Finally, some applications of coupling metrics are presented (e.g., code smell detection), indicating possible future research directions. Public preprint [https://doi.org/10.5281/zenodo.2002001]
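
    As a rough illustration of the structural group of coupling relations named above, the sketch below counts, for each Java class in a source tree, how many other project classes it mentions by name, a crude approximation of a CBO-style (coupling between objects) count. The regex-based parsing, directory layout, and the count_structural_coupling helper are assumptions for illustration; they are not taken from the survey, and real tools use proper parsers or bytecode analysis.

```python
# Minimal sketch of a structural coupling count, roughly in the spirit of
# CBO (coupling between objects). Regex "parsing" of Java is a deliberate
# simplification for illustration only.
import re
from pathlib import Path


def count_structural_coupling(src_dir: str) -> dict[str, int]:
    """Count, per class, how many *other* project classes it mentions."""
    sources = {p.stem: p.read_text(encoding="utf-8", errors="ignore")
               for p in Path(src_dir).rglob("*.java")}
    class_names = set(sources)
    coupling = {}
    for name, text in sources.items():
        referenced = {other for other in class_names
                      if other != name
                      and re.search(rf"\b{re.escape(other)}\b", text)}
        coupling[name] = len(referenced)
    return coupling


if __name__ == "__main__":
    # Hypothetical path; replace with a real project checkout.
    for cls, cbo in sorted(count_structural_coupling("src/main/java").items()):
        print(f"{cls}: {cbo}")
```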

    Metrics Dashboard Services: A Framework for Analyzing Free/Open Source Team Repositories

    Software engineering as practiced today (especially in industry) is no longer about the stereotypical monolithic life-cycle processes (e.g., waterfall, spiral) found in most software engineering textbooks. These heavyweight methods have historically impeded progress for small and medium-sized development teams, owing to their inherent complexity and to the rather limited data collection strategies that predominated from the 1980s until relatively recently, in the mid-2000s. The discipline and practice of software engineering includes software quality, which has an established theoretical foundation for software metrics. Software metrics are a critical tool that provides continuous insight into products and processes and helps build reliable software in mission-critical environments. Using software metrics, we can perform calculations that help assess the effectiveness of the underlying software or process. The type of metrics relevant to our work is in-process metrics, which focus on a higher-level view of software quality, measuring information that can provide insight into the underlying software development process. In this thesis, we aim to develop and evaluate a metrics dashboard to support Computational Science and Engineering (CSE) software development projects. This task requires us to (1) assess how metrics are used and which general classes/types of metrics will be useful in CSE projects, (2) develop a metrics dashboard that works for teams using sites like GitHub and Bitbucket, and (3) assess the effectiveness of the dashboard in terms of project success and developer attitude towards metrics and process. The challenge is to cherry-pick the most effective practices from a large suite of tools and incorporate them into existing cloud-based workflows. As part of this thesis, we have developed a metrics dashboard based on the currently identified metric types. The tools we are developing can be incorporated into any existing GitHub-based workflow without requiring the developers to install anything.
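
    To make the idea of an in-process metric drawn from a hosted repository concrete, here is a small sketch that pulls recent commit activity through GitHub's public REST API and reports commits per author. The choice of metric, the example repository, and the use of the requests library are illustrative assumptions; this is not the thesis's actual dashboard.

```python
# Sketch: one in-process metric (commits per author over the last 30 days)
# pulled from the GitHub REST API. Unauthenticated requests are rate-limited;
# pagination is ignored for brevity.
from collections import Counter
from datetime import datetime, timedelta, timezone

import requests  # third-party; `pip install requests`


def commits_per_author(owner: str, repo: str, days: int = 30) -> Counter:
    since = (datetime.now(timezone.utc)
             - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    resp = requests.get(url, params={"since": since, "per_page": 100},
                        headers={"Accept": "application/vnd.github+json"},
                        timeout=30)
    resp.raise_for_status()
    authors = (c["commit"]["author"]["name"] for c in resp.json())
    return Counter(authors)


if __name__ == "__main__":
    # Hypothetical repository; substitute any public GitHub project.
    for author, n in commits_per_author("octocat", "Hello-World").most_common():
        print(f"{author}: {n}")
```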

    By no means: a study on aggregating software metrics

    Fault prediction models usually employ software metrics, such as SLOC, that were previously shown to be strong predictors of defects. However, metrics are usually defined at the micro level (method, class, package) and should therefore be aggregated in order to provide insight into the evolution at the macro level (system). In addition to traditional aggregation techniques such as the mean, median, or sum, econometric aggregation techniques such as the Gini, Theil, and Hoover indices have recently been proposed. In this paper, we wish to understand whether the aggregation technique influences the presence and strength of the relation between SLOC and defects. Our results indicate that the correlation is not strong and is influenced by the aggregation technique.
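
    For readers unfamiliar with the econometric aggregators named above, the sketch below computes the Gini, Theil, and Hoover indices over a toy vector of per-class SLOC values, alongside the mean and sum. The data and function names are assumptions, and the formulas are the standard textbook definitions rather than anything specific to this paper.

```python
# Standard definitions of the Gini, Theil, and Hoover indices, applied to a
# toy vector of per-class SLOC values (inputs are assumed to be positive).
import numpy as np


def gini(x: np.ndarray) -> float:
    # Mean absolute difference over all pairs, normalised by 2 * mean.
    diffs = np.abs(x[:, None] - x[None, :])
    return float(diffs.mean() / (2 * x.mean()))


def theil(x: np.ndarray) -> float:
    # Theil T index: average of (x/mean) * ln(x/mean).
    ratio = x / x.mean()
    return float(np.mean(ratio * np.log(ratio)))


def hoover(x: np.ndarray) -> float:
    # Share of total SLOC that would have to move to equalise all classes.
    return float(np.sum(np.abs(x - x.mean())) / (2 * x.sum()))


if __name__ == "__main__":
    sloc = np.array([12, 40, 40, 55, 300, 900], dtype=float)  # hypothetical
    print(f"sum={sloc.sum():.0f} mean={sloc.mean():.1f} "
          f"gini={gini(sloc):.3f} theil={theil(sloc):.3f} "
          f"hoover={hoover(sloc):.3f}")
```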

    Bayesian Hierarchical Modelling for Tailoring Metric Thresholds

    Software is highly contextual. While there are cross-cutting 'global' lessons, individual software projects exhibit many 'local' properties. This data heterogeneity makes drawing local conclusions from global data dangerous. A key research challenge is to construct locally accurate prediction models that are informed by global characteristics and data volumes. Previous work has tackled this problem using clustering and transfer learning approaches, which identify locally similar characteristics. This paper applies a simpler approach known as Bayesian hierarchical modeling. We show that hierarchical modeling supports cross-project comparisons while preserving local context. To demonstrate the approach, we conduct a conceptual replication of an existing study on setting software metric thresholds. Our emerging results show that our hierarchical model reduces model prediction error by up to 50% compared to a global approach. Comment: Short paper, published at MSR '18: 15th International Conference on Mining Software Repositories, May 28-29, 2018, Gothenburg, Sweden.
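
    To illustrate what partial pooling buys over a purely global model, here is a small PyMC sketch of a hierarchical model in which each project's mean metric value is drawn around a shared global mean. The synthetic data, priors, and the suggestion for reading a threshold off the posterior are assumptions for illustration, not the replicated study's actual model.

```python
# Hierarchical (partial-pooling) sketch for per-project metric levels.
import numpy as np
import pymc as pm  # PyMC >= 4

rng = np.random.default_rng(42)

# Hypothetical data: log metric values (e.g., log method length) for 3 projects.
project_idx = np.repeat(np.arange(3), 50)
true_means = np.array([2.0, 2.5, 3.0])
y = rng.normal(loc=true_means[project_idx], scale=0.5)

with pm.Model() as model:
    # Global ("population") level: what projects have in common.
    mu_global = pm.Normal("mu_global", mu=0.0, sigma=5.0)
    sigma_between = pm.HalfNormal("sigma_between", sigma=2.0)

    # Project level: each project's mean is drawn around the global mean,
    # so data-poor projects are shrunk toward the cross-project estimate.
    mu_project = pm.Normal("mu_project", mu=mu_global,
                           sigma=sigma_between, shape=3)

    sigma_within = pm.HalfNormal("sigma_within", sigma=2.0)
    pm.Normal("obs", mu=mu_project[project_idx],
              sigma=sigma_within, observed=y)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=42)

# A per-project "threshold" could then be read off the posterior, e.g. a high
# quantile of each project-level distribution (one plausible choice, not
# necessarily the paper's definition).
```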

    Developing Socially-Constructed Quality Metrics in Agile: A Multi-Faceted Perspective

    This research proposes the development of socially constructed metrics for quality assessment and improvement in Agile Software Development (ASD) projects. The first phase of our research includes an extensive literature review, which indicates that traditional (outcome-focused) metrics for evaluating quality are not directly transferable to adaptive ASD projects. We then conduct semi-structured interviews confirming the necessity of considering people and process aspects for quality in agile. We propose three dimensions for composite metrics in ASD, namely (1) evidence, (2) expectation, and (3) critical evaluation. This combines quantitative and qualitative information drawn from people-, process-, and outcome-related factors. The proposed model allows ASD teams to concurrently conduct quality assessment and improvement during their projects, producing innovative metrics while adhering to the core principles of the Agile Manifesto. In the next stage of our research, this reference model will be tested and validated in practice.

    The dance of classes: a stochastic model for software structure evolution

    In this study, we investigate software structure evolution and growth. We represent software structure by means of a generic macro-topology called Little House, which models the dependencies among classes of object-oriented software systems. We then define a stochastic model to predict the way software architectures evolve. The model estimates how the classes of object-oriented programs become connected to one another as the systems evolve. To define the model, we analyzed data from 81 versions of six Java-based projects, examining each pair of sequential versions of each project in order to identify a pattern of software structure evolution based on Little House. To evaluate the model, we performed two experiments: one with the data used to derive the model, and another with data from 35 releases, in total, of four open-source Java projects. In both experiments, we found a very low error rate when applying the proposed model. The evaluation suggests the model is able to predict how a software structure will evolve.
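
    The sketch below is not the Little House model itself, which the paper derives from empirical data; it is a generic preferential-attachment growth process over a class-dependency graph, included only to show the general shape of a stochastic structure-evolution model. All parameters and names are arbitrary assumptions.

```python
# Generic stochastic growth of a class-dependency graph: each new class adds
# a few outgoing dependencies, preferring classes that are already widely
# depended upon. Illustrative stand-in, not the paper's Little House model.
import random
from collections import defaultdict

random.seed(7)


def grow_dependency_graph(n_classes: int = 200, deps_per_class: int = 3):
    in_degree = defaultdict(int)   # how many classes depend on a given class
    edges = []                     # (dependent, dependency) pairs
    existing = [0]                 # start from a single seed class
    for new_cls in range(1, n_classes):
        # Weight candidate targets by (in-degree + 1): preferential attachment.
        weights = [in_degree[c] + 1 for c in existing]
        targets = set(random.choices(existing, weights=weights,
                                     k=min(deps_per_class, len(existing))))
        for t in targets:
            edges.append((new_cls, t))
            in_degree[t] += 1
        existing.append(new_cls)
    return edges, in_degree


if __name__ == "__main__":
    edges, in_deg = grow_dependency_graph()
    top = sorted(in_deg.items(), key=lambda kv: kv[1], reverse=True)[:5]
    print(f"{len(edges)} dependencies; most depended-upon classes: {top}")
```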