A survey on software coupling relations and tools
Context
Coupling relations reflect the dependencies between software entities and can be used to assess the quality of a program. For this reason, a large number of them has been developed, together with tools to compute the related metrics. However, this abundance makes it challenging to find the coupling measures suitable for a given application.
Goals
The first objective of this work is to provide a classification of the different kinds of coupling relations, together with the metrics that measure them. The second is to present an overview of the tools proposed so far by the software engineering research community to extract these metrics.
Method
This work constitutes a systematic literature review in software engineering. To retrieve the referenced publications, publicly available scientific research databases were used. These sources were queried using keywords inherent to software coupling. We included publications from the period 2002 to 2017 and highly cited earlier publications. A snowballing technique was used to retrieve further related material.
Results
Four groups of coupling relations were found: structural, dynamic, semantic and logical. A fifth set of coupling relations includes approaches too recent to be considered an independent group and measures developed for specific environments. The investigation also retrieved tools that extract the metrics belonging to each coupling group.
Conclusion
This study shows the directions followed by research on software coupling, e.g., developing metrics for specific environments. Concerning the metrics tools, three trends have emerged in recent years: the use of visualization techniques, extensibility, and scalability. Finally, some applications of coupling metrics were presented (e.g., code smell detection), indicating possible future research directions. Public preprint [https://doi.org/10.5281/zenodo.2002001]
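To make the idea of a coupling metric concrete, here is a tiny illustration (my own example, not taken from the survey) of one classic structural measure, CBO (Coupling Between Objects), counted here as the number of distinct classes a class is coupled to in either direction. The class names and dependency table are hypothetical.

```python
# Hypothetical class -> set of classes it references (structural coupling).
deps = {
    "Order":    {"Customer", "Invoice"},
    "Invoice":  {"Customer"},
    "Customer": set(),
}

def cbo(cls):
    """CBO: distinct classes coupled to `cls`, outgoing or incoming."""
    outgoing = deps.get(cls, set())
    incoming = {c for c, refs in deps.items() if cls in refs}
    return len(outgoing | incoming)

print({c: cbo(c) for c in deps})  # → {'Order': 2, 'Invoice': 2, 'Customer': 2}
```

A real extraction tool would build the `deps` table from source code or bytecode; the metric itself reduces to this neighborhood count.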
Some issues in the 'archaeology' of software evolution
During a software project's lifetime, the software goes through many changes, as components are added, removed and modified to fix bugs and add new features. This paper is intended as a lightweight introduction to some of the issues arising from an 'archaeological' investigation of software evolution. We use our own work to look at some of the challenges faced, techniques used, findings obtained, and lessons learnt when measuring and visualising the historical changes that happen during the evolution of software.
Usability testing for improving interactive geovisualization techniques
Usability describes a product's fitness for use according to a set of predefined criteria. Whatever the aim of the product, it should facilitate users' tasks or enhance their performance by providing appropriate analysis tools. In both cases, the main interest is to satisfy users by providing relevant functionality which they find fit for purpose. "Testing usability means making sure that people can find and work with [a product's] functions to meet their needs" (Dumas and Redish, 1999: 4). It is therefore concerned with establishing whether people can use a product to complete their tasks with ease and, at the same time, helping them complete their jobs more effectively.
This document describes the findings of a usability study carried out on DecisionSite Map Interaction Services (Map IS). DecisionSite, a product of Spotfire, Inc., is an interactive system for the visual and dynamic exploration of data, designed to support decision-making. The system was coupled to ArcExplorer (forming DecisionSite Map IS) to provide limited GIS functionality (a simple user interface, basic tools, and data management) and to support users of spatial data. Hence, this study set out to test the suitability of the coupling between the two software components (DecisionSite and ArcExplorer) for the purpose of exploring spatial data. The first section briefly discusses DecisionSite's visualization functionality. The second section describes the test goals, its design, and the participants and data used. The following section concentrates on the analysis of results, while the final section discusses future areas of research and possible development.
Software tools for conducting bibliometric analysis in science: An up-to-date review
Bibliometrics has become an essential tool for assessing and analyzing the output of scientists, cooperation between universities, the effect of state-owned science funding on national research and development performance, and educational efficiency, among other applications. Professionals and scientists therefore need a range of theoretical and practical tools to measure experimental data. This article provides an up-to-date review of the various tools available for conducting bibliometric and scientometric analyses, including sources of data acquisition, performance analysis, and visualization tools. The included tools were divided into three categories: general bibliometric and performance analysis, science mapping analysis, and libraries; a description of each is provided. A comparative analysis of their database support, pre-processing capabilities, and analysis and visualization options is also provided to facilitate their understanding. Although there are numerous bibliometric databases from which to obtain data for bibliometric and scientometric analysis, they have been developed for different purposes. The number of exportable records ranges between 500 and 50,000, and the coverage of the different science fields is unequal across databases. Concerning the analyzed tools, Bibliometrix contains the most extensive set of techniques and is suitable for practitioners through Biblioshiny. VOSviewer has excellent visualization capabilities and can load and export information from many sources. SciMAT is the tool with the most powerful pre-processing and export capabilities. In view of the variability of features, users need to decide on the desired analysis output and choose the option that best fits their aims.
Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics
In any sufficiently complex software system there are experts who have a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and with which particular parts of the system they can provide help. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which few or no experts currently exist and take preventive actions to keep collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
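The core aggregation idea can be sketched in a few lines. This is an illustrative toy (hypothetical change history and a naive absolute-delta weighting, not the paper's actual framework): accumulate the complexity of each developer's changes per component, then rank.

```python
# Rank "experts" per component by accumulated complexity of their changes.
from collections import defaultdict

# Hypothetical change history: (developer, component, complexity delta).
changes = [
    ("alice", "parser", 12.0), ("alice", "parser", 5.0),
    ("bob",   "parser",  2.0), ("bob",   "ui",     9.0),
    ("carol", "ui",      3.0),
]

expertise = defaultdict(lambda: defaultdict(float))
for dev, component, delta in changes:
    expertise[component][dev] += abs(delta)  # naive weighting assumption

def experts(component, top=2):
    """Developers with the highest accumulated complexity contribution."""
    ranked = sorted(expertise[component].items(), key=lambda kv: -kv[1])
    return ranked[:top]

print(experts("parser"))  # → [('alice', 17.0), ('bob', 2.0)]
```

A realistic pipeline would mine the version history for the metric deltas and could decay older changes, but the recommendation step stays this simple ranking.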
Can Network Analysis Techniques help to Predict Design Dependencies? An Initial Study
The degree of dependencies among the modules of a software system is a key attribute that characterizes its design structure and its ability to evolve over time. Several design problems are often correlated with undesired dependencies among modules. Being able to anticipate those problems is important for developers, so they can plan early for maintenance and refactoring efforts. However, existing tools are limited to detecting undesired dependencies once they have appeared in the system. In this work, we investigate whether module dependencies can be predicted (before they actually appear). Since the module structure can be regarded as a network, i.e., a dependency graph, we leverage network features to analyze the dynamics of such a structure. In particular, we apply link prediction techniques for this task. We conducted an evaluation on two Java projects across several versions, using link prediction and machine learning techniques, and assessed their performance for identifying new dependencies from one project version to the next. The results, although preliminary, show that the link prediction approach is feasible for package dependencies. This work also opens opportunities for further development of software-specific strategies for dependency prediction.
Comment: Accepted at ICSA 201
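As a minimal sketch of what link prediction over a dependency graph looks like (my illustration, not the paper's pipeline; the module names and graph are hypothetical): score each missing edge by the Jaccard coefficient of the two modules' neighborhoods, a standard link-prediction heuristic, and treat high-scoring pairs as dependencies likely to appear in a future version.

```python
# Link-prediction heuristic on a toy module dependency graph.
from itertools import combinations

# Hypothetical undirected dependency graph: module -> set of neighbors.
graph = {
    "core": {"util", "io", "gui"},
    "util": {"core", "net"},
    "io":   {"core", "net"},
    "net":  {"util", "io"},
    "gui":  {"core"},
}

def jaccard(u, v):
    """Overlap of the two modules' neighborhoods, in [0, 1]."""
    nu, nv = graph[u], graph[v]
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

# Rank non-edges: a high score suggests a dependency likely to appear next.
candidates = [(u, v) for u, v in combinations(sorted(graph), 2)
              if v not in graph[u]]
ranking = sorted(candidates, key=lambda e: -jaccard(*e))
for u, v in ranking[:3]:
    print(f"{u} -- {v}: {jaccard(u, v):.2f}")
```

The paper's evaluation additionally feeds such scores, among other network features, into machine learning classifiers; this sketch shows only the scoring step.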
RePOR: Mimicking humans on refactoring tasks. Are we there yet?
Refactoring is a maintenance activity that aims to improve design quality while preserving the behavior of a system. Several (semi-)automated approaches have been proposed to support developers in this maintenance activity, based on the correction of anti-patterns, which are 'poor' solutions to recurring design problems. However, little quantitative evidence exists about the impact of automatically refactored code on program comprehension, and about the contexts in which automated refactoring can be as effective as manual refactoring. Leveraging RePOR, an automated refactoring approach based on partial order reduction techniques, we performed an empirical study to investigate whether automatically refactoring code structure affects the understandability of systems during comprehension tasks. (1) We surveyed 80 developers, asking them to identify from a set of 20 refactoring changes whether they were generated by developers or by a tool, and to rate the refactoring changes according to their design quality; (2) we asked 30 developers to complete code comprehension tasks on 10 systems that had been refactored by either a freelancer or an automated refactoring tool. To make the comparison fair, for a subset of refactoring actions that introduce new code entities, only synthetic identifiers were presented to practitioners. We measured developers' performance using the NASA task load index for their effort, the time that they spent performing the tasks, and their percentages of correct answers. Our findings, despite current technology limitations, show that it is reasonable to expect a refactoring tool to match developer code
- …