Towards sound refactoring in Erlang
Erlang is an actor-based programming language used extensively for building concurrent, reactive systems that are highly available and suffer minimal downtime. Such systems are often mission critical, making system correctness vital. Refactoring is code restructuring that improves the code but does not change behaviour. While using automated refactoring tools is less error-prone than performing refactorings manually, automated refactoring tools still cannot guarantee that the refactoring is correct, i.e., that program behaviour is preserved. This leads to a lack of trust in automated refactoring tools. We first survey solutions to this problem proposed in the literature: existing Erlang refactoring tools commonly use approximation techniques that do not guarantee behaviour preservation, while other works propose the use of formal methodologies. In this work we aim to develop a formal methodology for refactoring Erlang code. We study behavioural preorders, with a special focus on the testing preorder, as it seems most suited to our purpose.
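As background, the testing preorder mentioned above is usually given in the may/must style of De Nicola and Hennessy; a minimal sketch of those standard definitions follows (the exact formulation the authors adopt is our assumption):

```latex
% Standard may/must testing preorders (De Nicola & Hennessy); whether the
% paper adopts exactly this formulation is an assumption on our part.
P \sqsubseteq_{\mathrm{may}} Q \;\iff\; \forall o.\; \big(P \ \mathbf{may}\ o \;\Rightarrow\; Q \ \mathbf{may}\ o\big)
\qquad
P \sqsubseteq_{\mathrm{must}} Q \;\iff\; \forall o.\; \big(P \ \mathbf{must}\ o \;\Rightarrow\; Q \ \mathbf{must}\ o\big)
```

A refactoring that turns a process P into Q then preserves behaviour when P and Q are related in both directions, i.e., they are testing-equivalent under the chosen preorder.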
An empirical investigation into the impact of refactoring on regression testing
It is widely believed that refactoring improves software quality and developers' productivity by making it easier to maintain and understand software systems. On the other hand, some believe that refactoring carries the risk of functionality regression and increased testing cost. This paper investigates the impact of refactoring edits on regression tests using the version history of Java open source projects: (1) Are there adequate regression tests for refactoring in practice? (2) How many existing regression tests are relevant to refactoring edits and thus need to be re-run for the new version? (3) What proportion of failure-inducing changes are relevant to refactorings? By using a refactoring reconstruction analysis and a change impact analysis in tandem, we investigate the relationship between the types and locations of refactoring edits identified by RefFinder and the affecting changes and affected tests identified by the FaultTracer change impact analysis. The results on three open source projects, JMeter, XMLSecurity, and ANT, show that only 22% of refactored methods and fields are tested by existing regression tests. While refactorings constitute only 8% of atomic changes, 38% of affected tests are relevant to refactorings. Furthermore, refactorings are involved in almost half of the failed test cases. These results call for new automated regression test augmentation and selection techniques for validating refactoring edits.
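The selection question in (2) can be made concrete with a small sketch. Assuming a coverage map of the kind a change impact analysis such as FaultTracer produces, a refactoring-aware selection simply re-runs every test whose covered methods intersect the refactored ones; all class and method names below are illustrative, not taken from the paper's tooling.

```java
import java.util.*;

// A minimal sketch (not FaultTracer/RefFinder code) of refactoring-aware
// regression test selection: re-run exactly those tests whose covered
// methods intersect the set of refactored methods.
public class RefactoringAwareSelection {

    /**
     * @param coverage   test name -> fully qualified methods it exercises
     * @param refactored methods touched by refactoring edits
     * @return tests that must be re-run for the new version
     */
    static Set<String> selectAffectedTests(Map<String, Set<String>> coverage,
                                           Set<String> refactored) {
        Set<String> affected = new TreeSet<>();
        for (Map.Entry<String, Set<String>> e : coverage.entrySet()) {
            // Any overlap between covered methods and refactored methods
            // marks the test as affected.
            if (!Collections.disjoint(e.getValue(), refactored)) {
                affected.add(e.getKey());
            }
        }
        return affected;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> coverage = Map.of(
            "testParseHeader", Set.of("Parser.parseHeader", "Lexer.next"),
            "testRender",      Set.of("Renderer.render"));
        // Suppose a RefFinder-style analysis reported a refactoring on Lexer.next.
        Set<String> refactored = Set.of("Lexer.next");
        System.out.println(selectAffectedTests(coverage, refactored)); // [testParseHeader]
    }
}
```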
What to Fix? Distinguishing between design and non-design rules in automated tools
Technical debt---design shortcuts taken to optimize for delivery speed---is a
critical part of long-term software costs. Consequently, automatically
detecting technical debt is a high priority for software practitioners.
Software quality tool vendors have responded to this need by positioning their
tools to detect and manage technical debt. While these tools bundle a number of
rules, it is hard for users to understand which rules identify design issues,
as opposed to syntactic quality. This is important, since previous studies have
revealed that the most significant technical debt is related to design issues. Other
research has focused on comparing these tools on open source projects, but
these comparisons have not looked at whether the rules were relevant to design.
We conducted an empirical study using a structured categorization approach and manually classified 466 software quality rules from three industry tools: CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either not design (55%) or design (19%). The remainder (26%) resulted in disagreements among the labelers. Our results are a first step in formalizing a definition of a design rule, in order to support automatic detection.
Comment: Long version of accepted short paper at the International Conference on Software Architecture 2017 (Gothenburg, SE).
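To make the design/non-design distinction concrete, here is a hypothetical Java illustration (the code and rule categories are ours, not actual CAST, SonarQube, or NDepend rules):

```java
// Hypothetical examples of the two rule categories (not actual tool rules).

// A non-design (syntactic) finding: an unused local variable. Fixing it
// is mechanical and does not change the structure of the system.
class ReportPrinter {
    void print(String report) {
        int unusedLineCount = 0;   // e.g., an "unused local variable" rule fires here
        System.out.println(report);
    }
}

// A design finding: a class accumulating unrelated responsibilities
// (persistence, presentation, notification). Fixing it requires
// restructuring rather than a local edit.
class OrderManager {
    void saveToDatabase() { /* persistence concern */ }
    String renderHtml()   { return "<html></html>"; /* presentation concern */ }
    void sendEmail()      { /* notification concern */ }
}
```

The first finding is fixable in place; the second demands restructuring, which is why the studies cited above tie the most significant technical debt to design rules.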
RefDiff: Detecting Refactorings in Version Histories
Refactoring is a well-known technique that is widely adopted by software
engineers to improve the design and enable the evolution of a system. Knowing
which refactoring operations were applied in a code change is valuable
information for understanding software evolution, adapting software components, merging
code changes, and other applications. In this paper, we present RefDiff, an
automated approach that identifies refactorings performed between two code
revisions in a git repository. RefDiff employs a combination of heuristics
based on static analysis and code similarity to detect 13 well-known
refactoring types. In an evaluation using an oracle of 448 known refactoring
operations, distributed across seven Java projects, our approach achieved
precision of 100% and recall of 88%. Moreover, our evaluation suggests that RefDiff achieves higher precision and recall than existing state-of-the-art approaches.
Comment: Paper accepted at the 14th International Conference on Mining Software Repositories (MSR), pages 1-11, 2017.
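RefDiff's exact heuristics are not reproduced here, but the core idea of combining static analysis with code similarity can be sketched in a few lines. The tokenization, multiset Jaccard measure, and 0.7 threshold below are our own illustrative assumptions:

```java
import java.util.*;

// A toy sketch of the code-similarity heuristic that RefDiff-style tools
// rely on (this is NOT RefDiff's implementation). Two method bodies whose
// token multisets are highly similar are candidates for a refactoring such
// as Rename Method or Move Method.
public class SimilarityHeuristic {

    // Multiset (bag) of tokens: token -> occurrence count.
    static Map<String, Integer> tokenBag(String body) {
        Map<String, Integer> bag = new HashMap<>();
        for (String tok : body.split("\\W+")) {
            if (!tok.isEmpty()) bag.merge(tok, 1, Integer::sum);
        }
        return bag;
    }

    // Multiset Jaccard similarity: sum of min counts over sum of max counts.
    static double similarity(Map<String, Integer> a, Map<String, Integer> b) {
        Set<String> keys = new HashSet<>(a.keySet());
        keys.addAll(b.keySet());
        int inter = 0, union = 0;
        for (String k : keys) {
            int ca = a.getOrDefault(k, 0), cb = b.getOrDefault(k, 0);
            inter += Math.min(ca, cb);
            union += Math.max(ca, cb);
        }
        return union == 0 ? 0.0 : (double) inter / union;
    }

    public static void main(String[] args) {
        // A body before and after a small rename (i -> item).
        String before = "int total = 0; for (Item i : items) total += i.price; return total;";
        String after  = "int total = 0; for (Item item : items) total += item.price; return total;";
        double s = similarity(tokenBag(before), tokenBag(after));
        // 0.7 is an assumed threshold; real tools tune this empirically.
        System.out.printf("similarity=%.2f candidate=%b%n", s, s >= 0.7);
    }
}
```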
Predicting and Evaluating Software Model Growth in the Automotive Industry
The size of a software artifact influences software quality and impacts
the development process. In industry, when software size exceeds certain
thresholds, memory errors accumulate and development tools might no longer be able
to cope, resulting in lengthy program start-up times, failing builds, or
memory problems at unpredictable times. Thus, foreseeing critical growth in
software modules is in high demand in industrial practice. Predicting the
time when the size grows to the level where maintenance is needed prevents
unexpected efforts and helps to spot problematic artifacts before they become
critical.
Although the number of prediction approaches in the literature is vast, it is
unclear how well they fit the prerequisites and expectations of practice. In
this paper, we perform an industrial case study at an automotive manufacturer
to explore applicability and usability of prediction approaches in practice. In
a first step, we collect the most relevant prediction approaches from the
literature, including both statistical and machine-learning approaches.
Furthermore, we elicit expectations towards predictions from practitioners
using a survey and stakeholder workshops. At the same time, we measure software
size of 48 software artifacts by mining four years of revision history,
resulting in 4,547 data points. In the last step, we assess the applicability
of state-of-the-art prediction approaches using the collected data by
systematically analyzing how well they fulfill the practitioners' expectations.
Our main contribution is a comparison of commonly used prediction approaches
in a real world industrial setting while considering stakeholder expectations.
We show that the approaches provide significantly different results regarding
prediction accuracy and that the statistical approaches fit our data best
- …
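As an illustration of the simplest statistical approach such a study compares, a least-squares line can be fitted to the mined (time, size) data points and solved for the time at which a maintenance threshold is crossed. The sketch below uses made-up numbers and is our own illustration, not the study's tooling:

```java
// Minimal least-squares sketch (our illustration, not the study's tooling):
// fit size ~ a + b*t to a revision history and predict when a threshold
// is crossed. All data values below are made up.
public class GrowthPrediction {

    /** Returns {intercept a, slope b} of the least-squares line through (t, y). */
    static double[] fitLine(double[] t, double[] y) {
        int n = t.length;
        double st = 0, sy = 0, stt = 0, sty = 0;
        for (int i = 0; i < n; i++) {
            st += t[i]; sy += y[i];
            stt += t[i] * t[i]; sty += t[i] * y[i];
        }
        double b = (n * sty - st * sy) / (n * stt - st * st);
        double a = (sy - b * st) / n;
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        // Hypothetical size measurements (e.g., model elements) per month.
        double[] month = { 0, 6, 12, 18, 24, 30, 36, 42, 48 };
        double[] size  = { 1000, 1210, 1380, 1600, 1790, 2010, 2150, 2400, 2580 };
        double[] ab = fitLine(month, size);
        double threshold = 3000;   // assumed maintenance limit
        // Solve a + b*t = threshold for t.
        double tCross = (threshold - ab[0]) / ab[1];
        System.out.printf("size(t) = %.1f + %.2f*t; threshold crossed at t = %.1f months%n",
                          ab[0], ab[1], tCross);
    }
}
```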