Dissection of a Bug Dataset: Anatomy of 395 Patches from Defects4J
Well-designed and publicly available datasets of bugs are an invaluable asset
to advance research fields such as fault localization and program repair, as
they allow direct and fair comparison between competing techniques as well as
replication of experiments. These datasets need to be deeply understood by
researchers: the answer to questions such as "which bugs can my technique
handle?" and "for which bugs is my technique effective?" depends on the
comprehension of properties related to bugs and their patches. However, such
properties are usually not included in the datasets, and there is still no
widely adopted methodology for characterizing bugs and patches. In this work,
we deeply study 395 patches of the Defects4J dataset. Quantitative properties
(patch size and spreading) were automatically extracted, whereas qualitative
ones (repair actions and patterns) were manually extracted using a thematic
analysis-based approach. We found that 1) the median size of Defects4J patches
is four lines, and almost 30% of the patches contain only addition of lines; 2)
92% of the patches change only one file, and 38% have no spreading at all; 3)
the top-3 most applied repair actions are addition of method calls,
conditionals, and assignments, occurring in 77% of the patches; and 4) nine
repair patterns were found for 95% of the patches, where the most prevalent,
appearing in 43% of the patches, is on conditional blocks. These results are
useful for researchers to perform advanced analysis on their techniques'
results based on Defects4J. Moreover, our set of properties can be used to
characterize and compare different bug datasets.
Comment: Accepted for SANER'18 (25th edition of IEEE International Conference on Software Analysis, Evolution and Reengineering), Campobasso, Italy
Spectrum-based Fault Localization Techniques Application on Multiple-Fault Programs: A Review
Software fault localization, the task of identifying fault locations in a program, is one of the most tedious and costly activities in debugging. This paper classifies and critically analyzes studies that apply spectrum-based fault localization (SBFL) techniques to multiple-fault programs using debugging methods such as one-bug-at-a-time (OBA) debugging, parallel debugging, and simultaneous debugging, in order to discuss current research trends, issues, and challenges in this field of study. The outcome strongly shows a heavy reliance on the OBA debugging method, poor fault isolation accuracy, and a dominant use of artificial faults, all of which limit the applicability of existing techniques in the software industry.
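The surveyed SBFL techniques differ in their ranking formulas, but most score each statement from per-statement pass/fail execution counts. As a hedged illustration (the Ochiai metric, one classic choice; not specific to any surveyed paper):

```python
# Ochiai suspiciousness: a classic SBFL ranking metric.
# e_f / e_p: failing / passing tests that execute the statement;
# n_f: failing tests that do not execute it.
import math

def ochiai(e_f, e_p, n_f):
    """Return the Ochiai suspiciousness score in [0, 1]."""
    denom = math.sqrt((e_f + n_f) * (e_f + e_p))
    return e_f / denom if denom else 0.0
```

A statement covered by every failing test and no passing test scores 1.0 and is ranked first; in multiple-fault programs the counts of several faults interfere, which is one source of the accuracy problems the review discusses.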
Structural Generative Descriptions for Temporal Data
In data mining problems the representation or description of data plays a fundamental role, since it defines the set of essential properties for the extraction and characterisation of patterns. However, for the case of temporal data, such as time series and data streams, one outstanding issue when developing mining algorithms is finding an appropriate data description or representation.
In this thesis two novel domain-independent representation frameworks for temporal data suitable for off-line and online mining tasks are formulated.
First, a domain-independent temporal data representation framework based on a novel data description strategy which combines structural and statistical pattern recognition approaches is developed. The key idea here is to move the structural pattern recognition problem to the probability domain. This framework is composed of three general tasks: a) decomposing input temporal patterns into subpatterns in time or any other transformed domain (for instance, wavelet domain); b) mapping these subpatterns into the probability domain to find attributes of elemental probability subpatterns called primitives; and c) mining input temporal patterns according to the attributes of their corresponding probability domain subpatterns. This framework is referred to as Structural Generative Descriptions (SGDs).
Two off-line and two online algorithmic instantiations of the proposed SGDs framework are then formulated: i) For the off-line case, the first instantiation is based on the use of Discrete Wavelet Transform (DWT) and Wavelet Density Estimators (WDE), while the second algorithm includes DWT and Finite Gaussian Mixtures. ii) For the online case, the first instantiation relies on an online implementation of DWT and a recursive version of WDE (RWDE), whereas the second algorithm is based on a multi-resolution exponentially weighted moving average filter and RWDE. The empirical evaluation of proposed SGDs-based algorithms is performed in the context of time series classification, for off-line algorithms, and in the context of change detection and clustering, for online algorithms. For this purpose, synthetic and publicly available real-world data are used.
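The online instantiation above smooths the incoming stream before density estimation. A minimal sketch of the exponentially weighted moving average component, assuming a plain single-resolution filter (the thesis's multi-resolution variant and RWDE details are omitted):

```python
# Minimal EWMA sketch for a numeric stream; the smoothing factor alpha
# weights recent observations. This illustrates only the filtering step,
# not the thesis's multi-resolution or RWDE machinery.

def ewma(stream, alpha=0.1):
    """Yield the running exponentially weighted moving average."""
    avg = None
    for x in stream:
        avg = x if avg is None else alpha * x + (1 - alpha) * avg
        yield avg
```

Because old observations decay geometrically, the filter adapts to distribution changes in the stream, which is what makes it suitable for the online change-detection setting described.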
Additionally, a novel framework for multidimensional data stream evolution diagnosis incorporating RWDE into the context of Velocity Density Estimation (VDE) is formulated. Changes in streaming data and changes in their correlation structure are characterised by means of local and global evolution coefficients as well as by means of recursive correlation coefficients. The proposed VDE framework is evaluated using temperature data from the UK and air pollution data from Hong Kong.
The interlocutory tool box: techniques for curtailing coincidental correctness
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Eliminating faults in software systems is important, because they can have catastrophic consequences. This can be achieved by testing and debugging. Testing involves executing the system with a test case to obtain an output. The output is evaluated against the tester's expectations; deviation from these expectations indicates that a fault has been detected. Debugging involves using information about the fault, gleaned during testing, to isolate the fault in the system. Coincidental correctness is a widespread phenomenon in which a fault corrupts a program state, and despite this, the system produces an output that satisfies the tester's expectations. Coincidental correctness can compromise the effectiveness of testing and debugging techniques.
This thesis investigated methods for alleviating coincidental correctness in testing and debugging. The investigation culminated in four techniques. The first technique is called Interlocutory Testing. Interlocutory Testing is a framework for the development of test oracles that are referred to as Interlocutory Relations. Interlocutory Relations are the first type of oracle that has been specifically designed to operate effectively in the presence of coincidental correctness. Metamorphic Testing was pioneered for testing non-testable systems. However, the effectiveness of this technique can be compromised by coincidental correctness. The second technique, Interlocutory Metamorphic Testing, is a version of Metamorphic Testing that has been integrated with Interlocutory Testing, to alleviate the impact of coincidental correctness on Metamorphic Testing. Interlocutory Mutation Testing is the third technique. This technique uses similar principles to Interlocutory Testing to alleviate the Equivalent Mutant Problem in the presence of coincidental correctness and non-determinism. Finally, the fourth technique is Interlocutory Spectrum-based Fault Localisation. This technique uses Interlocutory Relations to ameliorate the effects of coincidental correctness on fault localisation.
Each technique was empirically evaluated. The results were promising, indicating that these techniques are capable of mitigating the impact of coincidental correctness.
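The phenomenon the thesis targets is easy to state concretely. The following toy example is hypothetical (not from the thesis): a fault corrupts an internal value, yet for some inputs the output still equals the expected result, so an oracle comparing outputs alone misses the bug.

```python
# Toy illustration of coincidental correctness (hypothetical example,
# not taken from the thesis).

def buggy_abs(x):
    """Intended to return abs(x); the negative branch is faulty."""
    y = x + 2 if x < 0 else x   # fault: should be -x for negatives
    return y

assert buggy_abs(-1) == 1       # coincidentally correct: corrupted y happens to equal abs(-1)
assert buggy_abs(-3) == -1      # the same fault is exposed by this input (expected 3)
assert buggy_abs(5) == 5        # non-negative inputs are unaffected
```

An output-only oracle passes the test `buggy_abs(-1) == 1` despite the corrupted internal state, which is precisely the masking effect the Interlocutory techniques are designed to curtail.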