Proactive Quality Guidance for Model Evolution in Model Libraries
Model evolution in model libraries differs from general model evolution. It limits the scope to the manageable and allows clear concepts, approaches, solutions, and methodologies to be developed. Looking at model quality in evolving model libraries, we focus on quality concerns related to reusability. In this paper, we put forward our proactive quality guidance approach for model evolution in model libraries. It uses an editing-time assessment linked to a lightweight quality model, corresponding metrics, and simplified reviews, all of which help to guide model evolution by means of quality gates that foster model reusability.
Comment: 10 pages, figures. Appears in the Models and Evolution Workshop Proceedings of the ACM/IEEE 16th International Conference on Model Driven Engineering Languages and Systems, Miami, Florida (USA), September 30, 2013
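The editing-time quality gate described above can be pictured as a set of metric checks evaluated whenever a library model is edited or saved. The following is a minimal, hypothetical sketch of that idea in Python; the element attributes, metric names, and thresholds are illustrative assumptions, not the authors' quality model.

```python
# Hedged sketch (not the authors' implementation): an editing-time quality gate
# that checks a model element against thresholds from a lightweight quality model.
# All names (ModelElement, THRESHOLDS, the individual checks) are illustrative.
from dataclasses import dataclass


@dataclass
class ModelElement:
    name: str
    documentation: str
    num_parameters: int


# Illustrative reusability-oriented checks; real quality models would use
# richer metrics and calibrated thresholds.
THRESHOLDS = {
    "documented": lambda e: len(e.documentation.strip()) > 0,
    "small_interface": lambda e: e.num_parameters <= 7,
    "meaningful_name": lambda e: len(e.name) >= 3,
}


def quality_gate(element: ModelElement) -> list[str]:
    """Return the names of violated checks; an empty list means the gate passes."""
    return [check for check, ok in THRESHOLDS.items() if not ok(element)]


if __name__ == "__main__":
    element = ModelElement(name="Pump", documentation="", num_parameters=3)
    violations = quality_gate(element)
    print("gate passed" if not violations else f"violations: {violations}")
```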
Legacy Information Systems, Can They be Agile? A Framework for Assessing Agility
Information systems should contribute to enterprise effectiveness, and usually do so during the operational phase of their lifecycle. In practitioners' experience, the duration of this lifecycle is often not predetermined, resulting in information systems with relatively long lifespans as well as ones with relatively short lifespans. An important aspect of application management is managing the application lifecycle. In practice, the decisions to end the lifecycle, to refactor the system, or to leave it unchanged are often not thoroughly researched. The decision to move on to a newer information system is therefore not always sufficiently justified and relies more on gut feeling. What if the older information system is still able to perform and comply with the changes the enterprise desires? Prolonging the application lifecycle could reduce the cost of an application portfolio. In this paper, we aim to create a method for assessing the ability of a legacy information system to change and for identifying the areas in which it would need improvement in order to increase that ability.
Too Trivial To Test? An Inverse View on Defect Prediction to Identify Methods with Low Fault Risk
Background. Test resources are usually limited and therefore it is often not
possible to completely test an application before a release. To cope with the
problem of scarce resources, development teams can apply defect prediction to
identify fault-prone code regions. However, defect prediction tends to have low
precision in cross-project prediction scenarios.
Aims. We take an inverse view on defect prediction and aim to identify
methods that can be deferred when testing because they contain hardly any
faults due to their code being "trivial". We expect that characteristics of
such methods might be project-independent, so that our approach could improve
cross-project predictions.
Method. We compute code metrics and apply association rule mining to create
rules for identifying methods with low fault risk. We conduct an empirical
study to assess our approach with six Java open-source projects containing
precise fault data at the method level.
Results. Our results show that inverse defect prediction can identify approximately
32-44% of a project's methods as having a low fault risk; on average, they
are about six times less likely to contain a fault than other methods. In
cross-project predictions with larger, more diversified training sets,
identified methods are even eleven times less likely to contain a fault.
Conclusions. Inverse defect prediction supports the efficient allocation of
test resources by identifying methods that can be treated with less priority in
testing activities and is well applicable in cross-project prediction
scenarios.
Comment: Submitted to PeerJ Computer Science
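To make the Method section more concrete, here is a small, self-contained sketch of the underlying idea: discretise method-level code metrics into boolean indicators and keep simple rules whose consequent is "low fault risk", judged by support and confidence. The data, indicator names, and thresholds below are invented for illustration and are not the paper's actual metrics or mined rules.

```python
# Hedged sketch of inverse defect prediction via association-rule-style mining.
from itertools import combinations

# Toy dataset: boolean metric indicators per method, plus a fault flag.
methods = [
    {"short": True,  "no_branches": True,  "no_loops": True,  "faulty": False},
    {"short": True,  "no_branches": True,  "no_loops": False, "faulty": False},
    {"short": False, "no_branches": False, "no_loops": False, "faulty": True},
    {"short": True,  "no_branches": False, "no_loops": True,  "faulty": False},
    {"short": False, "no_branches": True,  "no_loops": False, "faulty": True},
]
features = ["short", "no_branches", "no_loops"]


def mine_low_risk_rules(data, min_support=0.3, min_confidence=0.9):
    """Return rules (antecedent -> not faulty) meeting support/confidence thresholds."""
    rules = []
    for size in (1, 2):
        for antecedent in combinations(features, size):
            covered = [m for m in data if all(m[f] for f in antecedent)]
            if not covered:
                continue
            support = len(covered) / len(data)
            confidence = sum(not m["faulty"] for m in covered) / len(covered)
            if support >= min_support and confidence >= min_confidence:
                rules.append((antecedent, support, confidence))
    return rules


for antecedent, support, confidence in mine_low_risk_rules(methods):
    print(f"{' & '.join(antecedent)} -> low fault risk "
          f"(support={support:.2f}, confidence={confidence:.2f})")
```

Methods matched by such rules would be assigned low testing priority; everything else would keep its normal priority.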
Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach
Historically, software production methods and tools have had a single goal: to produce high-quality
software. Since the goal of Model-Driven Development (MDD) methods is no different, MDD
methods have emerged to take advantage of the benefits of using conceptual models to produce
high-quality software.
In such MDD contexts, conceptual models are used as input to automatically generate final
applications. Thus, we advocate that there is a relation between the quality of the final software
product and the quality of the models used to generate it. The quality of conceptual models can
be influenced by many factors. In this thesis, we focus on the accuracy of the techniques used to
predict the characteristics of the development process and the generated products.
With regard to prediction techniques for software development processes, it is widely
accepted that knowing the functional size of applications is essential for successfully
applying effort and budget models. To evaluate the quality of generated
applications, defect detection is considered to be the most suitable technique.
The research goal of this thesis is to provide an accurate measurement procedure based on
COSMIC for the automatic sizing of object-oriented OO-Method MDD applications. To
achieve this research goal, it is necessary to accurately measure the conceptual models used in
the generation of object-oriented applications. It is also very important for these models not to
have defects so that the applications to be measured are correctly represented.
In this thesis, we present the OOmCFP (OO-Method COSMIC Function Points) measurement
procedure. This procedure makes a twofold contribution: the accurate measurement of
object-oriented applications generated in MDD environments from the conceptual models involved,
and the verification of those conceptual models to allow the complete generation of correct
final applications.
The OOmCFP procedure has been systematically designed, applied, and
automated. The procedure has been validated for conformance with the ISO 14143
standard and with the metrology concepts defined in the ISO VIM, and the accuracy
of the resulting measurements has been evaluated according to ISO 5725. The
procedure has also been validated in empirical studies.
The results of the empirical studies demonstrate that OOmCFP can obtain
accurate measures of the functional size of applications generated in MDD
environments from the corresponding conceptual models.
Marín Campusano, BM. (2011). Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach [Unpublished doctoral dissertation]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11237
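As background for readers unfamiliar with COSMIC, the sketch below illustrates the basic counting principle that OOmCFP builds on: each data movement (Entry, Exit, Read, Write) identified for a functional process contributes one COSMIC Function Point (CFP). The functional process and its data movements are invented for illustration; this is not the OOmCFP procedure itself, which derives the movements from OO-Method conceptual models.

```python
# Minimal sketch of the COSMIC counting principle: one CFP per data movement.
from collections import Counter

# Hypothetical data movements for a "create order" functional process.
data_movements = [
    ("create_order", "Entry"),   # order data entered by the user
    ("create_order", "Read"),    # read customer data from persistent storage
    ("create_order", "Write"),   # store the new order
    ("create_order", "Exit"),    # confirmation message to the user
]


def cosmic_size(movements):
    """Functional size in CFP: one CFP per identified data movement, per process."""
    per_process = Counter(process for process, _ in movements)
    return per_process, sum(per_process.values())


per_process, total = cosmic_size(data_movements)
print(per_process)               # Counter({'create_order': 4})
print(f"total size: {total} CFP")
```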
Characterizing and Diagnosing Architectural Degeneration of Software Systems from Defect Perspective
The architecture of a software system is known to degrade as the system evolves over time due to change upon change, a phenomenon that is termed architectural degeneration. Previous research has focused largely on structural deviations of an architecture from its baseline. However, another angle from which to observe architectural degeneration is software defects, especially those that are architecturally related. Such an angle has not been scientifically explored until now. Here, we ask two relevant questions: (1) What do defects indicate about architectural degeneration? and (2) How can architectural degeneration be diagnosed from the defect perspective? To answer question (1), we conducted an exploratory case study analyzing defect data over six releases of a large legacy system (of size approximately 20 million source lines of code and age over 20 years). The relevant defects here are those that span multiple components in the system, called multiple-component defects (MCDs). This case study found that MCDs require more changes to fix and are more persistent across development phases and releases than other types of defects. To answer question (2), we developed an approach, called Diagnosing Architectural Degeneration (DAD), from the defect perspective, and validated it in another, confirmatory, case study involving three releases of a commercial system (of size over 1.5 million source lines of code and age over 13 years). This case study found that components of the system tend to persistently have an impact on architectural degeneration over releases. In particular, the impact of a few components is substantially greater than that of the others. These results are new and add to the current knowledge on architectural degeneration. The key conclusions from these results are: (i) analysis of MCDs is a viable approach to characterizing architectural degeneration; and (ii) a method such as DAD can be developed for diagnosing architectural degeneration.
Connecting Software Metrics across Versions to Predict Defects
Accurate software defect prediction could help software practitioners
allocate test resources to defect-prone modules effectively and efficiently. In
the last decades, much effort has been devoted to building accurate defect
prediction models, including developing high-quality defect predictors and modeling
techniques. However, widely used defect predictors such as code metrics
and process metrics cannot adequately describe how software modules change over
a project's evolution, which we believe is important for defect prediction. To
address this problem, we propose in this paper to use the
Historical Version Sequence of Metrics (HVSM) in continuous software versions
as defect predictors. Furthermore, we leverage Recurrent Neural Network (RNN),
a popular modeling technique, to take HVSM as the input to build software
prediction models. The experimental results show that, in most cases, the
proposed HVSM-based RNN model achieves significantly better effort-aware ranking
effectiveness than commonly used baseline models.
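As a rough illustration of what an HVSM-based recurrent model might look like, here is a minimal PyTorch sketch that consumes one metric vector per release for each module and outputs a defect probability. The layer sizes, sequence length, and metric count are assumptions for illustration, not the configuration reported in the paper.

```python
# Hedged sketch (not the paper's model): an RNN over Historical Version Sequences
# of Metrics (HVSM), one metric vector per release, predicting defect-proneness.
import torch
import torch.nn as nn


class HVSMDefectPredictor(nn.Module):
    def __init__(self, num_metrics: int, hidden_size: int = 32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=num_metrics, hidden_size=hidden_size,
                           batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, sequences: torch.Tensor) -> torch.Tensor:
        # sequences: (batch, num_versions, num_metrics)
        _, (last_hidden, _) = self.rnn(sequences)
        # Defect probability per module, from the final hidden state.
        return torch.sigmoid(self.head(last_hidden[-1]))


if __name__ == "__main__":
    batch = torch.randn(8, 5, 20)   # 8 modules, 5 historical versions, 20 metrics each
    model = HVSMDefectPredictor(num_metrics=20)
    print(model(batch).shape)       # torch.Size([8, 1])
```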
The Software Vulnerability Ecosystem: Software Development In The Context Of Adversarial Behavior
Software vulnerabilities are the root cause of many computer system security failures. This dissertation addresses software vulnerabilities in the context of a software lifecycle, with a particular focus on three stages: (1) improving software quality during development; (2) pre-release bug discovery and repair; and (3) revising software as vulnerabilities are found.
The question I pose regarding software quality during development is whether long-standing software engineering principles and practices such as code reuse help or hurt with respect to vulnerabilities. Using a novel data-driven analysis of large databases of vulnerabilities, I show the surprising result that software quality and software security are distinct. Most notably, the analysis uncovered a counterintuitive phenomenon, namely that newly introduced software enjoys a period with no vulnerability discoveries, and further that this “Honeymoon Effect” (a term I coined) is well explained by the unfamiliarity of the code to malicious actors. An important consequence for code reuse, which is intended to raise software quality, is that it reduces the protection afforded by the delay in vulnerability discovery that new code enjoys.
The second question I pose concerns the predictive power of this effect. My experimental design exploited a large-scale open-source software system, Mozilla Firefox, in which two development methodologies are pursued in parallel, making methodology the sole variable in outcomes. Comparing the methodologies using a novel synthesis of data from vulnerability databases, I find that the rapid-release cycles used in agile software development (in which new software is introduced frequently) have a vulnerability discovery rate equivalent to that of conventional development.
Finally, I pose the question of the relationship between the intrinsic security of software, stemming from design and development, and the ecosystem into which the software is embedded and in which it operates. I use the early development
lifecycle to examine this question, and again use vulnerability data as the means of answering it. Defect discovery rates should decrease in a purely intrinsic model, with software maturity making vulnerabilities increasingly rare. The data, which show that vulnerability rates increase after a delay, contradict this. Software security must therefore be modeled to include extrinsic factors, thus comprising an ecosystem.
Continuous maintenance and the future – Foundations and technological challenges
High-value, long-life products require continuous maintenance throughout their life cycle to achieve the required performance at an optimal through-life cost. This paper presents the foundations and technologies required to offer such a maintenance service. Component- and system-level degradation science, assessment, and modelling, together with life cycle ‘big data’ analytics, are the two most important knowledge and skill bases required for continuous maintenance. Advanced computing and visualisation technologies will improve the efficiency of maintenance and reduce the through-life cost of the product. The future of continuous maintenance within the Industry 4.0 context also encompasses the role of IoT, standards, and cyber security.