Estimation of Defect Proneness Using Design Complexity Measurements in Object-Oriented Software
Software engineering continuously faces the challenges of growing software
complexity and an increasing volume of data on defects and drawbacks from the
software production process. This calls for methods that enable more reusable,
reliable, easily maintainable, high-quality software systems, with tighter
control over the software generation process. Quality and productivity are the
two most important parameters for controlling any industrial process, and
implementing a successful control system requires some means of measurement.
Software metrics play an important role in the management of the software
development process, supporting better planning, assessment of improvements,
resource allocation, and reduction of unpredictability. Early detection of
potential problems, productivity evaluation, and the evaluation of external
quality factors such as reusability, maintainability, defect proneness, and
complexity are of utmost importance. Here we discuss the application of CK
metrics and an estimation model to predict external quality parameters, so
that the design and production processes can be optimized for desired levels
of quality. Defect proneness of an object-oriented system is estimated at
design level using a novel methodology in which models of the relationship
between CK metrics and a defect-proneness index are derived. A multifunctional
estimation approach captures the correlation between CK metrics and the
defect-proneness level of software modules.
Comment: 5 pages, 1 figure
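The abstract describes mapping CK metric values to a defect-proneness index. A minimal sketch of that idea follows, assuming a logistic combination of the six classic CK metrics; the metric values, weights, and bias are illustrative placeholders, not the multifunctional model from the paper.

```python
import math

# CK metric values for one hypothetical class (illustrative numbers).
ck = {
    "WMC": 25,   # Weighted Methods per Class
    "DIT": 3,    # Depth of Inheritance Tree
    "NOC": 2,    # Number of Children
    "CBO": 14,   # Coupling Between Objects
    "RFC": 40,   # Response For a Class
    "LCOM": 8,   # Lack of Cohesion of Methods
}

# Assumed weights: higher complexity/coupling raises defect proneness.
weights = {"WMC": 0.04, "DIT": 0.10, "NOC": 0.05,
           "CBO": 0.06, "RFC": 0.02, "LCOM": 0.03}

def defect_proneness_index(metrics, weights, bias=-3.0):
    """Map a weighted sum of CK metrics through a logistic
    function to a defect-proneness index in (0, 1)."""
    z = bias + sum(weights[m] * v for m, v in metrics.items())
    return 1.0 / (1.0 + math.exp(-z))

print(defect_proneness_index(ck, weights))
```

In such a model the index rises monotonically with each metric, so classes with heavier coupling or lower cohesion rank as more defect prone; the actual weights would be fitted from historical defect data.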
A Quality Model for Actionable Analytics in Rapid Software Development
Background: Accessing relevant data on the product, process, and usage
perspectives of software as well as integrating and analyzing such data is
crucial for getting reliable and timely actionable insights aimed at
continuously managing software quality in Rapid Software Development (RSD). In
this context, several software analytics tools have been developed in recent
years. However, there is a lack of explainable software analytics that software
practitioners trust. Aims: We aimed at creating a quality model (called
Q-Rapids quality model) for actionable analytics in RSD, implementing it, and
evaluating its understandability and relevance. Method: We performed workshops
at four companies in order to determine relevant metrics as well as product and
process factors. We also elicited how these metrics and factors are used and
interpreted by practitioners when making decisions in RSD. We specified the
Q-Rapids quality model by comparing and integrating the results of the four
workshops. Then we implemented the Q-Rapids tool to support the usage of the
Q-Rapids quality model as well as the gathering, integration, and analysis of
the required data. Afterwards we installed the Q-Rapids tool in the four
companies and performed semi-structured interviews with eight product owners to
evaluate the understandability and relevance of the Q-Rapids quality model.
Results: The participants of the evaluation perceived the metrics as well as
the product and process factors of the Q-Rapids quality model as
understandable. Also, they considered the Q-Rapids quality model relevant for
identifying product and process deficiencies (e.g., blocking code situations).
Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model
enables detecting problems that take more time to find manually and adds
transparency among the perspectives of system, process, and usage.
Comment: This is an Author's Accepted Manuscript of a paper to be published by
IEEE in the 44th Euromicro Conference on Software Engineering and Advanced
Applications (SEAA) 2018. The final authenticated version will be available
online.
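The abstract describes a quality model that turns heterogeneous metrics into product and process factors for decision making. A minimal sketch of that kind of hierarchical aggregation follows: normalised metrics are combined into factors, and factors into one strategic indicator. The metric names and weights here are illustrative assumptions, not the actual Q-Rapids model.

```python
# Raw metrics, each already normalised to the 0..1 range.
metrics = {
    "test_success_density": 0.92,   # share of passing tests
    "code_duplication_ok": 0.80,    # share of files under duplication threshold
    "fast_issue_resolution": 0.70,  # share of issues closed within target time
}

# Factor definitions: factor -> {metric: weight}.
factors = {
    "code_quality":   {"test_success_density": 0.5, "code_duplication_ok": 0.5},
    "issue_velocity": {"fast_issue_resolution": 1.0},
}

# Strategic-indicator definition: factor -> weight.
indicator_weights = {"code_quality": 0.6, "issue_velocity": 0.4}

def aggregate(children, weights):
    """Weighted average of already-normalised child values."""
    return sum(weights[name] * children[name] for name in weights)

factor_values = {f: aggregate(metrics, w) for f, w in factors.items()}
product_quality = aggregate(factor_values, indicator_weights)
print(factor_values, product_quality)
```

The point of the hierarchy is explainability: a product owner who sees a low indicator can drill down to the factor and then to the raw metric that caused it.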
An evaluation framework to drive future evolution of a research prototype
The Open Source Component Artefact Repository (OSCAR) requires
evaluation to confirm its suitability as a development environment
for distributed software engineers. The evaluation will take note of
several factors including usability of OSCAR as a stand-alone system,
scalability and maintainability of the system and novel features not
provided by existing artefact management systems. Additionally, the
evaluation design attempts to address some of the omissions (due to
time constraints) from the industrial partner evaluations.
This evaluation is intended to be a prelude to the evaluation of the
awareness support being added to OSCAR; thus establishing a baseline
to which the effects of awareness support may be compared.
Using Automatic Static Analysis to Identify Technical Debt
The technical debt (TD) metaphor describes a tradeoff between short-term and long-term goals in software development. Developers, in such situations, accept compromises in one dimension (e.g. maintainability) to meet an urgent demand in another dimension (e.g. delivering a release on time). Since TD incurs interest in the form of time spent correcting the code and meeting quality goals, the accumulation of TD in software systems is dangerous because it can lead to more difficult and expensive maintenance. The research presented in this paper focuses on the use of automatic static analysis to identify technical debt at code level with respect to different quality dimensions. The methodological approach is that of Empirical Software Engineering; both past and current results are presented, focusing on functionality, efficiency, and maintainability.
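The abstract describes identifying TD from static-analysis findings, broken down by quality dimension. A hedged sketch of that bookkeeping follows: each rule violation is mapped to a dimension and a per-instance remediation-time estimate. The rule names, dimensions, and minute values are invented for illustration; the paper's actual rule sets are not reproduced here.

```python
# Hypothetical static-analyzer output: (rule id, number of occurrences).
violations = [
    ("unused-variable", 12),
    ("deep-nesting", 4),
    ("slow-loop-copy", 2),
]

# Assumed mapping: rule -> (quality dimension, minutes to fix one instance).
rule_catalogue = {
    "unused-variable": ("maintainability", 5),
    "deep-nesting":    ("maintainability", 30),
    "slow-loop-copy":  ("efficiency", 20),
}

def technical_debt(violations, catalogue):
    """Sum estimated remediation minutes per quality dimension."""
    debt = {}
    for rule, count in violations:
        dimension, minutes = catalogue[rule]
        debt[dimension] = debt.get(dimension, 0) + count * minutes
    return debt

print(technical_debt(violations, rule_catalogue))
```

The resulting per-dimension totals are the "principal" of the debt; tracking them release over release is what reveals whether debt, and hence its interest, is accumulating.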
Quality-aware model-driven service engineering
Service engineering and service-oriented architecture as an integration and platform technology is a recent approach to software systems integration. Quality aspects
ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence quality attributes of the implemented software systems. Besides the benefits of explicit architectures on maintainability and reuse, architectural constraints such as styles, reference architectures and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is a process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model-level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box
character of services.
Exploring Maintainability Assurance Research for Service- and Microservice-Based Systems: Directions and Differences
To ensure sustainable software maintenance and evolution, a diverse set of activities and concepts like metrics, change impact analysis, or antipattern detection can be used. Special maintainability assurance techniques have been proposed for service- and microservice-based systems, but it is difficult to get a comprehensive overview of this publication landscape. We therefore conducted a systematic literature review (SLR) to collect and categorize maintainability assurance approaches for service-oriented architecture (SOA) and microservices. Our search strategy led to the selection of 223 primary studies from 2007 to 2018 which we categorized with a threefold taxonomy: a) architectural (SOA, microservices, both), b) methodical (method or contribution of the study), and c) thematic (maintainability assurance subfield). We discuss the distribution among these categories and present different research directions as well as exemplary studies per thematic category. The primary finding of our SLR is that, while very few approaches have been suggested for microservices so far (24 of 223, ~11%), we identified several thematic categories where existing SOA techniques could be adapted for the maintainability assurance of microservices.
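The threefold categorization described above amounts to tagging each primary study along three axes and tallying the shares per category. A small sketch of that tally follows; the four study tuples are invented placeholders standing in for the 223 actual primary studies.

```python
from collections import Counter

# One (architectural, methodical, thematic) tag triple per primary study.
studies = [
    ("SOA", "metrics", "change impact analysis"),
    ("SOA", "antipattern detection", "smells"),
    ("microservices", "metrics", "smells"),
    ("both", "case study", "evolution"),
]

# Distribution along the architectural axis, as counts and percentages.
arch_counts = Counter(arch for arch, _, _ in studies)
arch_share = {a: round(100 * n / len(studies)) for a, n in arch_counts.items()}
print(arch_counts, arch_share)
```

With the real data, the same tally over 223 studies yields the 24-study (~11%) microservices share reported in the abstract.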