
    Legacy Software Restructuring: Analyzing a Concrete Case

    Software re-modularization is an old preoccupation of reverse engineering research. The advantages of a well-structured or modularized system are well known. Yet after so much time and effort, the field still seems unable to come up with solutions that make a clear difference in practice. Recently, some researchers have started to question whether some basic assumptions of the field were overrated. The main one is the practice of evaluating the high-cohesion/low-coupling dogma with metrics of unknown relevance. In this paper, we study a real restructuring case (on the Eclipse platform) to better understand whether (some) existing metrics would have helped the software engineers in the task. Results show that the cohesion and coupling metrics used in the experiment did not behave as expected and would probably not have helped the maintainers reach their goal. We also measured another possible restructuring goal: decreasing the number of cyclic dependencies between modules. Again, the results did not meet expectations.
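
    As an illustration of the kind of measurements discussed here, the sketch below uses a purely hypothetical module dependency graph (not taken from the Eclipse case) to count efferent coupling per module and to detect cyclic dependencies between modules:

```python
# Illustrative sketch only: hypothetical module names and dependencies.

# deps[m] = set of modules that module m depends on
deps = {
    "ui":   {"core", "util"},
    "core": {"util", "ui"},   # ui <-> core forms a cycle
    "util": set(),
}

def coupling(module):
    """Efferent coupling: how many other modules this module depends on."""
    return len(deps.get(module, set()))

def find_cycles():
    """Collect module cycles via depth-first search over the dependency graph."""
    cycles, stack, visited = [], [], set()

    def dfs(m):
        if m in stack:                                  # back edge: cycle found
            cycles.append(stack[stack.index(m):] + [m])
            return
        if m in visited:
            return
        visited.add(m)
        stack.append(m)
        for dep in deps.get(m, set()):
            dfs(dep)
        stack.pop()

    for m in deps:
        dfs(m)
    return cycles

for m in deps:
    print(m, "efferent coupling:", coupling(m))
print("cycles:", find_cycles())
```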

    Evaluating defect prediction approaches: a benchmark and an extensive comparison

    Reliably predicting software defects is one of the holy grails of software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches varying in terms of accuracy, complexity, and the input data they require. However, the absence of an established benchmark makes it hard, if not impossible, to compare approaches. We present a benchmark for defect prediction, in the form of a publicly available dataset consisting of several software systems, and provide an extensive comparison of well-known bug prediction approaches, together with novel approaches we devised. We evaluate the performance of the approaches using different performance indicators: classification of entities as defect-prone or not, and ranking of the entities, with and without taking into account the effort to review an entity. We performed three sets of experiments aimed at (1) comparing the approaches across different systems, (2) testing whether the differences in performance are statistically significant, and (3) investigating the stability of approaches across different learners. Our results indicate that, while some approaches perform better than others in a statistically significant manner, external validity in defect prediction is still an open problem, as generalizing results to different contexts/learners proved to be a partially unsuccessful endeavor.
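
    To make the evaluation setup concrete, here is a minimal sketch, with hypothetical entities, scores, sizes, and defect counts, of the two views mentioned above: classifying entities as defect-prone or not, and ranking them with and without accounting for review effort (size is used as a simple effort proxy):

```python
# Illustrative sketch only: entities, scores, sizes and defect counts are hypothetical.
entities = [
    # (name, predicted score, size in LOC, actual defects)
    ("A.java", 0.9, 1200, 5),
    ("B.java", 0.7,  200, 3),
    ("C.java", 0.4,  800, 0),
    ("D.java", 0.1,  100, 1),
]

# (1) Classification: an entity is predicted defect-prone if its score exceeds a threshold.
threshold = 0.5
tp = sum(1 for _, s, _, d in entities if s > threshold and d > 0)
fp = sum(1 for _, s, _, d in entities if s > threshold and d == 0)
fn = sum(1 for _, s, _, d in entities if s <= threshold and d > 0)
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")

# (2) Ranking: plain score vs. effort-aware score (predicted score per line to review).
plain_rank  = sorted(entities, key=lambda e: e[1], reverse=True)
effort_rank = sorted(entities, key=lambda e: e[1] / e[2], reverse=True)
print("plain ranking:       ", [e[0] for e in plain_rank])
print("effort-aware ranking:", [e[0] for e in effort_rank])
```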

    Model-based risk assessment

    In this research effort, we focus on model-based risk assessment. Risk assessment is essential in any plan intended to manage the software development or maintenance process. Subjective techniques are human-intensive and error-prone; risk assessment should instead be based on architectural attributes that we can quantitatively measure using architectural-level metrics. Software architectures are emerging as an important concept in the study and practice of software engineering, due to their emphasis on large-scale composition of software products and to their support for emerging software engineering paradigms such as product line engineering, component-based software engineering, and software evolution.

    In this dissertation, we generalize our earlier work on reliability-based risk assessment. We introduce error propagation probability into the assessment methodology to account for the dependencies among the system components. We also generalize the reliability-based risk assessment to account for inherent functional dependencies.

    Furthermore, we develop a generic framework for maintainability-based risk assessment which can accommodate different types of software maintenance. First, we introduce and define maintainability-based risk assessment for software architecture. Within our assessment framework, we investigate the maintainability-based risk of the components of the system and the effect of performing maintenance tasks on these components. We propose a methodology for estimating the maintainability-based risk when considering different types of maintenance. As a proof of concept, we apply the proposed methodology to several case studies. Moreover, we automate the estimation steps of the maintainability-based risk assessment methodology.
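
    A minimal sketch of the core idea, assuming a simplified formulation (not the dissertation's exact model) in which a component's local risk is its failure probability times severity, and error propagation probabilities along connectors add the risk induced in dependent components; all names and numbers are hypothetical:

```python
# Illustrative sketch only: components, probabilities and severities are hypothetical.
components = {
    # name: (failure probability, severity in [0, 1])
    "parser":  (0.10, 0.8),
    "planner": (0.05, 0.9),
    "ui":      (0.20, 0.3),
}

# propagation[a][b] = probability that an error in a propagates to b (hypothetical)
propagation = {
    "parser":  {"planner": 0.6},
    "planner": {"ui": 0.4},
    "ui":      {},
}

def local_risk(name):
    p, severity = components[name]
    return p * severity

def propagated_risk(name):
    """Local risk plus the risk induced in directly dependent components (one hop)."""
    p, _ = components[name]
    risk = local_risk(name)
    for target, prop in propagation[name].items():
        _, target_severity = components[target]
        risk += p * prop * target_severity
    return risk

for c in components:
    print(f"{c}: local risk={local_risk(c):.3f}, with propagation={propagated_risk(c):.3f}")
```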

    Architectural level risk assessment

    Many companies develop and maintain large-scale software systems for public and financial institutions. Should a failure occur in one of these systems, the impact would be enormous. It is therefore essential, in maintaining a system's quality, to identify any defects early in the development process in order to prevent the occurrence of failures. However, testing all modules of these systems to identify defects can be very expensive. There is therefore a need for methodologies and tools that support software engineers in identifying defective and complex software components early in the development process.

    Risk assessment is an essential process for ensuring high-quality software products. By performing risk assessment during the early software development phases, we can identify complex modules, which enables us to improve resource allocation decisions.

    To assess the risk of software systems early in the software's life cycle, we propose an architectural-level risk assessment methodology. It uses UML specifications of software systems, which are available early in the software life cycle. It combines the probability of software failures and the severity associated with these failures to estimate software risk factors of software architectural elements (components/connectors), scenarios, use cases, and systems. As a result, remedial actions to control and improve the quality of the software product can be taken.

    We build a risk assessment model which enables us to identify complex and non-complex software components, to estimate programming and service effort, and to estimate testing effort. This model also enables us to identify components with a high risk factor, which would require the development of effective fault-tolerance mechanisms.

    To estimate the probability of software failure, we introduce and develop a set of dynamic metrics which are used to measure the dynamics of software architectural elements from UML static models. To estimate the severity of software failures, we propose a UML-based severity methodology. We also propose a validation process for both the risk and severity methodologies. Finally, we propose prototype tool support for the automation of the risk assessment methodology.
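
    To illustrate how failure probability and severity can be combined across architectural levels, the following sketch aggregates hypothetical component risk factors into scenario and system risk; the weights and probabilities are invented for illustration and do not reproduce the dissertation's model:

```python
# Illustrative sketch only: all components, scenarios and numbers are hypothetical.
component_risk = {
    # name: failure probability x severity, already combined
    "Controller": 0.12,
    "Sensor":     0.05,
    "Actuator":   0.20,
}

scenarios = {
    # scenario: ({component: interaction weight, e.g. from sequence diagrams}, usage probability)
    "normal_operation": ({"Controller": 0.5, "Sensor": 0.4, "Actuator": 0.1}, 0.8),
    "emergency_stop":   ({"Controller": 0.3, "Actuator": 0.7},                0.2),
}

def scenario_risk(weights):
    """Scenario risk as the interaction-weighted sum of component risk factors."""
    return sum(w * component_risk[c] for c, w in weights.items())

system_risk = 0.0
for name, (weights, usage) in scenarios.items():
    r = scenario_risk(weights)
    system_risk += usage * r
    print(f"{name}: risk={r:.3f}")
print(f"system risk={system_risk:.3f}")
```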

    Analysing the Contribution of Coupling Metrics for the Development and Management of Process Architectures

    Currently, the development and modeling of enterprise architectures is an intensively discussed topic in both science and practice. Process architectures represent a core element in recent enterprise architecture frameworks. With process models being a central means for communicating and documenting process architectures, both their quality and their understandability are decisive. However, the concept of process model quality is still not fully understood. Recent developments have highlighted the role of coupling in models. Coupling is expected to represent an important dimension of quality for conceptual models. Still, this perspective is poorly understood and its definition remains vague. Therefore, this work collects diverse interpretations of coupling in the field of process modelling and integrates them into a common and precise definition. Once introduced and formally specified, the metrics serve as a basis for a discussion of coupling and of what future development with respect to coupling could look like. The main findings are that current metrics either evaluate the documentation of the process architecture with regard to its understandability or contribute to individual applications of process architectures. These findings support practitioners in selecting metrics for a particular task and help researchers identify gaps for further development.
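
    As a simplified illustration of what a coupling metric over a process architecture can look like (the metric and the example architecture below are hypothetical and are not among the metrics surveyed in the paper), one can count, for each process, the share of other processes it is directly connected to via message flows or shared data objects:

```python
# Illustrative sketch only: a hypothetical process architecture and a simple coupling degree.
from collections import defaultdict

# (source process, target process, kind of connection)
connections = [
    ("Order Handling", "Invoicing",      "message flow"),
    ("Order Handling", "Shipping",       "message flow"),
    ("Invoicing",      "Order Handling", "shared data object: customer record"),
]

neighbours = defaultdict(set)
for src, dst, _ in connections:
    neighbours[src].add(dst)
    neighbours[dst].add(src)

processes = sorted(neighbours)
for p in processes:
    # coupling degree: fraction of the other processes this process is directly connected to
    degree = len(neighbours[p]) / (len(processes) - 1)
    print(f"{p}: connected to {len(neighbours[p])} of {len(processes) - 1} processes ({degree:.2f})")
```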

    Empirical Assessment of the Impact of Using Automatic Static Analysis on Code Quality

    Automatic static analysis (ASA) tools analyze source or compiled code looking for violations of recommended programming practices (called issues) that might cause faults or degrade some dimensions of software quality. Antonio Vetrò focused his PhD on studying how applying ASA impacts software quality, taking as a reference point the different quality dimensions specified by the standard ISO/IEC 25010. The epistemological approach he used is that of empirical software engineering. During his three-year PhD, he conducted experiments and case studies in three main areas: Functionality/Reliability, Performance, and Maintainability. He empirically showed that specific ASA issues had an impact on these quality characteristics in the contexts under study: removing them from the code resulted in a quality improvement. Vetrò has also investigated and proposed new research directions for this field: using ASA to improve software energy efficiency and to detect problems deriving from the interaction of multiple languages. The contribution is enriched with a final recommendation of a generalized process for researchers and practitioners with a twofold goal: improve software quality through ASA, and create a body of knowledge on the impact of using ASA on specific software quality dimensions, based on empirical evidence. This thesis represents a first step towards this goal.
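
    A minimal sketch of the kind of empirical analysis described, assuming hypothetical per-file ASA issue densities and fault counts and a plain Spearman rank correlation; this is illustrative only and does not reproduce the actual study designs:

```python
# Illustrative sketch only: hypothetical files, issue densities and fault counts.
files = [
    # (name, ASA issues per KLOC, post-release faults)
    ("a.c", 12.0, 4),
    ("b.c",  3.5, 0),
    ("c.c",  8.0, 2),
    ("d.c",  1.0, 1),
]

def ranks(values):
    """Rank positions (1 = smallest); ties are ignored for this sketch."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation without tie correction."""
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

issues = [f[1] for f in files]
faults = [f[2] for f in files]
print(f"Spearman rho between issue density and faults: {spearman(issues, faults):.2f}")
```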