
    Model-based risk assessment

    In this research effort, we focus on model-based risk assessment. Risk assessment is essential in any plan intended to manage software development or maintenance processes. Subjective techniques are human-intensive and error-prone; risk assessment should instead be based on architectural attributes that we can quantitatively measure using architectural-level metrics. Software architectures are emerging as an important concept in the study and practice of software engineering, due to their emphasis on large-scale composition of software products and their support for emerging software engineering paradigms such as product line engineering, component-based software engineering, and software evolution.

    In this dissertation, we generalize our earlier work on reliability-based risk assessment. We introduce error propagation probability into the assessment methodology to account for dependencies among system components. We also generalize reliability-based risk assessment to account for inherent functional dependencies.

    Furthermore, we develop a generic framework for maintainability-based risk assessment which can accommodate different types of software maintenance. First, we introduce and define maintainability-based risk assessment for software architecture. Within our assessment framework, we investigate the maintainability-based risk of the system's components and the effect of performing maintenance tasks on these components. We propose a methodology for estimating the maintainability-based risk when considering different types of maintenance. As a proof of concept, we apply the proposed methodology to several case studies. Moreover, we automate the estimation step of the maintainability-based risk assessment methodology.
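    The core idea of combining component risk with error propagation can be sketched as follows. This is a minimal illustration only: the risk formula, the propagation model, and all numbers are hypothetical, not the dissertation's actual equations.

```python
# Illustrative sketch: component risk factors adjusted for error propagation.
# All formulas and values are hypothetical, not taken from the dissertation.

def component_risk(failure_prob, severity):
    """Heuristic risk factor: probability of failure times severity (both 0..1)."""
    return failure_prob * severity

def propagated_risk(risks, propagation):
    """Adjust each component's risk by errors propagated from other components.

    propagation[i][j] is the probability that an error in component i
    propagates to component j; the adjusted risk treats "local failure" and
    "propagated failure" as independent events.
    """
    n = len(risks)
    adjusted = list(risks)
    for j in range(n):
        for i in range(n):
            if i != j:
                # P(failure of j) combined with P(i fails AND error reaches j).
                adjusted[j] = 1 - (1 - adjusted[j]) * (1 - risks[i] * propagation[i][j])
    return adjusted

risks = [component_risk(0.2, 0.9), component_risk(0.05, 0.5)]
prop = [[0.0, 0.3],
        [0.1, 0.0]]
print(propagated_risk(risks, prop))
```

    Accounting for propagation can only raise a component's risk relative to its standalone estimate, which matches the intuition that inter-component dependencies make a system riskier than its parts viewed in isolation.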

    Architectural level risk assessment

    Many companies develop and maintain large-scale software systems for public and financial institutions. Should a failure occur in one of these systems, the impact would be enormous. It is therefore essential, in maintaining a system's quality, to identify defects early in the development process in order to prevent the occurrence of failures. However, testing all modules of these systems to identify defects can be very expensive. There is therefore a need for methodologies and tools that support software engineers in identifying defect-prone and complex software components early in the development process.

    Risk assessment is an essential process for ensuring high-quality software products. By performing risk assessment during the early software development phases we can identify complex modules, enabling us to improve resource allocation decisions.

    To assess the risk of software systems early in the software's life cycle, we propose an architectural-level risk assessment methodology. It uses UML specifications of software systems, which are available early in the software life cycle. It combines the probability of software failures and the severity associated with those failures to estimate the risk factors of software architectural elements (components/connectors), scenarios, use cases, and systems. As a result, remedial actions to control and improve the quality of the software product can be taken.

    We build a risk assessment model which enables us to identify complex and non-complex software components, estimate programming and service effort, and estimate testing effort. This model also enables us to identify components with high risk factors, which would require the development of effective fault-tolerance mechanisms. To estimate the probability of software failure, we introduce and develop a set of dynamic metrics which measure the dynamic behavior of software architectural elements from UML static models. To estimate the severity of software failures, we propose a UML-based severity methodology. We also propose a validation process for both the risk and severity methodologies. Finally, we propose prototype tool support for the automation of the risk assessment methodology.
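    The "probability times severity" combination at the heart of such a methodology can be sketched in a few lines. The severity classes, weights, and the use of normalized dynamic complexity as a failure-probability proxy below are illustrative assumptions, not the dissertation's actual metrics.

```python
# Hypothetical sketch of an architectural-level risk factor: normalized
# dynamic complexity (a proxy for failure probability) times a severity
# weight. Classes, weights, and data are illustrative assumptions.

SEVERITY_WEIGHT = {"catastrophic": 0.95, "critical": 0.75,
                   "marginal": 0.50, "minor": 0.25}

def element_risk(dynamic_complexity, max_complexity, severity):
    """Risk factor of one architectural element (component or connector)."""
    return (dynamic_complexity / max_complexity) * SEVERITY_WEIGHT[severity]

def scenario_risk(elements):
    """Aggregate a scenario's risk as the maximum element risk factor."""
    return max(element_risk(c, m, s) for c, m, s in elements)

# (dynamic complexity, max complexity in the scenario, severity class)
elements = [(12, 20, "critical"), (20, 20, "marginal"), (5, 20, "minor")]
print(round(scenario_risk(elements), 3))
```

    Taking the maximum (rather than, say, a weighted sum) reflects the conservative view that a scenario is only as safe as its riskiest element; either aggregation choice is defensible depending on the quality goal.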

    Recognising object-oriented software design quality : a practitioner-based questionnaire survey

    Design quality is vital if software is to be maintainable. What practices do developers actually use to achieve design quality in their day-to-day work, and which of these do they find most useful? To discover the extent to which practitioners concern themselves with object-oriented design quality and the approaches used when determining quality in practice, we conducted a questionnaire survey of 102 software practitioners, approximately half from the UK and the remainder from elsewhere around the world. Individual and peer experience are major contributors to design quality. Classic design guidelines, well-known lower-level practices, tools, and metrics can all also contribute positively to design quality. There is a potential relationship between testing practices and design quality. Inexperience, time pressures, novel problems, novel technology, and imprecise or changing requirements may have a negative impact on quality. Respondents with the most experience are more confident in their design decisions, place more value on reviews by team leads, and are more likely to rate design quality as very important. For practitioners, these results identify the techniques and tools that other practitioners find effective. For researchers, the results highlight a need for more work investigating the role of experience in the design process and the contribution experience makes to quality. There is also the potential for more in-depth studies of how practitioners are actually using design guidance, including Clean Code. Lastly, the potential relationship between testing practices and design quality merits further investigation.

    A Fault-Based Model of Fault Localization Techniques

    Every day, ordinary people depend on software working properly. We take it for granted; from banking software, to railroad switching software, to flight control software, to software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main technique used to ensure the quality of software is testing. Often it is the only quality assurance activity undertaken, making it that much more important. In a typical experiment studying fault localization techniques, a researcher will intentionally seed a fault (intentionally breaking the functionality of some source code) in the hope that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily; there is potential for bias in the selection of the faults. Previous researchers have established an ontology for understanding and expressing this bias, called fault size. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future.

    While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification, a worrisome situation because it can lead to incorrect conclusions about the significance of research results. This research introduces an algorithm, MeansTest, which helps automate some aspects of the selection of appropriate statistical techniques. The results of an evaluation of MeansTest suggest that it performs well relative to its peers. This research then surveys recent work in software testing, using MeansTest to evaluate the significance of researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
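    MeansTest's actual algorithm is not reproduced in this abstract, but the general idea of letting properties of the data drive the choice of statistical technique can be sketched. The skewness-based decision rule and its threshold below are illustrative assumptions only.

```python
# Sketch of automated statistical-test selection in the spirit of MeansTest:
# inspect the two samples and pick a parametric or rank-based comparison.
# The decision rule (sample skewness vs. a threshold) is an illustrative
# assumption, not MeansTest's actual algorithm.
import statistics

def sample_skewness(xs):
    """Adjusted Fisher-Pearson sample skewness."""
    n = len(xs)
    mean = statistics.fmean(xs)
    s = statistics.stdev(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - mean) / s) ** 3 for x in xs)

def choose_test(a, b, skew_threshold=1.0):
    """Pick a comparison technique for two samples of technique scores.

    Heavily skewed data suggests a rank-based test; otherwise a t-test on
    the means is a reasonable default.
    """
    if abs(sample_skewness(a)) > skew_threshold or abs(sample_skewness(b)) > skew_threshold:
        return "Mann-Whitney U"
    return "Welch's t-test"

# The outlier (30) skews the second sample, steering away from a t-test.
print(choose_test([5, 6, 5, 7, 6, 5], [1, 1, 1, 2, 1, 30]))
```

    Automating even this one decision removes a common source of unjustified choices: the researcher no longer hand-picks the test after seeing the results.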

    State of Refactoring Adoption: Towards Better Understanding Developer Perception of Refactoring

    Context: Refactoring is the art of improving the structural design of a software system without altering its external behavior. Today, refactoring has become a well-established and disciplined software engineering practice that has attracted a significant amount of research, most of it presuming that refactoring is primarily motivated by the need to improve system structure. However, recent studies have shown that developers may incorporate refactoring strategies into other development-related activities that go beyond improving the design, especially given the emerging challenges in contemporary software engineering. Unfortunately, these studies are limited to developer interviews and a reduced set of projects. Objective: We aim to explore how developers document their refactoring activities during the software life cycle. We call such activity Self-Affirmed Refactoring (SAR): an indication of developer-reported refactoring events in commit messages. We then propose an approach to identify whether a commit describes developer-related refactoring events and to classify them according to common refactoring quality-improvement categories. To complement this goal, we aim to reveal insights into how reviewers reach a decision about accepting or rejecting a submitted refactoring request, what makes such reviews challenging, and how to improve the efficiency of refactoring code review. Method: Our empirically driven study follows a mixture of qualitative and quantitative methods. We text-mine refactoring-related documentation, develop a refactoring taxonomy, and automatically classify a large set of commits containing refactoring activities. We also identify, among the various quality models presented in the literature, the ones most in line with developers' vision of quality optimization when they explicitly mention that they are refactoring to improve it, in order to obtain an enhanced understanding of the motivation behind refactoring. We then performed an industrial case study with professional developers at Xerox to study the motivations, documentation practices, challenges, verification, and implications of refactoring activities during code review. Result: We introduce a SAR taxonomy describing how developers document their refactoring strategies in commit messages and propose a SAR model to automate the detection of refactoring. Our survey with code reviewers reveals several difficulties related to understanding the refactoring intent and its implications for the functional and non-functional aspects of the software. Conclusion: Our SAR taxonomy and model can work in conjunction with refactoring detectors to report any early inconsistency between refactoring types and their documentation, and can serve as a solid background for various empirical investigations. In light of the findings of our industrial case study, we recommend, as part of our survey feedback, a procedure for properly documenting refactoring activities.
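    The first step of SAR detection, spotting developer-stated refactoring intent in a commit message, can be sketched with simple pattern matching. The patterns and category names below are illustrative placeholders; the actual taxonomy and trained classification model are far richer.

```python
# Minimal sketch of Self-Affirmed Refactoring (SAR) detection: scan a commit
# message for developer-stated refactoring intent and bucket it into
# categories. Patterns and category names are illustrative assumptions, not
# the actual SAR taxonomy or model.
import re

SAR_PATTERNS = {
    "structure": r"\b(refactor\w*|restructur\w*|reorganiz\w*)\b",
    "naming":    r"\brenam\w*\b",
    "cleanup":   r"\b(clean\s*up|simplif\w*|remov\w*\s+duplicat\w*)\b",
}

def classify_commit(message):
    """Return the SAR categories whose patterns match the commit message."""
    text = message.lower()
    return [cat for cat, pat in SAR_PATTERNS.items() if re.search(pat, text)]

print(classify_commit("Refactor parser and rename helper methods"))
# → ['structure', 'naming']
```

    A keyword matcher like this has obvious false-positive and false-negative modes ("refactor" used loosely, intent stated without any keyword), which is precisely why the study moves beyond keywords to a learned classification model.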

    Prioritizing Program Elements: A Pre-testing Effort To Improve Software Quality

    Test effort prioritization is a powerful technique that enables the tester to utilize limited test resources effectively by streamlining the test effort. The distribution of test effort is important to a test organization. We address prioritization-based testing strategies in order to do the best possible job with limited test resources; our proposed techniques benefit the tester when applied in the face of looming deadlines and limited resources. Some parts of a system are more critical and sensitive to bugs than others, and thus should be tested thoroughly. The rationale behind this thesis is to estimate the criticality of various parts within a system and prioritize the parts for testing according to their estimated criticality. We propose several prioritization techniques at different phases of the Software Development Life Cycle (SDLC). Different chapters of the thesis set test priority based on various factors of the system. The purpose is to identify and focus on the critical and strategic areas and detect the important defects as early as possible, before the product release. Focusing on the critical and strategic areas helps to improve the reliability of the system within the available resources. We present code-based and architecture-based techniques to prioritize the testing tasks. In these techniques, we analyze the criticality of a component within a system using a combination of its internal and external factors. We have conducted a set of experiments on case studies and observed that the proposed techniques are efficient and address the challenge of prioritization.

    We propose a novel idea of calculating the influence of a component, where influence refers to the contribution or usage of the component at every execution step. This influence value serves as a metric in test effort prioritization. We first calculate the influence through static analysis of the source code and then refine our work by calculating it through dynamic analysis. We have experimentally shown that decreasing the reliability of an element with a high influence value drastically increases the failure rate of the system, which is not true of an element with a low influence value. We estimate the criticality of a component within a system by considering both its internal and external factors, such as influence value, average execution time, structural complexity, severity, and business value. We prioritize the components for testing according to their estimated criticality. We have compared our approach with a related approach in which the components were prioritized on the basis of their structural complexity only. From the experimental results, we observed that our approach helps to reduce the failure rate in the operational environment; the consequences of the observed failures were also lower compared to the related approach. Priority should be established by order of importance or urgency. As the importance of a component may vary at different points of the testing phase, we propose a multi-cycle test effort prioritization approach, in which we assign different priorities to the same component at different test cycles.

    Test effort prioritization at the initial phase of the SDLC has a greater impact than that made at a later phase. As the analysis and design stage is critical compared to other stages, detecting and correcting errors at this stage is less costly than at later stages of the SDLC. Designing metrics at this stage helps the test manager in decision making for allocating resources. We propose a technique to estimate the criticality of a use case at the design level; the criticality is computed on the basis of complexity and business value. We evaluated the complexity of a use case analytically through a set of data collected at the design level. We experimentally observed that assigning test effort to various use cases according to their estimated criticality improves the reliability of a system under test. Test effort prioritization based on risk is a powerful technique for streamlining the test effort; the tester can exploit the relationship between risk and testing effort. We propose a technique to estimate the risk associated with various states at the component level and the risk associated with use case scenarios at the system level. The estimated risks are used for enhancing resource allocation decisions. An intermediate graph called the Inter-Component State-Dependence Graph (ISDG) is introduced for obtaining the complexity of a state of a component, which is used for risk estimation. We empirically evaluated the estimated risks and assigned test priority to the components/scenarios within a system according to them. We performed an experimental comparative analysis and observed that the testing team guided by our technique achieved higher test efficiency compared to a related approach.
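    Criticality estimation from a mix of internal and external factors can be sketched as a weighted score over normalized factor values, with components then tested in score order. The specific weights, factor scales, and component data below are hypothetical, not the thesis's actual model.

```python
# Illustrative sketch of criticality-based test prioritization: combine the
# factors named above (influence, average execution time, structural
# complexity, severity, business value) into a weighted score per component
# and rank components by it. Weights and data are hypothetical.

def normalize(values):
    """Scale a factor's values to [0, 1] by dividing by the maximum."""
    hi = max(values)
    return [v / hi for v in values] if hi else values

def prioritize(components, weights):
    """Rank components by the weighted sum of their normalized factor values."""
    names = [c[0] for c in components]
    factors = list(zip(*[c[1:] for c in components]))            # per-factor columns
    normed = list(zip(*[normalize(list(f)) for f in factors]))   # per-component rows
    scores = {n: sum(w * v for w, v in zip(weights, vals))
              for n, vals in zip(names, normed)}
    return sorted(scores, key=scores.get, reverse=True)

# (name, influence, avg exec time, structural complexity, severity, business value)
components = [("Auth",   0.9, 12, 30, 0.95, 8),
              ("Report", 0.4, 40, 55, 0.50, 5),
              ("Logger", 0.2,  5, 10, 0.25, 2)]
weights = [0.3, 0.15, 0.2, 0.25, 0.1]
print(prioritize(components, weights))
# → ['Auth', 'Report', 'Logger']
```

    Note how a complexity-only ranking would put Report first; folding in influence, severity, and business value is what moves the failure-critical Auth component to the top, mirroring the comparison made in the thesis against a complexity-only baseline.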

    An online corpus of UML Design Models : construction and empirical studies

    We address two problems in software engineering. The first problem is how to assess the severity of software defects. The second problem is that of studying software designs. Automated support for assessing the severity of software defects helps human developers to perform this task more efficiently and more accurately. We present MAPDESO, an approach for assessing the severity of software defects based on the IEEE Standard Classification for Software Anomalies. The novelty of the approach lies in its use of ontologies and ontology-based reasoning, which links defects to system-level quality properties. One of the main reasons that makes studying software designs challenging is their limited availability. We decided to collect software designs represented by UML models stored in image formats and use image processing techniques to convert them into models. We present the 'UML Repository', which contains UML diagrams (in image and XMI format) and design metrics. We conducted a series of empirical studies using the UML Repository. These studies are a drop in the ocean of the empirical studies that can be conducted using the repository, yet they show the variety of useful studies that can be based on this novel repository of UML designs.
    Erasmus Mundus program (JOSYLEEN); Algorithms and the Foundations of Software Technology.

    On Preserving the Behavior in Software Refactoring: A Systematic Mapping Study

    Context: Refactoring is the art of modifying the design of a system without altering its behavior. The idea is to reorganize variables, classes, and methods to facilitate their future adaptation and comprehension. As the concept of behavior preservation is fundamental to refactoring, several studies, using formal verification, language transformation, and dynamic analysis, have been proposed to monitor the execution of refactoring operations and their impact on program semantics. However, no existing study examines the available behavior preservation strategies for each refactoring operation. Objective: This paper identifies behavior preservation approaches in the research literature. Method: We conduct a systematic mapping study to capture all existing behavior preservation approaches, which we classify based on several criteria including their methodology, applicability, and degree of automation. Results: The results indicate that several behavior preservation approaches have been proposed in the literature. The approaches range from formalisms and techniques, to automatic refactoring safety tools, to manual analysis of the source code. Conclusion: Our taxonomy reveals that there exist some types of refactoring operations whose behavior preservation is under-researched. Our classification also indicates that several possible strategies can be combined to better detect violations of program semantics.