
    Do System Test Cases Grow Old?

    Companies increasingly use either manual or automated system testing to ensure the quality of their software products. As a system evolves and is extended with new features, the test suite typically grows as new test cases are added. To ensure software quality throughout this process, the test suite is continuously executed, often on a daily basis. It seems likely that newly added tests would be more likely to fail than older tests, but this has not been investigated in any detail on large-scale, industrial software systems. It is also not clear which methods should be used to conduct such an analysis. This paper proposes three main concepts that can be used to investigate aging effects in the use and failure behavior of system test cases: test case activation curves, test case hazard curves, and test case half-life. To evaluate these concepts and the type of analysis they enable, we apply them to an industrial software system containing more than one million lines of code. The data sets come from a total of 1,620 system test cases executed more than half a million times over a period of two and a half years. For the investigated system we find that system test cases stay active as they age but really do grow old; they go through an infant mortality phase with higher failure rates, which then decline over time. The test case half-life is between 5 and 12 months for the two studied data sets.
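
    The paper's concepts lend themselves to a simple computation. Below is a minimal sketch, not the authors' implementation: it estimates a test case hazard curve (the fraction of executions that fail at each test case age) from hypothetical execution records of the form (test_id, age_in_months, passed), and reads off a half-life as the age at which the hazard first drops to half its initial value.

        from collections import defaultdict

        def hazard_curve(executions):
            """executions: iterable of (test_id, age_months, passed) tuples."""
            runs, fails = defaultdict(int), defaultdict(int)
            for _, age, passed in executions:
                runs[age] += 1
                if not passed:
                    fails[age] += 1
            return {age: fails[age] / runs[age] for age in sorted(runs)}

        def half_life(hazard):
            """Age at which the hazard first drops to half its initial value."""
            ages = sorted(hazard)
            h0 = hazard[ages[0]]
            for age in ages:
                if hazard[age] <= h0 / 2:
                    return age
            return None  # hazard never halved within the observation window

        # Hypothetical records: failures cluster at low test case ages.
        executions = [("tc1", 0, False), ("tc1", 0, True), ("tc1", 6, True),
                      ("tc2", 0, False), ("tc2", 6, False), ("tc2", 12, True)]
        curve = hazard_curve(executions)
        print(curve, half_life(curve))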

    Case Study Analyses of Reliability of Software Application “ePasuria”

    The focus of this research study is the analysis of the reliability of a software application, aiming to determine ways of measuring reliability and, through the realized case study, the parameters of a reliable software application. Measurements of software reliability are important because they can be used to plan and control resources while implementing the software application, and they offer assurance regarding the correctness of the developed software. Throughout the study we analyze different problems that are encountered in maintaining a high level of software reliability, especially in systems that are more complex and whose implementation depends on sensitive data. Furthermore, we elaborate on approaches to detailed analysis of software reliability, on the assessment of software reliability, and on the measurement of failure levels in order to determine the reliability level of a software application.
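
    As one concrete instance of such a measurement, the sketch below assumes the common constant-failure-rate model, which is not necessarily the one used in the study: estimate the failure intensity from observed failures over execution time, then compute the probability of failure-free operation R(t) = exp(-lambda * t). The numbers are invented for illustration.

        import math

        def failure_intensity(num_failures, exec_hours):
            """Estimated failures per hour of execution."""
            return num_failures / exec_hours

        def reliability(lam, t_hours):
            """Probability of failure-free operation for t_hours."""
            return math.exp(-lam * t_hours)

        lam = failure_intensity(num_failures=4, exec_hours=2000)  # hypothetical data
        print(reliability(lam, t_hours=100))                      # ~0.82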

    Sensitivity analysis of reliability for structure-based software via simulation

    Computer simulation is an appealing approach for the reliability analysis of structure-based software systems, as it can accommodate the complexities present in realistic systems. When the system is complex, a screening experiment to quickly identify important factors (components) can significantly improve the efficiency of the analysis. The challenge is to guarantee the correctness of the screening results with stochastic simulation responses. Controlled Sequential Bifurcation (CSB) is a new method for factor screening with simulation experiments when only main-effects models are considered. By grouping factors, CSB can identify the importance of factors while reducing the simulation effort. With appropriate hypothesis testing procedures embedded, the CSB procedure can simultaneously control the Type I error probability and the power. Existing work has focused on normally distributed output responses. This thesis extends the existing CSB procedure by embedding Meeker's conditional sequential test to deal with binary responses and guarantee the desired error control for factor screening results. The effectiveness of the extended factor screening procedure is demonstrated through application to a software system.
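
    The group-screening idea behind sequential bifurcation can be sketched in a few lines. The version below is deterministic and main-effects-only; the thesis's actual contribution, the embedded hypothesis tests (including Meeker's conditional sequential test for binary responses), is omitted here, and the effect values are invented.

        def screen(effect_of_group, factors, threshold):
            """Return the factors whose main effect exceeds threshold.

            effect_of_group(group) -> combined main effect of the factors in
            group (in CSB this is estimated from simulation runs that toggle
            the whole group between its low and high levels).
            """
            important, stack = [], [list(factors)]
            while stack:
                group = stack.pop()
                if effect_of_group(group) <= threshold:
                    continue                    # whole group unimportant: discard
                if len(group) == 1:
                    important.append(group[0])  # isolated an important factor
                else:
                    mid = len(group) // 2
                    stack += [group[:mid], group[mid:]]  # bifurcate and recurse
            return important

        # Hypothetical true main effects; only f2 and f6 matter.
        effects = {"f1": 0.1, "f2": 3.0, "f3": 0.0, "f4": 0.2, "f5": 0.1, "f6": 2.0}
        print(screen(lambda g: sum(effects[f] for f in g), list(effects), threshold=0.5))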

    A Hierarchical Framework for Estimating Heterogeneous Architecture-based Software Reliability

    Problem. The composite model approach, which follows a DTMC process with a constant failure rate, is not analytically tractable when improving its method of solution for estimating software reliability. In this case, a hierarchical approach is preferred to improve the accuracy of the method of solution. Very few studies have been conducted on heterogeneous architecture-based software reliability, and those that have been done use the composite model for reliability estimation. To my knowledge, no research has taken a hierarchical approach to estimating heterogeneous architecture-based software reliability. This paper explores the use and effectiveness of a hierarchical framework to estimate heterogeneous architecture-based software reliability. -- Method. Concepts of reliability and reliability prediction models for heterogeneous software architectures were surveyed. The architectural styles considered were batch-sequential, parallel filter, fault tolerance, and call-and-return. A method for evaluating these four styles solely on the basis of transition probability was proposed. Four case studies were selected from similar prior research to test the effectiveness of the proposed hierarchical framework. The study assumes that the method of extracting the information about the software architecture was accurate and that the actual reliability figures of the systems used were free of errors. -- Results. The percentage difference between the reliability estimated by the proposed hierarchical framework and the actual reliability was 5.12%, 11.09%, 0.82%, and 52.14% for Cases 1, 2, 3, and 4, respectively. The proposed hierarchical framework did not work for Case 4, which showed much higher component utilization, and therefore higher interactions between components, than the other cases. -- Conclusions. The reliability estimates produced by the proposed hierarchical framework were generally close to the actual reliability of the software systems used in the case studies. However, the framework's estimate disagreed with the actual reliability for Case 4. This is due to the higher component interactions in Case 4 compared with the other cases, and it shows that there are limits to the extent to which the proposed hierarchical framework can be applied. The reasons for these limitations of the hierarchical approach have not been discussed in prior research on the subject. Even with these limitations, the hierarchical framework for estimating heterogeneous architecture-based software reliability can still be applied when high accuracy is not required and interactions among components in the software system are not too high. Thesis (M.S.) -- Andrews University, College of Arts and Sciences, 201
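
    For intuition, a Cheung-style hierarchical estimate can be computed in a few lines: solve the DTMC for the expected number of visits to each component, then combine per-visit component reliabilities multiplicatively. The transition matrix and reliabilities below are hypothetical, and this sketch is not the thesis's full framework, which also distinguishes architectural styles.

        import numpy as np

        P = np.array([[0.0, 0.7, 0.3],    # component 0 calls 1 or 2
                      [0.0, 0.0, 0.6],    # component 1 calls 2 or terminates
                      [0.0, 0.0, 0.0]])   # component 2 always terminates
        R = np.array([0.99, 0.98, 0.97])  # per-visit component reliabilities

        # Expected visits v solve v = e0 + P^T v (execution starts in component 0).
        e0 = np.array([1.0, 0.0, 0.0])
        v = np.linalg.solve(np.eye(3) - P.T, e0)

        system_reliability = np.prod(R ** v)
        print(v, system_reliability)  # v = [1.0, 0.7, 0.72], reliability ~ 0.955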

    ARCHITECTURE-BASED RELIABILITY ANALYSIS OF WEB SERVICES

    In a Service-Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. Current approaches often treat the entire WS environment as a black box, so the sensitivity of the overall reliability and performance to the behavior of the underlying WS architectures and AS components is not well understood. In other words, current research on the architecture-based analysis of WSs is limited. This dissertation presents a novel methodology for modeling the reliability and performance of web services. WSs are treated as atomic entities, but the AS is broken down into layers; specifically, the interactions of WSs with the underlying layers of an AS are investigated. One important feature of the research is investigating the impact of dynamic parameters that exist at the layers, such as configuration parameters, which may degrade WS performance if they are not configured properly. The WSs are developed in-house, and the AS considered is JBoss AS. An experimental environment is set up so that controlled service requests can be generated and important performance metrics can be recorded under various configurations of the AS. In parallel, a simulation model is developed from the source code and run-time behavior of the existing WS and AS implementations. The model mimics the logical behavior of the WSs based on their communication with the AS layers, and the simulation results are compared to the experimental results to ensure the correctness of the model. The architecture of the simulation model, which is based on Stochastic Petri Nets (SPN), is modularized in accordance with the layers and their interactions. As web services are often executed in a complex and distributed environment, the modularized approach enables a user or a designer to observe and investigate the performance of the entire system under various conditions. In contrast, most approaches to WS analysis are monolithic in that the entire system is treated as a closed box. The results show that (1) the simulation model can be a viable tool for measuring the performance and reliability of WSs under different loads and conditions, which may be of great interest to WS designers and the professionals involved; (2) configuration parameters have a large impact on the overall performance; (3) the simulation model can be tuned to account for various speeds in terms of communication, hardware, and software; (4) because the simulation model is modularized, it may be used as a foundation for aggregating or nullifying modules (layers), and it can be enhanced to include other aspects of the WS architecture, such as network characteristics and the hardware/operating system on which the AS and WSs execute; and (5) the simulation model is useful for predicting the performance of web services in cases that are difficult to replicate in a field study.
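
    The layered view can be illustrated with a toy series-composition model; the dissertation's actual model is a Stochastic Petri Net, and all layer names and numbers below are hypothetical. The sketch shows how one misconfigured parameter (a thread pool too small for the offered load) drags down end-to-end request reliability across the layers.

        def request_reliability(layers, pool_size, load):
            """Reliability of one WS request passing through every AS layer.

            Each layer rejects the fraction of requests exceeding the thread
            pool, mimicking a badly tuned configuration parameter.
            """
            rejected = max(0.0, (load - pool_size) / load)
            r = 1.0
            for name, base_reliability in layers:
                r *= base_reliability * (1.0 - rejected)
            return r

        layers = [("http_connector", 0.9999),
                  ("ejb_container", 0.999),
                  ("ws_endpoint", 0.998)]
        print(request_reliability(layers, pool_size=50, load=40))  # well provisioned
        print(request_reliability(layers, pool_size=50, load=80))  # pool too small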

    Identifying Common Patterns and Unusual Dependencies in Faults, Failures and Fixes for Large-scale Safety-critical Software

    As software evolves, becoming a more integral part of complex systems, modern society becomes more reliant on the proper functioning of such systems. However, the field of software quality assurance lacks detailed empirical studies from which best practices can be determined. The fundamental factors that contribute to software quality are faults, failures, and fixes, and although some studies have considered specific aspects of each, comprehensive studies have been quite rare. Thus, establishing the cause-effect relationship between the fault(s) that caused individual failures and the fixes made to prevent the failures from (re)occurring appears to be a unique characteristic of our work. In particular, we analyze fault types, verification activities, severity levels, investigation effort, artifacts fixed, components fixed, and the effort required to implement fixes for a large industrial case study. The analysis includes descriptive statistics, statistical inference through formal hypothesis testing, and data mining. Some of the most interesting empirical results include: (1) contrary to popular belief, later life-cycle faults dominate as causes of failures; furthermore, over 50% of high priority failures (e.g., post-release failures and safety-critical failures) were caused by coding faults; (2) 15% of failures led to fixes spread across multiple components, and the spread was largely affected by the software architecture; (3) the amount of effort spent fixing the faults associated with each failure was not uniformly distributed across failures; fixes with a greater spread across components and artifacts required more effort. Overall, the work indicates that fault prevention and elimination efforts focused on later life-cycle faults are essential, as coding faults were the dominant cause of safety-critical failures and post-release failures. Further, statistical correlation and/or traditional data mining techniques show potential for assessment and prediction of the locations of fixes and the associated effort. By providing quantitative results and including statistical hypothesis testing, which is not yet a standard practice in software engineering, our work enriches the empirical knowledge needed to improve the state of the art and practice in software quality assurance.
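
    As an example of the kind of formal hypothesis test the study applies, the sketch below runs a chi-square test of independence between fault type and failure priority on an invented contingency table; the counts do not come from the study.

        from scipy.stats import chi2_contingency

        #                  high priority  low priority
        table = [[60, 40],   # coding faults
                 [25, 75]]   # requirements/design faults

        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2={chi2:.2f}, p={p:.4f}")  # small p: fault type and priority are related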

    Empirical analysis of software reliability

    This thesis presents an empirical study of architecture-based software reliability based on large, real case studies. It demonstrates the value of using open source software to empirically study software reliability. The major goal is to empirically analyze the applicability, adequacy, and accuracy of architecture-based software reliability models. In both of our studies we found evidence that the number of failures due to faults in more than one component is not insignificant. Consequently, existing models that make such simplifying assumptions must be improved to account for this phenomenon. The contributions of this thesis include developing automatic methods for efficiently extracting the necessary data from the available repositories, and using this data to test how and when architecture-based software reliability models work. We study their limitations and ways to improve them. Our results show the importance of knowledge gained from the interaction between theoretical and empirical research.
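
    The multi-component finding can be made concrete with invented numbers. If a model assigns every failure to exactly one component, joint failures must be split or forced into one side, and the independence assumption then badly underpredicts how often joint failures occur. The sketch below illustrates this bias; it is not the thesis's data.

        failures = {"A": 45, "B": 40, ("A", "B"): 15}  # hypothetical failure counts
        executions = 10_000

        # Naive per-component failure probabilities, splitting joint failures evenly:
        naive = {c: (failures[c] + failures[("A", "B")] / 2) / executions
                 for c in ("A", "B")}

        # Independence then predicts this many joint failures -- far fewer than 15:
        predicted_joint = naive["A"] * naive["B"] * executions
        print(naive, predicted_joint)  # ~0.25 predicted vs 15 observed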

    Integrated Software Architecture-Based Reliability Prediction for IT Systems

    With the increasing importance of reliability in business and industrial IT systems, new techniques for architecture-based software reliability prediction are becoming an integral part of the development process. This dissertation introduces a novel reliability modelling and prediction technique that considers the software architecture with its component structure, control and data flow, recovery mechanisms, its deployment to distributed hardware resources, and the system's usage profile.