
    Software reliability optimization by redundancy and software quality management

    This study investigates the trade-offs among system reliability improvement, resource consumption, and other relevant constraints, together with the application of statistical control methods to monitor variation. A process for reliability-related quality programming is developed to fill existing gaps in software design and development so that a quality programming plan can be achieved. A software reliability-to-cost relation is derived both from a software reliability-related cost model and from software redundancy models with common-cause failures. The software reliability optimization problem is formulated as a mixed-integer programming problem and solved by a branch-and-bound technique. A procedure is also developed to identify, define, develop, and demonstrate a quality performance measure, based on statistical control methods, for improving system operation. Despite the most diligent efforts to control product quality, variation in product quality is unavoidable; through the use of process control techniques such as statistical control charts, unusual variations in the software development process can be controlled and reduced.
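    As a rough illustration of the kind of redundancy-allocation problem described above, the sketch below maximizes the reliability of a series system with parallel redundancy at each stage under a cost budget. All reliabilities, unit costs, and the budget are invented, and plain enumeration of the integer allocations stands in for the branch-and-bound search used in the study.

```python
# Minimal sketch (not the thesis model): choose redundancy levels n_i to
# maximize series-system reliability prod_i (1 - (1 - r_i)^n_i) subject to
# a linear cost budget. Exhaustive enumeration replaces branch-and-bound.
from itertools import product

r = [0.80, 0.85, 0.90]   # per-component reliabilities (assumed values)
c = [2.0, 3.0, 1.5]      # cost of one redundant copy per stage (assumed)
budget = 15.0
max_copies = 4           # upper bound on copies per stage

def system_reliability(n):
    rel = 1.0
    for ri, ni in zip(r, n):
        rel *= 1.0 - (1.0 - ri) ** ni
    return rel

best, best_n = 0.0, None
for n in product(range(1, max_copies + 1), repeat=len(r)):
    cost = sum(ci * ni for ci, ni in zip(c, n))
    if cost <= budget and system_reliability(n) > best:
        best, best_n = system_reliability(n), n

print(f"best allocation {best_n} with reliability {best:.4f}")
```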

    Integration of software reliability into systems reliability optimization

    Reliability optimization, originally developed for hardware systems, is extended to incorporate software into an integrated systems reliability optimization. This hardware-software reliability optimization problem is formulated as a mixed-integer programming problem in which the integer variables are the numbers of redundancies and the real variables are the component reliabilities. To find a common framework under which hardware and software systems can be combined, a review and classification of existing software reliability models is conducted. A software redundancy model with common-cause failures is developed to represent the objective function; this model includes hardware redundancy with independent failures as a special case. A software reliability-cost function is then derived from a binomial-type software reliability model to represent the constraint function. Two techniques are proposed to solve this mixed-integer reliability optimization problem: the combination of a heuristic redundancy method with a sequential search method, and the Lagrange multiplier method with the branch-and-bound method. The relative merits of four major heuristic redundancy methods and two sequential search methods are investigated through a simulation study; the results indicate that the sequential search method is the dominating factor in the combination method. The two proposed mixed-integer programming techniques are also compared by solving two numerical problems, a series system with linear constraints and a bridge system with nonlinear constraints. The Lagrange multiplier method with the branch-and-bound method is shown to be superior to all other existing methods in obtaining the optimal solution. Finally, an illustration is given of integrating a software reliability model into systems reliability optimization.
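    As a companion sketch, one simple heuristic redundancy rule can be illustrated as follows: repeatedly add a redundant copy to the stage that yields the largest system-reliability gain per unit cost until the budget is exhausted. The component data are hypothetical, and this greedy rule is only a stand-in for the combination and Lagrange-multiplier/branch-and-bound methods compared in the thesis.

```python
# Minimal sketch of a greedy heuristic redundancy method for a series system
# with parallel redundancy per stage. Data are hypothetical.
r = [0.90, 0.75, 0.85]     # component reliabilities (assumed)
c = [4.0, 2.0, 3.0]        # cost per redundant copy (assumed)
budget = 20.0

n = [1, 1, 1]              # start with one copy of each component
spent = sum(c)

def system_rel(alloc):
    rel = 1.0
    for ri, ni in zip(r, alloc):
        rel *= 1.0 - (1.0 - ri) ** ni
    return rel

while True:
    cur = system_rel(n)
    best_gain, best_i = 0.0, None
    for i, ci in enumerate(c):
        if spent + ci > budget:
            continue                      # copy not affordable
        trial = n.copy()
        trial[i] += 1
        gain = (system_rel(trial) - cur) / ci
        if gain > best_gain:
            best_gain, best_i = gain, i
    if best_i is None:                    # no affordable improvement left
        break
    n[best_i] += 1
    spent += c[best_i]

print(n, round(system_rel(n), 4), round(spent, 2))
```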

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious, and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practiced for almost four decades. Model-based testing (MBT) is a relatively new approach in which software models, rather than other artifacts such as source code, are the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of an AD-based test suite. The Unified Modeling Language (UML) is a de facto standard for software system analysis and design, and its models can be categorized into structural and behavioral models. AD is a behavioral UML model and, since the major revision in UML version 2.x, it has a new Petri-net-like semantics; it has a wide application scope, including embedded, workflow, and web-service systems, which is why this thesis concentrates on AD models. The informal semantics of UML in general, and of AD in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. In the thesis, a three-step transformation methodology is proposed for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in the testing process. The flow-oriented semantics of AD suit the modeling of both sequential and concurrent systems, and the thesis presents a novel technique to generate test cases from an AD using a stochastic algorithm. To determine whether the generated test suite is adequate, two test suite adequacy analysis techniques, based on structural coverage and on mutation, are proposed. In terms of structural coverage, two separate coverage criteria are proposed to evaluate the adequacy of the test suite from both the sequential and the concurrent perspective. Mutation analysis is a fault-based technique that determines whether the test suite is adequate for detecting particular types of faults; four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is to improve test suite efficiency without compromising effectiveness. One way of achieving this is to identify and remove redundant test cases, and test suite minimization by removing redundant test cases is shown to be a combinatorial optimization problem. An evolutionary computation based test suite minimization technique is developed to address this problem, and its performance is empirically compared with other well-known heuristic algorithms. In addition, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is further extended to multi-objective minimization.
    As redundancy is contextual, different criteria and their combinations can significantly change the solution test suite. Therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case-study models. Empirical results show that the techniques developed within the framework are effective for model-based test suite generation and optimization.
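    To make the combinatorial nature of test suite minimization concrete, the sketch below casts single-objective minimization as a set-cover problem and solves it with a greedy baseline. The coverage data are invented, and the thesis itself uses evolutionary and multi-objective algorithms rather than this heuristic.

```python
# Minimal sketch: test suite minimization as set cover, solved greedily.
# coverage[t] = model elements (e.g. AD edges) exercised by test case t;
# the values are hypothetical.
coverage = {
    "t1": {"e1", "e2", "e3"},
    "t2": {"e2", "e4"},
    "t3": {"e3", "e4", "e5"},
    "t4": {"e1", "e5"},
}

required = set().union(*coverage.values())   # everything some test covers
selected, covered = [], set()

while covered != required:
    # pick the test case that adds the most not-yet-covered elements
    best = max(coverage, key=lambda t: len(coverage[t] - covered))
    selected.append(best)
    covered |= coverage[best]

print("minimized suite:", selected)
```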

    ℓ¹-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?

    This paper investigates the problem of signal estimation from undersampled noisy sub-Gaussian measurements under the assumption of a cosparse model. Based on generalized notions of sparsity, we derive novel recovery guarantees for ℓ¹-analysis basis pursuit, enabling highly accurate predictions of its sample complexity. The corresponding bounds on the number of required measurements explicitly depend on the Gram matrix of the analysis operator and therefore particularly account for its mutual coherence structure. Our findings defy conventional wisdom, which promotes the sparsity of analysis coefficients as the crucial quantity to study. In fact, this common paradigm breaks down completely in many situations of practical interest, for instance, when applying a redundant (multilevel) frame as analysis prior. By extensive numerical experiments, we demonstrate that, in contrast, our theoretical sampling-rate bounds reliably capture the recovery capability of various examples, such as redundant Haar wavelet systems, total variation, or random frames. The proofs of our main results build upon recent achievements in the convex geometry of data mining problems. More precisely, we establish a sophisticated upper bound on the conic Gaussian mean width that is associated with the underlying ℓ¹-analysis polytope. Due to a novel localization argument, it turns out that the presented framework naturally extends to stable recovery, allowing us to incorporate compressible coefficient sequences as well.
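    For reference, ℓ¹-analysis basis pursuit has the standard form below; the notation (measurement matrix A, analysis operator Ψ, noise level η) is chosen for this sketch rather than copied from the paper.

```latex
% Standard \ell^1-analysis basis pursuit: recover x_0 \in \mathbb{R}^d from
% noisy measurements y = A x_0 + e with \|e\|_2 \le \eta, where
% \Psi \in \mathbb{R}^{p \times d} is the analysis operator.
\begin{equation*}
  \hat{x} \in \operatorname*{argmin}_{x \in \mathbb{R}^d} \; \|\Psi x\|_1
  \quad \text{subject to} \quad \|A x - y\|_2 \le \eta .
\end{equation*}
```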

    Software reliability through fault-avoidance and fault-tolerance

    The use of back-to-back, or comparison, testing for regression testing or porting is examined. The efficiency and the cost of the strategy are compared with those of manual and table-driven single-version testing. Some of the key parameters that influence the efficiency and the cost of the approach are the failure identification effort during single-version program testing, the extent of implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.
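    A minimal sketch of the back-to-back idea described above: run the previous and the changed version on the same randomly generated inputs and flag any disagreement. The two implementations below are placeholders, not the systems studied in the report.

```python
# Minimal back-to-back (comparison) regression testing sketch.
import random

def old_version(x):
    return sorted(x)                       # stands in for the existing version

def new_version(x):
    return sorted(x, reverse=False)        # stands in for the ported/changed version

random.seed(0)
failures = []
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    if old_version(data) != new_version(data):
        failures.append(data)              # disagreement = candidate regression

print(f"{len(failures)} discrepancies found in 1000 random trials")
```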

    Software redundancy: what, where, how

    Software systems have become pervasive in everyday life and are the core component of many crucial activities. An inadequate level of reliability may determine the commercial failure of a software product. Still, despite the commitment and the rigorous verification processes employed by developers, software is deployed with faults. To increase the reliability of software systems, researchers have investigated the use of various forms of redundancy. Informally, a software system is redundant when it performs the same functionality through the execution of different elements. Redundancy has been extensively exploited in many software engineering techniques, for example in fault-tolerance and reliability engineering, and in self-adaptive and self-healing programs. Despite its many uses, though, there is no formalization or study of software redundancy to support a proper and effective design of software. Our intuition is that a systematic and formal investigation of software redundancy will lead to more, and more effective, uses of redundancy. This thesis develops this intuition and proposes a set of ways to characterize redundancy both qualitatively and quantitatively. We first formalize the intuitive notion of redundancy whereby two code fragments are considered redundant when they perform the same functionality through different executions. On the basis of this abstract and general notion, we then develop a practical method to obtain a measure of software redundancy. We prove the effectiveness of our measure by showing that it distinguishes between shallow differences, where apparently different code fragments reduce to the same underlying code, and deep code differences, where the algorithmic nature of the computations differs. We also demonstrate that our measure is useful for developers, since it is a good predictor of the effectiveness of techniques that exploit redundancy. Besides formalizing the notion of redundancy, we investigate the pervasiveness of redundancy intrinsically found in modern software systems. Intrinsic redundancy is a form of redundancy that occurs as a by-product of modern design and development practices. We have observed that intrinsic redundancy is indeed present in software systems and that it can be successfully exploited for good purposes. This thesis proposes a technique to automatically identify equivalent method sequences in software systems to help developers assess the presence of intrinsic redundancy. We demonstrate the effectiveness of the technique by showing that it identifies the majority of equivalent method sequences in a system with good precision and performance.
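    The notion of observational equivalence behind equivalent method sequences can be sketched by executing two candidate sequences on the same sampled inputs and comparing the results. The sequences below are toy examples, and the thesis technique involves considerably more than this simple check.

```python
# Minimal sketch: dynamic check that two method sequences produce the same
# observable result on randomly sampled inputs (toy example only).
import random

def sequence_a(xs, v):
    ys = list(xs)
    ys.append(v)            # candidate sequence: append at the end
    return ys

def sequence_b(xs, v):
    ys = list(xs)
    ys.insert(len(ys), v)   # candidate redundant sequence: insert at the end
    return ys

random.seed(1)
equivalent = True
for _ in range(500):
    state = [random.randint(0, 9) for _ in range(random.randint(0, 5))]
    value = random.randint(0, 9)
    if sequence_a(state, value) != sequence_b(state, value):
        equivalent = False
        break

print("observationally equivalent on sampled inputs:", equivalent)
```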

    Using machine learning techniques to evaluate multicore soft error reliability

    Virtual platform frameworks have been extended to allow earlier soft error analysis of more realistic multicore systems (i.e., real software stacks, state-of-the-art ISAs). The high observability and simulation performance of the underlying frameworks make it possible to generate and collect more error/failure-related data, considering complex software stack configurations, in a reasonable time. When dealing with sizeable failure-related data sets obtained from multiple fault campaigns, it is essential to filter out parameters (i.e., features) without a direct relationship to the system soft error analysis. In this regard, this paper proposes the use of supervised and unsupervised machine learning techniques, aiming to eliminate non-relevant information as well as to identify the correlation between fault injection results and application and platform characteristics. This approach provides engineers with appropriate means to investigate new and more efficient fault mitigation techniques. The approach is validated with an extensive data set gathered from more than 1.2 million fault injections, comprising several benchmarks, a Linux OS, and parallelization libraries (e.g., MPI, OpenMP), as well as through a realistic automotive case study.
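    A minimal sketch of the feature-filtering step, assuming synthetic data rather than the paper's fault-injection campaigns: rank candidate application/platform features by importance with respect to a failure label and discard those with negligible importance. A scikit-learn random forest is used here as one possible ranking tool, not as the paper's prescribed method.

```python
# Minimal sketch: rank features of fault-injection outcomes by importance.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
features = {
    "code_size":     rng.normal(size=n),
    "mem_footprint": rng.normal(size=n),
    "num_threads":   rng.integers(1, 9, size=n).astype(float),
    "irrelevant":    rng.normal(size=n),   # should receive low importance
}
X = np.column_stack(list(features.values()))
# synthetic failure label driven mostly by memory footprint and thread count
y = (features["mem_footprint"] + 0.5 * features["num_threads"]
     + 0.1 * rng.normal(size=n)) > 2.0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:>14}: {imp:.3f}")
```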