
    Improved techniques for software testing based on Markov chain usage models

    In statistical testing of software, all possible uses of the software, at some level of abstraction, are represented by a statistical model wherein each possible use of the software has an associated probability of occurrence [16]. Test cases are drawn from the population of possible uses according to the usage distribution and run against the software under test. Various statistics of interest, such as the estimated failure rate and mean time to failure of the software, are computed. The testing performed is evaluated relative to the population of uses to determine whether or not to stop testing. The model used to represent use of the software in this work is a finite state, time homogeneous, discrete parameter, irreducible Markov chain [10]. In a Markov chain usage model the states of use of the software are represented as states in the Markov chain [1, 18, 21, 22]. User actions are represented as state transitions in the Markov chain. The probability of a user performing a certain action, given that the software is in a particular state of use, is represented by the associated transition probability in the Markov chain. The usage model always contains two special states, the (Invoke) state and the (Terminate) state. The (Invoke) state represents the software prior to invocation and the (Terminate) state represents the software after execution has ceased. All test cases start from the (Invoke) state and end in the (Terminate) state. Given a Markov chain based usage model, it is possible to analytically compute a number of statistics useful for validation of the model, test planning, test monitoring, and evaluation of the software under test. Example statistics include the expected test case length and associated variance, the probability of a state or arc appearing in a test case, and the long run probability of the software being in a certain state of use [22]. Test cases are randomly generated from the usage model, i.e., randomly sampled based on the use distribution. These test cases are run against the software and estimates of reliability are computed.
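
    As a concrete illustration, the following sketch builds a tiny usage model with hypothetical states and transition probabilities (not taken from the abstract above), samples one random test case from it, and computes the expected test case length from the fundamental matrix of the chain.

```python
# Minimal Markov chain usage model sketch; states and probabilities are
# illustrative assumptions, not the paper's example.
import numpy as np

states = ["Invoke", "Browse", "Purchase", "Terminate"]
P = np.array([
    [0.0, 1.0, 0.0, 0.0],   # Invoke   -> Browse
    [0.0, 0.4, 0.4, 0.2],   # Browse   -> Browse / Purchase / Terminate
    [0.0, 0.3, 0.0, 0.7],   # Purchase -> Browse / Terminate
    [0.0, 0.0, 0.0, 1.0],   # Terminate is absorbing
])

def generate_test_case(rng):
    """Randomly sample one use (test case) from the usage distribution."""
    s, path = 0, ["Invoke"]
    while states[s] != "Terminate":
        s = rng.choice(len(states), p=P[s])
        path.append(states[s])
    return path

# Expected test case length from the fundamental matrix N = (I - Q)^-1,
# where Q is the transient-to-transient block of P; row 0 of N summed gives
# the expected number of transient-state visits starting from Invoke.
Q = P[:3, :3]
N = np.linalg.inv(np.eye(3) - Q)
expected_length = N.sum(axis=1)[0]

rng = np.random.default_rng(0)
print(generate_test_case(rng))
print("expected test case length:", expected_length)
```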

    On A Simpler and Faster Derivation of Single Use Reliability Mean and Variance for Model-Based Statistical Testing

    Markov chain usage-based statistical testing has proved sound and effective in providing audit trails of evidence in certifying software-intensive systems. The system end-to-end reliability is derived analytically in closed form, following an arc-based Bayesian model. System reliability is represented by an important statistic called single use reliability, defined as the probability of a randomly selected use being successful. This paper continues our earlier work on a simpler and faster derivation of the single use reliability mean, and proposes a new derivation of the single use reliability variance by applying a well-known theorem and eliminating the need to compute the second moments of arc failure probabilities. Our new results complete a new analysis that can be shown to be simpler, faster, and more direct while also rendering a more intuitive explanation. Our new theory is illustrated with three simple Markov chain usage models with manual derivations and experimental results.
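
    For intuition only, here is a minimal sketch of the quantity being analysed, assuming a small hypothetical usage chain and a flat 0.99 success probability on every arc; it solves the standard linear equations for the probability that every transition of a random use succeeds, and is not the paper's arc-based Bayesian derivation.

```python
# Single use reliability of a toy usage chain: 0=Invoke, 1=Browse,
# 2=Purchase, 3=Terminate. All numbers are illustrative assumptions.
import numpy as np

P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.4, 0.4, 0.2],
    [0.0, 0.3, 0.0, 0.7],
    [0.0, 0.0, 0.0, 1.0],
])
r = np.full((4, 4), 0.99)   # assumed probability that each arc executes correctly

# R[s] = sum_j P[s, j] * r[s, j] * R[j], with R[Terminate] = 1.
# Split into the transient block A and the absorption vector b, then solve.
A = (P * r)[:3, :3]
b = (P * r)[:3, 3]
R = np.linalg.solve(np.eye(3) - A, b)
print("single use reliability from Invoke:", R[0])
```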

    Robust Dynamic Selection of Tested Modules in Software Testing for Maximizing Delivered Reliability

    Software testing aims to improve the reliability delivered to the users. Delivered reliability is the reliability of using the software after it is delivered to the users. Usually the software consists of many modules, so the delivered reliability depends on the operational profile, which specifies how the users will use these modules, as well as on the number of defects remaining in each module. Therefore, a good testing policy should take the operational profile into account and dynamically select tested modules according to the current state of the software during the testing process. This paper discusses how to dynamically select tested modules in order to maximize delivered reliability by formulating the selection problem as a dynamic programming problem. As the testing process is performed only once, risk must be considered during the testing process; it is described by the tester's utility function in this paper. Moreover, since the tester usually has no accurate estimate of the operational profile, we employ robust optimization techniques and analyze the selection problem in the worst case, given an uncertainty set of operational profiles. Through numerical examples, we show the necessity of maximizing delivered reliability directly and of using robust optimization techniques when the tester has no clear idea of the operational profile. Moreover, it is shown that the risk-averse behavior of the tester has a major influence on the delivered reliability. Comment: 19 pages, 4 figures
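
    The following toy sketch, with made-up numbers rather than the paper's model, illustrates the flavour of the formulation: a finite-horizon dynamic program that picks the module to test next so as to maximize the worst-case delivered reliability over a small uncertainty set of operational profiles.

```python
# Robust dynamic selection of tested modules (toy version, all parameters
# are illustrative assumptions).
from functools import lru_cache

q = (0.7, 0.5)               # prob. that one test period removes a defect from module i
theta = 0.05                 # assumed per-defect failure probability in operation
profiles = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]   # uncertainty set of operational profiles

def delivered_reliability(defects, profile):
    # Probability that a randomly selected use succeeds, given remaining defects.
    return sum(p * (1 - theta) ** d for p, d in zip(profile, defects))

@lru_cache(maxsize=None)
def V(t, defects):
    """Best worst-case delivered reliability achievable with t test periods left."""
    if t == 0:
        return min(delivered_reliability(defects, p) for p in profiles)
    best = 0.0
    for i in range(len(defects)):
        fixed = tuple(d - 1 if j == i and d > 0 else d for j, d in enumerate(defects))
        value = q[i] * V(t - 1, fixed) + (1 - q[i]) * V(t - 1, defects)
        best = max(best, value)
    return best

print(V(5, (3, 2)))   # robust value with 5 test periods and (3, 2) remaining defects
```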

    A compositional method for reliability analysis of workflows affected by multiple failure modes

    We focus on reliability analysis for systems designed as workflow-based compositions of components. Components are characterized by their failure profiles, which take into account possible multiple failure modes. A compositional calculus is provided to evaluate the failure profile of a composite system, given the failure profiles of the components. The calculus is described as a syntax-driven procedure that synthesizes a workflow's failure profile. The method is viewed as a design-time aid that can help software engineers reason about system reliability in the early stages of development. A simple case study is presented to illustrate the proposed approach.
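
    A minimal sketch of the idea, using simplified composition rules of my own (sequence and probabilistic choice) rather than the paper's calculus: a failure profile maps each failure mode to a probability, the remainder is success, and profiles are combined following the workflow structure.

```python
# Compositional failure-profile sketch; component profiles and the two
# composition operators are illustrative assumptions.

def success(profile):
    return 1.0 - sum(profile.values())

def seq(a, b):
    """Sequence A;B: B runs only if A succeeds; A's failures mask B's."""
    return {mode: a.get(mode, 0.0) + success(a) * b.get(mode, 0.0)
            for mode in set(a) | set(b)}

def choice(p, a, b):
    """Branch taken with probability p goes to A, otherwise to B."""
    return {mode: p * a.get(mode, 0.0) + (1 - p) * b.get(mode, 0.0)
            for mode in set(a) | set(b)}

# Hypothetical component profiles with two failure modes.
parse  = {"crash": 0.01,  "wrong_result": 0.02}
store  = {"crash": 0.005}
backup = {"crash": 0.02,  "wrong_result": 0.01}

workflow = seq(parse, choice(0.9, store, backup))
print(workflow, "success probability:", success(workflow))
```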

    Evaluation of A Resilience Embedded System Using Probabilistic Model-Checking

    If a Micro Processor Unit (MPU) receives an external electric signal as noise, the system function can easily freeze or malfunction. A new resilience strategy is implemented in order to reset the MPU automatically and stop the MPU from freezing or malfunctioning. The technique is useful for embedded systems which work in non-human environments. However, evaluating resilience strategies is difficult because their effectiveness depends on numerous, complex, interacting factors. In this paper, we use probabilistic model checking to evaluate embedded systems equipped with the above-mentioned resilience strategy. Qualitative evaluations are carried out with 6 PCTL formulas, and quantitative evaluations use two measures: system failure reduction and ADT (Average Down Time), the industry standard. Our work demonstrates the benefits brought by the resilience strategy. Experimental results indicate that our evaluation is cost-effective and reliable. Comment: In Proceedings ESSS 2014, arXiv:1405.055
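
    As a rough illustration of the kind of comparison involved, the sketch below uses plain steady-state Markov analysis with hypothetical transition probabilities (not a probabilistic model checker, and not the paper's model) to contrast the fraction of time the MPU is down with and without an automatic-reset strategy.

```python
# Toy downtime comparison for an MPU model; all transition probabilities are
# illustrative assumptions.
import numpy as np

def stationary(P):
    """Stationary distribution of a discrete-time Markov chain."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# States: 0 = OK, 1 = frozen/malfunctioning (down), 2 = resetting (down).
noise = 0.01                      # assumed per-step probability of a noise-induced freeze

without_reset = np.array([        # a freeze persists until a slow manual restart
    [1 - noise, noise, 0.00],
    [0.0,       0.98,  0.02],
    [1.0,       0.0,   0.00],
])
with_reset = np.array([           # the resilience strategy resets the MPU quickly
    [1 - noise, noise, 0.0],
    [0.0,       0.0,   1.0],
    [0.9,       0.0,   0.1],
])

for name, P in [("without reset", without_reset), ("with reset", with_reset)]:
    pi = stationary(P)
    print(name, "fraction of time down:", pi[1] + pi[2])
```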

    Taming Uncertainty in the Assurance Process of Self-Adaptive Systems: a Goal-Oriented Approach

    Goals are first-class entities in a self-adaptive system (SAS) as they guide the self-adaptation. A SAS often operates in dynamic and partially unknown environments, which cause uncertainty that the SAS has to address to achieve its goals. Moreover, besides the environment, other classes of uncertainty have been identified. However, these various classes and their sources are not systematically addressed by current approaches throughout the life cycle of the SAS. In general, uncertainty typically makes the assurance provision of SAS goals exclusively at design time not viable. This calls for an assurance process that spans the whole life cycle of the SAS. In this work, we propose a goal-oriented assurance process that supports taming different sources (within different classes) of uncertainty, from defining the goals at design time to performing self-adaptation at runtime. Based on a goal model augmented with uncertainty annotations, we automatically generate parametric symbolic formulae with parameterized uncertainties at design time using symbolic model checking. These formulae and the goal model guide the synthesis of adaptation policies by engineers. At runtime, the generated formulae are evaluated to resolve the uncertainty and to steer the self-adaptation using the policies. In this paper, we focus on reliability and cost properties, for which we evaluate our approach on the Body Sensor Network (BSN) implemented in OpenDaVINCI. The results of the validation are promising and show that our approach is able to systematically tame multiple classes of uncertainty, and that it is effective and efficient in providing assurances for the goals of self-adaptive systems.
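
    A rough sketch of the runtime side of such an approach, with a made-up goal model: a parametric reliability formula (built here with sympy rather than generated by symbolic model checking, as in the paper) is evaluated against monitored parameter values to decide whether to switch adaptation policies. The parameters p_sensor and p_link, the two policies, and the threshold are all illustrative assumptions.

```python
# Parametric formulae resolved at runtime to steer self-adaptation (sketch).
import sympy as sp

p_sensor, p_link = sp.symbols("p_sensor p_link")   # uncertain parameters

# Reliability of the goal under two hypothetical adaptation policies:
# policy A reads one sensor over one link; policy B duplicates the sensor.
rel_A = p_sensor * p_link
rel_B = (1 - (1 - p_sensor) ** 2) * p_link

eval_A = sp.lambdify((p_sensor, p_link), rel_A)
eval_B = sp.lambdify((p_sensor, p_link), rel_B)

# At runtime, monitored estimates resolve the uncertainty...
est = (0.92, 0.99)          # monitored estimates of (p_sensor, p_link)
goal_threshold = 0.95       # reliability required by the goal

# ...and the formulae steer self-adaptation: switch policy if the goal's
# reliability requirement would otherwise be violated.
if eval_A(*est) >= goal_threshold:
    print("keep policy A, reliability =", eval_A(*est))
else:
    print("adapt to policy B, reliability =", eval_B(*est))
```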

    Analysis of Software Aging in a Web Server

    A number of recent studies have reported the phenomenon of "software aging", characterized by progressive performance degradation and/or an increased occurrence rate of hang/crash failures of a software system due to the exhaustion of operating system resources or the accumulation of errors. To counteract this phenomenon, a proactive technique called 'software rejuvenation' has been proposed. It essentially involves stopping the running software, cleaning its internal state and/or its environment, and then restarting it. Software rejuvenation, being preventive in nature, begs the question as to when to schedule it. Periodic rejuvenation, while straightforward to implement, may not yield the best results, because the rate at which software ages is not constant, but depends on the time-varying system workload. Software rejuvenation should therefore be planned and initiated in the face of the actual system behavior. This requires the measurement, analysis and prediction of system resource usage. In this paper, we study the development of resource usage in a web server while subjecting it to an artificial workload. We first collect data on several system resource usage and activity parameters. Non-parametric statistical methods are then applied for detecting and estimating trends in the data sets. Finally, we fit time series models to the data collected. Unlike the models used previously in the research on software aging, these time series models allow for seasonal patterns, and we show how the exploitation of the seasonal variation can help in adequately predicting the future resource usage. Based on the models employed here, proactive management techniques like software rejuvenation triggered by actual measurements can be built. Keywords: software aging, software rejuvenation, Linux, Apache, web server, performance monitoring, prediction of resource utilization, non-parametric trend analysis, time series analysis
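
    The two analysis steps described above can be sketched on synthetic data as follows; the Mann-Kendall statistic and the naive trend-plus-seasonality forecast are simplified stand-ins for the non-parametric trend tests and seasonal time series models used in the paper.

```python
# Trend detection and seasonal forecasting on a synthetic resource series.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24 * 14)                              # two weeks of hourly samples
daily = 50 * np.sin(2 * np.pi * hours / 24)             # daily (seasonal) pattern
free_mem = 4000 - 0.8 * hours + daily + rng.normal(0, 20, hours.size)  # slow leak = aging

def mann_kendall_z(x):
    """Normalized Mann-Kendall trend statistic (ties ignored for brevity)."""
    s = sum(np.sign(x[j] - x[i]) for i in range(len(x)) for j in range(i + 1, len(x)))
    var = len(x) * (len(x) - 1) * (2 * len(x) + 5) / 18
    return (s - np.sign(s)) / np.sqrt(var)

z = mann_kendall_z(free_mem)
print("trend detected" if abs(z) > 1.96 else "no trend", "z =", round(float(z), 2))

# Forecast the next day: last day's seasonal shape shifted by the fitted slope.
slope = np.polyfit(hours, free_mem, 1)[0]
forecast = free_mem[-24:] + 24 * slope
print("predicted free memory 24h ahead (first 3 values):", forecast[:3].round(1))
```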

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
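
    As a toy illustration of the second quoted property, the sketch below computes the probability of eventually reaching a "both sensors failed" state in a small hypothetical Markov model by solving the standard reachability equations directly, in place of a model checker; the states and numbers are assumptions made for the example only.

```python
# Probabilistic reachability on a toy sensor model (illustrative numbers).
import numpy as np

# States: 0 = both sensors OK, 1 = one failed, 2 = both failed (absorbing),
# 3 = repaired / safe shutdown (absorbing).
P = np.array([
    [0.995, 0.005, 0.000, 0.000],
    [0.000, 0.900, 0.001, 0.099],
    [0.000, 0.000, 1.000, 0.000],
    [0.000, 0.000, 0.000, 1.000],
])

# Probability of eventually reaching state 2 from each transient state:
# solve x = Q x + b over the transient states.
transient, target = [0, 1], 2
Q = P[np.ix_(transient, transient)]
b = P[transient, target]
x = np.linalg.solve(np.eye(len(transient)) - Q, b)
print("P(eventually both sensors fail | start OK) =", x[0])
```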