
    Evaluating testing methods by delivered reliability

    There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the reliability that results. On the other hand, operational methods provide accurate assessment, but may not be as useful for achieving reliability. This paper examines the relationship between the two testing goals, using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how to attain program reliability: is it better to test by probing for defects as in debug testing, or to assess reliability directly as in operational testing? Testing methods are compared in a model where program failures are detected and the software changed to eliminate them. The “better” method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
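
    A minimal simulation sketch of this kind of comparison (not the paper's model: the input classes, fault locations, profile probabilities, and test budget below are all assumptions). Inputs fall into classes with an operational-profile probability; "debug-style" testing probes classes uniformly, "operational" testing samples classes by usage, a failing class has its fault repaired, and the delivered unreliability is the profile mass left on faulty classes.

```python
import random

def delivered_unreliability(faulty, profile):
    """Probability that a random operational input hits a remaining fault."""
    return sum(p for c, p in profile.items() if c in faulty)

def run_tests(profile, faulty, n_tests, strategy):
    faulty = set(faulty)                 # copy so each run starts fresh
    classes = list(profile)
    for _ in range(n_tests):
        if strategy == "debug":          # probe classes uniformly, ignoring usage
            c = random.choice(classes)
        else:                            # "operational": sample classes by usage
            c = random.choices(classes, weights=[profile[x] for x in classes])[0]
        if c in faulty:                  # failure observed -> fault repaired
            faulty.discard(c)
    return delivered_unreliability(faulty, profile)

if __name__ == "__main__":
    random.seed(1)
    # 10 input classes; assumed operational profile (usage probabilities)
    profile = {i: p for i, p in enumerate(
        [0.30, 0.20, 0.15, 0.10, 0.08, 0.06, 0.05, 0.03, 0.02, 0.01])}
    faulty = {3, 7, 9}                   # assumed fault-containing classes
    trials = 2000
    for strategy in ("debug", "operational"):
        avg = sum(run_tests(profile, faulty, 20, strategy)
                  for _ in range(trials)) / trials
        print(f"{strategy:12s} mean delivered unreliability: {avg:.4f}")
```

    Moving the faults between rarely and frequently used classes reverses which strategy delivers the lower average unreliability, which is the kind of special-case behaviour the abstract alludes to.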

    Proportional sampling strategy: A compendium and some insights

    There have been numerous studies on the effectiveness of partition and random testing. In particular, the proportional sampling (PS) strategy has been proved, under certain conditions, to be the only form of partition testing that outperforms random testing regardless of where the failure-causing inputs are. This paper provides an integrated synthesis and overview of our recent studies on the PS strategy and its related work. Through this synthesis, we offer a perspective that properly interprets the results obtained so far, and present some of the interesting issues involved and new insights obtained during the course of this research. © 2001 Elsevier Science Inc. All rights reserved.
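
    A minimal allocation sketch (the subdomain sizes and test budget are assumptions, not from the paper): the PS strategy assigns test cases to subdomains in proportion to their sizes, so the per-input sampling rate n_i / |D_i| matches uniform random testing's rate n / |D| in every subdomain.

```python
def proportional_allocation(subdomain_sizes, budget):
    """Split `budget` test cases across subdomains in proportion to their sizes."""
    total = sum(subdomain_sizes)
    raw = [budget * s / total for s in subdomain_sizes]   # ideal n_i = n*|D_i|/|D|
    alloc = [int(x) for x in raw]
    # Hand leftover tests to the subdomains with the largest fractional remainders
    leftovers = budget - sum(alloc)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[:leftovers]:
        alloc[i] += 1
    return alloc

if __name__ == "__main__":
    sizes = [500, 300, 150, 50]          # assumed subdomain sizes |D_i|
    n = 20                               # assumed total test budget
    for s, a in zip(sizes, proportional_allocation(sizes, n)):
        print(f"|D_i| = {s:4d}  tests = {a:2d}  rate = {a / s:.4f}")
    # With exact proportionality each rate equals n / |D| = 20 / 1000 = 0.02
```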

    The Association Between Rate and Severity of Exacerbations in Chronic Obstructive Pulmonary Disease: An Application of a Joint Frailty-Logistic Model.

    Exacerbations are a hallmark of chronic obstructive pulmonary disease (COPD). Evidence suggests the presence of substantial between-individual variability (heterogeneity) in exacerbation rates. The question of whether individuals vary in their tendency towards experiencing severe (versus mild) exacerbations, or whether there is an association between exacerbation rate and severity, has not yet been studied. We used data from the MACRO Study, a 1-year randomized trial of the use of azithromycin for prevention of COPD exacerbations (United States and Canada, 2006-2010; n = 1,107, mean age = 65.2 years, 59.1% male). A parametric frailty model was combined with a logistic regression model, with bivariate random effects capturing heterogeneity in rate and severity. The average rate of exacerbation was 1.53 episodes/year, with 95% of subjects having a model-estimated rate of 0.47-4.22 episodes/year. The overall ratio of severe exacerbations to total exacerbations was 0.22, with 95% of subjects having a model-estimated ratio of 0.04-0.60. We did not confirm an association between exacerbation rate and severity (P = 0.099). A unified model, implemented in standard software, could estimate joint heterogeneity in COPD exacerbation rate and severity and can have applications in similar contexts where inference on event time and intensity is considered. We provide SAS code (SAS Institute, Inc., Cary, North Carolina) and a simulated data set to facilitate further uses of this method.
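
    A minimal data-generating sketch in Python of the kind of joint structure the model targets (the authors provide SAS code; the parameter values below are assumptions, not the MACRO estimates). Subject-level bivariate normal random effects drive both the exacerbation rate (a frailty on the log rate) and the log-odds that an exacerbation is severe; their correlation is the rate-severity association the joint frailty-logistic model estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 1107
base_log_rate = np.log(1.5)     # assumed: ~1.5 exacerbations/year on average
base_logit_severe = -1.3        # assumed: ~22% of exacerbations severe on average
sd_rate, sd_sev, rho = 0.55, 0.8, 0.3   # assumed random-effect scales / correlation

cov = [[sd_rate**2, rho * sd_rate * sd_sev],
       [rho * sd_rate * sd_sev, sd_sev**2]]
b = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects)

rates = np.exp(base_log_rate + b[:, 0])          # subject-specific yearly rates
counts = rng.poisson(rates)                      # events over one year of follow-up
p_severe = 1.0 / (1.0 + np.exp(-(base_logit_severe + b[:, 1])))
severe = rng.binomial(counts, p_severe)          # severe events per subject

print("mean rate:", rates.mean().round(2),
      " 95% range:", np.percentile(rates, [2.5, 97.5]).round(2))
print("overall severe/total ratio:", round(severe.sum() / counts.sum(), 2))
```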

    Improving the Cybersecurity of Cyber-Physical Systems Through Behavioral Game Theory and Model Checking in Practice and in Education

    This dissertation presents automated methods based on behavioral game theory and model checking to improve the cybersecurity of cyber-physical systems (CPSs) and advocates teaching certain foundational principles of these methods to cybersecurity students. First, it encodes behavioral game theory's concept of level-k reasoning into an integer linear program that models a newly defined security Colonel Blotto game. This approach is designed to achieve an efficient allocation of scarce protection resources by anticipating attack allocations. A human subjects experiment based on a CPS infrastructure demonstrates its effectiveness. Next, it rigorously defines the term adversarial thinking, one of cybersecurity education's most important and elusive learning objectives, but one for which no proper definition exists. It spells out what it means to think like a hacker by examining the characteristic thought processes of hackers through the lens of Sternberg's triarchic theory of intelligence. Next, a classroom experiment demonstrates that teaching basic game theory concepts to cybersecurity students significantly improves their strategic reasoning abilities. Finally, this dissertation applies the SPIN model checker to an electric power protection system and demonstrates a straightforward and effective technique for rigorously characterizing the degree of fault tolerance of complex CPSs, a key step in improving their defensive posture.
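
    A minimal brute-force sketch of level-k reasoning in a simplified Colonel Blotto game (the dissertation encodes this as an integer linear program; the battlefield values, budgets, tie-breaking rule, and naive level-0 strategy here are assumptions). Each side splits an integer budget across battlefields, and a level-k player best-responds to the level-(k-1) opponent.

```python
from itertools import combinations_with_replacement

def allocations(budget, fields):
    """All ways to split `budget` units over `fields` battlefields."""
    for cuts in combinations_with_replacement(range(budget + 1), fields - 1):
        bounds = (0,) + cuts + (budget,)
        yield tuple(bounds[i + 1] - bounds[i] for i in range(fields))

def payoff(defender, attacker, values):
    """Value the defender keeps: fields where defense >= attack (tie to defense)."""
    return sum(v for d, a, v in zip(defender, attacker, values) if d >= a)

def best_response(opponent_alloc, budget, values, defending):
    def score(own):
        return (payoff(own, opponent_alloc, values) if defending
                else sum(values) - payoff(opponent_alloc, own, values))
    return max(allocations(budget, len(values)), key=score)

if __name__ == "__main__":
    values = [5, 3, 2]                   # assumed battlefield values
    budget = 6                           # assumed troop budget per side
    attacker = (2, 2, 2)                 # level-0 attacker: naive uniform split
    for k in range(1, 4):
        defender = best_response(attacker, budget, values, defending=True)
        attacker = best_response(defender, budget, values, defending=False)
        print(f"level-{k}: defender {defender}  ->  attacker reply {attacker}")
```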

    Critical Fault-Detecting Time Evaluation in Software with Discrete Compound Poisson Models

    Software developers predict their product’s failure rate using reliability growth models that are typically based on nonhomogeneous Poisson (NHP) processes. In this article, we extend that practice to a nonhomogeneous discrete-compound Poisson process that allows for multiple faults of a system at the same time point. Along with traditional reliability metrics such as average number of failures in a time interval, we propose an alternative reliability index called critical fault-detecting time in order to provide more information for software managers making software quality evaluation and critical market policy decisions. We illustrate the significant potential for improved analysis using wireless failure data as well as simulated data.
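
    A minimal simulation sketch (not the paper's estimator: the Goel-Okumoto intensity, the geometric cluster sizes, and the 95% cutoff are assumptions). Detection epochs follow a nonhomogeneous Poisson process simulated by thinning, each epoch reveals a cluster of faults (the compound part), and the critical fault-detecting time is taken here as the first time the cumulative count reaches 95% of the run's total.

```python
import math
import random

def simulate_detections(a, b, p_cluster, horizon, rng):
    """Return (time, cumulative_faults) pairs over [0, horizon] via thinning."""
    lam_max = a * b                      # intensity a*b*exp(-b*t) peaks at t = 0
    t, total, path = 0.0, 0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > horizon:
            return path
        if rng.random() <= math.exp(-b * t):        # accept with prob lam(t)/lam_max
            # geometric cluster size: several faults can surface at one epoch
            cluster = 1 + int(math.log(1.0 - rng.random()) / math.log(1 - p_cluster))
            total += cluster
            path.append((t, total))

def critical_time(path, fraction=0.95):
    """First time the cumulative fault count reaches `fraction` of the run total."""
    if not path:
        return None
    target = fraction * path[-1][1]
    return next(t for t, n in path if n >= target)

if __name__ == "__main__":
    rng = random.Random(42)
    path = simulate_detections(a=120, b=0.05, p_cluster=0.4, horizon=200, rng=rng)
    print("faults detected by t=200:", path[-1][1])
    print("critical fault-detecting time (95%):", round(critical_time(path), 1))
```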

    Comparing Automated Unit Testing Strategies

    Software testing plays a critical role in the software development lifecycle. Automated unit testing strategies allow a tester to execute a large number of test cases to detect faulty behaviours in a piece of software. Many different automated unit testing strategies can be applied to test a program. In order to better understand the relationship between these strategies, “explorative” strategies are defined as those which select unit tests by exploring a large search space with a relatively simple data structure. This thesis focuses on comparing three particular explorative strategies: bounded-exhaustive, randomized, and a combined strategy. In order to precisely compare these three strategies, a test program is developed to provide a universal framework for generating and executing test cases. The test program implements the three strategies as well. In addition, we perform several experiments on these three strategies using the test program. The experimental data is collected and analyzed to illustrate the relationship between these strategies.
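
    A minimal sketch of the three strategies on a toy example (the function under test, its seeded bug, and the bounds are assumptions, not the thesis's framework): bounded-exhaustive testing enumerates every input tuple up to a size bound, randomized testing samples a much larger space, and the combined strategy runs a small exhaustive pass plus random sampling. The seeded equality bug is typically caught by the exhaustive pass but missed by random sampling over a wide range.

```python
import random
from itertools import product

def buggy_max3(a, b, c):
    """Intended: maximum of three ints.  Seeded bug when a == b > c."""
    if a > b and a > c:
        return a
    if b > a and b > c:
        return b
    return c

def failing(t):
    return buggy_max3(*t) != max(t)      # built-in max() is the reference oracle

def bounded_exhaustive(bound):
    vals = range(-bound, bound + 1)
    return [t for t in product(vals, repeat=3) if failing(t)]

def randomized(n_cases, lo=-1000, hi=1000, seed=0):
    rng = random.Random(seed)
    return [t for t in (tuple(rng.randint(lo, hi) for _ in range(3))
                        for _ in range(n_cases)) if failing(t)]

if __name__ == "__main__":
    print("bounded-exhaustive, bound=3 :", len(bounded_exhaustive(3)), "failing tuples")
    print("randomized, 1000 cases      :", len(randomized(1000)), "failing tuples")
    # Combined strategy: a smaller exhaustive pass plus random sampling
    print("combined                    :",
          len(bounded_exhaustive(2)) + len(randomized(500, seed=1)), "failing tuples")
```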

    Prey aggregation is an effective olfactory predator avoidance strategy

    Predator–prey interactions have a major effect on species abundance and diversity, and aggregation is a well-known anti-predator behaviour. For immobile prey, the effectiveness of aggregation depends on two conditions: (a) the inability of the predator to consume all prey in a group and (b) detection of a single large group not being proportionally easier than that of several small groups. How prey aggregation influences predation rates when visual cues are restricted, such as in turbid water, has not been thoroughly investigated. We carried out foraging (predation) experiments using a fish predator and (dead) chironomid larvae as prey in both laboratory and field settings. In the laboratory, a reduction in visual cue availability (in turbid water) led to a delay in the location of aggregated prey compared to when visual cues were available. Aggregated prey suffered high mortality once discovered, leading to better survival of dispersed prey in the longer term. We attribute this to the inability of the dead prey to take evasive action. In the field (where prey were placed in feeding stations that allowed transmission of olfactory but not visual cues), aggregated (large groups) and semi-dispersed prey survived for longer than dispersed prey, including long-term survival. Together, our results indicate that, similar to systems where predators hunt using vision, aggregation is an effective anti-predator behaviour for prey avoiding olfactory predators.