
    Evaluating testing methods by delivered reliability

    There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the reliability that results. On the other hand, operational methods provide accurate assessment, but may not be as useful for achieving reliability. This paper examines the relationship between the two testing goals, using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how to attain program reliability: is it better to test by probing for defects as in debug testing, or to assess reliability directly as in operational testing? Testing methods are compared in a model where program failures are detected and the software changed to eliminate them. The “better” method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
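
    As a rough illustration of the kind of comparison the paper formalizes, the sketch below uses a toy defect model to compute the expected operational failure probability remaining after n tests under a debug-style uniform profile versus an operational profile. The failure probabilities, the assumption of independent defects, and the perfect-repair rule are illustrative choices, not the paper's model.

```python
# Minimal sketch, not the paper's model: a program with a few independent defects,
# each triggered by its own region of the input domain. Any defect triggered at
# least once during testing is assumed to be repaired perfectly.
OPERATIONAL_HIT_PROBS = [0.05, 0.01, 0.001]   # chance an operational input triggers each defect
DEBUG_HIT_PROBS       = [0.02, 0.02, 0.02]    # debug-style testing probes all regions equally

def expected_delivered_unreliability(test_probs, op_probs, n_tests):
    """Expected operational failure probability remaining after n_tests tests."""
    return sum(p_op * (1.0 - p_test) ** n_tests        # defect survives all n tests
               for p_test, p_op in zip(test_probs, op_probs))

for n in (10, 100, 1000):
    debug = expected_delivered_unreliability(DEBUG_HIT_PROBS, OPERATIONAL_HIT_PROBS, n)
    oper  = expected_delivered_unreliability(OPERATIONAL_HIT_PROBS, OPERATIONAL_HIT_PROBS, n)
    print(f"n={n:4d}  debug-style: {debug:.6f}  operational-style: {oper:.6f}")
```

    With these particular numbers, each profile comes out ahead at some test budget, echoing the abstract's observation that special cases favour each kind of testing.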

    Effective Theories for Circuits and Automata

    Abstracting an effective theory from a complicated process is central to the study of complexity. Even when the underlying mechanisms are understood, or at least measurable, the presence of dissipation and irreversibility in biological, computational and social systems makes the problem harder. Here we demonstrate the construction of effective theories in the presence of both irreversibility and noise, in a dynamical model with underlying feedback. We use the Krohn-Rhodes theorem to show how the composition of underlying mechanisms can lead to innovations in the emergent effective theory. We show how dissipation and irreversibility fundamentally limit the lifetimes of these emergent structures, even though, on short timescales, the group properties may be enriched compared to their noiseless counterparts. Comment: 11 pages, 9 figures.
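
    The toy simulation below is only loosely inspired by the abstract's claim that dissipation and irreversibility limit the lifetime of emergent structure; it is not the Krohn-Rhodes construction. A small cyclic (group-like) automaton is given an irreversible collapse transition with probability `noise`, and the average number of steps the cycle survives is measured.

```python
import random

def mean_cycle_lifetime(noise, n_cycle_states=3, trials=2000, max_steps=100_000):
    """Average number of steps a noisy cyclic automaton keeps cycling before an
    irreversible transition drops it into the absorbing state 0 (illustrative only)."""
    total = 0
    for _ in range(trials):
        state, steps = 1, 0
        while state != 0 and steps < max_steps:
            if random.random() < noise:
                state = 0                              # dissipative, irreversible collapse
            else:
                state = state % n_cycle_states + 1     # advance around the cycle 1 -> 2 -> ... -> 1
            steps += 1
        total += steps
    return total / trials

for noise in (0.01, 0.05, 0.2):
    print(f"noise={noise:<5} mean lifetime ~ {mean_cycle_lifetime(noise):.1f} steps")
```

    In this toy, the lifetime scales roughly as 1/noise, mirroring the qualitative point that noise and irreversibility bound how long the emergent cyclic behaviour persists.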

    Learning Non-robustness using Simulation-based Testing: a Network Traffic-shaping Case Study

    An input to a system reveals a non-robust behaviour when, by making a small change in the input, the output of the system changes from acceptable (passing) to unacceptable (failing) or vice versa. Identifying inputs that lead to non-robust behaviours is important for many types of systems, e.g., cyber-physical and network systems, whose inputs are prone to perturbations. In this paper, we propose an approach that combines simulation-based testing with regression tree models to generate value ranges for inputs in response to which a system is likely to exhibit non-robust behaviours. We apply our approach to a network traffic-shaping system (NTSS) -- a novel case study from the network domain. In this case study, developed and conducted in collaboration with a network solutions provider, RabbitRun Technologies, input ranges that lead to non-robustness are of interest as a way to identify and mitigate network quality-of-service issues. We demonstrate that our approach accurately characterizes non-robust test inputs of NTSS by achieving a precision of 84% and a recall of 100%, significantly outperforming a standard baseline. In addition, we show that there is no statistically significant difference between the results obtained from our simulated testbed and a hardware testbed with identical configurations. Finally, we describe lessons learned from our industrial collaboration, offering insights about how simulation helps discover unknown and undocumented behaviours as well as a new perspective on using non-robustness as a measure for system re-configuration. Comment: This paper is accepted at the 16th IEEE International Conference on Software Testing, Verification and Validation (ICST 2023).
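
    A minimal sketch of the general idea: labelled simulation results are fed to a tree learner whose splits can be read off as value ranges. The feature names, the stand-in oracle, and the thresholds below are invented for illustration and are not taken from the NTSS case study.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical simulation results: two shaping parameters per test
# (bandwidth_mbps, latency_ms) and a label saying whether a small input
# perturbation flipped the pass/fail verdict (1 = non-robust behaviour).
X = rng.uniform([1.0, 5.0], [100.0, 200.0], size=(500, 2))
y = ((X[:, 0] < 20.0) & (X[:, 1] > 120.0)).astype(int)   # stand-in oracle

# A shallow tree keeps the learned regions readable as simple value ranges.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["bandwidth_mbps", "latency_ms"]))
```

    The printed decision paths can then be read as candidate input ranges (e.g., low bandwidth combined with high latency) in which non-robust behaviour is likely.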

    Conflict detection in software-defined networks

    The SDN architecture facilitates the flexible deployment of network functions. While promoting innovation, this architecture also induces a higher chance of conflicts compared to conventional networks. The detection of conflicts in SDN is the focus of this work. Limitations of the formal analytical approach drive our choice of an experimental approach, in which we determine a parameter space and a methodology to perform experiments. We have created a dataset covering a range of situations occurring in SDN. The investigation of the dataset yields a conflict taxonomy composed of various classes organized into three broad types: local, distributed and hidden conflicts. Notably, hidden conflicts caused by side effects of control applications' behaviour are completely new. We introduce the new concept of the multi-property set, and the ·r (“dot r”) operator for the effective comparison of SDN rules. With these means, we present algorithms to detect conflicts and develop a conflict detection prototype. The evaluation of the prototype confirms the correctness and realizability of our proposed concepts and methodologies for classifying as well as detecting conflicts. Altogether, our work establishes a foundation for further conflict-handling efforts in SDN, e.g., conflict resolution and avoidance. In addition, we point out challenges to be explored. Cuong Tran won a DAAD scholarship for his doctoral research with the Munich Network Management Team, Ludwig-Maximilians-Universität München, and received his degree in 2022. His research interests include policy conflicts in networked systems, IP multicast and its alternatives, network security, and virtualized systems; teaching and knowledge sharing are also among his interests.
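
    The sketch below is a much-simplified pairwise rule comparison, not the dissertation's multi-property set or its ·r operator: two OpenFlow-style rules with wildcardable match fields are checked for overlap and then bucketed into rough conflict categories. The rule fields and category names are stand-ins chosen for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    """A simplified SDN flow rule: match fields (None = wildcard), priority, action."""
    src: Optional[str]
    dst: Optional[str]
    priority: int
    action: str            # e.g. "forward:2" or "drop"

def fields_overlap(a: Optional[str], b: Optional[str]) -> bool:
    # A wildcard overlaps everything; otherwise values must match exactly.
    return a is None or b is None or a == b

def classify(r1: Rule, r2: Rule) -> str:
    """Very rough pairwise check, inspired by (but not implementing) the
    dissertation's conflict classes."""
    if not (fields_overlap(r1.src, r2.src) and fields_overlap(r1.dst, r2.dst)):
        return "disjoint"
    if r1.action == r2.action:
        return "redundant overlap"
    if r1.priority != r2.priority:
        return "shadowing: the higher-priority rule hides the other rule's action"
    return "contradictory: same priority, overlapping match, different actions"

print(classify(Rule("10.0.0.1", None, 10, "drop"),
               Rule("10.0.0.1", "10.0.0.2", 5, "forward:2")))
```

    A real detector has to compare full match structures, priorities across tables, and application behaviour, which is the gap the multi-property set and ·r operator are introduced to address.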

    User experience and robustness in social virtual reality applications

    Cloud-based applications that rely on emerging technologies such as social virtual reality are increasingly being deployed at high scale in, e.g., remote learning, public safety, and healthcare. These applications increasingly need mechanisms to maintain robustness and immersive user experience as a joint consideration to minimize disruption in service availability due to cyber attacks/faults. Specifically, effective modeling and real-time adaptation approaches need to be investigated to ensure that the application functionality is resilient and does not induce undesired cybersickness levels. In this thesis, we investigate a novel ‘DevSecOps’ paradigm to jointly tune both the robustness and immersive performance factors in social virtual reality application design/operations. We characterize robustness factors considering Security, Privacy and Safety (SPS), and immersive performance factors considering Quality of Application, Quality of Service, and Quality of Experience (3Q). We achieve “harmonized security and performance by design” via modeling the SPS and 3Q factors in cloud-hosted applications using attack-fault trees (AFT) and an accurate quantitative analysis via formal verification techniques, i.e., statistical model checking (SMC). We develop a real-time adaptive control capability to manage SPS/3Q issues affecting a critical anomaly event that induces undesired cybersickness. This control capability features a novel dynamic rule-based approach for closed-loop decision making, augmented by a knowledge base for the SPS/3Q issues of individual and/or combined events. Correspondingly, we collect threat intelligence on application- and network-based cyber attacks that disrupt immersiveness, and develop a multi-label K-NN classifier as well as statistical analysis techniques for critical anomaly event detection. We validate the effectiveness of our solution approach in a real-time cloud testbed featuring vSocial, a social virtual reality based learning environment that supports delivery of the Social Competence Intervention (SCI) curriculum for youth. Based on our experiment findings, we show that our solution approach enables: (i) identification of the most vulnerable components that impact user immersive experience, to formally conduct risk assessment, (ii) dynamic decision making for controlling SPS/3Q issues inducing undesirable cybersickness levels via quantitative metrics of user feedback and effective anomaly detection, and (iii) rule-based policies following the NIST SP 800-160 principles and cloud-hosting recommendations for a more secure, privacy-preserving, and robust cloud-based application configuration with satisfactory immersive user experience. Includes bibliographical references (pages 133-146).
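
    A minimal sketch of the multi-label K-NN step only; the telemetry features, thresholds, and label set below are hypothetical stand-ins rather than the vSocial testbed's actual data or anomaly taxonomy.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Hypothetical telemetry windows: [frame_rate_fps, packet_loss_pct, cpu_load_pct].
X = rng.uniform([20, 0, 10], [90, 10, 95], size=(300, 3))

# Multi-label targets: column 0 = "QoS degradation", column 1 = "suspected attack".
y = np.column_stack([
    (X[:, 0] < 45) | (X[:, 1] > 5),        # low frame rate or high packet loss
    (X[:, 1] > 7) & (X[:, 2] > 80),        # high loss together with high CPU load
]).astype(int)

# KNeighborsClassifier accepts a multi-label indicator matrix as the target.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict([[30.0, 8.5, 90.0]]))    # predicted label vector for a new telemetry window
```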

    Sampling-Based Methods for Factored Task and Motion Planning

    This paper presents a general-purpose formulation of a large class of discrete-time planning problems, with hybrid state and control spaces, as factored transition systems. Factoring allows state transitions to be described as the intersection of several constraints, each affecting a subset of the state and control variables. Robotic manipulation problems with many movable objects involve constraints that affect only a few variables at a time and therefore exhibit large amounts of factoring. We develop a theoretical framework for solving factored transition systems with sampling-based algorithms. The framework characterizes conditions on the submanifold in which solutions lie, leading to a characterization of robust feasibility that incorporates dimensionality-reducing constraints. It then connects those conditions to corresponding conditional samplers that can be composed to produce values on this submanifold. We present two domain-independent, probabilistically complete planning algorithms that take, as input, a set of conditional samplers. We demonstrate the empirical efficiency of these algorithms on a set of challenging task and motion planning problems involving picking, placing, and pushing.
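
    A toy illustration of composing conditional samplers: an unconditional pose sampler feeds a grasp sampler conditioned on that pose, and only composed values satisfying a reachability constraint are yielded. The 2-D geometry, the reachability test, and the sampler interfaces are invented here; the paper's planners operate on full task and motion planning domains.

```python
import random
from typing import Callable, Iterator, Tuple

Pose = Tuple[float, float]

def sample_stable_pose(region: Tuple[float, float, float, float]) -> Iterator[Pose]:
    """Unconditional sampler: object poses inside a rectangular stable region."""
    x0, y0, x1, y1 = region
    while True:
        yield (random.uniform(x0, x1), random.uniform(y0, y1))

def sample_grasp(pose: Pose) -> Iterator[Tuple[Pose, float]]:
    """Conditional sampler: grasp approach angles for a given object pose."""
    while True:
        yield (pose, random.uniform(0.0, 6.283))

def compose(region, reachable: Callable[[Pose, float], bool],
            grasps_per_pose: int = 5) -> Iterator[Tuple[Pose, float]]:
    """Chain the samplers and keep only (pose, grasp) pairs that satisfy the
    reachability constraint, i.e. values on the (toy) constraint submanifold."""
    for pose in sample_stable_pose(region):
        grasps = sample_grasp(pose)
        for _ in range(grasps_per_pose):
            candidate = next(grasps)
            if reachable(*candidate):
                yield candidate
                break                     # move on to a fresh pose

reachable = lambda pose, angle: (pose[0] ** 2 + pose[1] ** 2) ** 0.5 < 1.5 and angle < 3.0
print(next(compose((-2.0, -2.0, 2.0, 2.0), reachable)))
```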

    Interaction Testing, Fault Location, and Anonymous Attribute-Based Authorization

    This dissertation studies three classes of combinatorial arrays with practical applications in testing, measurement, and security. Covering arrays are widely studied in software and hardware testing to indicate the presence of faulty interactions. Locating arrays extend covering arrays to achieve identification of the interactions causing a fault by requiring additional conditions on how interactions are covered in rows. This dissertation introduces a new class, the anonymizing arrays, to guarantee a degree of anonymity by bounding the probability a particular row is identified by the interaction presented. Similarities among these arrays lead to common algorithmic techniques for their construction, which this dissertation explores. Differences arising from their application domains lead to the unique features of each class, requiring tailoring the techniques to the specifics of each problem. One contribution of this work is a conditional expectation algorithm to build covering arrays via an intermediate combinatorial object. Conditional expectation efficiently finds intermediate-sized arrays that are particularly useful as ingredients for additional recursive algorithms. A cut-and-paste method creates large arrays from small ingredients. Performing transformations on the copies makes further improvements by reducing redundancy in the composed arrays and leads to fewer rows. This work contains the first algorithm for constructing locating arrays for general values of $d$ and $t$. A randomized computational search algorithmic framework verifies whether a candidate array is $(\bar{d}, t)$-locating by partitioning the search space and performs random resampling if a candidate fails. Algorithmic parameters determine which columns to resample and when to add additional rows to the candidate array. Additionally, analysis is conducted on the performance of the algorithmic parameters to provide guidance on how to tune parameters to prioritize speed, accuracy, or a combination of both. This work proposes anonymizing arrays as a class related to covering arrays with a higher coverage requirement and constraints. The algorithms for covering and locating arrays are tailored to anonymizing array construction. An additional property, homogeneity, is introduced to meet the needs of attribute-based authorization. Two metrics, local and global homogeneity, are designed to compare anonymizing arrays with the same parameters. Finally, a post-optimization approach reduces the homogeneity of an anonymizing array. Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201
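
    The sketch below implements only the underlying coverage definition plus a naive greedy row-at-a-time construction; it is a stand-in for, not an implementation of, the dissertation's conditional expectation and cut-and-paste algorithms, and the parameter names and candidate-pool size are arbitrary.

```python
from itertools import combinations, product
import random

def uncovered(array, t, levels):
    """All t-way interactions (column tuple, value tuple) not yet covered by any row."""
    missing = set()
    for cols in combinations(range(len(levels)), t):
        for vals in product(*(range(levels[c]) for c in cols)):
            if not any(all(row[c] == v for c, v in zip(cols, vals)) for row in array):
                missing.add((cols, vals))
    return missing

def newly_covered(row, missing):
    """Number of still-missing interactions that this candidate row would cover."""
    return sum(all(row[c] == v for c, v in zip(cols, vals)) for cols, vals in missing)

def greedy_covering_array(t, levels, pool_size=50, seed=0):
    """Grow an array one row at a time, keeping the random candidate row that
    covers the most still-missing t-way interactions (a naive greedy stand-in
    for the conditional-expectation construction)."""
    rng = random.Random(seed)
    array = []
    while (missing := uncovered(array, t, levels)):
        pool = [[rng.randrange(n) for n in levels] for _ in range(pool_size)]
        best = max(pool, key=lambda row: newly_covered(row, missing))
        if newly_covered(best, missing) > 0:
            array.append(best)            # otherwise redraw a fresh candidate pool
    return array

ca = greedy_covering_array(t=2, levels=[2, 2, 2, 3])
print(len(ca), "rows cover all 2-way interactions of a 2^3 3^1 system")
```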