    Consistent SDNs through Network State Fuzzing

    Full text link
    The conventional wisdom is that a software-defined network (SDN) operates under the premise that the logically centralized control plane has an accurate representation of the actual data plane state. Unfortunately, bugs, misconfigurations, faults or attacks can introduce inconsistencies that undermine correct operation. Previous work in this area, however, lacks a holistic methodology to tackle this problem and thus addresses only certain parts of it. Yet, the consistency of the overall system is only as good as its least consistent part. Motivated by an analogy between network consistency checking and program testing, we propose to add active probe-based network state fuzzing to our consistency check repertoire. Our system, PAZZ, combines production traffic with active probes to periodically test whether the actual forwarding paths and decision elements (on the data plane) correspond to the expected ones (on the control plane). Our insight is that active traffic covers inconsistency cases beyond those identified by passive traffic. We built and evaluated a PAZZ prototype on topologies of varying scale and complexity. Our results show that PAZZ requires minimal network resources to detect persistent data plane faults through fuzzing and to localize them quickly, while outperforming baseline approaches. Comment: Added three extra relevant references; this arXiv version was later accepted in IEEE Transactions on Network and Service Management (TNSM), 2019, under the title "Towards Consistent SDNs: A Case for Network State Fuzzing".
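
    To make the core idea concrete, the following minimal sketch illustrates probe-based consistency checking in the spirit of PAZZ, not its actual implementation: a toy control-plane policy and a subtly faulty data-plane copy are walked hop by hop, and fuzzed probe headers expose a misforwarding range that production traffic might never exercise. The topology, data structures, and function names are all illustrative assumptions.

```python
import random

# Control-plane view: per-switch forwarding policy (dst byte -> next hop).
control_plane = {
    "s1": lambda pkt: "s2" if pkt["dst"] < 128 else "s3",
    "s2": lambda pkt: "egress",
    "s3": lambda pkt: "egress",
}

# Data plane with an injected fault: s1 misforwards dst 120..127 to s3.
data_plane = {
    "s1": lambda pkt: "s3" if 120 <= pkt["dst"] < 128
          else ("s2" if pkt["dst"] < 128 else "s3"),
    "s2": lambda pkt: "egress",
    "s3": lambda pkt: "egress",
}

def path(plane, pkt):
    """Follow next-hop decisions from the ingress switch to the egress."""
    hops, sw = [], "s1"
    while sw != "egress":
        hops.append(sw)
        sw = plane[sw](pkt)
    return hops

def fuzz_probes(n=1000, seed=0):
    """Active probes with randomized headers, to cover forwarding rules
    that passively observed production traffic may never exercise."""
    rng = random.Random(seed)
    return [{"dst": rng.randrange(256)} for _ in range(n)]

for probe in fuzz_probes():
    expected, observed = path(control_plane, probe), path(data_plane, probe)
    if expected != observed:
        print(f"inconsistency at dst={probe['dst']}: "
              f"expected {expected}, observed {observed}")
        break
```

    In a real deployment the observed path would come from data-plane telemetry on tagged probe packets rather than from a simulated forwarding table, but the comparison logic is the same: any divergence between expected and observed paths flags an inconsistency to localize.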

    Frequency and fundamental signal measurement algorithms for distributed control and protection applications

    Get PDF
    Increasing penetration of distributed generation within electricity networks leads to a requirement for cheap, integrated protection and control systems. To minimise cost, algorithms for the measurement of AC voltage and current waveforms can be implemented on a single microcontroller that also carries out other protection and control tasks, including communication and data logging. This limits the frame rate of the major algorithms, although analogue-to-digital converters (ADCs) can be oversampled using peripheral control processors on suitable microcontrollers. Measurement algorithms also have to be tolerant of poor power quality, which may arise within grid-connected or islanded (e.g. emergency, battlefield or marine) power system scenarios. This study presents a 'Clarke-FLL hybrid' architecture, which combines a three-phase Clarke transformation measurement with a frequency-locked loop (FLL). This hybrid contains suitable algorithms for the measurement of frequency, amplitude and phase within dynamic three-phase AC power systems. The Clarke-FLL hybrid is shown to be robust and accurate with harmonic content up to and above 28% total harmonic distortion (THD), with the major algorithms executing at only 500 samples per second. This is achieved by careful optimisation and cascaded use of exact-time averaging techniques, which prove useful at all stages of the measurement: from DC bias removal through low-sample-rate Fourier analysis to sub-harmonic ripple removal. Platform-independent algorithms for three-phase nodal power flow analysis are benchmarked on three processors, including the Infineon TC1796 microcontroller, on which only 10% of the 2000 µs frame time is required, leaving the remainder free for other algorithms.
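
    As a rough illustration of the measurement core described above, the sketch below applies an amplitude-invariant Clarke transformation to a three-phase set and tracks frequency from the frame-to-frame phase increment. The first-order tracking loop is a simplified stand-in for the paper's FLL, the exact-time averaging stages are not modelled, and the loop gain and all names are assumptions.

```python
import math

F_SAMPLE = 500.0   # major-algorithm frame rate quoted in the abstract (Hz)

def clarke(va, vb, vc):
    """Amplitude-invariant Clarke transform of a three-phase set."""
    alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    beta = (vb - vc) / math.sqrt(3.0)
    return alpha, beta

def track(frames, f0=50.0, gain=0.05):
    """Yield (amplitude, phase, frequency) per frame; the frequency update
    is a simplified frequency-locked loop, not the paper's exact FLL."""
    f_est, prev_phase = f0, None
    for va, vb, vc in frames:
        alpha, beta = clarke(va, vb, vc)
        amp = math.hypot(alpha, beta)
        phase = math.atan2(beta, alpha)
        if prev_phase is not None:
            dphi = (phase - prev_phase + math.pi) % (2 * math.pi) - math.pi
            f_meas = dphi * F_SAMPLE / (2 * math.pi)
            f_est += gain * (f_meas - f_est)   # pull estimate toward measurement
        prev_phase = phase
        yield amp, phase, f_est

# Usage: a clean, balanced 50 Hz set sampled at 500 frames per second.
sig = [(math.cos(2 * math.pi * 50 * t / F_SAMPLE),
        math.cos(2 * math.pi * 50 * t / F_SAMPLE - 2 * math.pi / 3),
        math.cos(2 * math.pi * 50 * t / F_SAMPLE + 2 * math.pi / 3))
       for t in range(100)]
amp, phase, f = list(track(sig))[-1]
print(f"amplitude = {amp:.3f}, frequency = {f:.2f} Hz")   # -> 1.000, 50.00 Hz
```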

    Redundancy management for efficient fault recovery in NASA's distributed computing system

    Get PDF
    The management of redundancy in computer systems was studied, and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources required for embedding computational task graphs in the system architecture and for reconfiguring these tasks after a failure has occurred. The computational structures represented by a path and by the complete binary tree were considered, and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
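
    One of the embeddings mentioned, a path of tasks mapped onto a hypercube, can be illustrated with the classic binary-reflected Gray code construction: consecutive codewords differ in exactly one bit, so adjacent tasks land on adjacent processors. This is a standard construction offered for illustration; the report's own embedding and reconfiguration algorithms may differ.

```python
def gray_code(d):
    """Node sequence of a Hamiltonian path in a d-dimensional hypercube."""
    return [i ^ (i >> 1) for i in range(2 ** d)]

def is_dilation_one(seq):
    """Adjacent tasks must map to hypercube neighbours (Hamming distance 1)."""
    return all(bin(a ^ b).count("1") == 1 for a, b in zip(seq, seq[1:]))

# Embed an 8-task path (task i -> node nodes[i]) into a 3-cube.
nodes = gray_code(3)
print(nodes)                   # [0, 1, 3, 2, 6, 7, 5, 4]
print(is_dilation_one(nodes))  # True: every path edge maps to a cube edge
```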

    Modelling and Test Generation for Crosstalk Faults in DSM Chips

    Get PDF
    In the era of deep submicron (DSM) technology, many System-on-Chip (SoC) applications require components operating at high clock speeds. With shrinking feature sizes and ever increasing clock frequencies, DSM technology has led to the well-known problem of Signal Integrity (SI), especially in interconnect layout design. The increasing aspect ratios of metal wires, together with the growing ratio of coupling capacitance to substrate capacitance, result in electrical coupling of interconnects, which leads to crosstalk problems. This thesis first presents work carried out to model the crosstalk behaviour between an aggressor and a victim net, considering the distributed RLGC parameters of the interconnect together with the coupling capacitance and mutual conductance between the two nets. The proposed model also incorporates RC linear models of the CMOS drivers and receivers. The behaviour of crosstalk under an under-etching defect has been studied and modelled by distributing and approximating the defect behaviour throughout the nets. The model has then been extended to the case of a single victim influenced by several aggressors, assuming all aggressors have a similar (worst-case) effect on the victim. In all the above cases, simulation experiments were carried out and compared with the well-known circuit simulation tool PSPICE. The generated crosstalk model is shown to be faster, with results within a 10% error margin of the latter tool. Because of its accuracy and speed, the model is very useful to both SoC designers and test engineers for analysing crosstalk behaviour. Each manufactured device needs to be tested thoroughly to ensure its functionality before delivery, so test patterns must also be generated for the corresponding crosstalk faults. In this thesis, the well-known PODEM algorithm for stuck-at faults is extended to generate test patterns for crosstalk faults between a single aggressor and a single victim. To apply the modified PODEM to crosstalk faults, the transition behaviour is divided into two logic parts: before the transition and after the transition. After finding the test patterns required for each part individually, the generated logic vectors are appended to create transition test patterns for crosstalk faults. The developed algorithm is applied to a few ISCAS 85 benchmark circuits, and the fault coverage is found to be excellent for most circuits. By incorporating the proposed algorithm into ATPG tools, testing efficiency can be improved by generating test patterns for crosstalk faults in addition to conventional stuck-at faults. In generating test patterns for crosstalk faults on a single victim due to multiple aggressors, however, the modified PODEM algorithm is found to be time consuming. The ability of Genetic Algorithms (GAs) to search for the required combination of several input factors in an optimization problem motivated their application to test pattern generation, since generating a test pattern similarly amounts to finding the required vector among many possible input transitions. Initially, the GA is applied to generating test patterns for stuck-at faults, and the results are compared with the PODEM algorithm. As the fault coverage is comparable to that of the deterministic PODEM algorithm, the GA developed for stuck-at faults is extended to find test patterns for crosstalk faults between a single aggressor and a single victim. The elitist GA is also applied to a few ISCAS 85 benchmark circuits. The algorithm is later extended to generate test patterns for worst-case crosstalk faults. The elitist GA developed in this thesis proves very useful for generating crosstalk test patterns, especially for multiple-aggressor, single-victim crosstalk faults.
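
    As a sketch of the GA-based test generation described above, the following toy example evolves patterns for stuck-at faults on a three-input circuit with an elitist GA and greedily accumulates a complete test set. The circuit, fitness function, and GA parameters are invented for illustration; the thesis applies the approach to ISCAS 85 circuits and to crosstalk (transition) faults, where a chromosome would encode a pair of vectors rather than a single pattern.

```python
import random

LINES = ["a", "b", "c", "e", "d"]          # c = a AND b, d = c OR e

def simulate(a, b, e, stuck=None):
    """Evaluate the toy circuit, optionally forcing one line to a stuck value."""
    v = {"a": a, "b": b, "e": e}
    if stuck and stuck[0] in v:
        v[stuck[0]] = stuck[1]
    v["c"] = v["a"] & v["b"]
    if stuck and stuck[0] == "c":
        v["c"] = stuck[1]
    v["d"] = v["c"] | v["e"]
    if stuck and stuck[0] == "d":
        v["d"] = stuck[1]
    return v["d"]

FAULTS = [(line, val) for line in LINES for val in (0, 1)]

def detected(pattern, fault):
    """A pattern detects a fault if faulty and fault-free outputs differ."""
    return simulate(*pattern) != simulate(*pattern, stuck=fault)

def fitness(pattern, remaining):
    return sum(detected(pattern, f) for f in remaining)

def elitist_ga(remaining, pop_size=8, gens=20, seed=1):
    """Evolve one pattern maximizing newly detected faults."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, remaining), reverse=True)
        nxt = pop[:2]                          # elitism: best two survive intact
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:4], 2)    # truncation selection
            cut = rng.randint(1, 2)            # one-point crossover
            child = list(p1[:cut] + p2[cut:])
            if rng.random() < 0.2:             # point mutation
                i = rng.randrange(3)
                child[i] ^= 1
            nxt.append(tuple(child))
        pop = nxt
    return max(pop, key=lambda p: fitness(p, remaining))

# Greedy outer loop: evolve one pattern at a time until all faults are covered.
remaining, tests = set(FAULTS), []
while remaining:
    best = elitist_ga(remaining)
    covered = {f for f in remaining if detected(best, f)}
    if not covered:
        break                                  # leftover faults are undetectable
    tests.append(best)
    remaining -= covered
print("test set:", tests, "undetected:", remaining)
```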

    Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    Get PDF
    Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process, supported by appropriate automated tools, must be used to assure that the system will meet design objectives. This report describes an investigation of the methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercube, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel computing architectures were built and analyzed, using candidate SDI weapon-to-target assignment algorithms as workloads, as a means of identifying the necessary system models, how the models interact, and which experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and the capabilities required for both individual tools and an integrated toolset were identified.

    Empirical Evaluation of Mutation-based Test Prioritization Techniques

    Full text link
    We propose a new test case prioritization technique that combines mutation-based and diversity-based approaches. Our diversity-aware, mutation-based technique relies on the notion of mutant distinguishment, which aims to distinguish one mutant's behavior from another's, rather than from the original program's. We empirically investigate the relative cost and effectiveness of mutation-based prioritization techniques (i.e., using both the traditional mutant kill and the proposed mutant distinguishment) with 352 real faults and 553,477 developer-written test cases. The empirical evaluation considers both the traditional and the diversity-aware mutation criteria in various settings: single-objective greedy, hybrid, and multi-objective optimization. The results show that there is no single dominant technique across all the studied faults. To this end, we show when and why each of the mutation-based prioritization criteria performs poorly, using a graphical model called the Mutant Distinguishment Graph (MDG) that shows the distribution of fault-detecting test cases with respect to mutant kills and distinguishment.
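
    To illustrate the contrast between the two criteria, the sketch below compares greedy prioritization by new mutant kills against greedy prioritization by newly distinguished mutant pairs on a tiny fabricated kill matrix; the data and helper names are assumptions, not the paper's subjects or its MDG tooling.

```python
from itertools import combinations

# kill[t][m] == 1 iff test t kills mutant m (a tiny fabricated matrix).
kill = {
    "t1": (1, 1, 1, 1),   # kills everything, distinguishes no mutant pair
    "t2": (1, 1, 0, 0),
    "t3": (1, 0, 1, 0),
}

def killed(test):
    """Traditional criterion: mutants behaving differently from the original."""
    return {m for m, k in enumerate(kill[test]) if k}

def distinguished(test):
    """Diversity-aware criterion: mutant pairs (plus mutant-vs-original
    pairs) whose behaviors this test tells apart."""
    row = kill[test]
    pairs = {(i, j) for i, j in combinations(range(len(row)), 2)
             if row[i] != row[j]}
    return pairs | {("orig", m) for m in killed(test)}

def greedy(tests, gain):
    """Repeatedly pick the test adding the most not-yet-covered information."""
    order, covered = [], set()
    while tests:
        best = max(tests, key=lambda t: len(gain(t) - covered))
        covered |= gain(best)
        order.append(best)
        tests = [t for t in tests if t != best]
    return order

print("kill-based order:           ", greedy(list(kill), killed))        # t1 first
print("distinguishment-based order:", greedy(list(kill), distinguished)) # t1 last
```

    Note how t1, which kills every mutant, tops the kill-based order yet adds almost no distinguishing information, which is exactly the kind of behavior the diversity-aware criterion is designed to penalize.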