
    Large Scale Distributed Testing for Fault Classification and Isolation

    Developing confidence in the quality of software is an increasingly difficult problem. As the complexity and integration of software systems increase, the tools and techniques used to perform quality assurance (QA) tasks must evolve with them. To date, several quality assurance tools have been developed to help ensure the quality of modern software, but several limitations remain to be overcome. Among the challenges faced by current QA tools are (1) the increased use of distributed software solutions, (2) limited test resources and constrained time schedules, and (3) failures that are difficult to replicate and may occur only rarely. While existing distributed continuous quality assurance (DCQA) tools and techniques, including our own Skoll project, begin to address these issues, novel approaches are needed to meet these challenges fully. This dissertation explores three such strategies.

    First, I present an improved version of our Skoll distributed quality assurance system. Skoll provides a platform for executing sophisticated, long-running QA processes across a large number of distributed, heterogeneous computing nodes. This dissertation details changes to Skoll resulting in a more robust, configurable, and user-friendly implementation for both the client and server components. It also details infrastructure developed to support the evaluation of DCQA processes using Skoll, specifically the design and deployment of a dedicated 120-node computing cluster for evaluating DCQA practices. The techniques and case studies presented in the latter parts of this work used the improved Skoll as their testbed.

    Second, I present techniques for automatically classifying test execution outcomes based on an adaptive-sampling classification technique, along with a case study on the Java Architecture for Bytecode Analysis (JABA) system. One common need for these techniques is the ability to distinguish test execution outcomes (e.g., to collect only data corresponding to some behavior, or to determine how often and under which conditions a specific behavior occurs). Most current approaches, however, do not perform any classification of remote executions: they either focus on easily observable behaviors (e.g., crashes) or assume that outcome classifications are externally provided (e.g., by the users). In this work, I present an empirical study on JABA in which we automatically classified execution data into passing and failing behaviors using adaptive association trees.

    Finally, I present a long-term case study of the highly configurable MySQL open-source project. Real-world software systems can involve configuration spaces that are too large to test exhaustively, but that nonetheless contain subtle interactions that lead to failure-inducing system faults. In the literature, covering arrays, in combination with classification techniques, have been used to sample these large configuration spaces effectively and to detect problematic configuration dependencies. Applying this approach in practice, however, is difficult because testing time and resource availability are unpredictable. We therefore developed and evaluated an alternative approach that incrementally builds covering array schedules. This approach begins at a low strength and then iteratively increases the strength as resources allow, reusing previous test results to avoid duplicated effort. The results are test schedules that allow successful classification with fewer test executions and that require less test-subject-specific information to develop.
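    The incremental covering-array scheduling described in the final study lends itself to a short sketch: start at a low strength, raise the strength only while the test budget lasts, and reuse configurations already executed at lower strengths. A minimal illustration follows, assuming a toy option space and a simple greedy generator; neither is the dissertation's actual algorithm.

```python
# Incremental covering-array scheduling: raise strength t step by step,
# reusing configurations that earlier (lower-strength) rounds already ran.
from itertools import combinations, product

OPTIONS = {                      # hypothetical configuration space
    "engine":  ["innodb", "myisam"],
    "charset": ["utf8", "latin1"],
    "logging": ["on", "off"],
}

def uncovered_tuples(strength, executed):
    """All strength-way option-value tuples not covered by executed configs."""
    missing = set()
    for keys in combinations(sorted(OPTIONS), strength):
        for values in product(*(OPTIONS[k] for k in keys)):
            tup = tuple(zip(keys, values))
            if not any(all(cfg[k] == v for k, v in tup) for cfg in executed):
                missing.add(tup)
    return missing

def greedy_schedule(strength, executed):
    """Greedily add configs until every strength-way tuple is covered."""
    schedule = []
    missing = uncovered_tuples(strength, executed)
    while missing:
        # pick the candidate config covering the most still-missing tuples
        best = max(
            (dict(zip(sorted(OPTIONS), vals))
             for vals in product(*(OPTIONS[k] for k in sorted(OPTIONS)))),
            key=lambda cfg: sum(all(cfg[k] == v for k, v in t) for t in missing),
        )
        schedule.append(best)
        executed.append(best)          # results carry over to higher strengths
        missing = uncovered_tuples(strength, executed)
    return schedule

executed = []
for t in (1, 2, 3):                    # raise strength while resources allow
    new_configs = greedy_schedule(t, executed)
    print(f"strength {t}: {len(new_configs)} new configs to run")
```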

    International White Book on DER Protection: Review and Testing Procedures

    This white book provides insight into the issues surrounding the impact of increasing levels of DER on generator and network protection, and the resulting necessary improvements in protection testing practices. Particular focus is placed on ever-increasing inverter-interfaced DER installations and the challenges of utility network integration. This white book should also serve as a starting point for specifying DER protection testing requirements and procedures. A comprehensive review of international DER protection practices, standards and recommendations is presented. This is accompanied by the identification of the main performance challenges related to these protection schemes under varied network operational conditions and the nature of DER generator and interface technologies. Emphasis is placed on the importance of dynamic testing that can only be delivered through laboratory-based platforms such as real-time simulators, integrated substation automation infrastructure and flexible, inverter-equipped testing microgrids. To this end, the combination of flexible network operation and new DER technologies underlines the importance of utilising the laboratory testing facilities available within the DERlab Network of Excellence. This not only informs the shaping of new protection testing and network integration practices by end users but also enables the de-risking of new DER protection technologies. In order to support the issues discussed in the white book, a comparative case study between UK and German DER protection and scheme testing practices is presented. This also highlights the level of complexity associated with the standardisation and approval mechanisms adopted by different countries.
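    As a rough illustration of the kind of interface-protection behaviour such testing procedures exercise, the sketch below checks a DER relay's over/under-voltage and over/under-frequency trip window against a replayed disturbance trajectory, as a real-time simulator test would. The threshold values are illustrative placeholders only, not settings taken from the white book or any national standard.

```python
# A DER interface relay must disconnect when measurements leave the
# permitted voltage/frequency window; dynamic testing replays whole
# disturbance trajectories rather than single operating points.
from dataclasses import dataclass

@dataclass
class ProtectionSettings:
    v_min: float = 0.85   # per-unit undervoltage trip threshold (assumed)
    v_max: float = 1.10   # per-unit overvoltage trip threshold (assumed)
    f_min: float = 47.5   # Hz underfrequency trip threshold (assumed)
    f_max: float = 51.5   # Hz overfrequency trip threshold (assumed)

def should_trip(v_pu: float, f_hz: float, s: ProtectionSettings) -> bool:
    """Return True when the DER interface relay must disconnect."""
    return not (s.v_min <= v_pu <= s.v_max and s.f_min <= f_hz <= s.f_max)

# e.g. a voltage-sag trajectory exported from a real-time simulator run
trajectory = [(1.00, 50.0), (0.92, 49.8), (0.80, 49.2), (1.02, 50.1)]
for v, f in trajectory:
    print(v, f, "TRIP" if should_trip(v, f, ProtectionSettings()) else "ok")
```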

    Space shuttle main engine fault detection using neural networks

    A method for on-line Space Shuttle Main Engine (SSME) anomaly detection and fault typing using a feedback neural network is described. The method involves the computation of features representing the time-variance of SSME sensor parameters, using historical test case data. The network is trained, using backpropagation, to recognize a set of fault cases. The network is then able to diagnose new fault cases correctly. An essential element of the training technique is the inclusion of randomly generated data along with the real data, in order to span the entire input space of potential non-nominal data.
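    A minimal sketch of this training idea, assuming variance-based features and a stand-in feed-forward network (scikit-learn's backpropagation-trained MLP) rather than the paper's feedback architecture: real labelled fault cases are padded with randomly generated vectors so the classifier sees the whole input space.

```python
# Train a backpropagation network on time-variance features of sensor
# telemetry, augmenting the scarce real fault cases with random fillers.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def time_variance_features(window: np.ndarray) -> np.ndarray:
    """Per-sensor variance over a sliding window of SSME-style telemetry."""
    return window.var(axis=0)

# real labelled cases: (n_cases, n_samples, n_sensors) -> variance features
real_windows = rng.normal(size=(40, 100, 8))          # stand-in telemetry
real_X = np.array([time_variance_features(w) for w in real_windows])
real_y = rng.integers(0, 3, size=40)                  # 3 fault types (assumed)

# random off-nominal fillers span the space between known fault signatures
random_X = rng.uniform(real_X.min(), real_X.max(), size=(40, 8))
random_y = np.full(40, 3)                             # extra "unknown" class

X = np.vstack([real_X, random_X])
y = np.concatenate([real_y, random_y])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)                                         # trained via backprop
print(clf.predict(real_X[:5]))                        # diagnose new cases
```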

    Beyond Power over Ethernet: the development of Digital Energy Networks for buildings

    Alternating current power distribution using analogue control and safety devices has been the dominant process of power distribution within our buildings since the electricity industry began in the late 19th century. However, with advances in digital technology, the seeds of change have been growing over the last decade. Now, with the simultaneous dramatic fall in the power requirements of digital devices and the corresponding rise in the capability of Power over Ethernet, an entire desktop environment can be powered by a single direct current (dc) Ethernet cable. Going beyond this, it will soon be possible to power entire office buildings using dc networks. This means the logic of “one-size-fits-all” from the existing ac system is no longer relevant; instead, there is an opportunity to redesign the power topology to be appropriate for different applications, devices and end-users throughout the building. This paper proposes a 3-tier classification system for the topology of direct current microgrids in commercial buildings, called a Digital Energy Network (DEN). The first tier is power distribution at the full building level (otherwise known as the microgrid); the second tier is power distribution at the room level (the nanogrid); and the third tier is power distribution at the desktop or appliance level (the picogrid). An important aspect of this classification system is how the design focus changes for each grid. For example, a key driver of the picogrid is the usability of the network (high data rates and low power requirements), whereas in the microgrid the main driver is high power and efficiency at low cost.
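    A minimal sketch of the proposed 3-tier taxonomy as a data structure, recording how the design driver shifts from usability at the desktop to power and efficiency at the building level. The nanogrid driver and all numeric power ranges are assumptions for illustration; the paper as summarised above names drivers only for the picogrid and microgrid.

```python
# The DEN taxonomy as data: three tiers, each with its own design driver.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tier:
    name: str
    scope: str
    design_driver: str
    power_range_w: tuple  # (low, high); assumed order of magnitude only

DEN_TIERS = [
    Tier("picogrid", "desktop / appliance",
         "usability: high data rates, low power", (1, 100)),
    Tier("nanogrid", "room",
         "balance of flexibility and efficiency (assumed)", (100, 10_000)),
    Tier("microgrid", "full building",
         "high power and efficiency at low cost", (10_000, 1_000_000)),
]

def tier_for_load(power_w: float) -> Tier:
    """Pick the lowest tier whose assumed power range contains the load."""
    for tier in DEN_TIERS:
        if tier.power_range_w[0] <= power_w <= tier.power_range_w[1]:
            return tier
    return DEN_TIERS[-1]

print(tier_for_load(60).name)   # -> picogrid (e.g. a PoE desktop setup)
```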

    Automatic programming methodologies for electronic hardware fault monitoring

    This paper presents three variants of Genetic Programming (GP) approaches for intelligent online performance monitoring of electronic circuits and systems. Reliability modeling of electronic circuits is best performed with the stressor-susceptibility interaction model, in which a circuit or system is considered to have failed once a stressor has exceeded the susceptibility limits. For on-line prediction, validated stressor vectors may be obtained by direct measurements or sensors, which, after pre-processing and standardization, are fed into the GP models. Empirical results are compared with artificial neural networks trained using the backpropagation algorithm and with classification and regression trees. The performance of the proposed method is evaluated by comparing the experimental results with the actual failure model values. The developed model reveals that GP could play an important role in future fault monitoring systems. This research was supported by the International Joint Research Grant of the IITA (Institute of Information Technology Assessment) foreign professor invitation program of the MIC (Ministry of Information and Communication), Korea.
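    The stressor-susceptibility failure rule the paper builds on can be sketched in a few lines: a unit is declared failed once any measured stressor exceeds its susceptibility limit. The stressor names and limits below are illustrative assumptions, and the GP-evolved predictor itself is not reproduced.

```python
# Stressor-susceptibility interaction model: failure occurs when any
# stressor in the measured vector exceeds its susceptibility limit.
SUSCEPTIBILITY_LIMITS = {          # assumed per-stressor limits
    "junction_temp_c": 125.0,
    "supply_ripple_v": 0.5,
    "esd_kv": 2.0,
}

def preprocess(raw: dict) -> dict:
    """Stand-in for the pre-processing/standardization step."""
    return {k: float(v) for k, v in raw.items()}

def has_failed(stressor_vector: dict) -> bool:
    """Failure is predicted when any stressor exceeds its susceptibility."""
    s = preprocess(stressor_vector)
    return any(s[k] > limit for k, limit in SUSCEPTIBILITY_LIMITS.items())

print(has_failed({"junction_temp_c": 131.0,
                  "supply_ripple_v": 0.2,
                  "esd_kv": 0.1}))   # -> True (temperature limit exceeded)
```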

    Event Analysis of Pulse-reclosers in Distribution Systems Through Sparse Representation

    The pulse-recloser uses pulse testing technology to verify that the line is clear of faults before initiating a reclose operation, which significantly reduces stress on system components (e.g. substation transformers) and voltage sags on adjacent feeders. Online event analysis of pulse-reclosers is essential to increase the overall utility of the devices, especially when numerous devices are installed throughout the distribution system. In this paper, field data recorded from several devices were analyzed to identify specific activity and fault locations. An algorithm is developed to screen the data, identify the status of each pole, and tag time windows containing a possible pulse event. Selected time windows are then further analyzed and classified using a sparse representation technique by solving an l1-regularized least-squares problem. The classification is obtained by comparing the pulse signature against a reference dictionary to find the set of entries that most closely matches the pulse features. This work also sheds additional light on the possibility of fault classification based on the pulse signature. Field data collected from a distribution system are used to verify the effectiveness and reliability of the proposed method.

    Comment: Accepted in: 19th International Conference on Intelligent System Application to Power Systems (ISAP), San Antonio, TX, 201
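    The sparse-representation step can be sketched as a lasso problem: express a recorded pulse as a sparse combination of reference signatures, then assign the class whose dictionary atoms carry the most coefficient energy. The synthetic dictionary and solver settings below are assumptions, not the paper's actual signatures.

```python
# Sparse-representation classification: solve min_x ||D x - y||^2 + a*||x||_1
# against a dictionary of reference pulse signatures, then pick the class
# whose atoms absorb the most coefficient energy.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_samples, atoms_per_class = 200, 5
classes = ["fault", "no_fault"]

# columns of D are reference pulse signatures, grouped by event class
D = np.hstack([rng.normal(size=(n_samples, atoms_per_class)) for _ in classes])
labels = np.repeat(classes, atoms_per_class)

# a test pulse that is really a noisy mix of two "fault" atoms
y = 0.7 * D[:, 0] + 0.3 * D[:, 2] + 0.01 * rng.normal(size=n_samples)

lasso = Lasso(alpha=0.01, max_iter=10_000)   # l1-regularized least squares
lasso.fit(D, y)
x = lasso.coef_

# classify by per-class coefficient energy
energy = {c: np.sum(np.abs(x[labels == c])) for c in classes}
print(max(energy, key=energy.get))           # -> "fault"
```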