    Susceptible workload driven selective fault tolerance using a probabilistic fault model

    In this paper, we present a novel fault tolerance design technique, applicable at the register transfer level, that protects the functionality of logic circuits using a probabilistic fault model. The proposed technique selects the most susceptible workload of combinational circuits to protect against probabilistic faults. Workload susceptibility is ranked as the likelihood that any fault bypasses the circuit's inherent logical masking and propagates an erroneous response to its outputs when that workload is executed. The workload is protected through a Triple Modular Redundancy (TMR) scheme applied to the patterns evaluated as most susceptible. We apply the proposed technique to the LGSynth91 and ISCAS85 benchmarks and evaluate its fault tolerance capabilities against errors induced by permanent faults and soft errors. We show that the proposed technique, when applied to protect only the 32 most susceptible patterns, achieves, on average across all examined benchmarks, error coverage improvements of 98% and 94% against errors induced by single stuck-at faults (permanent faults) and soft errors (transient faults), respectively, compared to a reduced TMR scheme that protects the same number of susceptible patterns without ranking them.
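
    A minimal sketch of the ranking idea described above: for each input pattern of a toy combinational circuit, inject every single stuck-at fault, count how often the faulty output differs from the golden output, and keep the patterns with the highest error-propagation likelihood. The circuit, fault list, and pattern set are illustrative stand-ins, not the paper's RTL flow or benchmarks.

```python
# Illustrative sketch: rank input patterns of a tiny combinational circuit by
# how likely a single stuck-at fault is to escape logical masking and reach
# the output. Circuit and fault model are stand-ins, not the paper's method.
from itertools import product

def circuit(a, b, c, faults=None):
    """Toy 3-input circuit; `faults` maps net name -> forced stuck-at value."""
    faults = faults or {}
    def net(name, value):
        return faults.get(name, value)
    n1 = net("n1", a & b)
    n2 = net("n2", b | c)
    return net("out", n1 ^ n2)

NETS = ["n1", "n2", "out"]
FAULTS = [(n, v) for n in NETS for v in (0, 1)]  # single stuck-at-0/1 faults

def susceptibility(pattern):
    """Fraction of faults whose effect propagates to the output for this pattern."""
    golden = circuit(*pattern)
    escapes = sum(circuit(*pattern, faults={n: v}) != golden for n, v in FAULTS)
    return escapes / len(FAULTS)

ranked = sorted(product((0, 1), repeat=3), key=susceptibility, reverse=True)
most_susceptible = ranked[:4]  # protect only the top-ranked workload (cf. top-32 in the paper)
print(most_susceptible)
```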

    Susceptible Workload Evaluation and Protection using Selective Fault Tolerance

    Low power fault tolerance design techniques trade reliability to reduce the area cost and power overhead of integrated circuits by protecting only a subset of their workload or their most vulnerable parts. However, in the presence of faults, not all workloads are equally susceptible to errors. In this paper, we present a low power fault tolerance design technique that selects and protects the most susceptible workload. We propose ranking workload susceptibility as the likelihood of any error to bypass the logic masking of the circuit and propagate to its outputs. The susceptible workload is protected by a partial Triple Modular Redundancy (TMR) scheme. We evaluate the proposed technique on timing-independent and timing-dependent errors induced by permanent and transient faults. In comparison with an unranked selective fault tolerance approach, we demonstrate (a) a similar error coverage with a 39.7% average reduction of the area overhead, or (b) an 86.9% average error coverage improvement for a similar area overhead. For the same-area-overhead case, we observe an error coverage improvement of 53.1% and 53.5% against permanent stuck-at and transition faults, respectively, and an average error coverage improvement of 151.8% and 89.0% against timing-dependent and timing-independent transient faults, respectively. Compared to TMR, the proposed technique achieves an area and power overhead reduction of 145.8% to 182.0%.
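
    To make the partial-TMR idea concrete, here is a hedged sketch: only patterns flagged as susceptible by the ranking step are executed on three redundant copies whose outputs are majority-voted, while everything else runs unprotected. The module function, fault injection, and susceptible set are illustrative assumptions, not the paper's circuits.

```python
# Sketch of partial TMR: susceptible patterns run on three copies and are
# majority-voted; other patterns run once, saving area/power. Module, fault
# injection, and the susceptible set are illustrative assumptions.

def module(x, fault=False):
    y = (x * 3) & 0xFF               # stand-in combinational function
    return y ^ 0x01 if fault else y  # optionally corrupt one copy's output

SUSCEPTIBLE = {0x0F, 0xF0, 0xFF}     # patterns selected by the ranking step

def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)  # bitwise 2-of-3 voter

def protected_module(x, fault=False):
    if x in SUSCEPTIBLE:
        # In hardware the three copies are physical replicas; an error in one
        # copy is outvoted by the other two.
        return majority(module(x), module(x, fault), module(x))
    return module(x, fault)          # unprotected path: an error escapes

print(protected_module(0x0F, fault=True) == module(0x0F))  # True: error masked
print(protected_module(0x01, fault=True) == module(0x01))  # False: error escapes
```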

    Data generation and model usage for machine learning-based dynamic security assessment and control

    The global effort to decarbonise, decentralise and digitise electricity grids in response to climate change and evolving electricity markets with active consumers (prosumers) is gaining traction in countries around the world. This effort introduces new challenges to electricity grid operation. For instance, the introduction of variable renewable energy generation, like wind and solar energy, to replace conventional power generation, like oil, gas, and coal, increases the uncertainty in power system operation. Additionally, the dynamics introduced by these converter-interfaced renewable energy sources are much faster than those in conventional systems with thermal power plants. This thesis investigates new data-driven operating tools that help the system operator manage the increased operational uncertainty in this transition. The presented work aims to answer open questions regarding the implementation of these machine learning approaches in real-time operation, primarily related to the quality of the training data needed to train accurate machine-learned models for predicting dynamic behaviour, and the use of these machine-learned models in the control room for real-time operation. To answer the first question, this thesis presents a novel sampling approach for generating 'rare' operating conditions that are physically feasible but have not been experienced by power systems before. In so doing, the aim is to move away from historical observations, which are often limited in describing the full range of operating conditions. The thesis then presents a novel approach based on the Wasserstein distance and entropy to efficiently combine both historical and 'rare' operating conditions into an enriched database capable of training a high-performance classifier. To answer the second question, this thesis presents a scalable and rigorous workflow for trading off multiple objective criteria when choosing decision tree models for real-time operation by system operators, and then showcases a practical implementation that uses a machine-learned model to optimise power system operation cost through topological control actions. Future research directions are underscored by the crucial role of machine learning in securing low-inertia systems, and this thesis identifies research gaps covering physics-informed learning, machine learning-based network planning for secure operation, and robust training datasets.
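
    The database-enrichment step lends itself to a small sketch, assuming NumPy and SciPy are available: compute the Wasserstein distance between the historical and 'rare' operating-condition distributions before and after merging. The feature (per-unit loading), sample sizes, and distributions below are hypothetical stand-ins, not the thesis's actual pipeline.

```python
# Sketch of combining historical and 'rare' operating conditions using the
# 1-D Wasserstein distance (SciPy). Feature and distributions are
# illustrative assumptions, not the thesis's data.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
historical = rng.normal(1.00, 0.05, size=5000)  # e.g. per-unit loading seen in operation
rare = rng.normal(1.30, 0.10, size=500)         # sampled: feasible but never observed

# Distance between the two operating-condition distributions indicates how
# much new information the 'rare' samples would add to the training database.
print("historical vs rare:", wasserstein_distance(historical, rare))

enriched = np.concatenate([historical, rare])
print("enriched vs rare:  ", wasserstein_distance(enriched, rare))
print("enriched size:", enriched.size)
```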

    Assessing and augmenting SCADA cyber security: a survey of techniques

    SCADA systems monitor and control critical infrastructure of national importance such as power generation and distribution, water supply, transportation networks, and manufacturing facilities. The pervasiveness, miniaturisation and declining cost of internet connectivity have transformed these systems from strictly isolated into highly interconnected networks. This connectivity provides immense benefits such as reliability, scalability and remote access, but at the same time exposes an otherwise isolated and secure system to global cyber security threats. This inevitable transformation to highly connected systems thus necessitates effective security safeguards, as any compromise or downtime of SCADA systems can have severe economic, safety and security ramifications. One way to ensure vital asset protection is to adopt a viewpoint similar to an attacker's to determine weaknesses and loopholes in defences. Such a mindset helps to identify and fix potential breaches before they are exploited. This paper surveys tools and techniques to uncover SCADA system vulnerabilities, providing a comprehensive review of the selected approaches along with their applicability.
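
    As a deliberately minimal example of the attacker's-viewpoint reconnaissance such surveys cover, the sketch below probes a host for open ports commonly associated with SCADA protocols. Host address and port list are placeholders, and such scans should only ever target systems you are authorised to assess.

```python
# Minimal reconnaissance sketch: check whether common SCADA service ports
# answer on a host. Run ONLY against systems you are authorised to assess;
# the host here is a TEST-NET placeholder address.
import socket

SCADA_PORTS = {502: "Modbus/TCP", 20000: "DNP3", 102: "IEC 61850 MMS / S7"}

def probe(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "192.0.2.10"  # placeholder (TEST-NET-1)
for port, proto in SCADA_PORTS.items():
    state = "open" if probe(host, port) else "closed/filtered"
    print(f"{host}:{port} ({proto}) -> {state}")
```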

    Unconventional machine learning of genome-wide human cancer data

    Recent advances in high-throughput genomic technologies coupled with exponential increases in computer processing and memory have allowed us to interrogate the complex aberrant molecular underpinnings of human disease from a genome-wide perspective. While the deluge of genomic information is expected to increase, a bottleneck in conventional high-performance computing is rapidly approaching. Inspired in part by recent advances in physical quantum processors, we evaluated several unconventional machine learning (ML) strategies on actual human tumor data. Here we show for the first time the efficacy of multiple annealing-based ML algorithms for classification of high-dimensional, multi-omics human cancer data from The Cancer Genome Atlas. To assess algorithm performance, we compared these classifiers to a variety of standard ML methods. Our results indicate the feasibility of using annealing-based ML to provide competitive classification of human cancer types and associated molecular subtypes, and superior performance with smaller training datasets, thus providing compelling empirical evidence for the potential future application of unconventional computing architectures in the biomedical sciences.
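
    A hedged sketch of the annealing-based classification idea, using plain simulated annealing in place of a physical quantum annealer: binary weights over weak per-feature classifiers are selected to minimise training error, QBoost-style. The synthetic data stands in for multi-omics features; this is not the paper's actual pipeline.

```python
# Sketch of annealing-based ML: choose a binary combination of weak
# classifiers by simulated annealing (QBoost-style). Synthetic data stands in
# for TCGA multi-omics features; not the paper's actual method.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
w_true = rng.choice([0, 1], size=20)
y = np.sign(X @ (w_true + 0.1) + rng.normal(0, 0.5, 200))  # labels in {-1, +1}

weak = np.sign(X)  # weak classifier h_j(x) = sign(x_j)

def energy(w):
    """Training error of the ensemble sign(sum_j w_j * h_j(x))."""
    pred = np.sign(weak @ w + 1e-9)
    return np.mean(pred != y)

w = rng.choice([0, 1], size=20)  # initial binary weight vector
T = 1.0
for step in range(5000):
    j = rng.integers(20)
    w_new = w.copy()
    w_new[j] ^= 1                                # flip one binary weight
    dE = energy(w_new) - energy(w)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        w = w_new                                # accept move (Metropolis rule)
    T *= 0.999                                   # geometric cooling schedule

print("training error:", energy(w))
```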

    A Framework for the Verification and Validation of Artificial Intelligence Machine Learning Systems

    An effective verification and validation (V&V) process framework for the white-box and black-box testing of artificial intelligence (AI) machine learning (ML) systems is not readily available. This research uses grounded theory to develop a framework that leads to the most effective and informative white-box and black-box methods for the V&V of AI ML systems. Verification ensures that the system adheres to the requirements and specifications developed and given by the major stakeholders, while validation confirms that the system performs properly with representative users in the intended environment and does not behave in an unexpected manner. Beginning with definitions, descriptions, and examples of ML processes and systems, the research results identify a clear and general process to effectively test these systems, and the developed framework ensures the most productive and accurate testing results. Formerly, and occasionally still, the system definition and requirements exist in scattered documents that make it difficult to integrate, trace, and test through V&V. Modern system engineers, along with system developers and stakeholders, collaborate to produce a full system model using model-based systems engineering (MBSE). MBSE employs a Unified Modeling Language (UML) or Systems Modeling Language (SysML) representation of the system and its requirements that passes readily between stakeholders for system information and additional input. The comprehensive and detailed MBSE model allows for direct traceability to the system requirements. To thoroughly test an ML system, one performs white-box testing, black-box testing, or both. Black-box testing is a method in which the internal model structure, design, and implementation of the system under test are unknown to the test engineer; testers and analysts simply look at the performance of the system given input and output. White-box testing is a method in which the internal model structure, design, and implementation of the system under test are known to the test engineer. When possible, test engineers and analysts perform both black-box and white-box testing; however, testers sometimes lack authorization to access the internal structure of the system, and the framework captures this decision. No two ML systems are exactly alike, and therefore the testing of each system must be customized to some degree; even so, an effective common process exists. This research includes specialized methods, based on grounded theory, for testing internal structure and performance. Through the study and organization of proven methods, this research develops an effective ML V&V framework that systems engineers and analysts can apply directly to a variety of white-box and black-box V&V testing circumstances.
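
    One way the black-box leg of such a framework looks in practice is a metamorphic test: without inspecting the model's internals, assert that a relation between transformed inputs holds. The sketch below assumes scikit-learn is available; the model, data, and invariance relation are illustrative assumptions, not the research's framework itself.

```python
# Black-box V&V sketch: a metamorphic test checking an invariance property of
# a trained classifier without touching its internals. Model, data, and the
# relation (predictions stable under tiny input noise) are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)  # system under test

def test_noise_invariance(model, X, eps=1e-3, tolerance=0.02):
    """Metamorphic relation: tiny perturbations should rarely flip predictions."""
    base = model.predict(X)
    perturbed = model.predict(X + rng.normal(0, eps, X.shape))
    flip_rate = np.mean(base != perturbed)
    assert flip_rate <= tolerance, f"flip rate {flip_rate:.3f} exceeds tolerance"
    return flip_rate

print("flip rate:", test_noise_invariance(model, X))
```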

    ANOMALY INFERENCE BASED ON HETEROGENEOUS DATA SOURCES IN AN ELECTRICAL DISTRIBUTION SYSTEM

    Harnessing heterogeneous data sets can improve system observability. While the current metering infrastructure in the distribution network has been used operationally to tackle abnormal events, such as weather-related disturbances, the disruptions we face today can be of greater magnitude. Strengthening inter-dependencies as well as incorporating new crowd-sourced information can enhance operational aspects such as system reconfigurability under extreme conditions. Such resilience is crucial to recovery from any catastrophic event. This dissertation focuses on anomalies arising from potential foul play within an electrical distribution system, in both the primary and secondary networks, as well as their potential to relate to feeders from other utilities. Distributed generation has been part of the smart grid mission, but these additions can be prone to electronic manipulation. This dissertation provides a comprehensive treatment of the emerging platform in which computing resources have become ubiquitous in the electrical distribution network. The topics covered are wide-ranging: the anomaly inference includes load modeling and profile enhancement from other sources to infer topological changes in the primary distribution network. While metering infrastructure has been the technological deployment enabling remote-controlled capability on disconnectors, this contribution represents critical knowledge for a new paradigm addressing security-related issues, such as irregularity (tampering by individuals) as well as potential malware (in large-scale form) that can massively manipulate existing network control variables, resulting in a large impact on the power grid.
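
    A small sketch of the load-profile anomaly-inference theme, assuming synthetic hourly meter readings: flag intervals whose residual against a daily-seasonal baseline exceeds a z-score threshold. The data, baseline model, and threshold are hypothetical stand-ins for the dissertation's heterogeneous sources.

```python
# Sketch of anomaly inference on a feeder load profile: flag readings whose
# residual from a daily-seasonal baseline exceeds a z-score threshold.
# Synthetic AMI data; a stand-in for tampering/topology-change detection.
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24 * 14)                                 # two weeks, hourly
load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
load[100:104] -= 30                                        # injected tamper-like dip

# Baseline: mean load for each hour of the day, broadcast back over the series.
baseline = np.array([load[hours % 24 == h].mean() for h in range(24)])[hours % 24]
residual = load - baseline
z = (residual - residual.mean()) / residual.std()

anomalies = hours[np.abs(z) > 3.0]
print("anomalous hours:", anomalies)
```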