
    KNOWLEDGE-BASED NEURAL NETWORK FOR LINE FLOW CONTINGENCY SELECTION AND RANKING

    Line Flow Contingency Selection and Ranking (CS & R) is performed to rank critical contingencies in order of their severity. Artificial Neural Network based methods for MW security assessment corresponding to line-outage events have been reported by various authors in the literature. One way to provide an understanding of the behaviour of Neural Networks is to extract rules that can be provided to the user. Here, domain knowledge (fuzzy rules extracted from a Multi-layer Perceptron model trained by the Back Propagation algorithm) is integrated into a Neural Network for fast and accurate CS & R in an IEEE 14-bus system. The resulting networks handle unknown load patterns and are found to be suitable for on-line applications at Energy Management Centers. The system user is provided with the capability to determine the set of conditions under which a line outage is critical and, if critical, how severe it is, thereby providing some degree of transparency to the ANN solution.
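    To make the underlying building block concrete, the following is a minimal sketch, not the authors' code: a Multi-layer Perceptron trained by back-propagation that maps pre-contingency operating features to a severity class for each line-outage contingency. The feature layout, class labels, and thresholds are assumptions chosen purely for illustration.

    # Minimal sketch (not the authors' code): an MLP trained by back-propagation
    # that classifies a line-outage contingency into severity classes from
    # pre-contingency operating features. Feature layout and labels are assumed.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Assumed features: bus loads and line flows for an IEEE 14-bus style case.
    n_samples, n_features = 500, 20
    X = rng.uniform(0.5, 1.5, size=(n_samples, n_features))       # per-unit operating points
    severity = np.digitize(X[:, :5].sum(axis=1), [4.5, 5.0, 5.5])  # 0..3 = not critical .. most severe (toy rule)

    mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    mlp.fit(X, severity)

    # Rank unseen load patterns by predicted severity (higher class = more critical).
    X_new = rng.uniform(0.5, 1.5, size=(10, n_features))
    ranking = np.argsort(-mlp.predict(X_new))
    print("contingency ranking (most severe first):", ranking)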

    Impact Assessment of Hypothesized Cyberattacks on Interconnected Bulk Power Systems

    The first-ever cyberattack on the Ukrainian power grid demonstrated its devastating potential once attackers hacked into critical cyber assets. With administrative privileges for accessing substation networks or local control centers, one intelligent form of coordinated cyberattack is to execute a series of disruptive switching actions on multiple substations using compromised supervisory control and data acquisition (SCADA) systems. These actions can have significant impacts on an interconnected power grid. Unlike previous power blackouts, such high-impact initiating events can aggravate operating conditions, initiating instability that may lead to system-wide cascading failure. A systematic evaluation of "nightmare" scenarios is highly desirable for asset owners to manage and prioritize the maintenance and investment needed to protect their cyberinfrastructure. This survey paper is a conceptual expansion of the real-time monitoring, anomaly detection, impact analyses, and mitigation (RAIM) framework that emphasizes the resulting impacts on both the steady-state and dynamic aspects of power system stability. Hypothetically, we associate the combinatorial analyses of steady-state substation/component outages with the dynamics of the sequential switching orders as part of the permutation. The expanded framework includes (1) critical/noncritical combination verification, (2) cascade confirmation, and (3) combination re-evaluation. The paper ends with a discussion of the open issues for metrics and future designs pertaining to the impact quantification of cyber-related contingencies.
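    As a rough illustration of the combinatorial screening step described above, the sketch below enumerates N-k substation outage combinations and separates critical from noncritical ones. The evaluate_impact function is a hypothetical placeholder; a real study would run steady-state and dynamic simulations for every combination and switching order.

    # Minimal sketch (not the paper's framework): enumerate N-k substation outage
    # combinations and flag the "critical" ones with a placeholder impact check.
    from itertools import combinations, permutations

    SUBSTATIONS = ["S1", "S2", "S3", "S4", "S5"]   # assumed toy system
    K = 2                                          # size of each outage combination

    def evaluate_impact(outage):
        """Placeholder steady-state screen: pretend S2+S4 together overload the grid."""
        return {"S2", "S4"}.issubset(outage)

    critical, noncritical = [], []
    for combo in combinations(SUBSTATIONS, K):         # step 1: combination verification
        (critical if evaluate_impact(set(combo)) else noncritical).append(combo)

    # Step 2 (cascade confirmation) would replay each critical combination as a
    # sequence of switching actions, i.e. every permutation of the outage order.
    for combo in critical:
        for order in permutations(combo):
            print("confirm cascade for switching order:", order)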

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

    Unsupervised method to classify PM10 pollutant concentrations

    In this paper, a method based mainly on Data Fusion and Artificial Neural Networks is proposed to classify concentrations of one of the most important pollutants, Particulate Matter less than 10 micrometers in diameter (PM10). The main objective is to classify the pollutant concentration into two pollution levels (Non-Contingency and Contingency). Pollutant concentrations and meteorological variables are considered in order to build a Representative Vector (RV) of pollution. The RV is used to train an Artificial Neural Network to classify pollutant events determined by meteorological variables. In the experiments, real time series gathered from the Automatic Environmental Monitoring Network (AEMN) in Salamanca, Guanajuato, Mexico have been used. The method can help to establish a better air quality monitoring methodology, which is essential for assessing the effectiveness of imposed pollution controls and strategies, and for facilitating the reduction of pollutants.
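    The sketch below illustrates the general idea under an assumed data layout: pollutant and meteorological readings are fused into a Representative Vector per sample and a neural network classifies Contingency versus Non-Contingency. The 150 ug/m3 labeling threshold, the feature set, and the synthetic data are assumptions for illustration, not the authors' configuration.

    # Minimal sketch (assumed data layout): fuse PM10 and meteorological readings
    # into a Representative Vector per hour and classify Contingency vs
    # Non-Contingency. The 150 ug/m3 threshold labeling the toy data is assumed.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    hours = 1000

    pm10        = rng.gamma(shape=3.0, scale=40.0, size=hours)  # ug/m3
    wind_speed  = rng.uniform(0.0, 8.0, size=hours)             # m/s
    wind_dir    = rng.uniform(0.0, 360.0, size=hours)           # degrees
    temperature = rng.uniform(5.0, 35.0, size=hours)            # Celsius

    # Representative Vector: fused pollutant + meteorological features.
    RV = np.column_stack([pm10, wind_speed, wind_dir, temperature])
    label = (pm10 > 150.0).astype(int)                          # 1 = Contingency (assumed threshold)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    clf.fit(RV, label)
    print("training accuracy:", clf.score(RV, label))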

    Unsupervised system to classify SO2 pollutant concentrations in Salamanca, Mexico

    Salamanca is cataloged as one of the most polluted cities in Mexico. In order to observe the behavior of Sulphur Dioxide (SO2) concentrations and clarify the influence of wind parameters on them, a Self-Organizing Map (SOM) Neural Network has been implemented at three monitoring locations for the period from January 1 to December 31, 2006. The maximum and minimum daily values of SO2 concentrations measured during 2006 were correlated with the wind parameters of the same period. The main advantage of the SOM Neural Network is that it allows data from different sensors to be integrated and provides readily interpretable results. In particular, it is a powerful mapping and classification tool, which offers information in an accessible way and facilitates the task of establishing an order of priority among the distinguished groups of concentrations depending on their need for further research or remediation actions in subsequent management steps. For each monitoring location, the SOM classifications were evaluated with respect to the pollution levels established by the Health Authorities. The classification system can help to establish a better air quality monitoring methodology, which is essential for assessing the effectiveness of imposed pollution controls and strategies, and for facilitating the reduction of pollutants.
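    A minimal self-organizing map, written from scratch in NumPy, is sketched below to illustrate how daily SO2 and wind measurements can be mapped onto a small grid of units and then grouped into pollution classes; the grid size, learning schedule, and synthetic data are assumptions, not the authors' setup.

    # Minimal self-organizing map sketch in NumPy (not the authors' implementation).
    # Inputs are assumed to be daily [SO2, wind_speed, wind_direction] vectors;
    # after training, each day is assigned to its best-matching map unit, and
    # units can then be grouped into pollution-level classes.
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.random((365, 3))                 # toy normalized [SO2, wind speed, wind dir]

    grid_x, grid_y, dim = 6, 6, data.shape[1]
    weights = rng.random((grid_x, grid_y, dim)) # SOM codebook vectors

    def bmu(x):
        """Index of the best-matching unit for sample x."""
        d = np.linalg.norm(weights - x, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    coords = np.stack(np.meshgrid(np.arange(grid_x), np.arange(grid_y), indexing="ij"), axis=2)
    epochs, lr0, sigma0 = 20, 0.5, 2.0
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for x in data:
            i, j = bmu(x)
            dist2 = ((coords - np.array([i, j])) ** 2).sum(axis=2)
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighborhood function
            weights += lr * h * (x - weights)                  # pull units toward the sample

    # Assign each day to a map unit; units can be labeled by their mean SO2 level.
    assignments = np.array([bmu(x) for x in data])
    print("first 5 day-to-unit assignments:", assignments[:5])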

    Vulnerability Assessment and Privacy-preserving Computations in Smart Grid

    Modern advances in sensor, computing, and communication technologies enable various smart grid applications, which in turn expose vulnerabilities that require novel approaches to cybersecurity. While a substantial number of technologies have been adopted to protect against cyber attacks in the smart grid, a comprehensive review of the implementations, impacts, and solutions of cyber attacks specific to the smart grid is lacking. In this dissertation, we evaluate the security requirements for the smart grid, which comprise three main properties: confidentiality, integrity, and availability. First, we review the cyber-physical security of the synchrophasor network, which highlights all three aspects of these security issues. Taking the synchrophasor network as an example, we give an overview of how to attack a smart grid network. We test three types of attacks and show the impact of each: denial-of-service attacks, sniffing attacks, and false data injection attacks. Next, we discuss how to protect against each attack. For protecting availability, we examine possible defense strategies for the associated vulnerabilities. For protecting data integrity, a small-scale prototype of a secure synchrophasor network is presented with different cryptosystems. In addition, a deep-learning-based time-series anomaly detector is proposed to detect injected measurements. Our approach observes both data measurements and network traffic features to jointly learn system states, and it can detect attacks when the state vector estimator fails. For protecting data confidentiality, we propose privacy-preserving algorithms for two important smart grid applications: (1) a distributed privacy-preserving quadratic optimization algorithm to solve the Security Constrained Optimal Power Flow (SCOPF) problem, in which the SCOPF problem is decomposed into small subproblems using the Alternating Direction Method of Multipliers (ADMM) and gradient projection algorithms; and (2) a Paillier-cryptosystem-based scheme to secure the computation of power system dynamic simulation, implemented and demonstrated on the IEEE 3-Machine 9-Bus System. The security and performance analysis of our implementations demonstrates that our algorithms can prevent chosen-ciphertext attacks at a reasonable cost.
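    To illustrate the additive homomorphism that makes the Paillier cryptosystem attractive for privacy-preserving computation, the sketch below uses the third-party python-paillier (phe) library; the library choice and the toy measurement values are assumptions, not the dissertation's implementation.

    # Minimal sketch of the Paillier additive homomorphism used in
    # privacy-preserving computation; relies on the third-party `phe` package
    # (python-paillier). The "measurements" are made-up numbers.
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    # Parties encrypt their local measurements with the shared public key.
    measurements = [12.5, -3.2, 7.8]                       # e.g. bus injections (assumed values)
    ciphertexts = [public_key.encrypt(m) for m in measurements]

    # An untrusted aggregator can sum ciphertexts and scale by public constants
    # without ever seeing the plaintext values.
    encrypted_sum = ciphertexts[0] + ciphertexts[1] + ciphertexts[2]
    encrypted_scaled = encrypted_sum * 0.5

    # Only the private-key holder can decrypt the result.
    print("decrypted result:", private_key.decrypt(encrypted_scaled))  # approximately 8.55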

    The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism

    Computer vision and other biometrics data science applications have commenced a new project of profiling people. Rather than using 'transaction generated information', these systems measure the 'real world' and produce an assessment of the 'world state' - in this case an assessment of some individual trait. Instead of using proxies or scores to evaluate people, they increasingly deploy a logic of revealing the truth about reality and the people within it. While these profiling knowledge claims are sometimes tentative, they increasingly suggest that only through computation can these excesses of reality be captured and understood. This article explores the bases of those claims in the systems of measurement, representation, and classification deployed in computer vision. It asks if there is something new in this type of knowledge claim, sketches an account of a new form of computational empiricism being operationalised, and questions what kind of human subject is being constructed by these technological systems and practices. Finally, the article explores legal mechanisms for contesting the emergence of computational empiricism as the dominant knowledge platform for understanding the world and the people within it.

    Model-based and Model-free Approaches for Power System Security Assessment

    Continuous security assessment of a power system is necessary to ensure a reliable, stable, and continuous supply of electrical power to customers. To this end, this dissertation identifies and explores some of the challenges encountered in the field of power system security assessment, and several model-based and/or model-free approaches are developed to overcome them. First, a voltage stability index, named TAVSI, is proposed. This index has three important features: TAVSI applies to general load models including ZIP, exponential, and induction motor loads; TAVSI can be used for both measurement-based and model-based voltage stability assessment; and TAVSI is calculated from normalized sensitivities, which enables identification of weak buses and the definition of a global instability threshold. TAVSI was tested on both the IEEE 14-bus and the 181-bus WECC systems, and the results show that it gives a reliable assessment of system stability. Second, a data-driven and model-based hybrid reinforcement learning approach is proposed for training a control agent to re-dispatch generators' output power in order to relieve stressed branches. For large power systems, the agent's action space is high-dimensional, which makes the successful training of purely data-driven agents challenging. We therefore propose a hybrid approach in which model-based actions are utilized to help the agent learn an optimal control policy. The proposed approach was tested and compared to a generic data-driven DDPG-based approach on the IEEE 118-bus system and a larger 2749-bus real-world system. Results show that the hybrid approach performs well for large power systems and is superior to the DDPG-based approach. Finally, a Convolutional Neural Network (CNN) based approach is proposed as a faster alternative to classical AC power flow-based contingency screening. The proposed approach is investigated on both the IEEE 118-bus system and the Texas 2000-bus synthetic system. For such large systems, the implementation of the proposed approach comes with several challenges, such as computational burden, learning from imbalanced datasets, and performance evaluation of trained models. Accordingly, this work contributes a set of novel techniques and best practices that enables efficient and successful implementation of CNN-based multi-contingency classifiers for large power systems.
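    As an illustration of the last component, the sketch below shows a small convolutional network that maps a pre-contingency operating snapshot, arranged as a two-dimensional grid of bus features, to secure/insecure labels for a set of monitored contingencies; the architecture, input layout, and dimensions are assumptions rather than the dissertation's design.

    # Minimal sketch (architecture and input layout assumed): a small CNN for
    # multi-label contingency screening from a pre-contingency snapshot.
    import torch
    import torch.nn as nn

    N_CONTINGENCIES = 50          # number of screened contingencies (assumed)
    CHANNELS = 3                  # e.g. voltage magnitude, angle, load per bus (assumed)
    GRID = 16                     # buses arranged on a 16x16 grid for the CNN input

    class ContingencyCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(CHANNELS, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * (GRID // 4) ** 2, N_CONTINGENCIES)

        def forward(self, x):
            z = self.features(x).flatten(1)
            return self.head(z)   # logits; sigmoid > 0.5 => contingency flagged insecure

    model = ContingencyCNN()
    snapshot = torch.randn(8, CHANNELS, GRID, GRID)          # toy batch of operating points
    logits = model(snapshot)
    loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, logits.shape).float())
    print(logits.shape, float(loss))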

    Power system security boundary visualization using intelligent techniques

    In the open access environment, one of the challenges for utilities is that typical operating conditions tend to be much closer to security boundaries. Consequently, security levels for the transmission network must be accurately assessed and easily identified on-line by system operators. Security assessment through boundary visualization provides the operator with knowledge of system security levels in terms of easily monitorable pre-contingency operating parameters. The traditional boundary visualization approach results in a two-dimensional graph called a nomogram. However, intensive labor involvement, inaccurate boundary representation, and little flexibility in integrating with the energy management system greatly restrict the use of nomograms in the competitive utility environment. Motivated by the new operating environment and based on the traditional nomogram development procedure, an automatic security boundary visualization methodology has been developed using neural networks with feature selection. This methodology provides a new security assessment tool for power system operations. Its main steps are data generation, feature selection, neural network training, and boundary visualization. In data generation, a systematic approach has been developed to generate high quality data, and several data analysis techniques have been used to analyze the data before neural network training. In feature selection, genetic algorithm based methods have been used to select the most predictive pre-contingency operating parameters. Following neural network training, a confidence interval calculation method to measure the reliability of the neural network output has been derived, along with a sensitivity analysis of the neural network output with respect to the input parameters. In boundary visualization, a composite security boundary visualization algorithm has been proposed to present accurate boundaries in two-dimensional diagrams to operators for any type of security problem. This methodology has been applied to thermal overload and voltage instability problems on a sample system.
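    A generic genetic-algorithm feature-selection loop of the kind referred to above is sketched below: chromosomes are bit masks over candidate pre-contingency parameters and fitness is the cross-validated accuracy of a small classifier on the selected features. The data, classifier, and GA settings are illustrative assumptions, not the dissertation's method.

    # Minimal genetic-algorithm feature-selection sketch (a generic illustration).
    # All data here is synthetic; only 3 of the 12 candidate features matter.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_samples, n_features = 300, 12
    X = rng.normal(size=(n_samples, n_features))
    y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7] > 0).astype(int)

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = LogisticRegression(max_iter=500)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    pop = rng.integers(0, 2, size=(20, n_features))            # population of bit masks
    for generation in range(15):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]                # keep the best half
        kids = parents[rng.integers(0, 10, 10)].copy()
        flip = rng.random(kids.shape) < 0.1                    # mutation: flip ~10% of bits
        kids[flip] ^= 1
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected feature indices:", np.flatnonzero(best))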