
    A survey on the development status and application prospects of knowledge graph in smart grids

    With the advent of the electric power big data era, semantic interoperability and interconnection of power data have received extensive attention. Knowledge graph technology is a method for describing the complex relationships between concepts and entities in the objective world, and it has attracted wide interest because of its robust knowledge inference ability. In particular, with the proliferation of measurement devices and the exponential growth of electric power data, the electric power knowledge graph offers new opportunities to resolve the contradiction between massive power data resources and the continuously increasing demand for intelligent applications. To fulfil the potential of knowledge graphs, address the various challenges they face, and obtain insights for achieving business applications in smart grids, this work first presents a holistic study of knowledge-driven intelligent application integration. Specifically, a detailed overview of electric power knowledge mining is provided. Then, an overview of the knowledge graph in smart grids is introduced. Moreover, the architecture of a big knowledge graph platform for smart grids and its critical technologies are described. Furthermore, this paper comprehensively elaborates on the application prospects enabled by knowledge graphs in smart grids, covering power consumer service, decision-making in dispatching, and operation and maintenance of power equipment. Finally, issues and challenges are summarised. Comment: IET Generation, Transmission & Distribution
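    As a rough illustration of the knowledge-graph idea applied to power equipment, the sketch below stores domain facts as subject-predicate-object triples and answers a simple reachability query; the entity and relation names are hypothetical and are not taken from the survey.

```python
# Minimal sketch of a power-domain knowledge graph as subject-predicate-object
# triples, with a naive traversal; entity and relation names are hypothetical.

triples = [
    ("Transformer_T1", "installed_at", "Substation_A"),
    ("Substation_A", "feeds", "Feeder_F3"),
    ("Feeder_F3", "supplies", "Customer_C42"),
    ("Transformer_T1", "has_state", "overheating_alarm"),
]

def neighbors(entity, relation=None):
    """Return objects linked from `entity`, optionally filtered by relation."""
    return [o for s, p, o in triples
            if s == entity and (relation is None or p == relation)]

def downstream(entity):
    """Everything reachable from `entity` over any relation (naive DFS)."""
    seen, stack = set(), [entity]
    while stack:
        node = stack.pop()
        for obj in neighbors(node):
            if obj not in seen:
                seen.add(obj)
                stack.append(obj)
    return seen

# Example query: which assets or customers could be affected by Transformer_T1?
print(downstream("Transformer_T1"))
```

    A production-scale platform would of course rely on a dedicated graph store and richer inference, but the triple representation and traversal query above capture the basic data structure the abstract refers to.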

    A new layer of protection through super-alarms with diagnostic capability

    An alarm management methodology can be formulated as a discrete event sequence recognition problem in which time patterns are used to identify the safe condition of the process, especially during the start-up and shutdown stages. Industrial plants, particularly in the petrochemical, energy, and chemical sectors, require a combined management of all the events that can result in a catastrophic accident. This document introduces a new layer of protection (the super-alarm) for industrial processes based on a diagnostic stage. Alarms and actions of the standard operating procedure are treated as discrete events involved in sequences, and the diagnostic stage corresponds to the recognition of a special situation when these sequences occur. This provides operators with pertinent information regarding the normal or abnormal situations induced by the flow of alarms. Chronicles Based Alarm Management (CBAM) is the methodology used to build the chronicles that allow the super-alarms to be generated. Furthermore, a case study from the petrochemical sector using CBAM is presented to build the chronicles of the normal start-up, abnormal start-up, and normal shutdown scenarios. Finally, scenario validation is performed for an abnormal start-up, showing how a super-alarm is generated.
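    To make the chronicle idea concrete, here is a minimal sketch, not the paper's CBAM implementation, of matching a timed alarm sequence against a chronicle expressed as an ordered list of expected events with bounds on the delay between consecutive matches; the event names and time windows are invented for illustration.

```python
# Minimal sketch of matching a timed alarm sequence against a "chronicle":
# an ordered list of expected events with bounds on the delay between them.
# Event names and time bounds are hypothetical, not taken from the paper.

chronicle = [
    ("PUMP_START", None),            # first event, no delay constraint
    ("LOW_FLOW_ALARM", (0, 30)),     # must follow within 0-30 s of previous match
    ("HIGH_PRESSURE_ALARM", (0, 60)),
]

def matches(chronicle, events):
    """events: list of (name, timestamp) sorted by time.
    Returns True if the chronicle pattern occurs in order within its delays."""
    idx, last_t = 0, None
    for name, t in events:
        expected, window = chronicle[idx]
        if name != expected:
            continue
        if window is not None and not (window[0] <= t - last_t <= window[1]):
            continue
        last_t = t
        idx += 1
        if idx == len(chronicle):
            return True
    return False

observed = [("PUMP_START", 0.0), ("LOW_FLOW_ALARM", 12.5),
            ("HIGH_PRESSURE_ALARM", 40.0)]
if matches(chronicle, observed):
    print("Super-alarm: abnormal start-up pattern recognized")
```

    When such a sequence is recognized, the diagnostic stage can raise a single super-alarm instead of flooding the operator with the individual alarms that compose it.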

    Big Data and the Internet of Things

    Advances in sensing and computing capabilities are making it possible to embed increasing computing power in small devices. This has enabled sensing devices not just to passively capture data at very high resolution but also to take sophisticated actions in response. Combined with advances in communication, this is resulting in an ecosystem of highly interconnected devices referred to as the Internet of Things (IoT). In conjunction, advances in machine learning have allowed models to be built on these ever-increasing amounts of data. Consequently, devices ranging from heavy assets such as aircraft engines to wearables such as health monitors can now not only generate massive amounts of data but also draw on aggregate analytics to "improve" their performance over time. Big data analytics has been identified as a key enabler for the IoT. In this chapter, we discuss various avenues of the IoT where big data analytics either is already making a significant impact or is on the cusp of doing so. We also discuss social implications and areas of concern. Comment: 33 pages. Draft of upcoming book chapter in Japkowicz and Stefanowski (eds.), Big Data Analysis: New Algorithms for a New Society, Springer Series on Studies in Big Data, to appear

    Monitoring and control for NGL recovery plant

    This thesis explores the production of natural gas liquids (NGL) and the challenge of monitoring and controlling the fractionation process. NGLs are the C2+ hydrocarbon fraction contained in natural gas, which includes useful feedstocks for industrial production processes. Since NGLs have greater economic value than natural gas, their recovery has become increasingly significant economically, leading to a need for efficient fractionation. This energy-intensive process is typically conducted in separation trains that include cryogenic distillation columns. Given the high cost of composition analyzers and the significant delays they introduce, this work proposes the use of indirect composition control strategies, as well as data-driven control strategies, to achieve the desired product quality and optimize the plant's energy consumption under typical disturbances. Feedforward neural networks (FFNs) were used to develop the soft sensors employed in the data-driven control schemes. Given the multitude of data made available by the process simulator, this work also aims to develop a demethanizer digital twin that can approximate the column dynamics with reduced computation time. Long Short-Term Memory (LSTM) neural networks, along with physical knowledge, were used to develop different neural network architectures, which were compared to select the most suitable one for the surrogate model. Realistic measurement noise was considered to accurately reflect the measurements of real industrial plants, and only easy-to-measure variables were used as inputs to the neural model. Overall, the research presents an energy-efficient NGL recovery scheme, offering a cost-effective and efficient alternative to traditional measuring instruments. Moreover, the study illustrates a novel application of LSTMs to the realization of distillation column digital twins, providing a useful tool for optimization, monitoring, and control using available plant measurements.
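    As a rough sketch of the kind of data-driven surrogate described here, the example below maps a window of easy-to-measure column signals to a single hard-to-measure quality variable with an LSTM; the layer sizes, window length, and variable counts are assumptions for illustration, not the thesis' actual architecture.

```python
# Minimal sketch of an LSTM-based surrogate/soft sensor mapping a window of
# easy-to-measure column signals (e.g. temperatures, pressures, flows) to a
# hard-to-measure product quality. Sizes and names are illustrative only.
import torch
import torch.nn as nn

class ColumnSurrogate(nn.Module):
    def __init__(self, n_inputs, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted composition / quality

    def forward(self, x):                  # x: (batch, time, n_inputs)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # use the last time step

# Toy usage: 8 measured variables over a 30-step window.
model = ColumnSurrogate(n_inputs=8)
x = torch.randn(16, 30, 8)                 # batch of (noisy) plant measurements
y_hat = model(x)
print(y_hat.shape)                         # torch.Size([16, 1])
```

    In practice such a model would be trained on simulator or historian data and then queried far faster than a rigorous dynamic column model, which is what makes it usable inside a digital twin or an inferential control loop.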

    ASPIE: A Framework for Active Sensing and Processing of Complex Events in the Internet of Manufacturing Things

    Rapid perception and processing of critical monitoring events are essential to ensure the healthy operation of Internet of Manufacturing Things (IoMT)-based manufacturing processes. In this paper, we propose a framework, the active sensing and processing architecture (ASPIE), for active sensing and processing of critical events in IoMT-based manufacturing, based on the characteristics of the IoMT architecture as well as its perception model. A relation model of complex events in manufacturing processes, together with related operators and unified XML-based semantic definitions, is developed to effectively process complex event big data. A template-based processing method for complex events is further introduced to conduct complex event matching using the Apriori frequent itemset mining algorithm. To evaluate the proposed models and methods, we developed a software platform based on ASPIE for a local chili sauce manufacturing company, which demonstrated the feasibility and effectiveness of the proposed methods for active perception and processing of complex events in IoMT-based manufacturing.
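    For intuition about the frequent-pattern step mentioned above, here is a minimal, self-contained Apriori sketch over sets of co-occurring manufacturing events; the event names and support threshold are hypothetical, and this is not the paper's actual ASPIE matching pipeline.

```python
# Minimal self-contained Apriori sketch for mining frequent event combinations
# from manufacturing event logs; event names and the support threshold are
# hypothetical.
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """transactions: list of sets of event names. Returns {itemset: support}."""
    n = len(transactions)
    current = {frozenset([e]) for t in transactions for e in t}
    frequent = {}
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        keys = list(level)
        current = {a | b for a, b in combinations(keys, 2)
                   if len(a | b) == len(a) + 1}
    return frequent

logs = [
    {"TEMP_HIGH", "VALVE_STUCK", "BATCH_DELAY"},
    {"TEMP_HIGH", "VALVE_STUCK"},
    {"TEMP_HIGH", "BATCH_DELAY"},
    {"VALVE_STUCK", "BATCH_DELAY", "TEMP_HIGH"},
]
for itemset, support in apriori(logs, min_support=0.75).items():
    print(set(itemset), round(support, 2))
```

    The frequent combinations found this way can then be turned into complex-event templates, which is the role the Apriori step plays in the template-based matching described in the abstract.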