
    Secure Data Management and Transmission Infrastructure for the Future Smart Grid

    The power grid has played a crucial role since its inception in the Industrial Age. It has evolved from a wide network supplying energy to multiple incorporated areas into the largest cyber-physical system, and its security and reliability are crucial to any country’s economy and stability [1]. With the emergence of new technologies and the growing pressure of global warming, the aging power grid can no longer meet the requirements of modern industry, which has led to the proposal of the ‘smart grid’. In a smart grid, both electricity and control information flow through a massively distributed power network, so it is essential that the grid deliver real-time data over a communication network. Using smart meters, the advanced metering infrastructure (AMI) can measure energy consumption, monitor loads, collect data, and forward information to collectors. The smart grid is an intelligent network that combines technologies not only from power engineering but also from information, telecommunications, and control. Its best-known architecture is the three-layer structure, which divides the smart grid into three layers, each with its own duty; working together, the three layers monitor and optimize the operation of all functional units, from power generation to the end customers [2]. To enhance the security level of the future smart grid, deploying a highly secure data transmission scheme on critical nodes is an effective and practical approach. A critical node is a communication node in a cyber-physical network that can be upgraded to meet certain requirements; it also provides firewalls and intrusion detection capability, which makes it suitable for a time-critical network system such as the future smart grid. Deploying such a scheme can be tricky for different network topologies. A simple and general way is to install it on every node in the network, that is, to treat every node as a critical node, but this costs time, energy, and money and is clearly not the best option. We therefore propose a multi-objective evolutionary algorithm for the search for critical nodes, from which a new deployment scheme for the smart grid can be derived. Moreover, optimal planning for embedding such a system into a large power grid can effectively ensure that every power station and substation operates safely and that anomalies are detected in time, providing a reliable way to meet increasing security challenges. The evolutionary framework finds optima without computing the gradient of the objective function, while a decomposition approach explores solutions evenly across the decision space. Furthermore, constraint-handling techniques can place critical nodes at optimal locations so as to enhance system security even under constraints of limited resources and/or hardware. The experimental results validate the efficiency and applicability of the proposed approach, and there is good reason to believe that the new algorithm is promising for real-world multi-objective optimization problems drawn from the power grid security domain. In this thesis, a cloud-based information infrastructure is proposed to deal with the big data storage and computation problems of the future smart grid, its challenges and limitations are addressed, and a new secure data management and transmission strategy for the increasing security challenges of the future smart grid is given as well.
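    A minimal sketch of the kind of decomposition-based evolutionary search described above is shown below. The random communication graph, the two objectives (fraction of the network left uncovered by critical nodes versus total deployment cost), and every parameter are assumptions made for illustration, not the thesis's actual model, and the constraint-handling step is omitted.

```python
# Sketch only: weighted-sum decomposition of a two-objective critical-node
# placement problem, searched with a simple evolutionary loop.
import random

random.seed(0)

N_NODES = 30
# Hypothetical communication topology: random adjacency among substations.
edges = {(i, j) for i in range(N_NODES) for j in range(i + 1, N_NODES)
         if random.random() < 0.15}
neighbours = {i: set() for i in range(N_NODES)}
for i, j in edges:
    neighbours[i].add(j)
    neighbours[j].add(i)
node_cost = [random.uniform(1.0, 3.0) for _ in range(N_NODES)]  # assumed hardware cost

def objectives(mask):
    """Return (uncovered fraction, total cost); both are to be minimised."""
    covered = set()
    for n, chosen in enumerate(mask):
        if chosen:
            covered.add(n)
            covered |= neighbours[n]
    uncovered = 1.0 - len(covered) / N_NODES
    cost = sum(c for c, chosen in zip(node_cost, mask) if chosen)
    return uncovered, cost

def scalarise(objs, weight):
    """Weighted-sum decomposition of the two objectives into one subproblem."""
    u, c = objs
    return weight * u + (1 - weight) * (c / sum(node_cost))

def evolve(weight, pop_size=40, generations=200, p_mut=0.05):
    pop = [[random.random() < 0.2 for _ in range(N_NODES)] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)                       # pick two parents
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            child = [not g if random.random() < p_mut else g for g in child]  # mutation
            children.append(child)
        pop = sorted(pop + children,
                     key=lambda m: scalarise(objectives(m), weight))[:pop_size]
    return pop[0]

# One subproblem per weight vector approximates the Pareto front.
for w in (0.2, 0.5, 0.8):
    best = evolve(w)
    print(f"weight={w:.1f} -> objectives={objectives(best)}")
```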

    Cloud Computing cost and energy optimization through Federated Cloud SoS

    The two most significant differentiators among contemporary cloud computing service providers are increased green energy use and datacenter resource utilization. This work addresses these two issues from a system-of-systems (SoS) architectural optimization viewpoint. The approach proposed herein allows multiple cloud providers to utilize their individual computing resources in three ways: (1) by cutting the number of datacenters needed, (2) by scheduling available datacenter grid energy via aggregators to reduce costs and power outages, and (3) by utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, the proposed approach creates an alternative paradigm, a Federated Cloud SoS. The paradigm employs a novel control methodology tuned to obtain both financial and environmental advantages. It also supports dynamic expansion and contraction of computing capabilities to handle sudden variations in service demand as well as to maximize the use of time-varying green energy supplies. Herein we analyze the core SoS requirements, concept synthesis, and functional architecture with an eye on avoiding inadvertent cascading conditions, and we suggest a physical architecture that simulates the primary SoS emergent behavior to diminish unwanted outcomes while encouraging desirable results. In our approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means. The report also analyzes optimal computing generation methods and optimal energy utilization for computing, as well as a procedure for building optimal datacenters using a unique hardware computing system design, with the openCompute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation must support to protect its constituents, its constituents' tenants, and itself from security risks.
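    The scheduling idea can be illustrated with a small sketch: a greedy placement of compute load across federated datacenters that trades off energy price against carbon intensity. The datacenter figures, the scoring rule, and the names used here are assumptions made for the example, not the control methodology proposed in the work.

```python
# Illustrative sketch only: place a compute load across federated datacenters,
# preferring cheap, low-carbon capacity; escalate when the federation is full.
from dataclasses import dataclass

@dataclass
class Datacenter:
    name: str
    free_capacity_kw: float     # compute capacity still available
    energy_price: float         # $/kWh offered via the local aggregator
    carbon_intensity: float     # gCO2/kWh of the current energy mix

def place_load(load_kw, datacenters, carbon_weight=0.5):
    """Split a load across datacenters by a blended price/carbon score."""
    def score(dc):
        # Lower is better: blend price with (scaled) carbon intensity.
        return ((1 - carbon_weight) * dc.energy_price
                + carbon_weight * dc.carbon_intensity / 1000.0)

    placement = {}
    remaining = load_kw
    for dc in sorted(datacenters, key=score):
        if remaining <= 0:
            break
        take = min(dc.free_capacity_kw, remaining)
        if take > 0:
            placement[dc.name] = take
            remaining -= take
    if remaining > 0:
        raise RuntimeError("Federation lacks capacity; trigger dynamic expansion.")
    return placement

fleet = [
    Datacenter("dc-hydro", 400, 0.06, 20),    # mostly renewable supply
    Datacenter("dc-grid", 900, 0.04, 450),    # cheap but carbon-heavy grid mix
    Datacenter("dc-solar", 250, 0.05, 40),
]
print(place_load(600, fleet, carbon_weight=0.7))
```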

    Artificial intelligence driven anomaly detection for big data systems

    The main goal of this thesis is to contribute to the research on automated performance anomaly detection and interference prediction by implementing Artificial Intelligence (AI) solutions for complex distributed systems, especially Big Data platforms within cloud computing environments. The late detection and manual resolution of performance anomalies and system interference in Big Data systems may lead to performance violations and financial penalties. Motivated by this issue, we propose AI-based methodologies for anomaly detection and interference prediction tailored to Big Data and containerized batch platforms, in order to better analyze system performance and effectively utilize computing resources within cloud environments. New, precise, and efficient performance management methods are therefore key to handling performance anomalies and interference and to improving the efficiency of data center resources. The first part of this thesis contributes to performance anomaly detection for in-memory Big Data platforms. We examine the performance of Big Data platforms and justify our choice of the in-memory Apache Spark platform. An artificial neural network-driven methodology is proposed to detect and classify performance anomalies for batch workloads based on RDD characteristics and operating system monitoring metrics. Our method is evaluated against other popular machine learning (ML) algorithms and on four different monitoring datasets. The results show that our proposed method outperforms the other ML methods, typically achieving 98–99% F-scores. Moreover, we show that a random start instant, a random duration, and overlapped anomalies do not significantly impact the performance of our proposed methodology. The second contribution addresses the challenge of anomaly identification within an in-memory streaming Big Data platform by investigating agile hybrid learning techniques. We develop TRACK (neural neTwoRk Anomaly deteCtion in sparK) and TRACK-Plus, two methods to efficiently train a class of machine learning models for performance anomaly detection using a fixed number of experiments. Our model uses artificial neural networks with Bayesian Optimization (BO) to find the training dataset size and configuration parameters that train the anomaly detection model efficiently while achieving high accuracy. The objective is to accelerate the search for the training dataset size, optimize the neural network configuration, and improve the performance of anomaly classification. A validation based on several datasets from a real Apache Spark Streaming system demonstrates that the proposed methodology can efficiently identify performance anomalies, near-optimal configuration parameters, and a near-optimal training dataset size while reducing the number of experiments by up to 75% compared with naïve anomaly detection training. The last contribution overcomes the challenges of predicting the completion time of containerized batch jobs and proactively avoiding performance interference by introducing an automated prediction solution that estimates interference among colocated batch jobs within the same computing environment. An AI-driven model is implemented to predict the interference among batch jobs before it occurs within the system. Our interference detection model can estimate, and help alleviate, the task slowdown caused by interference.
This model assists system operators in making accurate decisions to optimize job placement. Our model is agnostic to the business logic internal to each job. Instead, it learns from system performance data, applying artificial neural networks to predict the completion time of batch jobs within cloud environments. We compare our model with three baseline models (a queueing-theoretic model, operational analysis, and an empirical method) on historical measurements of job completion time and CPU run-queue size (i.e., the number of active threads in the system). The proposed model captures multithreading, operating system scheduling, sleeping time, and job priorities. A validation based on 4,500 experiments with the DaCapo benchmarking suite was carried out, confirming the predictive efficiency and capabilities of the proposed model, which achieves a mean absolute percentage error (MAPE) of at most 10% compared with the other models.
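    As a minimal illustration of the idea behind the first contribution (not the TRACK or TRACK-Plus models themselves), the sketch below trains a small neural network on synthetic operating-system monitoring metrics and reports an F-score; the metrics, their distributions, and the hyperparameters are assumptions made for the example.

```python
# Sketch: flag performance anomalies from OS monitoring metrics with a small
# neural network, then score the classifier with the F-score.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic "normal" samples: moderate CPU and memory use, low I/O wait.
normal = rng.normal(loc=[0.4, 0.5, 0.05], scale=0.1, size=(2000, 3))
# Synthetic "anomalous" samples: saturated CPU and heavy I/O wait.
anomalous = rng.normal(loc=[0.9, 0.8, 0.4], scale=0.1, size=(200, 3))

X = np.vstack([normal, anomalous]).clip(0, 1)
y = np.array([0] * len(normal) + [1] * len(anomalous))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("F-score:", f1_score(y_test, clf.predict(X_test)))
```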

    Automated cache optimisations of stencil computations for partial differential equations

    This thesis focuses on numerical methods that solve partial differential equations. Our focal point is the finite difference method, which solves partial differential equations by approximating derivatives with explicit finite differences. These partial differential equation solvers consist of stencil computations on structured grids. Stencils for real-world practical applications are patterns often characterised by many memory accesses and non-trivial arithmetic expressions, which lead to high computational costs compared with the simple stencils used in much prior proof-of-concept work. In addition, the loop nests used to express stencils on structured grids may be complicated. This work is highly motivated by a specific domain of stencil computations in which one of the challenges is operations that are not aligned to the structured grid ("off-the-grid" operations). These operations update neighbouring grid points through scatter and gather operations via non-affine memory accesses, such as A[B[i]]. In addition to this challenge, these practical stencils often include many computation fields (which require storing multiple grid copies), complex data dependencies and imperfect loop nests. In this work, we aim to increase the performance of stencil kernel execution. We study automated cache-memory-dependent optimisations for stencil computations. This work consists of two core parts with their respective contributions. The first part of our work tries to reduce the data movement in stencil computations of practical interest. Data movement is a dominant factor affecting the performance of high-performance computing applications. It has long been a target of optimisations due to its impact on execution time and energy consumption. This thesis tries to relieve this cost by applying temporal blocking optimisations, also known as time-tiling, to stencil computations. Temporal blocking is a well-known technique to enhance data reuse in stencil computations. However, it is rarely used in practical applications and appears mostly in theoretical examples that prove its efficacy. Applying temporal blocking to scientific simulations is more complex. More specifically, in this work, we focus on the application context of seismic and medical imaging. In this area, we often encounter scatter and gather operations due to signal sources and receivers at arbitrary locations in the computational domain. These operations make the application of temporal blocking challenging. We present an approach to overcome this challenge and successfully apply temporal blocking. In the second part of our work, we extend the first part into an automated approach targeting a wide range of simulations modelled with partial differential equations. Since temporal blocking is error-prone, tedious to apply by hand and highly complex to assimilate theoretically and practically, we are motivated to automate its application and automatically generate code that benefits from it. We discuss algorithmic approaches and present a generalised compiler pipeline to automate the application of temporal blocking. These passes are written in the Devito compiler. They are used to accelerate the computation of stencil kernels in areas such as seismic and medical imaging, computational fluid dynamics and machine learning. Devito (www.devitoproject.org) is a Python package for implementing optimised stencil computations (e.g., finite differences, image processing, machine learning) from high-level symbolic problem definitions.
Devito builds on SymPy (www.sympy.org) and employs automated code generation and just-in-time compilation to execute optimised computational kernels on several computer platforms, including CPUs, GPUs, and clusters thereof. We show how we automate temporal blocking code generation without user intervention and often achieve better time-to-solution. We enable domain-specific optimisation through compiler passes and offer temporal blocking gains from a high-level symbolic abstraction. These automated optimisations benefit various computational kernels for solving real-world application problems.
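    The following sketch illustrates the overlapped ("ghost zone") flavour of temporal blocking on a 1-D diffusion stencil: each spatial tile copies a halo wide enough to advance several timesteps locally before writing back, trading redundant work for cache reuse. It is a conceptual illustration in plain NumPy, not the code Devito generates, and the tile width and block depth are arbitrary choices.

```python
# Sketch: naive time loop vs. overlapped temporal blocking for a 3-point stencil.
import numpy as np

def step(u, c=0.25):
    """One explicit 3-point stencil update with fixed (Dirichlet) boundaries."""
    out = u.copy()
    out[1:-1] = u[1:-1] + c * (u[2:] - 2 * u[1:-1] + u[:-2])
    return out

def naive(u0, timesteps):
    u = u0.copy()
    for _ in range(timesteps):
        u = step(u)
    return u

def time_tiled(u0, timesteps, tile_width=64, steps_per_block=8):
    """Each tile carries a halo of steps_per_block points per side, so it can be
    advanced several timesteps locally before its interior is written back."""
    u = u0.copy()
    n = len(u)
    done = 0
    while done < timesteps:
        tb = min(steps_per_block, timesteps - done)
        new = u.copy()
        for start in range(1, n - 1, tile_width):
            stop = min(start + tile_width, n - 1)           # tile interior [start, stop)
            lo, hi = max(0, start - tb), min(n, stop + tb)  # tile plus halo
            local = u[lo:hi].copy()
            for _ in range(tb):
                local = step(local)                         # halo absorbs edge errors
            new[start:stop] = local[start - lo:stop - lo]   # write back only the interior
        u = new
        done += tb
    return u

u0 = np.zeros(257)
u0[100:140] = 1.0                                           # a hot region in the middle
assert np.allclose(naive(u0, 40), time_tiled(u0, 40))
print("temporal blocking matches the naive sweep")
```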

    Stochastic Model Predictive Control and Machine Learning for the Participation of Virtual Power Plants in Simultaneous Energy Markets

    The emergence of distributed energy resources in the electricity system gives rise to new scenarios in which domestic consumers (end users) can be aggregated to participate in energy markets, acting as prosumers. Every prosumer is considered an individual energy node with its own renewable generation source, its controllable and non-controllable loads, and even its own individual tariffs to trade with. The nodes can form aggregations that are managed by a system operator. Participation in energy markets is not trivial for individual prosumers because of requirements such as the technical conditions that must be satisfied or the need to trade a minimum volume of energy; these requirements can be met through aggregated participation. In this context, the aggregator handles the difficult task of coordinating and stabilizing the prosumers' operations, not only at the individual level but also at the system level, so that the set of energy nodes behaves as a single entity with respect to the market. The system operator can act as both a distribution and a trading company, or as a trading company only. For this reason, the optimization model must consider not only aggregated tariffs but also individual tariffs, to allow individual billing for each energy node. Each energy node must have the required technical and legal competences, as well as the equipment needed to manage its participation in energy markets or to delegate it to the system operator. This aggregation according to business rules, rather than only physical location, is known as a virtual power plant. Optimizing the aggregated participation in the different energy markets requires introducing the concept of dynamic storage virtualization; therefore, every energy node in the system under study has a battery installed to store excess energy. This dynamic virtualization defines logical partitions in the storage system so that it can be used for different purposes. As an example, two partitions can be defined: one for the aggregated participation in the day-ahead market and the other for the demand-response program. Several criteria must be considered when defining the participation strategy. A risky strategy will yield more profit from trading; however, it will also be more likely to incur penalties for not meeting the contract because of uncertainties or operational errors. A conservative strategy, on the other hand, performs worse economically in terms of trading but reduces these potential penalties. Dynamic intent profiles allow risky bids to be placed when the expected forecast error for generation, load, or failures is low, and conservative bids otherwise. The system operator is the agent who decides how much energy is reserved for trading, how much for energy node self-consumption, how much for demand-response program participation, and so on. The large number of variables and states makes this problem too complex to be solved by classical methods, especially since even slightly wrong decisions can have significant economic consequences in the short term. The concept of dynamic storage virtualization has been studied and implemented to allow simultaneous participation in multiple energy markets.
Simultaneous participation can be optimized with respect to potential profit, potential risk, or a combination of both using more advanced criteria drawn from the system operator's know-how. Day-ahead bidding algorithms, demand-response participation optimization, and a penalty-reduction operational control algorithm have been developed. A stochastic layer has been defined and implemented to improve robustness against the uncertainty inherent in forecast-dependent systems. This layer is formulated with chance constraints and can incorporate an intelligent agent based on an encoder-decoder architecture built from neural networks composed of gated recurrent units. The formulation and implementation fully decouple the algorithms, with no dependencies among them; nevertheless, they remain coordinated, because the execution of each one considers both the current scenario and the selected strategy. This enables a broader, better-defined context and more realistic, accurate situational awareness. In addition to the relevant simulation runs, the platform has also been tested on a real system of 40 energy nodes for one year on the German island of Borkum. This experience allowed very satisfactory conclusions to be drawn about deploying the platform in real environments.
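    As a rough sketch of two of the ideas above, the code below partitions one physical battery into logical slices (dynamic storage virtualization) and derives chance-constrained day-ahead bids under a Gaussian forecast-error model, where the risk level plays the role of an intent profile. The capacities, forecasts, risk levels, and names are assumptions made for the example, not the thesis's formulation.

```python
# Sketch: logical battery partitions plus a chance-constrained day-ahead bid.
from statistics import NormalDist

BATTERY_KWH = 10.0  # assumed physical battery size of one energy node

# Dynamic storage virtualization: logical partitions of one physical battery,
# re-sized by the system operator depending on context (prices, node state, ...).
partitions = {"day_ahead": 0.6, "demand_response": 0.3, "self_consumption": 0.1}
virtual_kwh = {name: share * BATTERY_KWH for name, share in partitions.items()}

def chance_constrained_bid(surplus_forecast_kwh, forecast_std_kwh, epsilon):
    """Bid b such that P(actual surplus >= b) >= 1 - epsilon under a Gaussian
    forecast-error model; small epsilon -> conservative bid, large -> risky."""
    z = NormalDist().inv_cdf(1 - epsilon)   # one-sided safety margin
    return max(0.0, surplus_forecast_kwh - z * forecast_std_kwh)

# Hourly surplus forecast (generation minus load) for part of the next day.
forecast = [1.2, 0.8, 0.0, 2.5]
std = [0.3, 0.3, 0.2, 0.6]

for profile, eps in (("conservative", 0.05), ("risky", 0.40)):
    bids = [min(chance_constrained_bid(f, s, eps), virtual_kwh["day_ahead"])
            for f, s in zip(forecast, std)]
    print(profile, [round(b, 2) for b in bids])
```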