
    Deep Reinforcement Learning for Control of Microgrids: A Review

    A microgrid is widely accepted as a prominent solution for enhancing resilience and performance in distributed power systems. Microgrids make it straightforward to add distributed energy resources (DERs) to the ecosystem of an electrical network, and control techniques are needed to synchronize DERs because of their intermittent nature. DERs spanning alternating-current, direct-current, and hybrid loads with storage systems are now used in microgrids so frequently that controlling the flow of energy has become a complex task for traditional control approaches. Both distributed and centralized control algorithms are well-known methods for regulating frequency and voltage in microgrids. Recently, techniques based on artificial intelligence, broadly categorized into machine learning and deep learning, have been applied to problems arising in the operation and control of modern microgrids and smart grids. The objective of this research is to survey the latest microgrid control strategies that use the deep reinforcement learning (DRL) approach. Other artificial-intelligence techniques have already been reviewed extensively, but the use of DRL has grown substantially in the past few years. To bridge this gap, this survey focuses exclusively on DRL techniques for microgrid voltage control and frequency regulation, covering distributed, cooperative, and multi-agent approaches.
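
The control problem surveyed above can be illustrated with a toy reinforcement-learning loop. The sketch below is a minimal tabular Q-learning agent, not deep RL and not any specific method from the surveyed papers; the discretized states, actions, dynamics, and rewards are all invented for illustration. The agent learns to drive a frequency deviation back to zero by adjusting a generation setpoint.

```python
import numpy as np

# Toy Q-learning sketch of RL-style frequency regulation (illustrative only;
# real DRL work uses deep networks and detailed microgrid models).
rng = np.random.default_rng(0)
DEVIATIONS = np.array([-2, -1, 0, 1, 2])   # discretized frequency deviation
ACTIONS = np.array([-1, 0, 1])             # lower / hold / raise generation setpoint

def step(s_idx, a_idx):
    """Deterministic toy dynamics: the action shifts the deviation."""
    nxt = int(np.clip(DEVIATIONS[s_idx] + ACTIONS[a_idx], -2, 2)) + 2
    reward = -abs(DEVIATIONS[nxt])         # penalize any remaining deviation
    return nxt, reward

Q = np.zeros((5, 3))
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(2000):
    s = int(rng.integers(5))
    for _ in range(10):
        a = int(rng.integers(3)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)   # greedy action per state
```

After training, the greedy policy lowers generation when the deviation is positive, raises it when negative, and holds at zero, which is the qualitative behaviour a DRL frequency regulator is trained to produce.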

    Networking - A Statistical Physics Perspective

    Efficient networking has a substantial economic and societal impact in a broad range of areas, including transportation systems, wired and wireless communications, and a range of Internet applications. As transportation and communication networks become increasingly complex, the ever-increasing demands for congestion control, higher traffic capacity, quality of service, robustness, and reduced energy consumption require new tools and methods to meet these conflicting requirements. The new methodology should serve both to gain a better understanding of the properties of networking systems at the macroscopic level and to develop new, principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches, as they were developed specifically to deal with nonlinear, large-scale systems. This paper presents an overview of tools and methods developed within the statistical physics community that can be readily applied to emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, and probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications. (Review article: 71 pages, 14 figures.)
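
The diffusion processes mentioned above are among the simplest statistical-physics tools applicable to networks. The sketch below, a generic illustration rather than anything from the paper, iterates the master equation of a random walker on a small undirected graph; the stationary distribution it converges to is proportional to node degree, a basic result used when reasoning about load and exploration in networks.

```python
import numpy as np

# Diffusion on a small undirected network: iterate the walker's master
# equation p <- p P until it reaches the stationary distribution.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency matrix of a 4-node graph
deg = A.sum(axis=1)
P = A / deg[:, None]          # row-stochastic transition matrix

p = np.full(4, 0.25)          # start from the uniform distribution
for _ in range(200):          # 200 steps is ample for this small graph
    p = p @ P
```

For an undirected graph the stationary distribution is deg / deg.sum(), so the most connected nodes carry the most diffusive traffic.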

    Real-time Prediction of Cascading Failures in Power Systems

    Blackouts in power systems cause major financial and societal losses, which necessitates devising better prediction techniques specifically tailored to detecting and preventing them. Since blackouts begin as a cascading failure (CF), early detection of these CFs gives operators ample time to stop the cascade from propagating into a large-scale blackout. In this thesis, a real-time load-based prediction model for CFs using phasor measurement units (PMUs) is proposed. The proposed model provides load-based predictions; therefore, it has the advantages of being applicable as a controller input and of providing operators with better information about the affected regions. In addition, it can aid in visualizing the effects of the CF on the grid. To extend the functionality and robustness of the proposed model, prediction intervals are incorporated based on the convergence width criterion (CWC), allowing the model to account for the uncertainties of the network, which was not available in previous works. Although this model addresses many issues in previous works, it has limitations in both scalability and the capturing of transient behaviours. Hence, a second model based on a recurrent neural network (RNN) long short-term memory (LSTM) ensemble is proposed. The RNN-LSTM is added to better capture the dynamics of the power system while also giving faster responses. To accommodate the scalability of the model, a novel selection criterion for inputs is introduced to minimize the number of inputs while maintaining high information entropy. The criteria include the distance between buses as per graph theory, the centrality of each bus with respect to the fault location, and the information entropy of each bus. These criteria are merged using higher statistical moments to reflect the importance of each bus and to generate indices that describe the grid with a smaller set of inputs.
The results indicate that this model has the potential to provide more meaningful and accurate results than what is available in the previous literature and can be used as part of the integrated remedial action scheme (RAS) system, either as a warning tool or as a controller input, as the accuracy of detecting affected regions reached 99.9% with a maximum delay of 400 ms. Finally, a validation-loop extension is introduced to allow the model to self-update in real time using importance sampling and case-based reasoning, extending the practicality of the model by allowing it to learn from historical data as time progresses.
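
The input-selection idea described above can be sketched in miniature. The code below is a hypothetical illustration, not the thesis's actual criterion: it scores each bus by combining graph distance to the fault, a centrality measure, and the Shannon entropy of its measurements, then keeps the highest-scoring buses; the weighting, data, and bus names are all invented.

```python
import numpy as np

def shannon_entropy(x, bins=8):
    """Shannon entropy (bits) of a signal's histogram."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
n_bus = 5
volts = rng.normal(1.0, 0.02, (n_bus, 200))       # stand-in PMU voltage traces
volts[3] += 0.3 * np.sin(np.linspace(0, 6, 200))  # bus 3 carries real dynamics

dist_to_fault = np.array([3, 2, 1, 1, 4])   # hops from the fault (graph distance)
centrality = np.array([.1, .3, .5, .4, .2]) # any centrality measure, normalized

entropy = np.array([shannon_entropy(v) for v in volts])
# Composite index: informative, central buses near the fault score highest.
score = entropy * centrality / (1 + dist_to_fault)
selected = np.argsort(score)[::-1][:2]      # keep the 2 most descriptive buses
```

A real implementation would merge the criteria through higher statistical moments as the thesis describes; the simple product above only conveys the shape of the idea.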

    Socio-hydrological modelling: a review asking “why, what and how?”

    Interactions between humans and the environment are occurring on a scale that has never previously been seen; the scale of human interaction with the water cycle, along with the coupling present between social and hydrological systems, means that decisions that impact water also impact people. Models are often used to assist in decision-making regarding hydrological systems, and so in order for effective decisions to be made regarding water resource management, these interactions and feedbacks should be accounted for in models used to analyse systems in which water and humans interact. This paper reviews literature surrounding aspects of socio-hydrological modelling. It begins with background information regarding the current state of socio-hydrology as a discipline, before covering reasons for modelling and potential applications. Some important concepts that underlie socio-hydrological modelling efforts are then discussed, including ways of viewing socio-hydrological systems, space and time in modelling, complexity, data and model conceptualisation. Several modelling approaches are described, the stages in their development detailed and their applicability to socio-hydrological cases discussed. Gaps in research are then highlighted to guide directions for future research. The review of literature suggests that the nature of socio-hydrological study, being interdisciplinary, focusing on complex interactions between human and natural systems, and dealing with long horizons, is such that modelling will always present a challenge; it is, however, the task of the modeller to use the wide range of tools afforded to them to overcome these challenges as much as possible. The focus in socio-hydrology is on understanding the human–water system in a holistic sense, which differs from the problem solving focus of other water management fields, and as such models in socio-hydrology should be developed with a view to gaining new insight into these dynamics. 
There is an essential choice that socio-hydrological modellers face between representing individual system processes and viewing the system at a more abstracted level and modelling it as such; these different approaches have implications for model development, applicability, and the insight that models are capable of giving, and so the decision regarding how to model the system requires thorough consideration of, among other things, the nature of the understanding that is sought.
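
The abstracted, stylized end of the modelling spectrum discussed above is often expressed as a small coupled dynamical system. The loop below is a deliberately minimal, invented example in that spirit (it is not a model from the review): flood pulses cause damage, damage refreshes social memory and triggers levee heightening, and memory decays between events, so each successive flood does less harm.

```python
# Minimal coupled human-flood loop in the stylized socio-hydrological spirit;
# all equations and parameter values are illustrative.
decay = 0.05            # fractional loss of social flood memory per year
memory, levee = 0.0, 0.0
for year in range(100):
    water = 1.0 if year % 25 == 10 else 0.0  # a flood pulse every 25 years
    damage = max(0.0, water - levee)         # levees absorb part of the flood
    memory = memory * (1 - decay) + damage   # floods refresh social memory
    levee += 0.5 * damage                    # society raises protection
```

With floods at years 10, 35, 60, and 85, the levee rises to 0.5, 0.75, 0.875, and finally 0.9375, illustrating the damped feedback such two-way couplings are meant to capture.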

    Failure Analysis in Next-Generation Critical Cellular Communication Infrastructures

    The advent of new communication technologies marks a transformative phase in critical infrastructure construction, in which the meticulous analysis of failures becomes paramount to achieving the fundamental objectives of continuity, security, and availability. This survey enriches the discourse on failures, failure analysis, and countermeasures in the context of next-generation critical communication infrastructures. Through an exhaustive examination of the existing literature, we discern and categorize the prominent research orientations, namely resource depletion, security vulnerabilities, and system availability concerns. We also analyze constructive countermeasures tailored to the identified failure scenarios and their prevention. Furthermore, the survey emphasizes the imperative of standardization in addressing failures related to Artificial Intelligence (AI) within the ambit of sixth-generation (6G) networks, accounting for the forward-looking perspective of the envisioned intelligence of the 6G network architecture. By identifying new challenges and delineating future research directions, this survey can help guide stakeholders toward unexplored territories, fostering innovation and resilience in critical communication infrastructure development and failure prevention.

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing a higher number of applications and related data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms must also be energy-efficient and reliable, and they need to perform computations securely in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.

    The future of Cybersecurity in Italy: Strategic focus area

    This volume has been created as a continuation of the previous one, with the aim of outlining a set of focus areas and actions that the Italian national research community considers essential. The book touches on many aspects of cyber security: from the definition of the infrastructure and controls needed to organize cyber defence, to the actions and technologies to be developed for better protection; and from the identification of the main technologies to be defended, to the proposal of a set of horizontal actions for training, awareness raising, and risk management.

    Single event upset hardened embedded domain specific reconfigurable architecture


    Robust design of deep-submicron digital circuits

    With the increasing probability of faults in digital circuits, systems developed for critical environments such as nuclear power plants, aircraft, and space applications must be certified according to industrial standards. This thesis is the result of a CIFRE cooperation between Électricité de France (EDF) R&D and Télécom ParisTech. EDF is one of the largest energy producers in the world and operates numerous nuclear power plants. The control-command systems used in these plants are based on electronic devices, which must be certified according to industrial standards such as IEC 62566, IEC 60987, and IEC 61513 because of the criticality of the nuclear environment. In particular, the use of programmable devices such as FPGAs can be considered a challenge, since the functionality of the device is defined by the designer only after its physical design. The work presented in this dissertation concerns the design of new methods for analyzing the reliability of a digital circuit, as well as methods for improving it. The design of circuits that operate in critical environments, such as the control-command systems used at nuclear power plants, is becoming a great challenge with technology scaling. These circuits have to pass a number of tests and analysis procedures in order to be qualified for operation. In the case of nuclear power plants, safety is considered a very high-priority constraint, and circuits designed to operate in such critical environments must be in accordance with several technical standards, such as IEC 62566, IEC 60987, and IEC 61513. In these standards, reliability is treated as a main consideration, and methods to analyze and improve circuit reliability are highly required.
The present dissertation introduces methods to analyze and to improve the reliability of circuits in order to facilitate their qualification according to the aforementioned technical standards. Concerning reliability analysis, we first present a fault-injection-based tool used to assess the reliability of digital circuits. Next, we introduce a method to evaluate the reliability of circuits that takes into account the ability of a given application to tolerate errors. Concerning reliability improvement, two different strategies to selectively harden a circuit are first proposed. Finally, a method to automatically partition a TMR design based on a given reliability requirement is introduced.
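
The fault-injection style of analysis mentioned above can be sketched on a toy circuit. The code below is a generic illustration, not the dissertation's tool: it simulates a tiny two-gate combinational circuit, injects a single bit-flip on each internal net for every input vector, and counts how often the fault propagates to the output versus being logically masked.

```python
from itertools import product

def circuit(a, b, c, flip=None):
    """A tiny 2-gate circuit: out = (a AND b) OR c, with an optional net flip."""
    n1 = a & b
    if flip == "n1":
        n1 ^= 1               # inject a single-event upset on internal net n1
    out = n1 | c
    if flip == "out":
        out ^= 1              # inject the upset on the output net instead
    return out

nets = ["n1", "out"]
errors = total = 0
for a, b, c in product([0, 1], repeat=3):     # exhaustive input vectors
    golden = circuit(a, b, c)                 # fault-free reference run
    for net in nets:
        total += 1
        errors += circuit(a, b, c, flip=net) != golden

masking_rate = 1 - errors / total   # fraction of faults masked by the logic
```

Here a flip on n1 is masked whenever c = 1 (the OR gate forces the output high), so a quarter of the injected faults never reach the output; reliability tools of this kind perform the same experiment statistically on full netlists.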

    Identifying and Mitigating Security Risks in Multi-Level Systems-of-Systems Environments

    In recent years, organisations, governments, and cities have taken advantage of the many benefits and automated processes that Information and Communication Technology (ICT) offers, evolving their existing systems and infrastructures into highly connected and complex Systems-of-Systems (SoS). These infrastructures endeavour to increase robustness and offer some resilience against single points of failure. The Internet, wireless sensor networks, the Internet of Things, critical infrastructures, the human body, and so on can all be broadly categorised as SoS, as they encompass a wide range of differing systems that collaborate to fulfil objectives that the distinct systems could not fulfil on their own. ICT-constructed SoS face the same dangers, limitations, and challenges as traditional cyber networks, and while monitoring the security of small networks can be difficult, the dynamic nature, size, and complexity of SoS make securing these infrastructures even more taxing. Solutions that attempt to identify risks and vulnerabilities and to model the topologies of SoS have failed to evolve at the same pace as SoS adoption. This has resulted in attacks against these infrastructures gaining prevalence, as unidentified vulnerabilities and exploits provide unguarded opportunities for attackers. In addition, the new collaborative relations introduce new cyber interdependencies and unforeseen cascading failures, and increase complexity. This thesis presents an innovative approach to identifying and mitigating security risks in SoS environments. Our security framework incorporates a number of novel techniques, which allow us to calculate the security level of the entire SoS infrastructure using vulnerability analysis, node properties, topology data, and other factors, and to mitigate risks without adding additional resources to the SoS infrastructure.
Other risk factors we examine include risks associated with different node properties and the likelihood of violating access-control requirements. Extending the principles of the framework, we also apply the approach to multi-level SoS, in order to improve both SoS security and the overall robustness of the network. In addition, the identified risks, vulnerabilities, and interdependent links are modelled by extending network-modelling and attack-graph-generation methods. The proposed SeCurity Risk Analysis and Mitigation Framework and its principal techniques have been researched, developed, implemented, and evaluated via numerous experiments and case studies. The results confirm that the framework can successfully observe an SoS and produce an accurate security level for the entire SoS in all instances, visualising identified vulnerabilities, interdependencies, high-risk nodes, data-access violations, and security grades in a series of reports and undirected graphs. The framework's evolutionary approach to mitigating risks, together with the robustness function that determines the appropriateness of the SoS, revealed promising results: the framework and its principal techniques identify SoS topologies and quantify their associated security levels, distinguishing SoS that are either optimally structured (in terms of communication security) or cannot be evolved, because the applied processes would negatively affect the security and robustness of the SoS. Likewise, through its evolvement methods the framework is capable of identifying SoS communication configurations that improve communication security and protect data as it traverses an unsecured and unencrypted SoS, reporting the enhanced, risk-mitigating configurations in a series of undirected graphs and reports that visualise and detail the SoS topology and its vulnerabilities.
These reported candidates and optimal solutions improve security and SoS robustness, and will support the maintenance of acceptably low centrality factors, should the recommended configurations be applied to the evaluated SoS infrastructure.
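
The core idea of combining per-node vulnerability scores with topology can be sketched very simply. The snippet below is an invented illustration, not the thesis's actual algorithm: each node's exposure is its own vulnerability plus a fraction of its worst neighbour's (risk bleeding one hop along interdependency links), and the SoS-wide security level is the complement of the average exposure. Node names, scores, and weights are hypothetical.

```python
# Hypothetical one-hop risk propagation over an SoS interdependency graph.
vuln = {"scada": 0.8, "sensor": 0.4, "gateway": 0.6, "db": 0.2}
links = [("sensor", "gateway"), ("gateway", "scada"), ("scada", "db")]

neigh = {n: set() for n in vuln}          # build an undirected adjacency map
for u, v in links:
    neigh[u].add(v)
    neigh[v].add(u)

# Exposure = own vulnerability plus half of the worst neighbour's, capped at 1.
exposure = {n: min(1.0, vuln[n] + 0.5 * max(vuln[m] for m in neigh[n]))
            for n in vuln}
security_level = 1 - sum(exposure.values()) / len(exposure)   # 1 = fully secure
```

Even this toy version shows the framework's key property: a node's risk depends on its neighbourhood, so rewiring the topology (as the evolvement methods do) changes the SoS-wide security level without adding resources.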