1,820 research outputs found

    A Hybrid Optimization Algorithm for Efficient Virtual Machine Migration and Task Scheduling Using a Cloud-Based Adaptive Multi-Agent Deep Deterministic Policy Gradient Technique

    In the rapidly developing field of cloud computing, efficient resource management, including accurate task scheduling and optimized Virtual Machine (VM) migration, is essential for achieving optimal system performance. This study proposes a hybrid optimization algorithm for efficient VM migration and task scheduling based on the Adaptive Multi-Agent System with Deep Deterministic Policy Gradient (AMS-DDPG) algorithm. Its foundation is the Iterative Concept of War and Rat Swarm (ICWRS) algorithm, a combination of the War Strategy Optimization (WSO) and Rat Swarm Optimizer (RSO) algorithms. Notably, ICWRS optimizes the system with 93% accuracy, particularly for load balancing, task scheduling, and VM migration. The AMS-DDPG technique, which couples a deterministic policy gradient with deep reinforcement learning, greatly improves the flexibility and efficiency of VM migration and task scheduling, and the adaptive multi-agent design further enhances decision-making by ensuring the best possible resource allocation. By combining deep learning with multi-agent coordination, the hybrid method significantly improves performance in cloud-based virtualized systems. Extensive experiments, including a detailed comparison with conventional techniques, verify the effectiveness of the proposed strategy. The results show significant improvements in system efficiency, shorter job completion times, and optimal resource utilization. The integration of ICWRS within the AMS-DDPG framework reveals untapped potential for synergistic optimization in cloud-based systems, and the resulting strategic resource allocation enables a high-performing, sustainable cloud computing infrastructure that can adapt to the changing needs of modern computing paradigms.
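    A minimal, illustrative sketch of the kind of population-based search that could drive task-to-VM assignment is given below. The abstract does not disclose the ICWRS (WSO + RSO) update rules, so a generic swarm-style attraction-and-perturbation step is used as a stand-in; the task lengths, VM speeds, and makespan objective are hypothetical.

```python
# Sketch only: population-based search for a task-to-VM assignment.
# The ICWRS update rules are not given in the abstract; this uses a generic
# "explore around the best-known solution" step as a stand-in.
import random

TASK_LENGTHS = [4, 7, 3, 9, 5, 6, 2, 8]   # hypothetical task sizes (MI)
VM_SPEEDS    = [2.0, 1.5, 1.0]            # hypothetical VM speeds (MIPS)

def makespan(assignment):
    """Completion time of the most loaded VM under a task->VM assignment."""
    load = [0.0] * len(VM_SPEEDS)
    for task, vm in zip(TASK_LENGTHS, assignment):
        load[vm] += task / VM_SPEEDS[vm]
    return max(load)

def random_assignment():
    return [random.randrange(len(VM_SPEEDS)) for _ in TASK_LENGTHS]

def perturb(assignment, rate=0.3):
    """Move a few tasks to other VMs (random exploration step)."""
    new = list(assignment)
    for i in range(len(new)):
        if random.random() < rate:
            new[i] = random.randrange(len(VM_SPEEDS))
    return new

def swarm_search(pop_size=20, iterations=200):
    population = [random_assignment() for _ in range(pop_size)]
    best = min(population, key=makespan)
    for _ in range(iterations):
        # Each member explores around the best-known solution, a crude
        # stand-in for the WSO/RSO position-update rules.
        population = [perturb(best) for _ in range(pop_size)]
        candidate = min(population, key=makespan)
        if makespan(candidate) < makespan(best):
            best = candidate
    return best, makespan(best)

if __name__ == "__main__":
    best, cost = swarm_search()
    print("best assignment:", best, "makespan:", round(cost, 2))
```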

    Deep Learning Techniques for Power System Operation: Modeling and Implementation

    The fast development of deep learning (DL) techniques in recent years has drawn attention from both academia and industry, and DL has increasingly been applied to complex real-world problems, including computer vision, medical diagnosis, and natural language processing. The power and flexibility of DL can be attributed to its hierarchical learning structure, which automatically extracts features from large amounts of data. In addition, DL applies an end-to-end solving mechanism and directly generates the output from the input, whereas traditional machine learning methods usually break the problem down and combine intermediate results. This end-to-end mechanism considerably improves the computational efficiency of DL. The power system is one of the most complex artificial infrastructures, and many power system control and operation problems share the same features as the real-world applications mentioned above, such as time variability, uncertainty, and partial observability, which impede the performance of conventional model-based methods. On the other hand, with the widespread deployment of Advanced Metering Infrastructure (AMI), SCADA, Wide Area Monitoring Systems (WAMS), and many other measurement systems providing massive amounts of field data, data-driven deep learning is becoming an intriguing alternative that can enable the future development and success of the smart grid. This dissertation explores the potential of deep-learning-based approaches for a broad range of power system modeling and operation problems. First, a comprehensive literature review summarizes existing applications of deep learning in the power system area. Second, prospective applications of deep learning in several power system scenarios, including contingency screening, cascading outage search, multi-microgrid energy management, residential HVAC system control, and electricity market bidding, are discussed in detail in Chapters 2-6. The problem formulation, the specific deep learning approaches in use, and the simulation results are all presented and compared with currently used model-based methods to verify the advantages of deep learning. Finally, the last chapter provides conclusions and directions for future research. It is hoped that this dissertation can spark further innovative ideas and original studies, widening and deepening the application of deep learning in the power system field, and ultimately bring positive impacts to the resilient and economic control and operation of the real-world bulk grid.
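    As a toy illustration of the end-to-end idea, the sketch below maps raw (synthetic) grid measurements directly to a secure/insecure label with a small neural network. It is not the dissertation's model; the two features, labels, and hyperparameters are assumptions chosen only for demonstration.

```python
# Illustrative sketch: an end-to-end classifier mapping raw measurements to a
# "secure / insecure" label, in the spirit of DL-based contingency screening.
# Features and labels are synthetic; nothing here reproduces the dissertation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements": [line loading, voltage deviation] per sample.
X = rng.uniform(0.0, 1.5, size=(500, 2))
# Hypothetical ground truth: insecure if loading + deviation is too high.
y = (X[:, 0] + 0.8 * X[:, 1] > 1.4).astype(float)

# Tiny one-hidden-layer network trained with plain gradient descent.
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden representation
    p = sigmoid(h @ W2 + b2).ravel()    # predicted insecurity probability
    grad_out = (p - y)[:, None] / len(y)
    dW2 = h.T @ grad_out
    db2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    dW1 = X.T @ grad_h
    db1 = grad_h.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad             # learning rate chosen for illustration

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy on synthetic data: {accuracy:.2f}")
```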

    Machine Learning for Resource-Constrained Computing Systems

    The resources available in information processing systems such as processors are generally limited; this includes, for example, power consumption, energy consumption, heat dissipation, and chip area. Optimizing the management of the available resources is therefore of utmost importance for achieving goals such as maximum performance. Resource management at the system level in particular has a major influence on performance, power, and temperature during application execution, through the (dynamic) assignment of applications to processor cores and through dynamic voltage and frequency scaling (DVFS). The key challenges in resource management are the high complexity of applications and platforms, unforeseen applications or platform configurations (unknown at design time), proactive optimization, and the minimization of runtime overhead. Existing techniques based on simple heuristics or analytical models address these challenges only insufficiently. For this reason, the main contribution of this dissertation is the use of machine learning (ML) for resource management. ML-based solutions tackle these challenges by predicting the impact of potential resource management decisions, by estimating hidden (unobservable) properties of applications, or by directly learning a resource management policy. This dissertation develops several novel ML-based resource management techniques for different platforms, objectives, and constraints. First, a prediction-based technique is presented for maximizing the performance of multi-core processors with a distributed last-level cache under a temperature constraint. It uses a neural network (NN) to predict the performance impact of potential application migrations between processor cores. These predictions allow the best possible migration to be determined and enable proactive management. The NN is trained to cope with unseen applications and varying temperature limits. Second, a boosting technique is presented for maximizing the performance of homogeneous multi-core processors under a temperature constraint using DVFS. It is based on a novel boostability metric that unifies the dependencies of performance, power, and temperature on voltage/frequency changes in a single metric. The performance and power dependencies are application-dependent and cannot be directly observed (measured) at runtime; therefore, an NN is used to estimate these values for unseen applications and thereby cope with the complexity of the boosting optimization. Third, a technique is presented for minimizing the temperature of heterogeneous multi-core processors with quality-of-service targets. It uses imitation learning to learn an application migration policy from optimal oracle demonstrations, employing an NN to handle the complexity of the platform and the application behavior. NN inference is accelerated with an existing generic accelerator, a neural processing unit (NPU). The ML algorithms themselves must also be executed with limited resources. Finally, a technique for resource-aware training on distributed devices is presented, which maintains a constant training throughput under rapidly changing availability of computing resources, e.g., due to contention for shared resources. This technique uses structured dropout, which skips random parts of the NN during training. This allows the resources required for training to be adjusted dynamically, with negligible overhead but at the cost of slower training convergence. The Pareto-optimal dropout parameters per NN layer are determined through a design space exploration. These techniques are evaluated both in simulation and on real hardware and show significant improvements over the state of the art with negligible runtime overhead. In summary, this dissertation shows that ML is a key technology for optimizing the management of limited resources at the system level by addressing the associated challenges.
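    The following sketch illustrates the structured-dropout idea in isolation: instead of dropping individual weights, whole trailing slices of a layer are skipped so that the per-step training cost scales with the currently available compute. The layer sizes, keep ratios, and rescaling below are assumptions, not the dissertation's implementation.

```python
# Minimal sketch of structured dropout: skip whole trailing slices of a layer
# so the per-step cost can follow the currently free compute resources.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (64, 128))    # hypothetical hidden layer (in=64, out=128)

def forward_structured_dropout(x, W, keep_ratio):
    """Use only the first `keep_ratio` fraction of output units this step."""
    k = max(1, int(W.shape[1] * keep_ratio))
    h = np.tanh(x @ W[:, :k])        # smaller matmul => fewer FLOPs
    # Rescale so the expected activation magnitude stays comparable.
    return h / keep_ratio, k

x = rng.normal(size=(32, 64))        # a mini-batch of 32 samples

# The keep ratio would be chosen per step from the currently free resources,
# e.g. lower when co-running workloads contend for the accelerator.
for free_share in (1.0, 0.5, 0.25):
    h, k = forward_structured_dropout(x, W, keep_ratio=free_share)
    print(f"free share {free_share:.2f}: used {k} of {W.shape[1]} units, "
          f"activation shape {h.shape}")
```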

    Conserve and Protect Resources in Software-Defined Networking via the Traffic Engineering Approach

    Software-Defined Networking (SDN) is revolutionizing the architecture and operation of computer networks and promises more agile and cost-efficient network management. SDN centralizes the network control logic and separates the control plane from the data plane, thus enabling flexible management of networks. A network based on SDN consists of a data plane and a control plane; to assist the management of devices and data flows, a network also has an independent monitoring plane. These coexisting network planes consume various types of resources, such as bandwidth used to transmit monitoring data, energy spent to power data forwarding devices, and computational resources to control the network. Unwise management, or even abusive utilization, of these resources degrades network performance and increases the Operating Expenditure (Opex) of the network owner. Conserving and protecting limited network resources is thus among the key requirements for efficient networking. However, the heterogeneity of network hardware and traffic workloads expands the configuration space of SDN, making it challenging to operate a network efficiently. Furthermore, existing approaches usually lack the capability to automatically adapt network configurations to handle network dynamics and diverse optimization requirements. Additionally, a centralized SDN controller has to run in an environment protected against certain attacks. This thesis builds upon the centralized management capability of SDN and uses cross-layer network optimization to perform joint traffic engineering, e.g., routing, hardware, and software configuration. The overall goal is to overcome the management complexity of conserving and protecting resources across multiple functional planes in SDN in the face of network heterogeneity and system dynamics. This thesis presents four contributions: (1) resource-efficient network monitoring, (2) resource-efficient data forwarding, (3) self-adaptive algorithms that improve network resource efficiency, and (4) mitigation of abusive usage of resources for network control. The first contribution is a resource-efficient network monitoring solution. We consider one specific type of virtual network management function: flow packet inspection. This type of network monitoring application requires duplicating packets of target flows and sending them to packet monitors for in-depth analysis. To avoid competition for resources between the original and duplicated data, network operators can transmit the data flows through physically (e.g., different communication media) or virtually (e.g., distinct network slices) separated channels with different resource consumption properties. We propose REMO (Resource Efficient distributed Monitoring) to reduce the overall network resource consumption incurred by both types of data, by jointly considering the locations of the packet monitors, the selection of devices forking the data packets, and the flow path scheduling strategy. In the second contribution, we investigate the resource efficiency problem in hybrid, server-centric data center networks equipped with both traditional wired connections (e.g., InfiniBand or Ethernet) and advanced high-data-rate wireless links (e.g., directional 60 GHz wireless technology). The configuration space of a hybrid SDN equipped with both wired and wireless communication technologies is massive due to the complexity introduced by device heterogeneity. To tackle this problem, we present the ECAS framework, which reduces power consumption while maintaining network performance. Approaches based on optimization models and heuristic algorithms are the traditional way to reduce operation and facility resource consumption in SDN, but they are either difficult to solve directly or specific to a particular problem space. As the third contribution, we investigate the use of Deep Reinforcement Learning (DRL) to improve the adaptivity of the management modules for network resource and data flow scheduling. The goal of the DRL agent is to reduce the power consumption of SDN networks without severely degrading network performance. The fourth contribution is a protection mechanism based on flow rate limiting to mitigate abusive usage of SDN control plane resources. Due to the centralized architecture of SDN and its handling mechanism for new data flows, the network controller can become a point of failure under crafted cyber-attacks, especially the Control-Plane-Saturation (CPS) attack. We propose an In-Network Flow mAnagement Scheme (INFAS) that effectively reduces the generation of malicious control packets, depending on the parameters configured for the proposed mitigation algorithm. In summary, the contributions of this thesis address various unique challenges in constructing resource-efficient and secure SDN. This is achieved by designing and implementing novel and intelligent models and algorithms that configure networks and perform traffic engineering within a protected, centralized network controller.
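    As a rough illustration of flow rate limiting against control-plane saturation, the sketch below implements a simple token bucket in front of the controller's new-flow (packet-in) path. It is not the INFAS algorithm; the rate, burst size, and drop policy are placeholder assumptions.

```python
# Illustrative token-bucket limiter for new-flow requests heading to an SDN
# controller.  Parameters are placeholders, not the INFAS configuration.
import time

class FlowRateLimiter:
    def __init__(self, rate_per_s=100.0, burst=50):
        self.rate = rate_per_s        # sustained packet-in rate toward controller
        self.capacity = burst         # maximum burst of new flows allowed
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow_new_flow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True               # forward packet-in to the controller
        return False                  # drop or defer: suspected saturation attempt

if __name__ == "__main__":
    limiter = FlowRateLimiter(rate_per_s=10.0, burst=5)
    admitted = sum(limiter.allow_new_flow() for _ in range(100))
    print(f"admitted {admitted} of 100 back-to-back new-flow requests")
```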

    Network Management, Optimization and Security with Machine Learning Applications in Wireless Networks

    Wireless communication networks are emerging fast, with many challenges and ambitions. The requirements that modern wireless networks are expected to deliver are complex, multi-dimensional, and sometimes contradictory. In this thesis, we investigate several types of emerging wireless networks and tackle some of their challenges, focusing on three main areas: Resource Optimization, Network Management, and Cyber Security. We present multiple views of these three aspects and propose solutions for probable scenarios. The first challenge (Resource Optimization) is studied in Wireless Powered Communication Networks (WPCNs). WPCNs are considered a very promising approach towards sustainable, self-sufficient wireless sensor networks. We consider a WPCN with Non-Orthogonal Multiple Access (NOMA) and study two decoding schemes, aiming to optimize performance with and without interference cancellation; this leads to solving convex and non-convex optimization problems. The second challenge (Network Management) is studied for cellular networks and handled using Machine Learning (ML). Two scenarios are considered. First, we target energy conservation and propose an ML-based approach to turn Multiple Input Multiple Output (MIMO) technology on or off depending on certain criteria. Turning off MIMO can save a considerable share of the total site energy consumption. To control enabling and disabling MIMO, a Neural Network (NN) based approach is used that learns network features and decides whether the site can achieve satisfactory performance with MIMO off. In the second scenario, we take a deeper look into the cellular network, aiming for more control over network features. We propose a Reinforcement Learning-based approach to control three features of the network (relative CIOs, transmission power, and the MIMO feature). The proposed approach delivers a stable state of the cellular network and enables the network to self-heal after any change or disturbance in its surroundings. In the third challenge (Cyber Security), we propose an NN-based approach for detecting False Data Injection (FDI) in industrial data. FDI attacks corrupt sensor measurements to deceive the industrial platform. The proposed approach uses an Autoencoder (AE) for FDI detection; in addition, a Denoising AE (DAE) is used to clean the corrupted data for further processing.
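    A minimal sketch of reconstruction-based FDI detection is shown below: a model is fit on clean sensor data, and samples with a large reconstruction error are flagged as injected. The thesis's AE/DAE architectures and data are not reproduced; as a stand-in, a linear reconstruction (the closed-form optimum of a linear autoencoder, i.e., the PCA subspace) is fit on synthetic readings.

```python
# Sketch of reconstruction-error-based FDI detection on synthetic sensor data.
# A linear reconstruction stands in for the thesis's trained autoencoder.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "clean" sensor data: 6 correlated measurements per sample.
latent = rng.normal(size=(400, 2))
mix = rng.normal(size=(2, 6))
clean = latent @ mix + 0.05 * rng.normal(size=(400, 6))

# Fit a 2-dimensional linear reconstruction on the clean data.
mean = clean.mean(axis=0)
_, _, Vt = np.linalg.svd(clean - mean, full_matrices=False)
basis = Vt[:2]                        # principal subspace (2 x 6)

def anomaly_score(x):
    """Mean squared reconstruction error per sample."""
    recon = (x - mean) @ basis.T @ basis + mean
    return np.mean((recon - x) ** 2, axis=1)

# Detection threshold taken from the clean-data error distribution (assumption).
threshold = np.percentile(anomaly_score(clean), 99)

# Inject false data into one sensor of a few samples and test detection.
attacked = clean[:10].copy()
attacked[:, 3] += 3.0                 # hypothetical injected offset
flagged = int((anomaly_score(attacked) > threshold).sum())
print(f"flagged as FDI: {flagged} of 10 attacked samples")
```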

    Optimizing the Implementation of Green Technologies Under Climate Change Uncertainty

    In this study, we investigate the application of green technologies (i.e., green roofs (GRs), photovoltaic (PV) panels, and battery-integrated PV systems) under climate change-related uncertainty through three separate but inherently related studies, and we utilize optimization methods to provide new solutions or improve currently available methods. First, we develop a model to evaluate and optimize the joint placement of PV panels and GRs under climate change uncertainty. We consider the efficiency drop of PV panels due to heat, the savings from GRs, and the interaction between them. We develop a two-stage stochastic programming model that optimally places PV panels and GRs under climate change uncertainty to maximize the overall profit, calibrate the model, and then conduct a case study on the City of Knoxville, TN. Second, we study the diffusion rate of these green technologies under different climate projections for the City of Knoxville through the integration of simulation and dynamic programming. We investigate the diffusion rates for PV panels and/or GRs under climate change uncertainty and evaluate the effects of different policies on the diffusion rate. We first present the agent-based framework and the mathematical model behind it, and then study the effects of different policies on the results and the rate of diffusion. Lastly, we study a lithium-ion battery connected to a PV system that stores the excess electricity generated throughout the day; the stored energy is then used when the PV system cannot generate electricity due to a lack of direct solar radiation. This study aims to minimize the electricity cost of a medium-sized household by maximizing the utilization of the battery package. We develop a Markov decision process (MDP) model to capture the stochastic nature of the panels' output due to weather. Because of the minute daily reduction in Li-ion battery capacity, the state space becomes excessively large; hence, we utilize reinforcement learning methods (i.e., Q-learning) to find the optimal policy.
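    The sketch below shows tabular Q-learning on a toy version of the battery-dispatch problem: each hour the agent charges from PV, discharges to the household, or idles, and is penalized for energy bought from the grid. The MDP here (a single daily demand profile, a five-level battery, an assumed tariff) is a simplification; the study's actual state space, degradation model, and prices are not reproduced.

```python
# Toy tabular Q-learning for hourly battery dispatch alongside a PV system.
# All quantities (demand, PV window, tariff, battery levels) are assumptions.
import random

LEVELS = 5                  # discretized battery state of charge: 0..4
ACTIONS = ["charge", "discharge", "idle"]
HOURS = 24

def step(hour, soc, action):
    """Toy environment: PV available 8h-17h, constant household demand."""
    pv = 8 <= hour < 18
    grid = 1.0                         # 1 kWh demand, bought from grid by default
    if action == "charge" and pv and soc < LEVELS - 1:
        soc += 1                       # store surplus PV energy
    elif action == "discharge" and soc > 0:
        soc -= 1
        grid -= 1.0                    # battery covers the demand
    if pv:
        grid -= 1.0                    # PV covers demand directly when shining
    reward = -max(grid, 0.0) * 0.15    # assumed tariff of 0.15 $/kWh
    return soc, reward

Q = {(h, s, a): 0.0 for h in range(HOURS) for s in range(LEVELS) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(5000):
    soc = 0
    for hour in range(HOURS):
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(hour, soc, a)])
        next_soc, reward = step(hour, soc, action)
        future = 0.0
        if hour + 1 < HOURS:
            future = max(Q[(hour + 1, next_soc, a)] for a in ACTIONS)
        Q[(hour, soc, action)] += alpha * (reward + gamma * future
                                           - Q[(hour, soc, action)])
        soc = next_soc

# Evaluate the greedy policy over one day.
soc, cost = 0, 0.0
for hour in range(HOURS):
    action = max(ACTIONS, key=lambda a: Q[(hour, soc, a)])
    soc, reward = step(hour, soc, action)
    cost -= reward
print(f"grid cost over one day under the learned policy: ${cost:.2f}")
```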