36 research outputs found

    An Optimal Virtual Machine Placement Method in Cloud Computing Environment

    Cloud computing is an Internet-centred computing model in which an application may run simultaneously on many connected machines. Through virtualization, cloud data centres map virtual machine instances onto physical servers so that the servers are fully utilized, while load balancers distribute workloads evenly across the virtual machines to avoid overload. This paper describes a virtualization process for improving energy productivity in cloud data centres. The proposed algorithm applies a stochastic modelling approach to resource optimization in dynamic data centres, and a load balancing method is applied in the data centres to achieve the desired efficiency.
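    The abstract does not give the paper's exact algorithm; as an illustration of threshold-based load balancing for VM placement, the minimal Python sketch below places each incoming VM on the least-loaded host that would stay under an overload threshold. All host names, capacities and the 0.8 threshold are invented for the example, not taken from the paper.

        # Minimal sketch of threshold-based load balancing for VM placement.
        # Hosts, capacities and the overload threshold are illustrative only.
        from dataclasses import dataclass

        @dataclass
        class Host:
            name: str
            capacity: float      # total CPU units
            used: float = 0.0    # CPU units currently allocated

            @property
            def load(self) -> float:
                return self.used / self.capacity

        def place_vm(hosts, demand, overload=0.8):
            """Place a VM on the least-loaded host that stays under the
            overload threshold after placement; return None if impossible."""
            candidates = [h for h in hosts
                          if (h.used + demand) / h.capacity <= overload]
            if not candidates:
                return None
            target = min(candidates, key=lambda h: h.load)
            target.used += demand
            return target

        if __name__ == "__main__":
            hosts = [Host("h1", 16), Host("h2", 16), Host("h3", 32)]
            for demand in [4, 8, 2, 10, 6]:
                chosen = place_vm(hosts, demand)
                print(demand, "->", chosen.name if chosen else "rejected")

    Picking the least-loaded feasible host keeps the load spread even, matching the abstract's goal of avoiding overload situations; a real placement method would also weigh energy cost and migration overhead.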

    Improved self-management of datacenter systems applying machine learning

    Autonomic Computing is a Computer Science research area that originated in the mid 2000s. It focuses on the optimization and improvement of complex distributed computing systems through self-control and self-management. As distributed computing systems grow in complexity, like multi-datacenter systems in cloud computing, system operators and architects need more help to understand, design and optimize these systems manually, even more so when the systems are distributed across the world and belong to different entities and authorities. Self-management lets these distributed computing systems improve their resource and energy management, an important issue when resources have a cost to obtain, run and maintain. Here we propose to improve Autonomic Computing techniques for resource management by applying modeling and prediction methods from Machine Learning and Artificial Intelligence. Machine Learning methods can find accurate models of system behavior, often with intelligible explanations, and can predict and infer system states and values. Models obtained by automatic learning have the advantage of being easily updated on workload or configuration changes by re-sampling examples and re-training the predictors. By employing automatic modeling and prediction, we can find new methods for making "intelligent" decisions and discovering new information and knowledge from systems. This thesis departs from the state of the art, where management is based on administrator expertise, well-known data, ad-hoc algorithms and models, and elements studied from a single machine's point of view, toward a novel state of the art where management is driven by models learned from the system itself, providing useful feedback and making up for incomplete, missing or uncertain data, from the point of view of a global network of datacenters.
    - First, we cover the scenario where the decision maker knows every piece of information about the system: how much each job will consume, the current and desired quality of service, the deadlines for the workload, etc., focusing on each component and policy of each element involved in executing these jobs.
    - Then we focus on the scenario where, instead of fixed oracles that provide information from an expert formula or set of conditions, machine learning is used to create these oracles. Here we look at components and specific details while some of the information is unknown and must be learned and predicted.
    - We reduce the problem of optimizing resource allocations and requirements for virtualized web services to a mathematical problem, stating each factor, variable and element involved, as well as all the constraints the scheduling process must respect. The scheduling problem can be modeled as a Mixed Integer Linear Program (a minimal sketch of such a formulation follows this abstract). Here we face the scenario of a full datacenter, and further introduce information prediction.
    - We complement the model by expanding the predicted elements, studying the main resources (CPU, memory and IO), which can suffer from noise, inaccuracy or unavailability. Once learned predictors for certain components let decision making improve, the system becomes more "expert-knowledge independent" and research can focus on a scenario where all the elements provide noisy, uncertain or private information. We also introduce new factors into the management optimization, since context and costs may change for each datacenter, turning the model "multi-datacenter".
    - Finally, we review the cost of placing datacenters depending on green energy sources, and distribute the load according to green energy availability.
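    As a rough illustration of how such a scheduling problem can be stated as a Mixed Integer Linear Program, the sketch below uses the PuLP library (pip install pulp) to assign jobs to hosts at minimum energy cost subject to CPU capacity constraints. The job demands, host capacities and cost model are placeholders, not the thesis' actual formulation.

        # MILP sketch: assign each job to exactly one host, minimizing an
        # illustrative energy cost, subject to host CPU capacities.
        import pulp

        jobs = {"j1": 2.0, "j2": 3.5, "j3": 1.0}    # CPU demand per job
        hosts = {"h1": 4.0, "h2": 4.0}               # CPU capacity per host
        cost = {"h1": 1.0, "h2": 1.3}                # cost per CPU unit (invented)

        prob = pulp.LpProblem("job_scheduling", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("assign", (jobs, hosts), cat="Binary")

        # Objective: total cost of the chosen assignment.
        prob += pulp.lpSum(jobs[j] * cost[h] * x[j][h]
                           for j in jobs for h in hosts)

        # Each job runs on exactly one host.
        for j in jobs:
            prob += pulp.lpSum(x[j][h] for h in hosts) == 1

        # Host capacity constraints.
        for h in hosts:
            prob += pulp.lpSum(jobs[j] * x[j][h] for j in jobs) <= hosts[h]

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        for j in jobs:
            for h in hosts:
                if x[j][h].value() > 0.5:
                    print(j, "->", h)

    The thesis' full model adds many more variables and constraints (quality of service, deadlines, predicted resource usage); this skeleton only shows the binary-assignment structure that makes the problem a MILP.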

    Edge computing infrastructure for 5G networks: a placement optimization solution

    This thesis focuses on how to optimize the placement of Edge Computing infrastructure for upcoming 5G networks. To this aim, the core contributions of this research are twofold: 1) a novel heuristic called Hybrid Simulated Annealing to tackle the NP-hard nature of the problem, and 2) a framework called EdgeON providing a practical tool for real-life deployment optimization. In more detail, Edge Computing has grown into a key solution to 5G latency, reliability and scalability requirements. By bringing computing, storage and networking resources to the edge of the network, delay-sensitive applications, location-aware systems and upcoming real-time services leverage the benefits of a reduced physical and logical path between the end user and the data or service host. Nevertheless, the edge node placement problem raises critical concerns regarding deployment and operational expenditures (mainly due to the number of nodes to be deployed), current backhaul network capabilities and non-technical placement limitations. Common approaches to the placement of edge nodes are based on Mobile Edge Computing (MEC), where the processing capabilities are deployed at the Radio Access Network nodes, and on Facility Location Problem variations, where a simplistic cost function is used to determine where to optimally place the infrastructure. However, these methods typically lack the flexibility to be used for edge node placement under the strict technical requirements identified for 5G networks, and they fail to place resources at the network edge for 5G ultra-dense networking environments in a network-aware manner. This doctoral thesis rigorously defines the Edge Node Placement Problem (ENPP) for 5G use cases and proposes a novel framework called EdgeON that aims at reducing the overall expenses of deploying and operating an Edge Computing network, taking into account the usage and characteristics of the in-place backhaul network and the strict requirements of a 5G-EC ecosystem. The framework implements several placement and optimization strategies, and its suitability to solve the network-aware ENPP is thoroughly assessed. The core of the framework is a heuristic developed in-house, called Hybrid Simulated Annealing (HSA), which seeks to address the high complexity of the ENPP while avoiding the non-convergent behavior that other traditional heuristics exhibit when applied to similar problems. The findings of this work validate our approach to solving the network-aware ENPP, the effectiveness of the proposed heuristic and the overall applicability of EdgeON. Thorough performance evaluations conducted on the core placement solutions reveal the superiority of HSA compared to widely used heuristics and common edge placement approaches (e.g., a MEC-based strategy). Furthermore, the practicality of EdgeON was tested through two main case studies placing services and virtual network functions over the previously optimally placed edge nodes. Overall, our proposal is an easy-to-use, effective and fully extensible tool that operators can use to optimize the placement of computing, storage and networking infrastructure in the users' vicinity. Therefore, our main contributions not only set strong foundations for a cost-effective deployment and operation of an Edge Computing network, but also directly impact the feasibility of upcoming 5G services and use cases and the extensive existing research on placing services and even network service chains at the edge.
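    The internals of HSA are only available in the thesis itself; as a baseline illustration of the family of heuristics it builds on, the following Python skeleton runs plain simulated annealing on a toy k-site placement objective (total user-to-nearest-site distance). The cooling schedule, move operator and all data are illustrative assumptions, not the thesis' algorithm.

        # Plain simulated-annealing skeleton for a toy edge-node placement
        # objective; HSA layers problem-specific hybrid moves on this idea.
        import math
        import random

        def anneal(initial, cost, neighbor,
                   t0=1.0, t_min=1e-3, alpha=0.95, iters_per_temp=100):
            state, best = initial, initial
            t = t0
            while t > t_min:
                for _ in range(iters_per_temp):
                    cand = neighbor(state)
                    delta = cost(cand) - cost(state)
                    # Always accept improvements; accept worse moves with
                    # Boltzmann probability exp(-delta / t).
                    if delta < 0 or random.random() < math.exp(-delta / t):
                        state = cand
                        if cost(state) < cost(best):
                            best = state
                t *= alpha      # geometric cooling schedule
            return best

        # Toy instance: pick k of n candidate sites minimizing the total
        # distance from each user to its nearest chosen site.
        random.seed(0)
        sites = [(random.random(), random.random()) for _ in range(20)]
        users = [(random.random(), random.random()) for _ in range(50)]
        k = 4

        def cost(chosen):
            return sum(min(math.dist(u, sites[i]) for i in chosen)
                       for u in users)

        def neighbor(chosen):
            # Swap one chosen site for one unchosen candidate.
            out = set(chosen)
            out.remove(random.choice(list(out)))
            out.add(random.choice([i for i in range(len(sites))
                                   if i not in out]))
            return frozenset(out)

        init = frozenset(random.sample(range(len(sites)), k))
        print("placement cost:", round(cost(anneal(init, cost, neighbor)), 3))

    A network-aware formulation like the ENPP would replace the Euclidean cost with backhaul capacity, latency and expenditure terms; the acceptance-and-cooling loop stays the same.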

    Lithium-Ion Battery Degradation Evaluation through Bayesian Network Method for Residential Energy Storage Systems

    Batteries continue to spread into innovative applications thanks to the technological advances led by Li-ion chemistry in the past decade. Residential energy storage is one such example, made possible by the increasing efficiency and decreasing cost of solar PV. A residential energy storage system, charged by rooftop solar PV and tied to the grid, supplies household loads. This multi-operation role has a significant effect on battery degradation. The contributing factors, especially solar irradiation and weather conditions, are highly variable and can only be captured with probabilistic analysis. However, recent literature approaches the effect of such external factors on battery degradation mostly deterministically, with only limited stochastic treatment. Thus, a probabilistic degradation analysis of Li-ion batteries in residential energy storage is required to evaluate aging and relate it to the external causal factors. The literature review revealed a modified Arrhenius degradation model for Li-ion battery cells. Though originating from an empirical deterministic method, the modified Arrhenius equation relates battery degradation to all the major properties, i.e. state of charge, C-rate, temperature, and total amp-hour throughput. These battery properties are correlated with external factors while evaluating the capacity fade of a residential Li-ion battery using a proposed hierarchical Bayesian Network (BN), a probabilistic framework suited to analyzing battery degradation stochastically. The BN is developed considering all the uncertainties of the process, including solar irradiance, grid services, weather conditions, and EV schedule. It also includes hidden intermediate variables such as battery power and the power generated by solar PV. Markov Chain Monte Carlo analysis with the Metropolis-Hastings algorithm is used to estimate capacity fade along with several other posterior probability distributions from the BN. Informative and promising results were obtained from multiple case scenarios developed to explore the effect of the aforementioned external factors on the battery. Furthermore, the methodologies for the characterization and aging tests essential to validating the estimates produced by the hierarchical BN are explored. These experiments were conducted with conventional and low-cost hardware-in-the-loop systems developed to quantify the quality of the degradation estimates.
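    As a stand-in for the BN inference described above, the sketch below implements a generic Metropolis-Hastings sampler on a toy one-dimensional posterior over a hypothetical "fade rate" parameter. The target density, proposal width and burn-in length are illustrative assumptions, not the authors' model.

        # Generic Metropolis-Hastings sampler on a toy 1-D posterior; the
        # hierarchical BN in the abstract targets a much richer joint density.
        import math
        import random

        def log_target(theta):
            # Toy unnormalized log-posterior: Gaussian around a fade rate of 0.02.
            return -0.5 * ((theta - 0.02) / 0.005) ** 2

        def metropolis_hastings(n_samples, step=0.002, theta0=0.0):
            samples, theta = [], theta0
            lp = log_target(theta)
            for _ in range(n_samples):
                cand = theta + random.gauss(0.0, step)   # symmetric proposal
                lp_cand = log_target(cand)
                # Accept with probability min(1, target(cand) / target(theta)).
                if math.log(random.random()) < lp_cand - lp:
                    theta, lp = cand, lp_cand
                samples.append(theta)
            return samples

        random.seed(1)
        draws = metropolis_hastings(20_000)[5_000:]      # drop burn-in
        print("posterior mean fade rate ~", round(sum(draws) / len(draws), 4))

    Because the proposal is symmetric, the Hastings correction cancels and the acceptance ratio reduces to the ratio of target densities, which is why only log_target differences appear in the accept test.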

    Demand Response in Smart Grids

    The Special Issue "Demand Response in Smart Grids" includes 11 papers on a variety of topics. The success of this Special Issue demonstrates the relevance of demand response programs and events to the operation of power and energy systems, at both the distribution level and the wider power system level. This reprint addresses the design, implementation, and operation of demand response programs, with a focus on methods and techniques for achieving optimized operation as well as on the electricity consumer.

    Efficient Learning Machines

    Computer science

    Advanced Signal Processing Techniques Applied to Power Systems Control and Analysis

    The work published in this book concerns the application of advanced signal processing in smart grids, including power quality, data management, stability and economic management in the presence of renewable energy sources, energy storage systems, and electric vehicles. The distinct architecture of smart grids has prompted investigations into the use of advanced algorithms combined with signal processing methods to provide optimal results. The presented applications focus on data management with cloud computing, power quality assessment, photovoltaic power plant control, and electric vehicle charging stations, all supported by modern AI-based optimization methods.

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today's point of view and briefly recapitulates the progress made by the community so far. Unlike books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across levels, from the physical level all the way to the system level (cross-layer approaches). It aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. The book provides readers with the latest insights into novel cross-layer methods and models for the dependability of embedded systems; describes cross-layer approaches that improve reliability through techniques proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts of self-organization for achieving error resiliency in complex future many-core systems.