    Upgrading Conventional Distribution Networks by Actively Planning Distributed Generation Based on Virtual Microgrids

    Get PDF

    Energy Efficient Resource Allocation for Virtual Network Services with Dynamic Workload in Cloud Data Centers

    Get PDF
    Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2016. Dissertation advisor: Baek-Young Choi. Includes vita and bibliographical references (pages 126-143). Title from PDF of title page, viewed on March 21, 2016.
    With the rapid proliferation of cloud computing, more and more network services and applications are deployed on cloud data centers, and their energy consumption and greenhouse gas emissions have increased significantly. Some efforts have been made to control and lower the energy consumption of data centers, such as energy-proportional hardware, dynamic provisioning, and virtual machine techniques. However, many servers and network resources are still often underutilized, and idle servers draw a large portion of their peak power. Network virtualization and resource sharing have been employed to improve the energy efficiency of data centers by aggregating workload onto a few physical nodes and switching the idle nodes to sleep mode. In particular, with the advent of live migration, a virtual node can be moved from one physical node to another without service disruption. More energy can therefore be saved by shrinking virtual nodes onto a small set of physical nodes and turning the idle nodes to sleep mode when the service workload is low, and by expanding virtual nodes onto a large set of physical nodes to satisfy QoS requirements when the service workload is high. When the service provider specifies the desired virtual network, including a specific topology and a set of virtual nodes with certain resource demands, the infrastructure provider computes how the given virtual network is embedded into the data centers it operates with minimum energy consumption. When the service provider only gives a description of the network service and the desired QoS requirements, the infrastructure provider has more freedom in how to allocate resources for the network service. For the first problem, we consider the evolving workload of the virtual networks or virtual applications and the residual resources in data centers, and build a novel model of energy-efficient virtual network embedding (EE-VNE) in order to minimize energy usage in a physical network consisting of multiple data centers. In this model, both the operation cost for executing network services' tasks and the migration cost for the live migrations of virtual nodes are counted toward the total energy consumption. In addition, rather than randomly generated physical network topologies, we use practical assumptions about the physical network topology in our model. Due to the NP-hardness of the proposed model, we develop a heuristic algorithm for virtual network scheduling and mapping. In doing so, we specifically take the expected energy consumption at different times, virtual network operation and future migration costs, and the data center architecture into consideration. Our extensive evaluation results show that our algorithm can reduce energy consumption by up to 40% and accept up to a 57% higher number of virtual network requests than other existing virtual mapping schemes. However, through comparison with a CPLEX-based exact algorithm, we identify that there is still a gap between the heuristic solution and the optimal solution. Therefore, after investigating other solutions, we convert the original EE-VNE problem into an Ant Colony Optimization (ACO) problem by building the construction model and presenting the transition probability formula.
An ACO-based algorithm is then adapted to solve the ACO-EE-VNE problem. In addition, we reduce the space complexity of ACO-EE-VNE by developing a novel way to track and update the pheromone. For the second problem, we design a framework to dynamically allocate resources for a network service by employing container-based virtual nodes. In the framework, each network service has a pallet container and a set of execution containers. The pallet container requests resources based on a certain strategy, creates execution containers with the assigned resources, and manages the life cycle of the containers, while the execution containers execute the assigned jobs for the network service. Formulations are presented to optimize resource usage efficiency and save energy for network services with dynamic workload, and a heuristic algorithm is proposed to solve the optimization problem. Our numerical results show that container-based resource allocation is more flexible and saves more cost than virtual service deployment with fixed virtual machines and demands. In addition, we study the content distribution problem with a joint optimization goal and varied content sizes in cloud storage. Previous research on content distribution mainly focuses on reducing the latency experienced by content customers. A few recent studies address the issue of bandwidth usage in CDNs, as bandwidth consumption is an important issue due to its relevance to the cost of content providers. However, few studies consider both bandwidth consumption and delay performance for content providers that use cloud storage with limited budgets, which is the focus of this study. We develop an efficient, light-weight approximation algorithm for the joint content placement optimization problem and analyze its theoretical complexity. The performance bound of the proposed approximation algorithm exhibits a much better worst case than those in previous studies. We further extend the approximation algorithm into a distributed version that allows it to react promptly to dynamic changes in users' interests. Extensive results from both simulations and PlanetLab experiments show that the performance is near-optimal under most practical conditions.
Contents: Introduction -- Related work -- Energy efficient virtual network embedding for green data centers using data center topology and future migration -- Ant colony optimization based energy efficient virtual network embedding -- Energy aware container based resource allocation for virtual services in green data centers -- Achieving optimal content delivery using cloud storage -- Conclusions and future work
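
    As a hedged illustration of the transition-probability step mentioned above (the thesis's actual construction model and pheromone update are not reproduced here; the alpha/beta exponents, the residual-capacity-per-watt heuristic, and the data layout are assumptions), an ACO-style choice of physical host for one virtual node could look like this Python sketch:

    # Illustrative ACO transition-probability step for mapping one virtual node
    # onto a candidate physical node. Exponents, heuristic, and data layout are
    # assumptions of this sketch, not the thesis's exact formulation.
    import random

    def choose_physical_node(candidates, pheromone, alpha=1.0, beta=2.0):
        """candidates: {host: (residual_cpu, power_cost)}; pheromone: {host: tau}."""
        scores = {}
        for host, (residual_cpu, power_cost) in candidates.items():
            eta = residual_cpu / power_cost          # desirability: capacity per watt
            scores[host] = (pheromone[host] ** alpha) * (eta ** beta)
        total = sum(scores.values())
        # transition probability p(host) = tau^alpha * eta^beta / sum over candidates
        r, acc = random.uniform(0, total), 0.0
        for host, score in scores.items():
            acc += score
            if r <= acc:
                return host
        return host  # floating-point fallback

    # toy usage: two candidate hosts with equal pheromone but different efficiency
    print(choose_physical_node({"h1": (16, 200.0), "h2": (8, 250.0)},
                               {"h1": 1.0, "h2": 1.0}))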

    Lattice Boltzmann Methods for Wind Energy Analysis

    Get PDF
    An estimate of the United States wind potential conducted in 2011 found that the energy available at an altitude of 80 meters is approximately triple the wind energy available 50 meters above ground. In 2012, 43% of all new electricity generation installed in the U.S. (13.1 GW) came from wind power. The majority of this power, 79%, comes from large utility-scale turbines that are being manufactured at unprecedented sizes. Existing wind plants operate with a capacity factor of only approximately 30%. Measurements have shown that the turbulent wake of a turbine persists for many rotor diameters, inducing increased vibration and wear on downwind turbines. Power losses can be as high as 20-30% in operating wind plants, due solely to complex wake interactions occurring in wind plant arrays. It is my objective to accurately predict the generation of turbine wakes and their interaction with downwind turbines and the surrounding topography by means of numerical simulation on high-performance parallel computer systems. Numerical simulation is already utilized to plan wind plant layouts. However, available computational tools employ severe geometric simplifications to model wake interactions and are geared to providing rough estimates on desktop PCs. A three-dimensional simulation tool designed for modern parallel computers, based upon lattice Boltzmann methods for fluid dynamics, a general six-degree-of-freedom motion solver, and foundational beam solvers, has been proposed to meet this simulation need. In this text, the software development, verification, and validation are detailed. Fundamental computational fluid dynamics issues of boundary conditions and turbulence modeling are examined through classic cases (cavity, Jeffery-Hamel, Kelvin-Helmholtz, pressure wave, vorticity wave, backward-facing step, cylinder in cross-flow, airfoils, tandem cylinders, and turbulent flow over a hill) to assess the accuracy and computational cost of the developed alternatives. Simulations of canonical motion (falling beam), fluid-structure interaction cases (hinged wing and flexible pendulum), and realistic horizontal-axis wind turbine geometries (Vestas V27, NREL 5MW, and MEXICO) are validated against benchmarks and experiments. Results from simulations of the three-turbine array at the Scaled Wind Farm Test facility are presented for two steady wind conditions.
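
    For orientation only, the following is a minimal sketch of a D2Q9 lattice Boltzmann (BGK) collide-and-stream loop in Python; the grid size, relaxation time, periodic boundaries, and quiescent initial state are assumptions for illustration, and this is not the parallel solver described above.

    # Minimal D2Q9 lattice Boltzmann (BGK) skeleton -- illustrative only.
    import numpy as np

    nx, ny = 200, 100      # lattice dimensions (assumed)
    tau = 0.6              # BGK relaxation time (assumed)
    # D2Q9 lattice velocities and weights
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

    def equilibrium(rho, ux, uy):
        """Equilibrium distribution truncated to second order in velocity."""
        cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
        usq = 1.5 * (ux**2 + uy**2)
        return rho * w[:, None, None] * (1.0 + cu + 0.5 * cu**2 - usq)

    rho = np.ones((nx, ny))              # start at rest with unit density
    ux = np.zeros((nx, ny))
    uy = np.zeros((nx, ny))
    f = equilibrium(rho, ux, uy)

    for step in range(1000):
        # streaming: shift each population along its lattice velocity (periodic)
        for i in range(9):
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
        # macroscopic moments
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        # BGK collision: relax toward the local equilibrium
        f += -(f - equilibrium(rho, ux, uy)) / tau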

    Green Resource Management in Distributed Cloud Infrastructures

    Get PDF
    Computing has evolved over time according to different paradigms, along with an increasing need for computational power. Modern computing paradigms basically share the same underlying concept of Utility Computing, that is, a service provisioning model through which a shared pool of computing resources is used by a customer when needed. The objective of Utility Computing is to maximize resource utilization and bring down the relative costs. Nearly a decade ago, the concept of Cloud Computing emerged as a virtualization technique where services are executed remotely in a ubiquitous way, providing scalable and virtualized resources. The spread of Cloud Computing has also been encouraged by the success of virtualization, one of the most promising and efficient techniques for consolidating system utilization on one side, and for lowering power, electricity charges, and space costs in data centers on the other. In the last few years, there has been a remarkable growth in the number of data centers, which represent one of the leading sources of increased business data traffic on the Internet. An effect of the growing scale and wide use of data centers is the dramatic increase in power consumption, with significant consequences in terms of both environmental and operational costs. In addition to power consumption, the carbon footprint of Cloud infrastructures is also becoming a serious concern, since a lot of power is generated from non-renewable sources. Hence, energy awareness has become one of the major design constraints for Cloud infrastructures. In order to face these challenges, a new generation of energy-efficient and eco-sustainable network infrastructures is needed. In this thesis, a novel energy-aware resource orchestration framework for distributed Cloud infrastructures is discussed. The aim is to explain how both network and IT resources can be managed while the overall power consumption and carbon footprint are minimized. To this end, an energy-aware routing algorithm and an extension of the OSPF-TE protocol to distribute energy-related information have been implemented.
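
    As a hedged illustration of the general idea of energy-aware routing (not the thesis's algorithm or its OSPF-TE extension; the cost model and graph encoding are assumptions), the sketch below biases a shortest-path search toward links that are already powered on:

    # Energy-aware shortest path: link weight = base cost + wake-up power penalty
    # for links currently asleep. Toy model, not the thesis's routing algorithm.
    import heapq

    def energy_aware_path(graph, src, dst):
        """graph: {node: [(neighbor, base_cost, wake_power_watts, is_asleep), ...]}"""
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, cost, wake_power, asleep in graph.get(u, []):
                # penalize links that must be powered on; prefer already-active links
                weight = cost + (wake_power if asleep else 0.0)
                if d + weight < dist.get(v, float("inf")):
                    dist[v] = d + weight
                    prev[v] = u
                    heapq.heappush(heap, (dist[v], v))
        if dst not in prev and dst != src:
            return None                      # unreachable
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path))

    # toy usage: waking the direct A-C link costs 50 W, so the active path wins
    g = {
        "A": [("B", 1.0, 0.0, False), ("C", 1.0, 50.0, True)],
        "B": [("C", 1.0, 0.0, False)],
        "C": [],
    }
    print(energy_aware_path(g, "A", "C"))    # -> ['A', 'B', 'C']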

    Optimizing Resource Management in Cloud Analytics Services

    Get PDF
    The fundamental challenge in the cloud today is how to build and optimize machine learning and data analytical services. Machine learning and data analytical platforms are changing computing infrastructure from expensive private data centers to easily accessible online services. These services pack user requests as jobs and run them on thousands of machines in parallel in geo-distributed clusters. The scale and the complexity of emerging jobs lead to increasing challenges for the clusters at all levels, from power infrastructure to system architecture and corresponding software framework design. These challenges come in many forms. Today's clusters are built on commodity hardware, and hardware failures are unavoidable. Resource competition, network congestion, and mixed generations of hardware make the hardware environment complex and hard to model and predict. Such heterogeneity becomes a crucial roadblock for efficient parallelization at both the task level and the job level. Another challenge comes from the increasing complexity of the applications. For example, machine learning services run jobs made up of multiple tasks with complex dependency structures. This complexity leads to difficulties in framework design. The scale, especially when services span geo-distributed clusters, leads to another important hurdle for cluster design. Challenges also come from the power infrastructure. Power infrastructure is very expensive and accounts for more than 20% of the total cost to build a cluster. Optimizing power sharing to maximize facility utilization and smooth peak-hour usage is another roadblock for cluster design. In this thesis, we focus on solutions for these challenges at the task level, at the job level, in geo-distributed data cloud design, and in power management for colocation data centers. At the task level, a crucial hurdle to achieving predictable performance is stragglers, i.e., tasks that take significantly longer than expected to run. To date, speculative execution has been widely adopted to mitigate the impact of stragglers in simple workloads. We apply straggler mitigation to approximation jobs for the first time. We present GRASS, which carefully uses speculation to mitigate the impact of stragglers in approximation jobs. GRASS's design is based on the analysis of a model we develop to capture the optimal speculation levels for approximation jobs. Evaluations with production workloads from Facebook and Microsoft Bing in an EC2 cluster of 200 nodes show that GRASS increases the accuracy of deadline-bound jobs by 47% and speeds up error-bound jobs by 38%. Moving from the task level to the job level, task-level speculation mechanisms are designed and operated independently of job scheduling when, in fact, scheduling a speculative copy of a task has a direct impact on the resources available for other jobs. Thus, we present Hopper, a job-level speculation-aware scheduler that integrates the tradeoffs associated with speculation into job scheduling decisions based on a model generalized from the task-level speculation model. We implement both centralized and decentralized prototypes of the Hopper scheduler and show that 50% (66%) improvements over state-of-the-art centralized (decentralized) schedulers and speculation strategies can be achieved through the coordination of scheduling and speculation. As computing resources move from local clusters to geo-distributed cloud services, we expect the same transformation for data storage.
We study two crucial pieces of a geo-distributed data cloud system: data acquisition and data placement. Starting from developing the optimal algorithm for the case of a data cloud made up of a single data center, we propose a near-optimal, polynomial-time algorithm for a geo-distributed data cloud in general. We show, via a case study, that the resulting design, Datum, is near-optimal (within 1.6%) in practical settings. Efficient power management is a fundamental challenge for data centers when providing reliable services. Power oversubscription in data centers is very common and may occasionally trigger an emergency when the aggregate power demand exceeds the capacity. We study power capping solutions for handling such emergencies in a colocation data center, where the operator supplies power to multiple tenants. We propose a novel market mechanism based on supply function bidding, called COOP, to financially incentivize and coordinate tenants' power reduction for minimizing total performance loss while satisfying multiple power capping constraints. We demonstrate that COOP is "win-win", increasing the operator's profit (through oversubscription) and reducing tenants' costs (through financial compensation for their power reduction during emergencies).
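
    As a hedged sketch of the straggler-mitigation idea discussed above (not GRASS's or Hopper's actual policy; the 1.5x-median threshold and the inputs are assumptions), a threshold-based choice of tasks to speculate could look like this:

    # Pick running tasks worth duplicating, slowest first, bounded by spare slots.
    # Threshold and inputs are illustrative assumptions.
    from statistics import median

    def pick_speculation_candidates(elapsed_by_task, finished_durations, slack_slots):
        """elapsed_by_task: {task_id: seconds running so far};
        finished_durations: completion times of already-finished tasks."""
        if not finished_durations or slack_slots <= 0:
            return []
        threshold = 1.5 * median(finished_durations)
        stragglers = [t for t, elapsed in elapsed_by_task.items() if elapsed > threshold]
        stragglers.sort(key=lambda t: elapsed_by_task[t], reverse=True)
        return stragglers[:slack_slots]

    # toy usage: median finished time is 30 s, so tasks running past 45 s qualify
    print(pick_speculation_candidates(
        {"t1": 10, "t2": 80, "t3": 55, "t4": 20}, [25, 30, 35], slack_slots=1))
    # -> ['t2']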
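
    Similarly, as an illustrative sketch of a supply-function-bidding market in the spirit of COOP (the bid shape, clearing rule, and payment rule here are simplifying assumptions, not the thesis's mechanism), a single-constraint clearing step might look like this:

    # Each tenant bids a linear supply function s_i(p) = b_i * p (kW of reduction
    # offered at price p); the operator picks the price that meets the shortfall.
    def clear_power_market(bids, required_reduction_kw):
        """bids: {tenant: slope b_i}; returns clearing price, reductions, payments."""
        total_slope = sum(bids.values())
        if total_slope <= 0:
            raise ValueError("no tenant offered any power reduction")
        price = required_reduction_kw / total_slope
        reductions = {t: b * price for t, b in bids.items()}
        payments = {t: price * r for t, r in reductions.items()}
        return price, reductions, payments

    # toy usage: the operator must shed 120 kW during an oversubscription emergency
    price, cut, pay = clear_power_market({"tenantA": 2.0, "tenantB": 1.0}, 120.0)
    print(price, cut, pay)   # price 40.0; tenantA sheds 80 kW, tenantB sheds 40 kW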

    Application of Nanomaterials in Biomedical Imaging and Cancer Therapy

    Get PDF
    To mark the recent advances in nanomaterials and nanotechnology in biomedical imaging and cancer therapy, this book, entitled Application of Nanomaterials in Biomedical Imaging and Cancer Therapy, includes a collection of important nanomaterial studies on medical imaging and therapy. The book covers recent works on hyperthermia, external beam radiotherapy, MRI-guided radiotherapy, immunotherapy, photothermal therapy, and photodynamic therapy, as well as medical imaging, including high-contrast and deep-tissue imaging, quantum sensing, super-resolution microscopy, and three-dimensional correlative light and electron microscopy. The significant research results and findings explored in this work are expected to help students, researchers, and teachers working in the field of nanomaterials and nanotechnology in biomedical physics keep pace with the rapid development and application of nanomaterials in precise imaging and targeted therapy.

    How an Organization's Environmental Orientation Impacts Environmental Performance and its Resultant Financial Performance through Green Computing Hiring Practices: An Empirical Investigation of the Natural Resource-Based View of the Firm

    Get PDF
    This dissertation uses the logic embodied in Strategic Fit Theory, the Natural Resource-Based View of the Firm (NRBV), strategic human resource management, and other relevant literature streams to empirically demonstrate how the environmental orientation of a firm's strategy impacts its environmental performance and resultant financial performance through the firm's Information Technology hiring practices. Specifically, it was hypothesized that firms with a strong relationship between the environmental orientation of their strategy and their green computing hiring practices will achieve higher environmental performance and, as a result, higher levels of financial performance than firms lacking such fit. The organization's environmental orientation was measured via content analysis of annual report texts (ARTs). Environmental performance was measured using KLD's award-winning environmental performance metrics. I triangulated across efficiency, effectiveness, and market-based metrics to capture a more holistic measure of the firm's financial performance using data from Compustat/Research Insight. The firm's green computing hiring practices were measured using a web content data mining application that pulled job ads for computing graduates and then extracted the environmentally oriented skills identified in such ads using content analytic techniques. Various control variables were employed to eliminate possible alternative explanations of my research findings. A number of statistical and analytical techniques were used to assess the nature and strength of the relationships in my theoretical model, as articulated in the proposed hypotheses. The sample size of firms is fairly large, thus increasing the statistical power of the empirical tests. Previous empirical testing of the relationship between environmental strategy and financial performance is still in the developmental stages and has produced mixed results, partly because important intervening mechanisms, such as green computing hiring practices, have not received adequate attention in the empirical literature. The combination of a large sample of real-world firms, a powerful mix of qualitative and quantitative methodological techniques that tap into key trace evidence not available through other methods, and an award-winning environmental data set has enhanced the robustness of the empirical findings in addressing this important gap in the literature. The results of the analyses show that there is a strong relationship between an organization's environmental posturing and its environmental performance. Additionally, this effect is mediated by the organization's environmental hiring practices, indicating that implementing the organization's environmental strategy through its hiring practices is important in achieving improved environmental performance. The current research also shows that there is a strong and positive relationship between an organization's environmental performance and financial performance. Surprisingly, these relationships are not significantly impacted by the organization's industry affiliation, which broadens the generalizability of the results of this study.
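
    As a hedged, purely illustrative sketch of the mediation logic described above (environmental orientation -> green computing hiring -> environmental performance), simple OLS steps in the Baron-and-Kenny style are shown below; the variable names and synthetic data are assumptions, not the dissertation's actual measures or models.

    # If the orientation coefficient shrinks once the hiring mediator is included,
    # that pattern is consistent with mediation through hiring practices.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    env_orientation = rng.normal(size=n)                       # annual-report content score
    green_hiring = 0.5 * env_orientation + rng.normal(size=n)  # mediator: job-ad green skills
    env_performance = 0.6 * green_hiring + 0.1 * env_orientation + rng.normal(size=n)

    total_effect = sm.OLS(env_performance, sm.add_constant(env_orientation)).fit()
    with_mediator = sm.OLS(env_performance,
                           sm.add_constant(np.column_stack([env_orientation,
                                                            green_hiring]))).fit()
    print(total_effect.params[1], with_mediator.params[1])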