
    Investigating Emerging Security Threats in Clouds and Data Centers

    Data centers have been growing rapidly in recent years to meet the surging demand for cloud services. However, the expanding scale of a data center also brings new security threats. This dissertation studies emerging security issues in clouds and data centers from different aspects, including low-level cooling infrastructures and different virtualization techniques such as containers and virtual machines (VMs). We first unveil a new vulnerability called reduced cooling redundancy that might be exploited to launch thermal attacks, resulting in severely worsened thermal conditions in a data center. Such a vulnerability is caused by the wide adoption of aggressive cooling energy saving policies. We conduct thermal measurements and uncover effective thermal attack vectors at the server, rack, and data center levels. We also present damage assessments of thermal attacks. Our results demonstrate that thermal attacks can negatively impact the thermal conditions and reliability of victim servers, significantly raise the cooling cost, and even lead to cooling failures. Finally, we propose effective defenses to mitigate thermal attacks. We then perform a systematic study to understand the security implications of information leakage in multi-tenancy container cloud services. Due to the incomplete implementation of system resource isolation mechanisms in the Linux kernel, a spectrum of system-wide host information is exposed to the containers, including host-system state information and individual process execution information. By exploiting such leaked host information, malicious adversaries can easily launch advanced attacks that can seriously affect the reliability of cloud services. Additionally, we discuss the root causes of the containers' information leakage and propose a two-stage defense approach. The experimental results show that our defense is effective and incurs trivial performance overhead. Finally, we investigate security issues in the existing VM live migration approaches, especially the post-copy approach. While the entire live migration process relies upon reliable TCP connectivity for the transfer of the VM state, we demonstrate that the loss of TCP reliability leads to VM live migration failure. By intentionally aborting the TCP connection, attackers can cause unrecoverable memory inconsistency for post-copy, significantly increase service downtime, and degrade the running VM's performance. From the offensive side, we present detailed techniques to reset the migration connection under heavy network traffic. From the defensive side, we also propose effective protection to secure the live migration procedure.
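
    To make the container information-leakage channel concrete, the following minimal sketch (ours, not taken from the dissertation) reads a few procfs files from inside a container. In many default container setups these entries still report host-wide state rather than per-container values, which is the kind of leaked host information the dissertation describes.

```python
# Minimal sketch (assumption: a default container runtime without procfs remapping
# such as lxcfs). These procfs entries typically expose host-wide state rather than
# the container's own cgroup-limited view.
from itertools import islice

def read_first_lines(path, n=3):
    """Return up to the first n lines of a procfs file, or None if unreadable."""
    try:
        with open(path) as f:
            return [line.rstrip() for line in islice(f, n)]
    except OSError:
        return None

for leaky_file in ("/proc/meminfo", "/proc/cpuinfo", "/proc/stat", "/proc/loadavg"):
    print(leaky_file, "->", read_first_lines(leaky_file))
```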

    Proposing Optimus Scheduler Algorithm for Virtual Machine Placement Within a Data Center

    With the evolution of the Internet, we are witnessing the birth of an increasing number of applications that rely on the network; what was previously executed on the user's computer as stand-alone programs has been redesigned to be executed on servers with permanent connections to the Internet, making the information available from any device that has network access. Instead of buying a copy of a program, users can now pay to obtain access to it through the network, which is one of the models of cloud computing, Software as a Service (SaaS). The continuous growth of Internet bandwidth has also given rise to new multimedia applications, such as social networks and video over the Internet; and to complete this new paradigm, mobile platforms provide the ubiquity of information that allows people to stay connected. Service providers may own servers and data centers or, alternatively, may contract infrastructure providers that use economies of scale to offer access to servers as a service in the cloud computing model, i.e., Infrastructure as a Service (IaaS). As users become more dependent on cloud services and mobile platforms increase the ubiquity of the cloud, the quality of service becomes increasingly important. A fundamental metric that defines the quality of service is the delay of the information as it travels between the users' computers and the servers, and between the servers themselves. Along with the quality of service and the costs, energy consumption and CO2 emissions are fundamental considerations when planning cloud computing networks. In this research work, an Optimus Scheduler algorithm is proposed to add, remove, or resize applications within a data center using a Tabu Search algorithm.
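
    As a rough illustration of how a Tabu Search can drive VM placement decisions of this kind, the sketch below minimises the number of active hosts under CPU capacity constraints. The objective function, neighbourhood, and tenure used by the actual Optimus Scheduler are defined in the thesis and will differ; everything here is an assumed toy model.

```python
# Illustrative Tabu Search for VM placement (toy model: each VM has a CPU demand,
# each host a CPU capacity, objective = minimise the number of active hosts).
def tabu_search_placement(vm_demand, host_capacity, iterations=200, tenure=10):
    n_vms, n_hosts = len(vm_demand), len(host_capacity)

    def loads(assign):
        load = [0.0] * n_hosts
        for vm, host in enumerate(assign):
            load[host] += vm_demand[vm]
        return load

    def feasible(load):
        return all(l <= c for l, c in zip(load, host_capacity))

    def cost(assign):
        return sum(1 for l in loads(assign) if l > 0)  # number of active hosts

    current = [i % n_hosts for i in range(n_vms)]      # naive round-robin start
    best, best_cost = current[:], cost(current)
    tabu = {}                                          # (vm, host) -> expiry iteration

    for it in range(iterations):
        candidates = []
        for vm in range(n_vms):
            for host in range(n_hosts):
                if host == current[vm] or tabu.get((vm, host), -1) > it:
                    continue
                neighbour = current[:]
                neighbour[vm] = host
                if feasible(loads(neighbour)):
                    candidates.append((cost(neighbour), vm, host, neighbour))
        if not candidates:
            break
        c, vm, host, neighbour = min(candidates, key=lambda x: x[0])
        tabu[(vm, current[vm])] = it + tenure          # forbid moving this VM back soon
        current = neighbour
        if c < best_cost:
            best, best_cost = neighbour[:], c
    return best, best_cost

# Four VMs (CPU demands) packed onto three hosts (CPU capacities).
print(tabu_search_placement([2, 2, 1, 3], [4, 4, 4]))
```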

    VM Selection Process Management for Live Migration in Cloud Data Centers

    With immense success and fast growth over the past few years, cloud computing has been established as the dominant computing paradigm in the information technology (IT) industry, exploiting the benefits of distributed resources and supporting resource sharing and flexible, on-demand access. The proliferation of cloud computing has resulted in the establishment of large-scale data centers across the world, consisting of hundreds of thousands, even millions, of servers. The cloud computing paradigm provides administrators and IT organizations with considerable freedom to dynamically migrate virtualized computing services among physical servers in cloud data centers. These data centers normally incur very high investment and operating costs for computing and network devices as well as for energy consumption. Virtualization and virtual machine (VM) migration offer significant benefits such as load balancing, server consolidation, online maintenance and proactive fault tolerance across data centers. VM migration relies on determining the migration trigger condition, selecting the target virtual machine, and choosing the destination node. As a result, dynamic VM migration has become a crucial resource management issue for achieving optimal resource utilization, maximum throughput, minimum response time and enhanced scalability, while avoiding over-provisioning of resources and preventing overload, all of which are essential for cloud computing to succeed. Intelligent host underload/overload detection, VM selection, and VM placement are the primary means of addressing the VM migration problem, and these three problems are considered the most common tasks in VM migration. This thesis presents novel techniques, models, and algorithms for the distributed dynamic consolidation of virtual machines in cloud data centers. The goal is to improve the utilization of computing resources and reduce energy consumption under workload-independent quality-of-service constraints. The proposed approaches are distributed and efficient in managing the energy-performance trade-off.
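
    For context, one widely cited VM selection heuristic of the kind this line of work builds on is minimum migration time: from an overloaded host, repeatedly evict the VM whose memory can be transferred fastest until utilisation drops below the threshold. The sketch below illustrates that idea with assumed names and thresholds; it is not the thesis's own selection algorithm.

```python
# Illustrative "minimum migration time" VM selection sketch (assumed data model and
# thresholds). Migration time is approximated as RAM size over available bandwidth.

def select_vms_to_migrate(vms, host_cpu_capacity, upper_threshold=0.8, bandwidth_mbps=1000):
    """vms: list of dicts like {'id': 'vm1', 'cpu': 800, 'ram_mb': 2048} (cpu in MHz)."""
    remaining = list(vms)
    selected = []
    cpu_used = sum(vm['cpu'] for vm in remaining)
    while remaining and cpu_used / host_cpu_capacity > upper_threshold:
        # Pick the VM that would migrate fastest (smallest memory footprint here).
        vm = min(remaining, key=lambda v: v['ram_mb'] / bandwidth_mbps)
        selected.append(vm)
        remaining.remove(vm)
        cpu_used -= vm['cpu']
    return selected

# Example: a 10,000 MHz host running three VMs at 90% CPU utilisation.
vms = [{'id': 'vm1', 'cpu': 4000, 'ram_mb': 8192},
       {'id': 'vm2', 'cpu': 3000, 'ram_mb': 1024},
       {'id': 'vm3', 'cpu': 2000, 'ram_mb': 4096}]
print([vm['id'] for vm in select_vms_to_migrate(vms, host_cpu_capacity=10000)])
```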

    Network bandwidth aware dynamic automated framework for Virtual Machine Live Migration in cloud environments

    Live migration is a very important feature of virtualisation: a running VM can be seamlessly moved between different physical hosts. The source VM’s CPU state, storage, memory and network resources can be completely moved to a target host without disrupting the users or running applications. Live VM migration is an extremely powerful tool in many key scenarios such as load balancing, online maintenance, proactive fault tolerance and power management. There are four stages involved in live VM migration: the setup stage, the memory transfer stage, the VM storage transfer stage and the network clean-up stage. The most important part of live VM migration is transferring the main memory state of the VM from the source to the destination host, which can consume a significant amount of network bandwidth in a short period of time. Modern cloud-based data centres generate a significant amount of network traffic apart from VM live migration traffic. If VM migration occurs during a peak time, migration and user traffic will compete for network bandwidth, and the data centre’s network may not have enough resources to support both VM migration and the demands of application users, creating a bottleneck in the network. Therefore, this research presents a centralised, bandwidth-aware, dynamic, and automated framework for live VM migration in cloud environments. The proposed framework adopts a heuristic approach and provides guaranteed bandwidth for VM live migration by controlling user traffic on the network while scheduling live VM migrations efficiently. The framework consists of two main components: the Central Controller and the Local Controller. The Local Controller is responsible for collecting resource usage data from VMs and PMs, while the Central Controller makes global management decisions. The Central Controller is based on four algorithms, collectively called a migration policy: the host overload detection, host underload detection, VM selection and VM placement algorithms proposed in this research. The proposed migration policy has been implemented in CloudSim and evaluated against two benchmark migration policies. Five evaluation metrics have been used in the simulation to evaluate the performance of the proposed migration policy. The results reveal that the proposed migration policy outperformed the two benchmark policies.
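
    The bandwidth-aware scheduling idea can be pictured with a small back-of-the-envelope check like the one below (our illustration with assumed figures and names, not the framework's actual algorithm): estimate the migration time of a VM from its memory size and the bandwidth that can be reserved on the link, and only schedule the migration, throttling user traffic if needed, when that estimate meets a target.

```python
# Sketch of a bandwidth-aware admission check for live migration (assumed figures).
# Bandwidth values are in megabits per second, VM memory in megabytes.

def can_schedule_migration(vm_memory_mb, link_capacity_mbps, user_traffic_mbps,
                           reserved_fraction=0.3, max_migration_s=60):
    """Return (ok, estimated_seconds) for migrating one VM's memory state."""
    spare_mbps = link_capacity_mbps - user_traffic_mbps
    # Guarantee at least `reserved_fraction` of the link for migration traffic,
    # throttling user flows if necessary, as the framework's controllers are
    # described to do.
    migration_mbps = max(spare_mbps, reserved_fraction * link_capacity_mbps)
    seconds = (vm_memory_mb * 8) / migration_mbps  # MB -> megabits
    return seconds <= max_migration_s, round(seconds, 1)

# A 4 GB VM on a 10 Gbps link that is 95% occupied by user traffic.
print(can_schedule_migration(vm_memory_mb=4096, link_capacity_mbps=10000,
                             user_traffic_mbps=9500))
```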

    Autonomous management of cost, performance, and resource uncertainty for migration of applications to infrastructure-as-a-service (IaaS) clouds

    Infrastructure-as-a-Service (IaaS) clouds abstract physical hardware to provide computing resources on demand as a software service. This abstraction leads to the simplistic view that computing resources are homogeneous and that infinite scaling potential exists to easily resolve all performance challenges. In practice, however, adoption of cloud computing presents many resource management challenges, forcing practitioners to balance cost and performance trade-offs to successfully migrate applications. These challenges can be broken down into three primary concerns that involve determining what, where, and when infrastructure should be provisioned. In this dissertation we address these challenges, including: (1) performance variance from resource heterogeneity, virtualization overhead, and the plethora of vaguely defined resource types; (2) virtual machine (VM) placement, component composition, service isolation, provisioning variation, and resource contention for multitenancy; and (3) dynamic scaling and resource elasticity to alleviate performance bottlenecks. These resource management challenges are addressed through the development and evaluation of autonomous algorithms and methodologies that result in demonstrably better performance and lower monetary costs for application deployments to both public and private IaaS clouds. This dissertation makes three primary contributions to advance cloud infrastructure management for application hosting. First, it includes the design of resource utilization models based on step-wise multiple linear regression and artificial neural networks that support prediction of better-performing component compositions. The total number of possible compositions is governed by Bell's Number, which results in a combinatorially explosive search space. Second, it includes algorithms to improve VM placements to mitigate resource heterogeneity and contention using a load-aware VM placement scheduler, together with autonomous detection of under-performing VMs to spur replacement. Third, it describes a workload cost prediction methodology that harnesses regression models and heuristics to support determination of infrastructure alternatives that reduce hosting costs. Our methodology achieves infrastructure predictions with an average mean absolute error of only 0.3125 VMs for multiple workloads.
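
    To see why the component-composition search space is combinatorially explosive, the short sketch below (ours, for illustration) computes Bell's Number with the Bell triangle; even ten components already admit 115,975 possible groupings, which motivates prediction models over exhaustive search.

```python
# Compute Bell numbers B(0)..B(n) with the Bell triangle; B(n) counts the ways to
# partition n components into groups, i.e. the size of the composition search space.

def bell_numbers(n):
    bells = [1]
    row = [1]
    for _ in range(n):
        new_row = [row[-1]]              # next row starts with the last entry of the previous row
        for value in row:
            new_row.append(new_row[-1] + value)
        bells.append(new_row[0])
        row = new_row
    return bells

print(bell_numbers(10))  # [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975]
```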

    Software-Defined Networking for data centre network management: A survey

    Data centres are growing in number and size, and their networks are expanding to carry larger amounts of traffic. The traffic profile is constantly varying, particularly in cloud data centres where tenants arrive, leave, and may change their resource requirements in between, and so the network configuration must change at a commensurate rate. Software-Defined Networking, the programmatic control of network configuration, has been critical to meeting the demands of modern data centre network management, and has been the subject of intense focus by the research community, working in conjunction with industry. In this survey, we review Software-Defined Networking research targeting the management and operation of data centre networks.

    Network and Server Resource Management Strategies for Data Centre Infrastructures: A Survey

    The advent of virtualisation and the increasing demand for outsourced, elastic compute charged on a pay-as-you-use basis has stimulated the development of large-scale Cloud Data Centres (DCs) housing tens of thousands of computer clusters. Of the significant capital outlay required for building and operating such infrastructures, server and network equipment account for 45% and 15% of the total cost, respectively, making resource utilisation efficiency paramount in order to increase the operators' Return-on-Investment (RoI). In this paper, we present an extensive survey on the management of server and network resources over virtualised Cloud DC infrastructures, highlighting key concepts and results, and critically discussing their limitations and implications for future research opportunities. We highlight the need for and benefits of adaptive resource provisioning that alleviates reliance on static utilisation prediction models and exploits direct measurement of resource utilisation on servers and network nodes. Coupling such distributed measurement with logically-centralised Software Defined Networking (SDN) principles, we subsequently discuss the challenges and opportunities for converged resource management over converged ICT environments, through unifying control loops to globally orchestrate adaptive and load-sensitive resource provisioning.