
    Migration of networks in multi-cloud environment

    Master's thesis, Engenharia Informática (Arquitetura, Sistemas e Redes de Computadores), Universidade de Lisboa, Faculdade de Ciências, 2018.

The way data centers and computational resources are managed has been changing. The exclusive use of physical servers and the complex processes for provisioning software are a thing of the past; it is now possible, and simple, to use third-party resources on demand, in the cloud. The central technique that enabled this evolution was virtualization, an abstraction of computational resources that makes software more independent of the hardware on which it runs. Technological advances in this area made the migration of virtual machines possible, further streamlining resource management and maintenance. The ability to migrate virtual machines freed software from the physical infrastructure, easing a range of tasks such as maintenance, load balancing and fault handling. Today, virtual machine migration is an essential tool for managing both public and private clouds. The large-scale computing systems that run in the cloud are complex, composed of multiple parts that work together to achieve their goals. The fact that these parts are tightly coupled puts pressure on the communication systems and networks that support them. This dependence on the communication infrastructure limits the flexibility of virtual machine migration, because network management today is inflexible: it restricts VM migration to a single subnet, for example, or requires a network reconfiguration process for each migration, a process that is difficult, typically manual and error-prone. Ideally, the infrastructure that virtual machines need in order to communicate would also be virtual, allowing both the virtual machines and the virtual network to be migrated. Abstracting the communication resources would give the whole system the flexibility to be moved to another location. To this end, network migration using Software-Defined Networking (SDN) was recently proposed; SDN is a new paradigm that separates the forwarding infrastructure (data plane) from the control plane. In an SDN, the responsibility for control decisions is delegated to a logically centralized element, the controller, which has a global view of the network and its state. This separation of the control plane from the forwarding process has made network virtualization easier. However, recent proposals for network virtualization using SDN have limitations: in particular, they are restricted to a single data center or service provider. This dependence is a problem. First, relying on a single provider or cloud limits availability, effectively turning the provider into a single point of failure. Second, certain services are severely constrained by using only one cloud, due to special requirements (privacy, for example) or even legal ones (which may demand, for instance, that user data be stored in the users' own country). Ideally, it would be possible to take advantage of multiple clouds and, transparently, exploit the strengths of each (some for their lower costs, others for their location, for example). This would guarantee higher availability, since the failure of one cloud would not compromise the whole system. It could also lower costs, by exploiting the price variation among clouds over time.

In this multi-cloud context, one of the major challenges is migrating resources between clouds so as to take advantage of the available resources. In an SDN environment in particular, network migration is problematic because the controller needs to communicate with the physical network elements to install new policies, and the elements need to inform the controller of new events. If the ability of the controller and the network elements to communicate is degraded (for example, by high communication latencies), the operation of the network is degraded as well. The work proposed here aims to develop orchestration algorithms for virtual network migration that minimize controller-switch communication latency in multi-cloud environments. To that end, an optimal solution was developed using linear programming, along with several heuristics. The linear programming solution, being optimal, causes the least possible disruption of the connection to the controller; however, its computational complexity limits its usability, leading to long execution times. For this reason, heuristics are proposed that aim to solve the problem satisfactorily and in useful time. Our experimental results show that, across the various topologies tested, some heuristics achieve results close to the optimal solution, and they do so with considerably lower execution times.

The way datacenters and computer resources are managed has been changing, from bare metal servers and complex deployment processes to on-demand cloud resources and applications. The main technology behind this evolution was virtualization. By abstracting the hardware, virtualization decoupled software from the hardware it runs on. Virtual machine (VM) migration further increased the flexibility of management and maintenance procedures. Tasks like maintenance, load balancing and fault handling were made easier. Today, the migration of virtual machines is a fundamental tool in public and private clouds. However, as VMs rarely act alone, when the VMs migrate, the virtual networks should migrate too. Solutions to this problem using traditional networks have several limitations: they are integrated with the devices and are hard to manage. For these reasons, the logical centralisation offered by Software-Defined Networking (SDN) architectures has recently been shown to be an enabler for the transparent migration of networks. In an SDN, a controller remotely controls the network switches by installing flow rules that implement the policies defined by the network operator. Recent proposals are a good step forward but have problems: namely, they are limited to a single data center or provider. The user's dependency on a single cloud provider is a fundamental limitation. A large number of incidents involving accidental and malicious faults in cloud infrastructures show that relying on a single provider can lead to the creation of internet-scale single points of failure for cloud-based services. Furthermore, giving clients the power to choose how to use their cloud resources and the flexibility to easily change cloud providers is of great value, enabling clients to lower costs, tolerate cloud-wide outages and enhance security.
The objective of this dissertation is therefore to design, implement and evaluate solutions for network migration in an environment of multiple clouds. The main goal is to schedule the migration of a network in such a way that the migration process has the least possible impact on the SDN controller's ability to manage the network. This is achieved by creating a migration plan that aims to minimize the experienced control plane latency (i.e., the latency between the controller and the switches). We have developed an optimal solution based on a linear program, and several heuristics. Our results show that it is possible to achieve results close to the optimal solution, within reasonable time frames.
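The abstract does not spell out the exact optimization model, so the following is only a minimal sketch of a placement-style integer linear program, written in Python with the PuLP library: each virtual switch is hosted by exactly one candidate cloud, and the objective is the total controller-to-switch latency. The `switches`, `clouds`, `latency` and `capacity` data, and the capacity constraint itself, are illustrative assumptions, not the dissertation's formulation (which schedules a migration plan rather than a static placement).

```python
# Illustrative sketch only: a simplified ILP that places each virtual switch in one
# cloud so as to minimize total controller-to-switch latency. The data and names
# below (switches, clouds, latency, capacity) are assumptions for illustration.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

switches = ["s1", "s2", "s3"]            # virtual switches to (re)place
clouds = ["cloudA", "cloudB"]            # candidate destination clouds

# Hypothetical controller-to-switch latency (ms) if a switch is hosted in a given cloud.
latency = {("s1", "cloudA"): 2, ("s1", "cloudB"): 30,
           ("s2", "cloudA"): 2, ("s2", "cloudB"): 25,
           ("s3", "cloudA"): 3, ("s3", "cloudB"): 20}

capacity = {"cloudA": 2, "cloudB": 2}    # max switches each cloud can host (assumed)

prob = LpProblem("switch_placement", LpMinimize)
x = {(s, c): LpVariable(f"x_{s}_{c}", cat=LpBinary) for s in switches for c in clouds}

# Objective: total control-plane latency experienced by the placed switches.
prob += lpSum(latency[s, c] * x[s, c] for s in switches for c in clouds)

# Each switch is placed in exactly one cloud; clouds respect their capacity.
for s in switches:
    prob += lpSum(x[s, c] for c in clouds) == 1
for c in clouds:
    prob += lpSum(x[s, c] for s in switches) <= capacity[c]

prob.solve(PULP_CBC_CMD(msg=False))
for (s, c), var in x.items():
    if var.value() == 1:
        print(f"{s} -> {c}")
```

A heuristic alternative, in the same spirit as the abstract's faster methods, would simply assign each switch greedily to its lowest-latency cloud with remaining capacity.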

    On the detection of virtual machine introspection from inside a guest virtual machine

    Thesis (Ph.D.), University of Alaska Fairbanks, 2015. With the increased prevalence of virtualization in the modern computing environment, the security of that technology becomes of paramount importance. Virtual Machine Introspection (VMI) is one of the technologies that has emerged to provide security for virtual environments by examining and then interpreting the state of an active Virtual Machine (VM). VMI has seen use in systems administration, digital forensics, intrusion detection, and honeypots. As with any technology, VMI has both productive and harmful uses. The research presented in this dissertation aims to enable a guest VM to determine whether it is under examination by an external VMI agent. To determine if a VM is under examination, a series of statistical analyses is performed on timing data generated by the guest itself.
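The abstract does not name the specific statistical tests used; the Python sketch below only illustrates the general idea of in-guest timing analysis: repeatedly time a fixed workload, then compare the observed distribution with a previously recorded introspection-free baseline using a standard two-sample test. The workload, sample size and the choice of Welch's t-test are assumptions for illustration.

```python
# Illustrative sketch: look for anomalous slowdowns or jitter in a fixed workload by
# comparing current timing samples against an introspection-free baseline. The
# baseline procedure and Welch's t-test are assumptions, not the dissertation's method.
import time
from statistics import mean, stdev
from scipy.stats import ttest_ind

def time_workload(iterations: int = 200) -> list:
    """Time a fixed, CPU-bound workload many times (nanoseconds per run)."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        sum(i * i for i in range(10_000))   # arbitrary deterministic work
        samples.append(time.perf_counter_ns() - start)
    return samples

# Baseline assumed to be collected earlier on the same guest with no VMI agent attached.
baseline = time_workload()

# ... later, during normal operation ...
current = time_workload()

# Welch's t-test: a very small p-value suggests the timing distribution has shifted,
# which *may* indicate external interference such as introspection pauses.
stat, p_value = ttest_ind(current, baseline, equal_var=False)
print(f"baseline mean={mean(baseline):.0f} ns, sd={stdev(baseline):.0f} ns")
print(f"current  mean={mean(current):.0f} ns, sd={stdev(current):.0f} ns")
if p_value < 0.01:
    print("Timing distribution shifted significantly; possible VMI activity.")
else:
    print("No significant timing anomaly detected.")
```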

    Resource-Efficient Replication and Migration of Virtual Machines.

    Continuous replication and live migration of Virtual Machines (VMs) are two vital tools in a virtualized environment, but they are resource-expensive. Continuously replicating a VM's checkpointed state to a backup host maintains high availability (HA) of the VM despite host failures, but checkpoint replication can generate significant network traffic. Each replicated VM also incurs a 100% memory overhead, since the backup unproductively reserves the same amount of memory to hold the redundant VM state. Live migration, though widely used for load balancing, power saving, etc., can also generate excessive network traffic by transferring VM state iteratively. In addition, it can incur a long completion time and degrade application performance. This thesis explores ways to replicate VMs for HA using resources efficiently, and to migrate VMs fast, with minimal execution disruption and efficient resource usage.

First, we investigate the tradeoffs in using different compression methods to reduce the network traffic of checkpoint replication in an HA system. We evaluate gzip, delta and similarity compression based on metrics that are specifically important in an HA system, and then suggest guidelines for their selection.

Next, we propose HydraVM, a storage-based HA approach that eliminates the unproductive memory reservation made in backup hosts. HydraVM maintains a recent image of a protected VM in shared storage by taking and consolidating incremental VM checkpoints. When a failure occurs, HydraVM quickly resumes the execution of the failed VM by loading a small amount of essential VM state from the storage; as the VM executes, the VM state not yet loaded is supplied on demand.

Finally, we propose application-assisted live migration, which skips the transfer of VM memory that need not be migrated to execute the running applications at the destination. We develop a generic framework for the proposed approach, and then use the framework to build JAVMM, a system that migrates VMs running Java applications while skipping the transfer of garbage in Java memory. Our evaluation results show that, compared to Xen live migration, which is agnostic of running applications, JAVMM can reduce the completion time, network traffic and application downtime caused by Java VM migration, all by up to over 90%.

Ph.D. thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111575/1/karenhou_1.pd
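As a hedged illustration of the checkpoint-compression tradeoff examined in the first part of the thesis, the Python sketch below compares plain gzip compression of a dirty memory page with a simple delta encoding relative to the copy the backup already holds. The 4 KiB page size and the XOR-then-gzip delta scheme are assumptions, not the thesis's actual implementation.

```python
# Illustrative sketch: compare gzip vs. a simple delta encoding for replicating a
# dirty memory page to a backup host. The 4 KiB page size and the XOR-then-gzip
# delta scheme are assumptions for illustration only.
import gzip
import os

PAGE_SIZE = 4096

def gzip_compress(page: bytes) -> bytes:
    """Self-contained compression of the new page contents."""
    return gzip.compress(page)

def delta_compress(old_page: bytes, new_page: bytes) -> bytes:
    """Delta encoding: XOR against the copy the backup already holds, then gzip.
    A page that changed only slightly XORs to mostly zeros and compresses very well."""
    diff = bytes(a ^ b for a, b in zip(old_page, new_page))
    return gzip.compress(diff)

# Simulate a page in which only a few bytes changed since the last checkpoint.
old_page = os.urandom(PAGE_SIZE)
new_page = bytearray(old_page)
new_page[100:108] = b"UPDATED!"
new_page = bytes(new_page)

print(f"raw page:      {PAGE_SIZE} bytes")
print(f"gzip only:     {len(gzip_compress(new_page))} bytes")
print(f"delta + gzip:  {len(delta_compress(old_page, new_page))} bytes")
```

On a nearly random page like this one, gzip alone gains little, while the delta against the previously replicated copy shrinks to a few tens of bytes, which is the kind of tradeoff such a study weighs against CPU cost and checkpoint frequency.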

    Distributed Shared Memory based Live VM Migration

    Cloud computing is the new trend in computing services and the IT industry; this computing paradigm has numerous benefits, making it possible to utilize IT infrastructure resources efficiently and to reduce service costs. The key feature of cloud computing is the mobility and scalability of computing resources, achieved by managing virtual machines. Virtualization decouples the software from the hardware and makes it possible to manage software and hardware resources easily, without service interruption. Live virtual machine migration is an essential tool for dynamic resource management in current data centers. Live VM migration is defined as the process of moving a running virtual machine or application between different physical machines without disconnecting the client or application. Many techniques have been developed to achieve this goal, evaluated using several metrics (total migration time, downtime, amount of data sent and application performance) that measure the performance of live migration. These metrics capture the quality of the VM services that clients care about, since the clients' main goal is to keep application performance with minimal service interruption.

Pre-copy live VM migration is done in four phases: preparation, iterative migration, stop-and-copy, and resume-and-commit. During the preparation phase, the source and destination physical servers are selected, the resources in the destination server are reserved, and the VM to be migrated is selected; the cloud manager is responsible for all of these decisions. During the iterative migration phase, the VM's memory state is transferred to the target node while the migrated VM continues to execute and dirties its memory. In the stop-and-copy phase, the VM's virtual CPU is stopped and the processor and network states are transferred to the destination host; service downtime results from stopping VM execution and moving this state. Finally, in the resume-and-commit phase, the migrated VM resumes execution on the destination physical host and the remaining memory pages are pulled by the destination machine from the source machine; the source machine's resources are then released.

In this thesis, pre-copy live VM migration using a Distributed Shared Memory (DSM) computing model is proposed. The setup is built using two identical computation nodes that host all the services of the proposed architecture, namely the virtualization infrastructure (the XenServer 6.2 hypervisor), the shared storage server (a network file system), and the DSM and High Performance Computing (HPC) cluster. The custom DSM framework is based on Grappa, a low-latency memory-update framework. Moreover, the HPC cluster is used to parallelize the workload across the CPUs of the computation nodes, employing OpenMPI and MPI libraries to support parallelization and auto-parallelization. The DSM allows the cluster CPUs to access the same memory pages, resulting in fewer memory updates and thus reducing the amount of data transferred over the network. The proposed model achieves a good improvement in the live VM migration metrics: downtime is reduced by 50% for an idle Windows VM workload and by 66.6% for an idle Ubuntu Linux workload. In general, the proposed model not only reduces the downtime and the total amount of data sent, but also does not degrade other metrics such as the total migration time and application performance.
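The pre-copy phases described above can be summarized in the following Python-style sketch. It models only the generic pre-copy loop; all helper calls (`reserve_resources`, `get_all_pages`, `get_dirty_pages`, `transfer`, and so on), the thresholds, and the hypervisor interface are hypothetical placeholders, and the thesis's DSM/Grappa and HPC components are not modeled.

```python
# Illustrative sketch of the standard pre-copy live-migration loop described above.
# All helpers and the thresholds are hypothetical placeholders, not the thesis's code.

MAX_ROUNDS = 30          # give up iterating after this many rounds (assumed)
DIRTY_THRESHOLD = 64     # stop-and-copy once the dirty set is this small, in pages (assumed)

def precopy_migrate(vm, source, destination):
    # Preparation phase: reserve resources at the destination (assumed API).
    destination.reserve_resources(vm)

    # Iterative migration phase: push the whole memory image once, then keep
    # re-sending the pages the still-running VM dirties.
    destination.transfer(source.get_all_pages(vm))
    for _ in range(MAX_ROUNDS):
        dirty = source.get_dirty_pages(vm)
        if len(dirty) <= DIRTY_THRESHOLD:
            break
        destination.transfer(dirty)

    # Stop-and-copy phase: pause the VM, send the remaining dirty pages and the
    # CPU/network state. The wall-clock time of this phase is the downtime.
    source.stop_vm(vm)
    destination.transfer(source.get_dirty_pages(vm))
    destination.transfer_cpu_and_network_state(source.get_cpu_state(vm))

    # Resume-and-commit phase: start the VM at the destination, pull any remaining
    # pages on demand, and release the source machine's resources.
    destination.resume_vm(vm)
    source.release(vm)
```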