30 research outputs found

    Workload-aware live storage migration for clouds

    The emerging open cloud computing model will give users great freedom to dynamically migrate virtualized computing services to, from, and between clouds over the wide area. While this freedom brings many potential benefits, the running services must be minimally disrupted by the migration. Unfortunately, current solutions for wide-area migration incur too much disruption because they significantly slow down storage I/O operations during migration, and the resulting increase in service latency can be very costly to a business. This thesis presents a novel storage migration scheduling algorithm that greatly improves storage I/O performance during wide-area migration. Our algorithm is unique in that it considers each virtual machine's storage I/O workload characteristics, such as temporal locality, spatial locality, and popularity, to compute an efficient data transfer schedule. Using a trace-driven framework, we show that our algorithm provides large performance benefits across a wide range of popular virtual machine workloads.
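    The abstract does not spell out the scheduling algorithm, but the idea of ordering block transfers by workload characteristics can be illustrated with a minimal Python sketch; the function names and the write-popularity-only heuristic are illustrative assumptions, not the thesis's method:

```python
from collections import Counter

def schedule_blocks(trace, num_blocks):
    """Order disk blocks for migration so that frequently written
    ("hot") blocks are transferred last, reducing the chance that they
    are dirtied again and must be re-sent over the wide-area link.

    trace: iterable of (block_id, op) pairs, with op in {"read", "write"}.
    Returns block ids in transfer order."""
    writes = Counter(b for b, op in trace if op == "write")
    # Stable sort: cold (rarely written) blocks first, write-hot last.
    return sorted(range(num_blocks), key=lambda b: writes[b])

# Toy usage: block 2 is write-hot, so it is scheduled last.
trace = [(2, "write"), (2, "write"), (0, "read"), (1, "write")]
print(schedule_blocks(trace, 4))   # -> [0, 3, 1, 2]
```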

    A three phase optimization method for precopy based VM live migration


    Where are we at with Cloud Computing?: A Descriptive Literature Review

    Cloud computing is an exciting area for research because of its relative novelty and exploding growth. In this paper, we present a descriptive literature review and classification scheme for cloud computing research. The review covers 58 articles published since the recent inception of cloud computing, and the volume of work clearly shows that research output has grown explosively this year. The articles are classified, and the results are presented, according to a scheme of four main categories: technological issues, business issues, applications, and general. The results show that although current cloud computing research is still skewed towards technological issues, such as performance, network, and data management, a new research theme regarding the social and organisational implications of cloud computing is emerging. We hope this review will provide a snapshot and reference source for the current state of cloud computing research and stimulate further research interest.

    Geography Aware Virtual Machine Migrations and Replications for Distributed Cloud Data Centers

    Cloud computing provides access to computing resources for a fee, and client applications and services can be hosted in clouds. Cloud computing typically uses a network of geographically dispersed data centers, so the latency between clients and applications depends on geographical distance. Because the geographical distribution of client requests can be random and difficult to predict, the placement of services needs to be reconsidered at run-time through migration. This thesis describes a framework based on software-defined networking (SDN) principles and presents periodically executed algorithms that determine candidate services to migrate or replicate, as well as the target data centers to receive them. An evaluation shows the effectiveness of the algorithms.
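    As a rough illustration of this kind of periodic placement decision (the thesis's concrete algorithms are not given in the abstract), one could choose the data center that minimizes request-weighted distance; all names and numbers below are hypothetical:

```python
def best_data_center(request_counts, distance):
    """Pick the data center minimizing total client-to-service distance,
    weighted by observed request volume per client region.

    request_counts: dict region -> number of requests
    distance: dict (region, dc) -> network distance (e.g., RTT in ms)"""
    dcs = {dc for (_, dc) in distance}
    def cost(dc):
        return sum(n * distance[(r, dc)] for r, n in request_counts.items())
    return min(dcs, key=cost)

# Toy usage: most traffic comes from Europe, so the EU site wins.
counts = {"eu": 900, "us": 100}
dist = {("eu", "dc-eu"): 10, ("us", "dc-eu"): 120,
        ("eu", "dc-us"): 120, ("us", "dc-us"): 10}
print(best_data_center(counts, dist))   # -> "dc-eu"
```

    A real migration policy would also weigh the cost of moving the service itself against the latency gained, which is one reason such algorithms run periodically rather than on every request.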

    VM Selection Process Management for Live Migration in Cloud Data Centers

    With immense success and fast growth over the past few years, cloud computing has established itself as the dominant computing paradigm in the information technology (IT) industry, offering the benefits of distributed resources, resource sharing, and flexible, time-independent access. The proliferation of cloud computing has resulted in the establishment of large-scale data centers across the world, consisting of hundreds of thousands, even millions, of servers. This paradigm gives administrators and IT organizations considerable freedom to dynamically migrate virtualized computing services among physical servers in cloud data centers. Such data centers incur very high investment and operating costs for computing and network devices as well as for energy consumption. Virtualization and virtual machine (VM) migration offer significant benefits across data centers, such as load balancing, server consolidation, online maintenance, and proactive fault tolerance. VM migration hinges on three decisions: when to trigger a migration, which VM to migrate, and which destination node to choose. As a result, dynamic VM migration, as part of resource management, is becoming crucial for achieving optimal resource utilization, maximum throughput, minimum response time, and better scalability, while avoiding over-provisioning of resources and preventing overload. Intelligent host underload/overload detection, VM selection, and VM placement are the primary means of addressing VM migration, and are considered its three most common tasks. This thesis presents novel techniques, models, and algorithms for distributed dynamic consolidation of virtual machines in cloud data centers. The goal is to improve the utilization of computing resources and reduce energy consumption under workload-independent quality-of-service constraints. The proposed approaches are distributed and efficient in managing the energy-performance trade-off.
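    To make the three subproblems concrete, here is a minimal sketch of two widely used heuristics, static-threshold overload detection and minimum-migration-time VM selection; these illustrate the tasks, not the thesis's own algorithms:

```python
def is_overloaded(cpu_history, threshold=0.8):
    """Static-threshold host overload detector: flag the host when its
    most recent CPU utilization sample exceeds the threshold."""
    return cpu_history[-1] > threshold

def select_vm(vms):
    """Minimum-migration-time heuristic: from an overloaded host, pick
    the VM with the least RAM, since live migration time grows with the
    amount of memory to copy over the network.

    vms: list of (vm_id, ram_mb) tuples."""
    return min(vms, key=lambda v: v[1])[0]

# Toy usage: the host at 91% CPU is overloaded; vm-b has the least RAM.
if is_overloaded([0.55, 0.72, 0.91]):
    print(select_vm([("vm-a", 4096), ("vm-b", 1024), ("vm-c", 2048)]))  # -> vm-b
```

    The remaining subproblem, VM placement, then assigns the selected VM to an underloaded host, typically trading off energy (packing VMs densely) against performance (leaving headroom).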

    Design, implementation and experimental evaluation of a network-slicing aware mobile protocol stack

    International Mention in the doctoral degree. With the arrival of new-generation mobile networks, we currently observe a paradigm shift: monolithic network functions running on dedicated hardware are now implemented as software pieces that can be virtualized on general-purpose hardware platforms. This paradigm shift stands on the softwarization of network functions and the adoption of virtualization techniques. Network Function Virtualization (NFV) comprises the softwarization of network elements and the virtualization of these components. It brings multiple advantages: (i) flexibility, allowing easy management of virtual network functions (VNFs), which can be deployed, started, stopped, or updated; (ii) efficiency, as resources can be consumed adequately thanks to the increased flexibility of the network infrastructure; and (iii) reduced costs, due to the ability to share hardware resources. Multiple challenges must be addressed to effectively leverage all these benefits. NFV also gave rise to the concept of the virtual network, resulting in a key enabler of 5G network flexibility: Network Slicing. This new paradigm represents a new way to operate mobile networks, in which the underlying infrastructure is "sliced" into logically separated networks that can be customized to the specific needs of the tenant. This approach also makes it possible to instantiate VNFs at different locations of the infrastructure, choosing their optimal placement based on parameters such as the requirements of the service traversing the slice or the available resources. This decision process is called orchestration and involves all the VNFs within the same network slice; the orchestrator is the entity in charge of managing network slices. Hands-on experiments on network slicing are essential to understand its benefits and limits and to validate design and deployment choices. While some network slicing prototypes have been built for Radio Access Networks (RANs), leveraging the wide availability of radio hardware and open-source software, there is currently no open-source suite for end-to-end network slicing available to the research community. Similarly, orchestration mechanisms must be evaluated to properly validate theoretical solutions addressing diverse aspects such as resource assignment or service composition. This thesis contributes to the study of the evolution of mobile networks with regard to their softwarization and cloudification. We identify software patterns for network function virtualization, including the definition of a novel mobile architecture that decomposes the virtualization architecture by splitting functionality into atomic functions. We then design, implement, and evaluate an open-source network slicing implementation; our results show per-slice customization without paying a price in performance, and the implementation is made available to the research community. Moreover, we propose a framework to flexibly re-orchestrate a virtualized network, allowing on-the-fly re-orchestration without disrupting ongoing services; this framework can greatly improve performance under changing conditions. We evaluate the resulting performance in a realistic network slicing setup, showing the feasibility and advantages of flexible re-orchestration.
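    As a rough sketch of the orchestration decision just described, a greedy placement that assigns each VNF of a slice to the cheapest location satisfying its latency and resource constraints might look as follows; the model, names, and numbers are illustrative assumptions, not the thesis's:

```python
def place_vnfs(vnfs, sites):
    """Greedily assign each VNF of a slice to the cheapest feasible site.

    vnfs:  list of (name, cpu_demand, max_latency_ms)
    sites: dict site -> {"cpu": free_cpu, "latency_ms": ..., "cost": ...}"""
    placement = {}
    for name, cpu, max_lat in vnfs:
        feasible = [s for s, r in sites.items()
                    if r["cpu"] >= cpu and r["latency_ms"] <= max_lat]
        if not feasible:
            raise RuntimeError(f"no feasible site for {name}")
        site = min(feasible, key=lambda s: sites[s]["cost"])
        sites[site]["cpu"] -= cpu           # reserve the resources
        placement[name] = site
    return placement

# Toy usage: the latency-critical VNF must go to the edge site.
sites = {"edge": {"cpu": 4, "latency_ms": 5, "cost": 3},
         "core": {"cpu": 32, "latency_ms": 40, "cost": 1}}
print(place_vnfs([("ran-du", 2, 10), ("upf", 4, 50)], sites))
# -> {'ran-du': 'edge', 'upf': 'core'}
```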
Lastly, following the re-design of network functions envisioned during the study of the evolution of mobile networks, we present a novel pipeline architecture specifically engineered for 4G/5G physical layers virtualized over clouds. The proposed design pursues two objectives: resiliency to unpredictable computing platforms, and parallelization to increase efficiency in multi-core clouds. To this end, we employ techniques such as tight deadline control, jitter-absorbing buffers, predictive Hybrid Automatic Repeat Request, and congestion control. Our experimental results show that our cloud-native approach attains more than 95% of the theoretical spectrum efficiency in hostile environments where state-of-the-art architectures collapse.
This work has been supported by IMDEA Networks Institute. Programa de Doctorado en Ingeniería Telemática, Universidad Carlos III de Madrid. Committee: President: Francisco Valera Pintor; Secretary: Vincenzo Sciancalepore; Member: Xenofon Fouka
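A jitter-absorbing buffer of the kind mentioned above can be sketched as a playout buffer that releases data in sequence at fixed deadlines and skips anything that arrives too late; this is a simplified toy, not the thesis's implementation:

```python
import heapq

class JitterBuffer:
    """Subframes arrive out of order with variable delay, but a
    virtualized PHY must consume them in sequence at a fixed cadence.
    Anything missing at its deadline is skipped, so one slow subframe
    cannot stall the whole pipeline."""

    def __init__(self):
        self.heap = []                       # min-heap of (seq, payload)

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop(self, expected_seq):
        """Called once per deadline tick. Returns the payload for
        `expected_seq`, or None if it missed its deadline."""
        while self.heap and self.heap[0][0] < expected_seq:
            heapq.heappop(self.heap)         # stale subframe: drop it
        if self.heap and self.heap[0][0] == expected_seq:
            return heapq.heappop(self.heap)[1]
        return None                          # not here in time: skip

# Toy usage: subframe 1 is late, so its tick yields None.
buf = JitterBuffer()
buf.push(0, "sf0"); buf.push(2, "sf2")
print(buf.pop(0), buf.pop(1), buf.pop(2))    # -> sf0 None sf2
```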

    A Cognitive Routing framework for Self-Organised Knowledge Defined Networks

    This study investigates the applicability of machine learning methods to routing protocols for achieving rapid convergence in self-organized knowledge-defined networks. The research explores the constituents of the Self-Organized Networking (SON) paradigm for 5G and beyond, aiming to design a routing protocol that complies with the SON requirements. Further, it exploits a contemporary discipline called Knowledge-Defined Networking (KDN) to extend routing capability by calculating the "most reliable" path rather than the shortest one. The research identifies the potential key areas and possible techniques for meeting these objectives by surveying the state of the art of the relevant fields, such as QoS-aware routing, hybrid SDN architectures, intelligent routing models, and service migration techniques. The design phase focuses primarily on the mathematical modelling of the routing problem and approaches the solution by optimizing at the structural level. The work contributes the Stochastic Temporal Edge Normalization (STEN) technique, which fuses link and node utilization for cost calculation; MRoute, a hybrid routing algorithm for SDN that leverages STEN to provide constant-time convergence; and Most Reliable Route First (MRRF), which uses a Recurrent Neural Network (RNN) to approximate route reliability as its routing metric. Additionally, the research outcomes include a cross-platform SDN integration framework (SDN-SIM) and a secure migration technique for containerized services in a Multi-access Edge Computing environment using Distributed Ledger Technology. Future work targets the development of 6G standards and compliance with Industry 5.0, extending the present outcomes in light of Deep Reinforcement Learning and Quantum Computing.
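    The abstract does not give STEN's formula, so the sketch below only illustrates the underlying idea: fuse link and node utilization into a single edge cost and feed it to a standard shortest-path search. The weighting scheme and graph encoding are assumptions; MRRF would instead maximize an RNN-predicted reliability metric:

```python
import heapq

def fused_cost(link_util, node_util, alpha=0.5):
    """Illustrative cost fusing link and node utilization: a congested
    link or a busy endpoint node both make the edge more expensive."""
    return alpha * link_util + (1 - alpha) * node_util

def cheapest_path(graph, src, dst):
    """Plain Dijkstra over fused edge costs.

    graph: dict node -> list of (neighbor, link_util, neighbor_node_util)."""
    dist, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:
        d, u, path = heapq.heappop(heap)
        if u == dst:
            return d, path
        for v, lu, nu in graph.get(u, []):
            nd = d + fused_cost(lu, nu)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v, path + [v]))
    return float("inf"), []

# Toy usage: the direct A-C link is congested, so A-B-C wins.
g = {"A": [("C", 0.9, 0.8), ("B", 0.2, 0.1)], "B": [("C", 0.2, 0.1)]}
print(cheapest_path(g, "A", "C"))   # -> (~0.3, ['A', 'B', 'C'])
```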

    Revisiting Isolation For System Security And Efficiency In The Era Of Internet Of Things

    Isolation is a fundamental paradigm for secure and efficient resource sharing on a computer system. However, isolation mechanisms in traditional cloud computing platforms are heavyweight or simply not feasible to apply to the computing environment of the Internet of Things (IoT). Most IoT devices have limited resources, and their servers are less powerful than cloud servers but are widely distributed over the edge of the Internet. Revisions to the traditional isolation mechanisms are needed to improve system security and efficiency in these computing environments. The first project explores container-based isolation for the emerging edge computing platforms. We show a performance issue of live migration between edge servers, where the file system transmission becomes a bottleneck. We then propose a solution that leverages a layered file system for synchronization before the migration starts, avoiding the impractical networked shared file system used in the traditional solution; the evaluation shows that the migration time is reduced by 56%–80%. In the second project, we propose a lightweight security monitoring service for edge computing platforms, based on the virtual machine isolation technique. Our framework is designed to monitor program activities from underneath an operating system, which improves its transparency and avoids the cost of embedding separate monitor modules into each layer inside the operating system. Furthermore, the monitor runs in a single-process virtual machine that requires only ≤ 32 MB of memory, reduces scheduling overhead, and saves a significant amount of physical memory, while the performance overhead averages 2.7%. In the third project, we co-design the hardware and software system stack to achieve efficient fine-grained intra-address-space isolation. We propose a systematic solution to partition a legacy program into multiple security compartments, which we call capsules, with isolation at byte granularity; vulnerabilities in one capsule are unlikely to affect another. The isolation is guaranteed by hardware-based ownership types tagged to every byte in memory. The ownership types are initialized, propagated, and checked by combining static and dynamic analysis techniques. Finally, our co-design approach removes most human refactoring effort while avoiding the untrustworthiness as well as the cost of pure software approaches. In brief, this proposal explores a spectrum of isolation techniques and their improvements for the IoT computing environment. With our explorations, we have shown the necessity of revising the traditional isolation mechanisms to improve system efficiency and security for the edge and IoT platforms. We expect that many more opportunities will be discovered and that various revised or new isolation mechanisms for the edge and IoT platforms will emerge soon.
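    A minimal sketch of the first project's pre-synchronization idea, assuming rsync-style copying and hypothetical stop/start callbacks (the thesis's actual mechanism may differ):

```python
import subprocess

def migrate_container(container_id, layers, dest_host, stop, start):
    """Copy the read-only image layers while the container is still
    running, so the stop-and-copy phase only has to transfer the thin
    writable top layer. `layers` is ordered bottom (read-only) to top
    (writable); `stop`/`start` are hypothetical runtime callbacks."""
    # Phase 1 (container still running): ship the bulky read-only layers.
    for layer in layers[:-1]:
        subprocess.run(["rsync", "-a", layer, f"{dest_host}:{layer}"], check=True)
    # Phase 2 (downtime): freeze, then ship only the small writable layer.
    stop(container_id)
    subprocess.run(["rsync", "-a", layers[-1], f"{dest_host}:{layers[-1]}"], check=True)
    start(container_id, dest_host)
```

    Because the read-only layers are already identical on both hosts after phase 1, downtime is dominated by the small writable layer alone, which is consistent with the reported 56%–80% reduction in migration time.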

    Fault-tolerant distributed transactions for partitioned OLTP databases

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 103-112).
    This thesis presents Dtxn, a fault-tolerant distributed transaction system designed specifically for building online transaction processing (OLTP) databases. Databases have traditionally been designed as general-purpose data processing tools; by being designed only for OLTP workloads, Dtxn can be more efficient. It is designed to support very large databases by partitioning data across a cluster of commodity servers in a data center. Combining multiple servers allows systems built with Dtxn to be cost-effective, highly available, scalable, and fault-tolerant. Dtxn provides three novel features. First, it provides reusable infrastructure for building a distributed OLTP database out of single-machine databases. This allows developers to take a specialized backend storage engine and use it across multiple machines, without needing to re-implement the distributed transaction infrastructure. We used Dtxn to build four different applications: a simple key/value store, a specialized TPC-C implementation, a main-memory OLTP database, and a traditional disk-based OLTP database. Second, Dtxn provides a novel concurrency control mechanism called speculative concurrency control, designed for main-memory OLTP workloads that are primarily composed of transactions with a single round of communication between the application and the database. Speculative concurrency control executes one transaction at a time, with no concurrency control overhead. In cases where there may be stalls due to network communication, it speculates future transactions. Our results show that this provides significantly better throughput than traditional two-phase locking, outperforming it by a factor of two on the TPC-C benchmark. Finally, Dtxn supports live migration, allowing part of the data on one server to be moved to another server while processing transactions. Our experiments show that our approach has nearly no visible impact on throughput or latency when moving data under moderate to high loads. It has significantly less impact than the best commercially available systems when the database is overloaded. The period of time where the throughput is reduced is less than half as long as failing over to another replica or using virtual machine migration.
    by Evan Philip Charles Jones. Ph.D.
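    Speculative concurrency control can be illustrated with a single-threaded toy: transactions execute serially with no locks, and the engine runs queued transactions speculatively only while a distributed commit decision is in flight. The callbacks and the polling-style commit_remote API are illustrative assumptions, not Dtxn's interface:

```python
def run_with_speculation(queue, execute, commit_remote):
    """execute(txn) applies a transaction and returns an undo closure;
    commit_remote(txn) returns "PENDING" while the network round trip
    is outstanding, then "COMMIT" or "ABORT"."""
    while queue:
        txn = queue.pop(0)
        undo = execute(txn)                    # no locks: runs alone
        speculated = []
        decision = commit_remote(txn)
        while decision == "PENDING":           # network stall: speculate
            if queue:
                nxt = queue.pop(0)
                speculated.append((nxt, execute(nxt)))
            decision = commit_remote(txn)
        if decision == "ABORT":
            # Cascading rollback: undo speculated work (newest first),
            # requeue it, then undo the aborted transaction itself.
            for nxt, u in reversed(speculated):
                u()
                queue.insert(0, nxt)
            undo()
        # On COMMIT the speculated results become safe to release.
```

    The win over two-phase locking comes from the common case: single-round transactions commit without any locking overhead, and the CPU stays busy during network stalls instead of idling.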