13 research outputs found

    Network Virtualization Over Elastic Optical Networks: A Survey of Allocation Algorithms

    Network virtualization has emerged as a paradigm for cloud computing services by providing key functionalities such as abstraction of network resources, which are kept hidden from the cloud service user, isolation of different cloud computing applications, flexibility in terms of resource granularity, and on-demand setup/teardown of services. In parallel, flex-grid (also known as elastic) optical networks have become an alternative for dealing with constant traffic growth. These advances have triggered research on network virtualization over flex-grid optical networks. Effort has focused on the design of flexible and virtualized devices, on the definition of network architectures, and on virtual network allocation algorithms. In this chapter, a survey of virtual network allocation algorithms over flexible-grid networks is presented. Proposals are classified according to a taxonomy made of three main categories: performance metrics, operation conditions, and the type of service offered to users. Based on this classification, the work also identifies open research areas such as multi-objective optimization approaches, distributed architectures, meta-heuristics, and reconfiguration and protection mechanisms for virtual networks over elastic optical networks.
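    A minimal sketch of how the survey's three-category taxonomy could be recorded for each surveyed proposal; the specific category values below are illustrative assumptions rather than the survey's own lists:

```python
# Illustrative record for tagging each surveyed allocation proposal with the
# survey's three categories; the enum members below are assumed examples,
# not the survey's exhaustive lists.
from dataclasses import dataclass
from enum import Enum

class PerformanceMetric(Enum):
    BLOCKING_PROBABILITY = "blocking probability"
    SPECTRUM_USAGE = "spectrum usage"

class OperationCondition(Enum):
    STATIC_DEMANDS = "static demands"
    DYNAMIC_DEMANDS = "dynamic demands"

class ServiceType(Enum):
    BEST_EFFORT = "best effort"
    PROTECTED = "protected / survivable"

@dataclass
class AllocationProposal:
    name: str
    metric: PerformanceMetric
    operation: OperationCondition
    service: ServiceType

example = AllocationProposal(
    name="hypothetical heuristic",
    metric=PerformanceMetric.BLOCKING_PROBABILITY,
    operation=OperationCondition.DYNAMIC_DEMANDS,
    service=ServiceType.BEST_EFFORT,
)
```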

    Host-Based Virtual Networks Management in Cloud Datacenters

    Infrastructure management is of key importance in a wide array of computer and network environments. The use of virtualization in cloud datacenters has driven communications and computing convergence towards a common operational entity. Failure to effectively manage the underlying infrastructure impedes the provisioning of a successful service. Information models facilitate infrastructure management, and current solutions can be applied effectively in most datacenter scenarios, apart from cases where the networking architecture relies heavily on systems virtualization. In this paper we propose an information model for managing virtual network architectures in which hypervisors and computing server resources are deployed as the basis of the networking layer. We provide a successful proof of concept by managing a virtual machine-based network infrastructure acting as an IP routing platform using statistical methods. Our proposal enables dynamic reconfiguration of the allocated infrastructure resources, adapting in real time to variations in the imposed workload.
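    A minimal sketch, not the paper's information model, of how host-based virtual network elements might be represented, with a simple statistical trigger for reconfiguration; class names, attributes, and the 2-sigma threshold are assumptions:

```python
# Sketch of a host-based virtual network information model with a simple
# statistical reconfiguration trigger; names, attributes, and the threshold
# are illustrative assumptions, not the paper's model.
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class VirtualRouter:
    name: str
    vcpus: int
    cpu_samples: list = field(default_factory=list)  # recent CPU load samples (%)

    def overloaded(self, threshold_sigma: float = 2.0) -> bool:
        """Flag the router when the latest sample deviates from the baseline
        mean by more than threshold_sigma standard deviations."""
        if len(self.cpu_samples) < 3:
            return False
        baseline, latest = self.cpu_samples[:-1], self.cpu_samples[-1]
        return latest > mean(baseline) + threshold_sigma * stdev(baseline)

@dataclass
class Hypervisor:
    host: str
    routers: list = field(default_factory=list)

    def routers_to_scale(self):
        """Routers whose workload suggests reallocating resources to them."""
        return [r for r in self.routers if r.overloaded()]

# Example: one VM-based router whose latest sample spikes above its baseline.
r = VirtualRouter("vr1", vcpus=2, cpu_samples=[20, 22, 21, 19, 55])
print(Hypervisor("host1", [r]).routers_to_scale())
```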

    Continuous and Concurrent Network Connection for Hardware Virtualization

    This project addresses network connectivity in virtualization for cloud computing. Each Virtual Machine is able to access the network concurrently and obtain continuous internet connectivity without any disruption. The project proposes a new method of sharing the Network Interface Card (NIC) among the Virtual Machines, with each of them having full access to it at near-native bandwidth. With this, cloud computing can perform resource allocation more effectively. This is essential for migrating each Operating System (Virtual Machine) that resides on one physical machine to another physical machine without disrupting its internet or network connection.

    Per-Client Network Performance Isolation in VDE-based Cloud Computing Servers

    In a cloud server where multiple virtual machines owned by different clients are co-hosted, excessive traffic generated by a small group of clients may well jeopardize the quality of service of other clients. It is thus very important to provide per-client network performance isolation in a cloud computing environment. Unfortunately, existing techniques are not effective enough for a large cloud computing system, since they are difficult to adopt at scale and often require non-trivial modifications to established network protocols. To overcome these difficulties, we propose per-client network performance isolation using VDE (Virtual Distributed Ethernet) as a base framework. Our approach begins with per-client weight specification and supports client-aware fair-share scheduling and packet dispatching for both incoming and outgoing traffic. It also provides hierarchical fairness between a client and its virtual machines. Our approach supports full virtualization of a guest OS, wide-scale adoption, limited modification to the existing system, low run-time overhead, and work-conserving servicing. Our experimental results show the effectiveness of the proposed approach: every client received at least 99.4% of the bandwidth share specified by its weight.
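    A minimal sketch, not the paper's VDE implementation, of the hierarchical weighted-sharing idea: link bandwidth is first divided among clients in proportion to their weights, and each client's share is then divided among its virtual machines. Names and numbers are illustrative:

```python
# Hierarchical weighted bandwidth shares: clients split the link by weight,
# then each client's VMs split that client's share by their own weights.
def hierarchical_shares(link_mbps, clients):
    """clients: {client_name: {"weight": w, "vms": {vm_name: vm_weight}}}"""
    total_weight = sum(c["weight"] for c in clients.values())
    shares = {}
    for name, c in clients.items():
        client_share = link_mbps * c["weight"] / total_weight
        vm_total = sum(c["vms"].values())
        shares[name] = {
            vm: client_share * w / vm_total for vm, w in c["vms"].items()
        }
    return shares

if __name__ == "__main__":
    demo = {
        "tenant_a": {"weight": 3, "vms": {"vm1": 1, "vm2": 1}},
        "tenant_b": {"weight": 1, "vms": {"vm3": 1}},
    }
    # tenant_a gets 750 Mbps (375 per VM), tenant_b gets 250 Mbps.
    print(hierarchical_shares(1000, demo))
```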

    Challenges in real-time virtualization and predictable cloud computing

    Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduced operating costs, server consolidation, flexible system configuration, and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared, and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advances in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.

    Improving reliability of service oriented systems with consideration of cost and time constraints in clouds

    Web service technology is increasingly popular for the implementation of service-oriented systems. Additionally, cloud computing platforms, as an efficient and readily available environment, can provide the computing, networking, and storage resources that reduce the budget companies need to deploy and manage their systems. Therefore, more service-oriented systems are being migrated to and deployed in clouds. However, these applications need to be improved in terms of reliability, since certain components have low reliability. Fault tolerance approaches can improve software reliability, but they require more redundant units, which increases the cost and the execution time of the entire system. Therefore, a migration and deployment framework that applies fault tolerance approaches under global constraints on cost and execution time is needed. This work proposes a migration and deployment framework to guide the designers of service-oriented systems in improving reliability under global constraints in clouds. A multilevel redundancy allocation model is adopted by the framework to assign redundant units to the structure of systems with fault tolerance approaches. An improved genetic algorithm is utilised to generate a migration plan that takes the execution time of systems and the cost constraints into consideration. Fault tolerance approaches (such as NVP, RB, and Parallel) can be integrated into the framework so as to improve the reliability of the components at the bottom level. Additionally, a new encoding mechanism based on linked lists is proposed to improve the performance of the genetic algorithm by reducing the movement of redundant units in the model. The experiments compare the performance of the encoding mechanisms and of the model integrated with different fault tolerance approaches. The empirical studies show that the proposed framework, with a multilevel redundancy allocation model integrated with fault tolerance approaches, can generate migration plans for service-oriented systems in clouds with consideration of cost and execution time.
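    A minimal sketch, not the paper's framework, of how one candidate redundancy allocation might be evaluated: each component receives several redundant units in parallel (as in NVP/RB-style fault tolerance), and a plan is only acceptable if it stays within a global cost budget. All component names and numbers are illustrative assumptions:

```python
# Evaluate a redundancy allocation plan: parallel redundancy per component,
# series composition across components, and a global cost budget.
from math import prod

def parallel_reliability(unit_reliability: float, k: int) -> float:
    """Reliability of k identical units in parallel: 1 - (1 - r)^k."""
    return 1 - (1 - unit_reliability) ** k

def evaluate_plan(components, plan, cost_budget):
    """components: {name: (unit_reliability, unit_cost)}; plan: {name: k}."""
    cost = sum(components[c][1] * plan[c] for c in plan)
    if cost > cost_budget:
        return None  # infeasible under the global cost constraint
    # Series system: overall reliability is the product of component reliabilities.
    return prod(parallel_reliability(components[c][0], plan[c]) for c in plan)

# Hypothetical components (reliability, cost) and a candidate plan.
components = {"auth": (0.95, 2), "billing": (0.90, 3)}
plan = {"auth": 2, "billing": 3}
print(evaluate_plan(components, plan, cost_budget=20))  # ~0.9965
```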

    Virtualization of multicast services in WiMAX networks

    Multicast service is one of the methods used to manage bandwidth efficiently when sending multimedia content. To improve bandwidth utilisation, virtualization is often invoked because of its additional features, such as bandwidth sharing and support for services that require high volumes of transactional data. Currently, network providers are concerned with using the limited capacity of wireless networks efficiently while providing better quality of service. The virtualization design of a multicast service framework should satisfy several objectives. For example, it should enable the interchange of service delivery between multiple networks over one shareable network infrastructure. It should also ensure efficient use of network resources and guarantee users' demands for Quality of Service (QoS). Thus, the design of a multicast service virtualization framework is a complex research problem. Because of these bandwidth concerns, a strong focus has been placed on the technical issues that facilitate virtualization in wireless networks. A well-designed virtualized network guarantees users the required quality of service. Similarly, multicast service virtualization is invoked to improve bandwidth utilisation in wireless networks. As wireless links are unstable, packet loss is unavoidable when multicast service-oriented virtual artefacts are incorporated into wireless networks. In this thesis, a virtualized multicast framework was modelled using the Generalized Assignment Problem (GAP) methodology. Mixed Integer Linear Programming (MILP) was implemented in MATLAB to solve the GAP model and optimise the allocation of multicast traffic to the appropriate virtual networks. The developed model thus allows users to receive interchangeable services offered by multiple networks. Furthermore, Network Simulator version 3 (NS-3) was used to evaluate the performance of the virtualized multicast framework. Three applications, namely voice over IP (VoIP), video streaming, and file download, were used to evaluate the performance of the multicast service virtualization framework in Worldwide Interoperability for Microwave Access (WiMAX) networks using NS-3. The performance evaluation compared cases in which MILP is used against cases in which it is not. The experimental results reveal that virtual networks perform well when multicast traffic is sent over a single virtual network instead of over multiple virtual networks. Similarly, the results show that bandwidth is used efficiently because the multicast traffic is not delivered through multiple virtual networks. Overall, the concepts, investigations, and model presented in this thesis can enable mobile network providers to achieve efficient use of bandwidth and provide the necessary means to support services with QoS differentiation and guarantees. The multicast service virtualization framework also provides a tool that can enable network providers to interchange services. The developed model can serve as a basis for further extension. Specifically, extending the model could improve load balancing in the flow allocation problem and activate a virtual network to deliver traffic, depending on the QoS policy between network providers. The model should therefore consider the number of users in order to guarantee improved QoS.
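    A minimal sketch of the kind of GAP/MILP formulation described above, assuming hypothetical flow demands, virtual-network capacities, and assignment costs, and using the open-source PuLP library in Python rather than the thesis's MATLAB implementation:

```python
# GAP-style MILP: assign each multicast flow to exactly one virtual network
# without exceeding any virtual network's capacity, minimising total cost.
import pulp

# Hypothetical inputs (Mbps); all values are made up for illustration.
flows = {"voip": 2, "video": 20, "download": 10}
vnets = {"vn1": 25, "vn2": 25}
cost = {(f, v): 1 for f in flows for v in vnets}

prob = pulp.LpProblem("multicast_gap", pulp.LpMinimize)
x = pulp.LpVariable.dicts(
    "assign", [(f, v) for f in flows for v in vnets], cat="Binary"
)

# Objective: minimise total assignment cost.
prob += pulp.lpSum(cost[f, v] * x[f, v] for f in flows for v in vnets)

# Each multicast flow is carried by exactly one virtual network.
for f in flows:
    prob += pulp.lpSum(x[f, v] for v in vnets) == 1

# A virtual network cannot carry more traffic than its capacity.
for v in vnets:
    prob += pulp.lpSum(flows[f] * x[f, v] for f in flows) <= vnets[v]

prob.solve()
assignment = {f: v for f in flows for v in vnets if x[f, v].value() > 0.5}
print(assignment)
```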