32 research outputs found

    Collocation Games and Their Application to Distributed Resource Management

    We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable. Funding: NSF (CCF-0820138, CSR-0720604, EFRI-0735974, CNS-0524477, CNS-052016, CCR-0635102); Universidad Pontificia Bolivariana; COLCIENCIAS – Instituto Colombiano para el Desarrollo de la Ciencia y la Tecnología "Francisco José de Caldas".
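
    To make the strategic setting concrete, the following Python fragment is a minimal sketch of best-response dynamics in a toy collocation game: players with fixed demands choose among fixed-capacity, fixed-cost servers, and players sharing a server split its cost in proportion to their demands. The proportional cost-sharing rule, the parameters, and all names are illustrative assumptions, not the paper's actual model; the loop simply stops when no player can lower its own cost by moving (a Nash equilibrium) or when a round limit is reached.

        # Toy collocation game (illustrative assumptions, not the paper's model):
        # servers have fixed capacity and fixed cost; co-located players split a
        # server's cost in proportion to their demands.
        def cost_share(player, server, assignment, demands, server_cost):
            """Cost paid by `player` if it uses `server`, given everyone else's choice."""
            load = demands[player] + sum(
                demands[p] for p, s in assignment.items() if s == server and p != player
            )
            return server_cost * demands[player] / load

        def best_response_dynamics(demands, capacities, server_cost, max_rounds=100):
            players = list(demands)
            # Start with every player on its own server (assumes enough servers exist).
            assignment = {p: i for i, p in enumerate(players)}
            for _ in range(max_rounds):
                moved = False
                for p in players:
                    best_s = assignment[p]
                    best_c = cost_share(p, best_s, assignment, demands, server_cost)
                    for s, cap in enumerate(capacities):
                        # Feasibility: the server must fit p on top of its other tenants.
                        others = sum(d for q, d in demands.items() if assignment[q] == s and q != p)
                        if others + demands[p] > cap:
                            continue
                        c = cost_share(p, s, assignment, demands, server_cost)
                        if c < best_c - 1e-12:
                            best_s, best_c = s, c
                    if best_s != assignment[p]:
                        assignment[p], moved = best_s, True
                if not moved:      # no profitable unilateral move: a Nash equilibrium
                    return assignment
            return assignment      # may stop without converging for harder variants

        demands = {"a": 0.4, "b": 0.3, "c": 0.2}
        print(best_response_dynamics(demands, capacities=[1.0, 1.0, 1.0], server_cost=10.0))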

    A proposal for secured, efficient and scalable layer 2 network virtualisation mechanism

    The contents of chapters 3 and 4 are subject to confidentiality. 291 p. The Future Internet has emerged as a research effort to overcome the limitations identified in the current Internet. This requires research into novel architectures and solutions (evolutionary or disruptive), and experimental facilities have emerged to provide a realistic environment in which to validate these new proposals at large scale. Given the need to share the same infrastructure and resources to test several network proposals simultaneously, network virtualisation is the key to success. A new taxonomy is proposed in order to analyse and compare the different approaches; three types are identified: the Virtual Node (vNode), SDN-enabled Virtualisation (SDNeV), and the overlay. In addition, the most relevant experimental facilities are presented, with special attention to how each of them enables research on network proposals; none of them fulfils all of the requirements imposed: isolation, security, flexibility, scalability, stability, transparency, and support for research on network proposals. Therefore, a new experimental facility, orthogonal to the experimentation itself, is needed. The main contributions of this thesis, built on SDN and NFV technology, are also the key elements for constructing the experimental facility: Layer 2 Prefix-based Network Virtualisation (L2PNV), a MAC Address Configuration Protocol (MACP), and Flow-based Network Access Control (FlowNAC). As a result, a new experimental facility, the EHU OpenFlow Enabled Facility (EHU-OEF), has been deployed at the Universidad del País Vasco (UPV/EHU) to experiment with and validate these proposals.
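
    As a rough illustration of the prefix idea behind L2PNV (the thesis's actual address format is not reproduced here), the sketch below encodes a virtual-network identifier in the leading bytes of a locally administered MAC address; the field layout and all names are assumptions made for this example only.

        # Illustrative only: one way to carry a virtual-network ID in a MAC prefix.
        # The field layout (1 flag byte + 2-byte VNet ID + 3-byte host ID) is an
        # assumption for this example, not the L2PNV format defined in the thesis.
        def build_mac(vnet_id: int, host_id: int) -> str:
            """Pack a virtual-network ID and a host ID into a MAC-like address."""
            if not (0 <= vnet_id < 2**16 and 0 <= host_id < 2**24):
                raise ValueError("identifier out of range")
            # 0x02 first octet = locally administered, unicast.
            raw = bytes([0x02]) + vnet_id.to_bytes(2, "big") + host_id.to_bytes(3, "big")
            return ":".join(f"{b:02x}" for b in raw)

        def parse_mac(mac: str) -> tuple:
            """Recover (vnet_id, host_id) from an address built by build_mac()."""
            raw = bytes(int(x, 16) for x in mac.split(":"))
            return int.from_bytes(raw[1:3], "big"), int.from_bytes(raw[3:], "big")

        mac = build_mac(vnet_id=42, host_id=7)
        print(mac, parse_mac(mac))    # 02:00:2a:00:00:07 (42, 7)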

    WiMAX spectrum virtualization and network federation

    Spectrum management in wireless broadband networks, in terms of both its cost and its efficient usage, has posed a huge challenge for mobile network operators. Traditionally, network operators had exclusive rights to the band of spectrum allocated to them, but with the high price of spectrum licenses it is becoming necessary to find alternative ways to access and use spectrum more efficiently. Resource virtualization is a method that has been extensively adopted in computing for creating abstract versions of physical hardware resources, and it has proven to be a powerful technique for customized resource provisioning and sharing. This idea of resource virtualization is gradually being transferred into the domain of wireless mobile network resource management, but the ideas around it are still evolving. Since spectrum is an important wireless network resource, it is imperative to provide an efficient and cost-effective means for it to be accessed and utilized. This research therefore investigates spectrum virtualization as a possible solution to this problem. To expand on the notion of spectrum virtualization, this research further explores the idea of network federation. Network federation involves the interconnection of diverse network components so that they operate as a single seamless network, enabling them to share their network resources even though the networks are geographically dispersed and managed by different operators. Fully implementing these concepts requires a well-developed network framework. This research proposes two novel architectures for spectrum virtualization and network federation using the WiMAX (Worldwide Interoperability for Microwave Access) wireless broadband technology. The proposed WiMAX spectrum virtualization architecture introduces a novel entity known as the Virtual Spectrum Hypervisor (VS-Hypervisor), which bears the responsibility for spectrum management and virtualization within the WiMAX framework. For WiMAX network federation, the novel architecture enables the cooperative existence of multiple virtualization-capable WiMAX base stations with overlapping cellular coverage areas for the purpose of sharing their spectrum resources. In this architecture, a novel federation control plane known as the Virtual Spectrum Exchange Locale (VSEL) is proposed; the VSEL allows the VS-Hypervisors in the federated physical base stations to negotiate and exchange spectrum between themselves to match their spectrum needs. The architectures for WiMAX spectrum virtualization and network federation were modelled and implemented using the OPNET Modeler, and the results obtained validated their efficacy with respect to the effective management of the wireless network spectrum. These proposed network architectures would therefore help network operators optimize their radio networks.
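
    The following toy Python sketch illustrates the kind of behaviour the abstract attributes to the VS-Hypervisor and the VSEL, without claiming to reproduce the thesis's design: a per-base-station hypervisor hands spectrum units out to virtual operators, and a simple exchange function moves idle units from one hypervisor to another. All names and the allocation logic are illustrative assumptions.

        # Illustrative sketch, not the thesis's actual VS-Hypervisor/VSEL design.
        class SpectrumHypervisor:
            def __init__(self, name, total_units):
                self.name = name
                self.free = total_units      # unassigned spectrum units (e.g. sub-channels)
                self.allocations = {}        # virtual operator -> units held

            def allocate(self, operator, n):
                """Grant `operator` up to n free units; return how many were granted."""
                granted = min(n, self.free)
                self.free -= granted
                self.allocations[operator] = self.allocations.get(operator, 0) + granted
                return granted

            def release(self, operator):
                self.free += self.allocations.pop(operator, 0)

        def exchange_spectrum(donor, receiver, n):
            """Toy VSEL-style trade: move up to n idle units between base stations."""
            moved = min(n, donor.free)
            donor.free -= moved
            receiver.free += moved
            return moved

        bs1 = SpectrumHypervisor("BS1", total_units=30)
        bs2 = SpectrumHypervisor("BS2", total_units=30)
        bs1.allocate("operatorA", 10)        # BS1 is lightly loaded
        bs2.allocate("operatorB", 28)        # BS2 is nearly full
        print(exchange_spectrum(bs1, bs2, 10), "spectrum units moved to BS2")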

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualization of the FEDERICA infrastructure allows its available resources to be provisioned to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualization or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualization of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced at the same time as the contact process with users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can use as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
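
    A minimal sketch of the slice notion described above, assuming an illustrative data model rather than the project's actual one: each virtual node and link records the physical FEDERICA resource it is mapped onto, so the slice looks like a real network to its user while remaining a logical partition of the infrastructure.

        # Field names are illustrative assumptions, not FEDERICA's actual data model.
        from dataclasses import dataclass, field

        @dataclass
        class VirtualNode:
            name: str            # e.g. a software router the researcher controls
            physical_host: str   # physical node or VM host backing it
            cpu_share: float     # fraction of the host reserved for this slice

        @dataclass
        class VirtualLink:
            endpoints: tuple     # names of the two virtual nodes it connects
            physical_path: list  # physical links/circuits it is mapped onto
            bandwidth_mbps: int

        @dataclass
        class Slice:
            owner: str
            nodes: list = field(default_factory=list)
            links: list = field(default_factory=list)

        demo = Slice(
            owner="researcher-1",
            nodes=[VirtualNode("vr1", physical_host="pop-1", cpu_share=0.25),
                   VirtualNode("vr2", physical_host="pop-2", cpu_share=0.25)],
            links=[VirtualLink(("vr1", "vr2"), physical_path=["link-a", "link-b"],
                               bandwidth_mbps=100)],
        )
        print(len(demo.nodes), "virtual nodes in the slice owned by", demo.owner)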

    Building the Future Internet through FIRE

    The Internet as we know it today is the result of continuous activity aimed at improving network communications, end-user services, computational processes, and information technology infrastructures. The Internet has become a critical infrastructure for human beings, offering complex networking services and end-user applications that together have transformed all aspects of our lives, especially economic ones. Recently, with the advent of new paradigms, progress in wireless technology, sensor networks and information systems, and the inexorable shift towards the everything-connected paradigm, first known as the Internet of Things and more recently envisioned as the Internet of Everything, a data-driven society has been created. In a data-driven society, productivity, knowledge, and experience depend on increasingly open, dynamic, interdependent, and complex Internet services. The challenge for the design of the Future Internet is to build robust enabling technologies, to implement and deploy adaptive systems, and to create business opportunities in the face of increasing uncertainties and emergent systemic behaviors where humans and machines seamlessly cooperate.

    A virtual integrated networks emulator on Xen (viNEX)

    Network research experiments have traditionally been conducted in emulated or simulated environments. Emulators are frequently deployed on physical networks. Network simulators provide a self-contained and simple environment that can be hosted on a single machine, but they offer a synthetic environment that is only an approximation of the real world, so their results might not be a true reflection of reality. Recent progress in virtualisation technologies enables the deployment of multiple interconnected virtual hosts on one machine. Virtual hosts run real network protocol stacks and therefore provide an emulated environment on a single host. The first objective of this dissertation is to build a network emulator (viNEX) using a virtualisation platform (XEN). The second objective is to evaluate whether viNEX can be used to conduct some network research experiments. Thirdly, some limitations of this approach are identified.
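
    As a hedged sketch of the general approach (not viNEX's actual implementation), the fragment below turns a small topology description into the host-side plumbing one might create on a single Xen machine: one software bridge per emulated link, with each guest's virtual interface attached to the bridges of the links it touches. The topology, names, and commands are illustrative.

        # Sketch of one software bridge per emulated link on a single Xen host.
        # The script only prints the commands it would run.
        topology = {
            # emulated link -> virtual hosts attached to it
            "lan0": ["router1", "host1"],
            "lan1": ["router1", "host2"],
        }

        def emit_commands(topology):
            cmds = []
            for link, guests in topology.items():
                bridge = f"br-{link}"
                cmds.append(f"ip link add name {bridge} type bridge")
                cmds.append(f"ip link set {bridge} up")
                for guest in guests:
                    # With Xen, the guest's vif would be attached to the bridge via
                    # its domain configuration; here we only record the intent.
                    cmds.append(f"# attach {guest}'s interface on {link} to {bridge}")
            return cmds

        for cmd in emit_commands(topology):
            print(cmd)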
