1,178 research outputs found

    Efficient sharing mechanisms for virtualized multi-tenant heterogeneous networks

    The explosion in data traffic, physical resource constraints, and insufficient financial incentives for deploying 5G networks stress the need for a paradigm shift in network upgrades. Typically, operators are also the service providers, charging end users low, flat tariffs regardless of the service enjoyed. Fine-scale management of network resources is needed, both to optimize costs and resource utilization and to enable new synergies among network owners and third parties. In particular, operators could open their networks to third parties by means of fine-scale sharing agreements over customized networks for enhanced service provision, in exchange for an adequate return on investment for upgrading their infrastructures. The main objective of this thesis is to study the potential of fine-scale resource management and sharing mechanisms for enhancing service provision and for contributing to a sustainable road to 5G. More precisely, the state-of-the-art architectures and technologies for network programmability and scalability are studied, together with a novel paradigm for supporting service diversity and fine-scale sharing. We review the limits of conventional networks, extend existing standardization efforts, and define an enhanced architecture for enabling 5G network features (e.g., network-wide centralization and programmability). The potential of the proposed architecture is assessed in terms of flexible sharing and enhanced service provision, while the advantages of alternative business models are studied in terms of additional profits to the operators.
    We first study the data rate improvement achievable by means of spectrum and infrastructure sharing among operators and evaluate the profit increase justified by the better service provided. We present a scheme based on coalitional game theory for assessing the capability of accommodating more service requests when a cooperative approach is adopted, and for studying the conditions under which sharing among coalitions of operators is beneficial. Results show that: i) collaboration can be beneficial even in the case of unbalanced cost redistribution within coalitions; ii) coalitions of equal-sized operators provide better profit opportunities and require lower tariffs.
    The second kind of sharing interaction that we consider is the one between operators and third-party service providers, in the form of fine-scale provision of customized portions of the network resources. We define a policy-based admission control mechanism, whose performance is compared with reference strategies. The proposed mechanism is based on auction theory and computes the optimal admission policy at reduced complexity for different traffic loads and allocation frequencies. Because next-generation services include delay-critical services, we compare the admission control performance of conventional approaches with that of the proposed one, which offers near real-time service provision and reduced complexity. In addition, it guarantees high revenues and low expenditures in exchange for negligible losses in terms of fairness towards service providers.
    To conclude, we study the case where adaptable timescales are adopted for the policy-based admission control, in order to promptly guarantee service requirements over traffic fluctuations. To reduce complexity, we consider the offline pre-computation of admission strategies with respect to reference network conditions, then study their extension to unexplored conditions by means of computationally efficient methodologies. Performance is compared for different admission strategies by means of a proof of concept on real network traces. Results show that the proposed strategy provides a tradeoff between complexity and performance with respect to reference strategies, while reducing resource utilization and the requirements on network awareness.
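    As a rough illustration of the coalitional reasoning behind result (i), the sketch below checks whether a coalition of operators is worth forming and splits the joint profit proportionally to each operator's standalone profit; the operator names, profit values, and the proportional split rule are illustrative assumptions, not the scheme used in the thesis.

```python
# Minimal sketch of a coalitional-sharing check, assuming a simplified model:
# a coalition is worth forming if its joint profit exceeds the sum of the
# standalone profits; payoffs are split proportionally to standalone profit
# (the thesis may use a different redistribution rule).
from typing import Dict, Tuple

def beneficial(standalone: Dict[str, float],
               joint_profit: float) -> Tuple[bool, Dict[str, float]]:
    """Return whether cooperating is beneficial and a proportional payoff split."""
    total_alone = sum(standalone.values())
    if joint_profit <= total_alone:
        return False, dict(standalone)  # no gain from cooperating
    payoffs = {op: joint_profit * profit / total_alone
               for op, profit in standalone.items()}
    return True, payoffs

if __name__ == "__main__":
    # Hypothetical operators with standalone profits (arbitrary units).
    alone = {"op_A": 10.0, "op_B": 12.0, "op_C": 8.0}
    worthwhile, split = beneficial(alone, joint_profit=36.0)
    print(worthwhile, split)
```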

    An optimized Load Balancing Technique for Virtual Machine Migration in Cloud Computing

    Cloud computing (CC) delivers storage and computing power as a subscription service. Load balancing is one of the most critical components of such distributed systems. CC remains an important research area because it stores data at reduced cost and makes it accessible over the Internet at all times. Load balancing helps maintain high user retention and resource utilization by ensuring that work is distributed correctly and evenly across computing resources. This paper describes cloud-based load balancing systems. CC relies on the virtualization of hardware resources such as storage, computing, and security through virtual machines (VMs). The live migration of these machines provides many advantages, including high availability, hardware maintenance, fault tolerance, and workload balancing. However, alongside its benefits, the migration process is subject to significant security risks that the industry hesitates to accept. In this paper we discuss CC, review various existing load balancing algorithms and their advantages, and describe the PSO (particle swarm optimization) technique.
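    To make the PSO reference concrete, here is a minimal particle swarm optimization sketch that assigns VMs to hosts so as to minimize load imbalance; the VM demands, host count, swarm constants, and variance objective are illustrative assumptions rather than the paper's actual formulation.

```python
# Minimal PSO sketch for VM-to-host load balancing: each particle is a
# continuous encoding of a VM-to-host assignment, and the fitness is the
# variance of the resulting host loads. All constants are illustrative.
import random

VM_LOAD = [4, 8, 2, 6, 3, 7, 5]   # hypothetical VM resource demands
NUM_HOSTS = 3
SWARM, ITERS, W, C1, C2 = 20, 100, 0.7, 1.5, 1.5

def cost(position):
    """Imbalance (variance of host loads) of a continuous VM-to-host encoding."""
    loads = [0.0] * NUM_HOSTS
    for vm, x in enumerate(position):
        loads[int(round(x)) % NUM_HOSTS] += VM_LOAD[vm]
    mean = sum(loads) / NUM_HOSTS
    return sum((l - mean) ** 2 for l in loads)

def pso():
    dim = len(VM_LOAD)
    pos = [[random.uniform(0, NUM_HOSTS - 1) for _ in range(dim)] for _ in range(SWARM)]
    vel = [[0.0] * dim for _ in range(SWARM)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(ITERS):
        for i in range(SWARM):
            for d in range(dim):
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (W * vel[i][d]
                             + C1 * random.random() * (pbest[i][d] - pos[i][d])
                             + C2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return [int(round(x)) % NUM_HOSTS for x in gbest], cost(gbest)

if __name__ == "__main__":
    assignment, imbalance = pso()
    print("VM -> host:", assignment, "imbalance:", imbalance)
```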

    Resource management in a containerized cloud : status and challenges

    Cloud computing heavily relies on virtualization, as virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises of how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is also no longer limited to centrally hosted data center infrastructure. New deployment models have gained maturity, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.
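    As a small illustration of how a classic VM-oriented allocation heuristic can carry over to containers, the sketch below applies first-fit-decreasing placement to containers on nodes; the node names, capacities, and container demands are hypothetical, and the heuristic is a generic baseline rather than a strategy proposed in the survey.

```python
# Minimal first-fit-decreasing placement of containers onto nodes, assuming
# only CPU and memory dimensions; all names and capacities are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    name: str
    cpu: float          # remaining CPU (cores)
    mem: float          # remaining memory (GiB)
    containers: List[str] = field(default_factory=list)

def place(containers: List[Tuple[str, float, float]], nodes: List[Node]) -> List[str]:
    """Assign each (name, cpu, mem) container to the first node that fits."""
    unplaced = []
    # Placing the largest containers first tends to reduce fragmentation.
    for name, cpu, mem in sorted(containers, key=lambda c: (c[1], c[2]), reverse=True):
        for node in nodes:
            if node.cpu >= cpu and node.mem >= mem:
                node.cpu -= cpu
                node.mem -= mem
                node.containers.append(name)
                break
        else:
            unplaced.append(name)
    return unplaced

if __name__ == "__main__":
    nodes = [Node("edge-1", cpu=2.0, mem=4.0), Node("dc-1", cpu=8.0, mem=16.0)]
    pending = [("web", 0.5, 1.0), ("db", 2.0, 4.0), ("cache", 1.0, 2.0)]
    leftover = place(pending, nodes)
    for n in nodes:
        print(n.name, n.containers)
    print("unplaced:", leftover)
```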

    Resource Management in Large-scale Systems

    The focus of this thesis is resource management in large-scale systems. Our primary concerns are energy management and practical principles for self-organization and self-management. The main contributions of our work are: 1. Models: we propose several models for different aspects of resource management, e.g., energy-aware load balancing and application scaling for the cloud ecosystem, a hierarchical architecture model for self-organizing and self-manageable systems, and a new cloud delivery model based on an auction-driven self-organization approach. 2. Algorithms: we propose several algorithms for the models described above, including coalition formation, combinatorial auctions, and a clustering algorithm for scale-free organizations of scale-free networks. 3. Evaluation: we evaluate the proposed models and algorithms in order to verify them. All simulations reported in this thesis were carried out on different instances and services of Amazon Web Services (AWS). These contributions are discussed in detail in the following chapters.
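    To give a flavor of the auction-driven delivery model, the sketch below performs greedy winner determination for a combinatorial auction over cloud resources; the bidders, bundles, prices, and the price-density heuristic are illustrative assumptions, not the thesis's actual mechanism.

```python
# Minimal greedy winner determination for a combinatorial auction over cloud
# resources: bids are ranked by price per unit of requested resources and
# accepted while capacity remains. All values are hypothetical.
def greedy_winner_determination(bids, capacity):
    """bids: list of (bidder, {resource: qty}, price). Returns winners and leftovers."""
    remaining = dict(capacity)
    accepted = []

    def density(bid):
        _, bundle, price = bid
        return price / max(sum(bundle.values()), 1)

    for bidder, bundle, price in sorted(bids, key=density, reverse=True):
        if all(remaining.get(r, 0) >= q for r, q in bundle.items()):
            for r, q in bundle.items():
                remaining[r] -= q
            accepted.append((bidder, price))
    return accepted, remaining

if __name__ == "__main__":
    capacity = {"cpu": 16, "mem_gb": 64}
    bids = [
        ("tenant_A", {"cpu": 8, "mem_gb": 32}, 40.0),
        ("tenant_B", {"cpu": 4, "mem_gb": 8},  30.0),
        ("tenant_C", {"cpu": 8, "mem_gb": 32}, 25.0),
    ]
    winners, left = greedy_winner_determination(bids, capacity)
    print(winners, left)
```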

    5G and beyond networks

    This chapter investigates the Network Layer aspects that will characterize the merger of the cellular paradigm and the IoT architectures, in the context of the evolution towards 5G-and-beyond, including some promising emerging services such as Unmanned Aerial Vehicles or Base Stations, and V2X communications.

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, a fine-grain classification of the different approaches, and the influential papers of the field.
    Comment: version 5.0 (updated September 2018); preprint of the article accepted at ACM CSUR 2018 (42 pages). History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018.
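    As a toy example of optimization selection, the sketch below predicts a flag set for an unseen program by copying the best-known flags of its nearest neighbour in a small feature space; the features, flags, and training data are purely illustrative and not drawn from the survey.

```python
# Minimal sketch of ML-based optimization selection: given static program
# features, reuse the best flag set found offline for the most similar
# previously tuned program (1-nearest-neighbour). All data is hypothetical.
import math

# Features: [num_loops, avg_loop_depth, arithmetic_intensity]
TRAINING = [
    ([12, 3.0, 0.8], ["-O3", "-funroll-loops"]),
    ([2, 1.0, 0.2],  ["-O2"]),
    ([8, 2.0, 0.9],  ["-O3", "-ffast-math"]),
]

def predict_flags(features):
    """Return the flag set of the nearest training program (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, flags = min(TRAINING, key=lambda item: dist(item[0], features))
    return flags

if __name__ == "__main__":
    new_program = [10, 2.5, 0.7]   # features of an unseen program
    print(predict_flags(new_program))
```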