54 research outputs found

    An evaluation of the power consumption and carbon footprint of a cloud infrastructure

    The Information and Communication Technology (ICT) sector represents two to three percent of the world's energy consumption and about the same share of greenhouse gas (GHG) emissions. Moreover, IT-related costs represent fifty percent of a company's electricity bill. In January 2010 the GreenTouch consortium, composed of sixteen leading companies and laboratories in the IT field led by Bell Labs and Alcatel-Lucent, announced that within five years the Internet could require a thousand times less energy than it does now. Furthermore, Edinburgh Napier University is committed to reducing its carbon footprint by 25% over the 2007/8 to 2012/13 period (Edinburgh Napier University Sustainability Office, 2009), and one of its objectives is to deploy innovative C&IT solutions. There is therefore a general interest, usually led by environmental concerns, in reducing the electrical cost of IT infrastructure.
    One of the most prominent technologies in Green IT discussions is cloud computing (Stephen Ruth, 2009). This technology allows on-demand self-service provisioning by making resources available as a service. Its elasticity allows automatic scaling with demand and hardware consolidation thanks to virtualization. An increasing number of companies are therefore moving their resources into a cloud managed by themselves or by a third party. Moving to a third-party, off-premise cloud is known to reduce a company's electricity bill, but this does not say to what extent the power consumption itself is reduced: the processing resources seem merely to be located somewhere else. Moreover, hardware consolidation suggests that power saving is achieved only during off-peak time (Xiaobo Fan et al, 2007). Furthermore, the cost of the network is never mentioned when cloud computing is described as power saving, and this cost might not be negligible, since the network might need upgrades when work that used to be done locally is done remotely. In the same way, cloud computing is supposed to enhance the capabilities of mobile devices, but the impact of cloud communication on their battery life is mentioned nowhere.
    Experiments were performed to evaluate the power consumption of an infrastructure relying on a cloud used for desktop virtualization, and to measure the cost of the same infrastructure without a cloud. The overall infrastructure was split into its elements, namely the cloud infrastructure, the network infrastructure and the end devices, and the power consumption of each element was monitored separately. The experiments considered different servers, network equipment (switches, wireless access points, a router) and end devices (desktops, an iPhone, an iPad and a Sony Ericsson Xperia running Android), and also measured the impact of cloud communication on the battery of mobile devices. The evaluation considered different deployment sizes and estimated the carbon emissions of the technologies tested. The cloud infrastructure turned out to save power, and not only during off-peak time, from a sufficiently large deployment size (approximately 20 computers) for the same processing power. For a wide deployment (500 computers) the power saving is large enough to offset the cost of a network upgrade to a Gigabit access infrastructure and still reduce carbon emissions, over a year and across the Napier campuses, by 4 tonnes or 43.97% compared to a traditional deployment with a Fast Ethernet access network. However, the impact of cloud communication on mobile devices is significant, increasing their power consumption by 57% to 169%.
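    As a back-of-the-envelope illustration of how measured power draw translates into the kind of annual carbon figures reported above, the sketch below converts average power consumption into CO2-equivalent emissions. All figures (device counts, wattages, grid emission factor) are hypothetical placeholders, not the study's measurements.

```python
# Hedged sketch: converting measured power draw into annual CO2-equivalent emissions.
# All numbers below are illustrative placeholders, not the paper's measurements.

HOURS_PER_YEAR = 24 * 365
GRID_EMISSION_FACTOR = 0.5  # kg CO2e per kWh (hypothetical grid mix)

def annual_co2_kg(avg_power_watts: float) -> float:
    """Annual CO2-equivalent emissions (kg) for a device drawing avg_power_watts on average."""
    kwh_per_year = avg_power_watts * HOURS_PER_YEAR / 1000.0
    return kwh_per_year * GRID_EMISSION_FACTOR

# Compare a traditional desktop deployment against a thin-client-plus-servers deployment
# of the same size (values are made up for illustration).
traditional = 500 * annual_co2_kg(90.0)                         # 500 desktops at ~90 W each
cloud = 500 * annual_co2_kg(25.0) + 20 * annual_co2_kg(300.0)   # thin clients + servers

saving_pct = 100.0 * (traditional - cloud) / traditional
print(f"traditional: {traditional/1000:.1f} t, cloud: {cloud/1000:.1f} t, saving: {saving_pct:.1f}%")
```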

    Holistic cloud computing environmental quantification and behavioural analysis

    Cloud computing has been characterized as large-scale multi-tenant systems that are able to dynamically scale computational resources up and down for consumers with diverse Quality-of-Service requirements. In recent years, a number of dependability and resource management approaches have been proposed for Cloud computing datacenters. However, there is still a lack of real-world Cloud datasets that analyse and extensively model Cloud computing characteristics and quantify their effect on system dimensions such as resource utilization, user behavioural patterns and failure characteristics. This results in two research problems. First, without a holistic analysis of real-world Cloud system characteristics, these dimensions cannot be quantified, resulting in inaccurate research assumptions about Cloud system behaviour. Second, simulated parameters used in state-of-the-art Cloud mechanisms currently rely on theoretical values which do not accurately represent real Cloud systems, as important parameters such as failure times and energy waste have not been quantified using empirical data. This leaves a large gap, in terms of practicality and effectiveness, between developing and evaluating mechanisms within simulated and real Cloud systems. This thesis presents a comprehensive method and an empirical analysis of large-scale production Cloud computing environments in order to quantify system characteristics in terms of consumer submission and resource request patterns, workload behaviour, server utilization and failures. Furthermore, this work identifies areas of operational inefficiency within the system and quantifies the amount of energy waste created by failures. We discover that 4-10% of all server computation is wasted due to Termination Events, and that failures contribute approximately 11% of the total datacenter energy waste. These analyses of empirical data give researchers and Cloud providers an enhanced understanding of real Cloud behaviour, support system assumptions, and provide parameters that can be used to develop and validate the effectiveness of future energy-efficient and dependability mechanisms.
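    As a rough, hypothetical illustration of how energy waste from failures can be quantified from a task trace (not the thesis' actual method or data), one could compute the fraction of server energy spent on terminated work as follows.

```python
# Hedged sketch: estimating wasted server energy from a task/event trace.
# Field names, power model, and trace values are assumptions for illustration only.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class TaskRecord:
    cpu_hours: float   # CPU-hours consumed by the task
    terminated: bool   # True if the task ended in a termination/failure event

AVG_WATTS_PER_CPU = 20.0   # hypothetical average power per busy CPU

def wasted_energy_kwh(trace: list[TaskRecord]) -> tuple[float, float]:
    """Return (total_kwh, wasted_kwh), where wasted work is CPU time spent on terminated tasks."""
    total_cpu_hours = sum(t.cpu_hours for t in trace)
    wasted_cpu_hours = sum(t.cpu_hours for t in trace if t.terminated)
    to_kwh = AVG_WATTS_PER_CPU / 1000.0
    return total_cpu_hours * to_kwh, wasted_cpu_hours * to_kwh

trace = [TaskRecord(12.0, False), TaskRecord(3.5, True), TaskRecord(8.0, False), TaskRecord(1.5, True)]
total, wasted = wasted_energy_kwh(trace)
print(f"wasted fraction: {100 * wasted / total:.1f}%")
```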

    Eco-Holonic 4.0 Circular Business Model to Conceptualize Sustainable Value Chain Towards Digital Transition

    The purpose of this paper is to conceptualize a circular business model based on an Eco-Holonic Architecture, through the integration of circular economy and holonic principles. A conceptual model is developed to manage the complexity of integrating circular economy principles, digital transformation, and tools and frameworks for sustainability into business models. The proposed architecture is multilevel and multiscale in order to achieve the instantiation of the sustainable value chain in any territory. The architecture promotes the incorporation of circular economy and holonic principles into new circular business models. This integrated perspective of the business model can support the design and upgrading of manufacturing companies in their respective industrial sectors. The proposed conceptual model is based on activity theory, which considers the interactions between technical and social systems and allows the mitigation of the metabolic rift that exists between natural and social metabolism. This study contributes to the existing literature on circular economy, circular business models and activity theory by considering holonic paradigm concerns, which have not been explored yet. This research also offers a unique holonic architecture of the circular business model by considering different levels, relationships, dynamism and contextualization (territory) aspects.

    High Performance Network Evaluation and Testing


    Internet of Things Applications - From Research and Innovation to Market Deployment

    The book aims to provide a broad overview of various topics of the Internet of Things, from research, innovation and development priorities to enabling technologies, nanoelectronics, cyber-physical systems, architecture, interoperability and industrial applications. It is intended to be a standalone book in a series that covers the Internet of Things activities of the IERC (Internet of Things European Research Cluster), from technology to international cooperation and the global "state of play". The book builds on the ideas put forward by the European Research Cluster on the Internet of Things Strategic Research Agenda and presents global views and state-of-the-art results on the challenges facing the research, development and deployment of IoT at the global level. The Internet of Things is creating a revolutionary new paradigm, with opportunities in every industry, from Health Care, Pharmaceuticals, Food and Beverage, Agriculture, Computing, Electronics, Telecommunications, Automotive, Aeronautics, Transportation, Energy and Retail, to apply the massive potential of the IoT to real-world solutions. The beneficiaries will also include semiconductor companies, device and product companies, infrastructure software companies, application software companies, consulting companies, and telecommunication and cloud service providers. IoT will create new annual revenues for these stakeholders and potentially cause substantial market share shake-ups due to increased technology competition. The IoT will fuel technology innovation by creating the means for machines to communicate many different types of information with one another, while contributing to the increased value of information created by the number of interconnections among things and the transformation of the processed information into knowledge shared within the Internet of Everything. The success of IoT depends strongly on enabling technology development, market acceptance and standardization, which provide interoperability, compatibility, reliability and effective operation on a global scale. The connected devices are part of ecosystems connecting people, processes, data and things which communicate in the cloud, using increased storage and computing power and pushing for the standardization of communication and metadata. In this context, security, privacy, safety and trust have to be addressed by product manufacturers throughout the life cycle of their products, from design to the support processes. The IoT developments address the whole IoT spectrum, from devices at the edge to cloud and datacentres on the backend and everything in between, through ecosystems created by industry, research and application stakeholders that enable real-world use cases, accelerate the Internet of Things and establish open interoperability standards and common architectures for IoT solutions. Enabling technologies such as nanoelectronics, sensors/actuators, cyber-physical systems, intelligent device management, smart gateways, telematics, smart network infrastructure, cloud computing and software technologies will create new products, new services and new interfaces by creating smart environments and smart spaces, with applications ranging from Smart Cities, smart transport, buildings, energy and grid to smart health and life.
    Technical topics discussed in the book include:
    • Introduction
    • Internet of Things Strategic Research and Innovation Agenda
    • Internet of Things in the industrial context: time for deployment
    • Integration of heterogeneous smart objects, applications and services
    • Evolution from device to semantic and business interoperability
    • Software-defined and virtualized network resources
    • Innovation through interoperability and standardisation when everything is connected anytime at anyplace
    • Dynamic context-aware, scalable and trust-based IoT security and privacy framework
    • Federated Cloud service management and the Internet of Things
    • Internet of Things applications

    Conserve and Protect Resources in Software-Defined Networking via the Traffic Engineering Approach

    Software Defined Networking (SDN) is revolutionizing the architecture and operation of computer networks and promises more agile and cost-efficient network management. SDN centralizes the network control logic and separates the control plane from the data plane, thus enabling flexible management of networks. A network based on SDN consists of a data plane and a control plane. To assist the management of devices and data flows, a network also has an independent monitoring plane. These coexisting network planes have various types of resources, such as the bandwidth used to transmit monitoring data, the energy spent to power data forwarding devices, and the computational resources used to control the network. Unwise management, or even abusive utilization, of these resources leads to degraded network performance and increases the Operating Expenditure (Opex) of the network owner. Conserving and protecting limited network resources is thus among the key requirements for efficient networking. However, the heterogeneity of network hardware and traffic workloads expands the configuration space of SDN, making it a challenging task to operate a network efficiently. Furthermore, existing approaches usually lack the capability to automatically adapt network configurations to handle network dynamics and diverse optimization requirements. Additionally, a centralized SDN controller has to run in an environment protected against certain attacks. This thesis builds upon the centralized management capability of SDN and uses cross-layer network optimizations to perform joint traffic engineering, e.g., routing, hardware and software configuration. The overall goal is to overcome the management complexities of conserving and protecting resources in multiple functional planes of SDN in the face of network heterogeneity and system dynamics. The thesis presents four contributions: (1) resource-efficient network monitoring, (2) resource-efficient data forwarding, (3) self-adaptive algorithms that improve network resource efficiency, and (4) mitigation of abusive usage of network control resources.
    The first contribution of this thesis is a resource-efficient network monitoring solution. We consider one specific type of virtual network management function: flow packet inspection. This type of network monitoring application requires duplicating packets of target flows and sending them to packet monitors for in-depth analysis. To avoid competition for resources between the original and the duplicated data, network operators can transmit the data flows through physically (e.g., different communication media) or virtually (e.g., distinct network slices) separated channels with different resource consumption properties. We propose REMO, Resource Efficient distributed Monitoring, to reduce the overall network resource consumption incurred by both types of data by jointly considering the locations of the packet monitors, the selection of devices forking the data packets, and the flow path scheduling strategies. In the second contribution, we investigate the resource efficiency problem in hybrid, server-centric data center networks equipped with both traditional wired connections (e.g., InfiniBand or Ethernet) and advanced high-data-rate wireless links (e.g., directional 60 GHz wireless technology). The configuration space of hybrid SDN equipped with both wired and wireless communication technologies is massively large due to the complexity introduced by device heterogeneity. To tackle this problem, we present the ECAS framework, which reduces power consumption while maintaining network performance. Approaches based on optimization models and heuristic algorithms are the traditional way to reduce operational and facility resource consumption in SDN, but they are either difficult to solve directly or specific to a particular problem space. As the third contribution, we investigate the use of Deep Reinforcement Learning (DRL) to improve the adaptivity of the management modules for network resource and data flow scheduling. The goal of the DRL agent is to reduce the power consumption of SDN networks without severely degrading network performance. The fourth contribution is a protection mechanism based on flow rate limiting to mitigate abusive usage of SDN control plane resources. Due to the centralized architecture of SDN and its handling mechanism for new data flows, the network controller can become a point of failure under crafted cyber-attacks, especially the Control-Plane-Saturation (CPS) attack. We propose an In-Network Flow mAnagement Scheme (INFAS) to effectively reduce the generation of malicious control packets, depending on the parameters configured for the proposed mitigation algorithm. In summary, the contributions of this thesis address various unique challenges in constructing resource-efficient and secure SDN. This is achieved by designing and implementing novel and intelligent models and algorithms that configure networks and perform network traffic engineering from the protected, centralized network controller.
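    As a generic, hypothetical illustration of the flow-rate-limiting idea behind such control-plane protection (not the INFAS algorithm itself), a per-port token bucket applied to new-flow (packet-in) events could look like the sketch below; port layout and rate values are assumptions.

```python
# Hedged sketch: token-bucket rate limiting of new-flow (packet-in) events,
# a generic stand-in for in-network mitigation of control-plane saturation.

import time

class TokenBucket:
    def __init__(self, rate_per_s: float, burst: int):
        self.rate = rate_per_s        # tokens added per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if one more packet-in may be forwarded to the controller."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per ingress port (hypothetical layout): drop excess packet-ins.
limiters = {port: TokenBucket(rate_per_s=100.0, burst=200) for port in range(1, 49)}

def handle_packet_in(port: int) -> str:
    return "forward_to_controller" if limiters[port].allow() else "drop"
```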

    ERP implementation methodologies and frameworks: a literature review

    Enterprise Resource Planning (ERP) implementation is a complex and dynamic process, one that involves a combination of technological and organizational interactions. Often an ERP implementation project is the single largest IT project an organization has ever launched, and it requires a mutual fit of system and organization. Moreover, the concept of an ERP implementation supporting business processes across many different departments is not generic, rigid and uniform, and depends on a variety of factors. As a result, the issues surrounding the ERP implementation process have been a major concern in industry. ERP implementation therefore receives attention from both practitioners and scholars, and both the business and the academic literature are abundant but not always conclusive or coherent. However, research on ERP systems so far has mainly focused on diffusion, use and impact issues. Less attention has been given to the methods used during the configuration and implementation of ERP systems: even though they are commonly used in practice, they remain largely unexplored and undocumented in Information Systems research. The academic relevance of this research is thus its contribution to the existing body of scientific knowledge. A brief annotated literature review is conducted in order to evaluate the current state of the academic literature. The purpose is to present a systematic overview of relevant ERP implementation methodologies and frameworks, with the aim of achieving a better taxonomy of ERP implementation methodologies. This paper is useful to researchers who are interested in ERP implementation methodologies and frameworks, and its results will serve as input for a classification of the existing ERP implementation methodologies and frameworks. The paper also addresses the professional ERP community involved in the process of ERP implementation by promoting a better understanding of ERP implementation methodologies and frameworks, their variety and their history.

    NFV orchestration in edge and fog scenarios

    International Mention in the doctoral degree.
    Current network infrastructures handle a diverse range of network services such as video on demand, video conferencing, social networks, educational systems, and photo storage. These services have been embraced by a significant share of the world population and are used on a daily basis. Cloud providers' and network operators' infrastructures accommodate the traffic that these services generate, and their management tasks involve not only traffic steering but also the processing of the network services' traffic. Traditionally, traffic processing has been performed by applications deployed on servers exclusively dedicated to a specific task such as packet inspection. In recent years, however, network services have started to be virtualized, which has led to the Network Function Virtualization (NFV) paradigm, in which the network functions of a service run on containers or virtual machines decoupled from the hardware infrastructure. As a result, traffic processing has become more flexible because of the loose coupling between software and hardware, and the possibility of sharing common network functions, such as firewalls, across multiple network services. NFV eases the automation of network operations, since scaling and migration tasks are typically performed through a set of commands predefined by the virtualization technology, either containers or virtual machines. However, it is still necessary to decide the traffic steering and processing of every network service: which servers will perform the traffic processing, and which network links must be traversed so that the users' requests reach the final servers, i.e., the network embedding problem. Under the umbrella of NFV, this problem is known as Virtual Network Embedding (VNE), and this thesis uses the term "NFV orchestration algorithms" for the algorithms that solve it.
    The VNE problem is NP-hard, meaning that it is impossible to find optimal solutions in polynomial time regardless of the network size. As a consequence, the research and telecommunications communities rely on heuristics that find solutions more quickly than a commodity optimization solver. Traditionally, NFV orchestration algorithms have tried to minimize the deployment costs of their solutions: for example, they try not to exhaust the network bandwidth and use short paths to consume fewer network resources. Additionally, a recent tendency has led the research community towards algorithms that minimize the energy consumption of the deployed services, either by selecting more energy-efficient devices or by turning off network devices that remain unused. VNE problem constraints were typically summarized in a set of resource and energy constraints, and the solutions differed in the objective functions they targeted. But that was before the 5th generation of mobile networks (5G) was considered in the VNE problem. With the appearance of 5G, new network services and use cases started to emerge. The standards talked about Ultra-Reliable and Low Latency Communications (URLLC) with latencies below a few milliseconds and 99.999% reliability, enhanced Mobile Broadband (eMBB) with significant data rate increases, and even massive Machine-Type Communications (mMTC) among Internet of Things (IoT) devices. Moreover, paradigms such as edge and fog computing blended with the 5G technology to introduce the idea of having computing devices closer to the end users. As a result, the VNE problem had to incorporate the new requirements as constraints, and every solution had to satisfy low latency, high reliability, or larger data rates. This thesis studies the VNE problem and proposes heuristics that tackle the constraints related to 5G services in edge and fog scenarios; that is, the proposed solutions decide the assignment of Virtual Network Functions to servers and the traffic steering across 5G infrastructures that contain edge and fog devices. To evaluate the performance of the proposed solutions, the thesis first studies the generation of graphs that represent 5G networks. The proposed graph-generation mechanisms serve to represent diverse 5G scenarios, in particular federation scenarios in which several domains share resources among themselves, as well as 5G networks with edge servers and static or mobile fog devices with limited battery capacity. The generated graphs take into account the standards' requirements and the demand expected in 5G networks, and they differ depending on the population density and the area of study, i.e., whether it is an industrial area, a highway, or an urban area. After detailing the generation of graphs representing 5G networks, this thesis proposes several NFV orchestration algorithms to tackle the VNE problem. First, it focuses on federation scenarios in which network services must be assigned not only to a single domain's infrastructure but also to the shared resources of the federation of domains. Two different problems are studied: the VNE problem itself over a federated infrastructure, and the delegation of network services.
    That is, whether a network service should be deployed in the local domain or in the pool of resources of the federation of domains, knowing that the latter charges the local domain for hosting the network service. Second, the thesis proposes OKpi, an NFV orchestration algorithm to meet the quality of service of 5G network slices. Conceptually, network slicing consists of splitting the network so that network services are treated differently based on the slice they belong to. For example, an eHealth network slice will allocate the network resources necessary to meet low latencies for network services such as remote surgery. Each network slice is devoted to specific services with very concrete requirements, such as high reliability, location constraints, or 1 ms latencies. OKpi is an NFV orchestration algorithm that meets the network service requirements across different slices. It is based on a multi-constrained shortest path heuristic, and its solutions satisfy latency, reliability, and location constraints. After presenting OKpi, the thesis tackles the VNE problem in 5G networks with static and mobile fog devices. The presented NFV orchestration algorithm takes into account the limited computing resources of fog devices, as well as the out-of-coverage problems derived from the devices' mobility. To conclude, this thesis studies the scaling of Vehicle-to-Network (V2N) services, which require low latencies for network services such as collision avoidance, hazard warning, and remote driving. For these services, traffic jams or high vehicular traffic congestion can lead to the violation of latency requirements. Hence, it is necessary to anticipate such circumstances by using time-series techniques that estimate the incoming vehicular traffic flow over the next minutes or hours, so as to scale the V2N service accordingly.
    The 5G Exchange (5GEx) project (2015-2018) was an EU-funded project (H2020-ICT-2014-2 grant agreement 671636). The 5G-TRANSFORMER project (2017-2019) is an EU-funded project (H2020-ICT-2016-2 grant agreement 761536). The 5G-CORAL project (2017-2019) is an EU-Taiwan project (H2020-ICT-2016-2 grant agreement 761586).
    Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Thesis committee: President, Ioannis Stavrakakis; Secretary, Pablo Serrano Yáñez-Mingot; Member, Paul Horatiu Patra
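    As a toy illustration of the kind of embedding decision an NFV orchestration algorithm makes (not OKpi or any of the thesis' algorithms), the sketch below greedily places one virtual network function on a feasible server within a latency budget and routes traffic over the lowest-delay path, assuming the networkx library is available; all node names, attributes and numbers are hypothetical.

```python
# Hedged sketch: a greedy VNE-style placement on a toy substrate network.
# Node/edge attributes and the service model are assumptions for illustration.

import networkx as nx

substrate = nx.Graph()
substrate.add_nodes_from([
    ("edge1", {"cpu": 8}), ("edge2", {"cpu": 4}), ("core", {"cpu": 32}),
])
substrate.add_edge("edge1", "core", delay_ms=2.0)
substrate.add_edge("edge2", "core", delay_ms=3.0)
substrate.add_edge("edge1", "edge2", delay_ms=6.0)

def embed_vnf(ingress: str, cpu_demand: int, max_delay_ms: float):
    """Place one VNF on the closest feasible server reachable within the latency budget."""
    candidates = []
    for node, data in substrate.nodes(data=True):
        if data["cpu"] < cpu_demand:
            continue  # not enough compute capacity on this server
        delay = nx.shortest_path_length(substrate, ingress, node, weight="delay_ms")
        if delay <= max_delay_ms:
            candidates.append((delay, node))
    if not candidates:
        return None  # reject the request: no feasible embedding
    delay, chosen = min(candidates)
    substrate.nodes[chosen]["cpu"] -= cpu_demand  # reserve the capacity
    path = nx.shortest_path(substrate, ingress, chosen, weight="delay_ms")
    return chosen, path, delay

# Example request: 6 CPU units, 5 ms latency budget, traffic entering at edge2.
print(embed_vnf(ingress="edge2", cpu_demand=6, max_delay_ms=5.0))
```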