
    APMEC: An Automated Provisioning Framework for Multi-access Edge Computing

    Novel use cases and verticals such as connected cars and human-robot cooperation in the areas of 5G and the Tactile Internet can significantly benefit from the flexibility and reduced latency provided by Network Function Virtualization (NFV) and Multi-Access Edge Computing (MEC). Existing frameworks managing and orchestrating MEC and NFV are either tightly coupled or completely separated. The former design is inflexible and increases the complexity of the resulting framework, whereas the latter leads to inefficient use of computation resources because information is not shared. We introduce APMEC, a framework dedicated to MEC that collaborates with the management and orchestration (MANO) frameworks for NFV. The new design allows allocated network services to be reused, thus maximizing resource utilization. Measurement results show that APMEC can allocate up to 60% more network services. Developed on top of OpenStack, APMEC is an open-source project, available for collaboration and intended to facilitate further research activities.
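
    To illustrate the reuse idea described above, the following minimal sketch (not APMEC's actual code; the class and attribute names are assumptions) checks whether an already-allocated network service can serve a new MEC request before instantiating a new one, which in APMEC would go through the NFV MANO layer.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class NetworkService:
        """An allocated NFV network service (hypothetical model)."""
        service_type: str
        capacity: int                                  # remaining capacity, e.g. requests/s
        users: List[str] = field(default_factory=list)


    class MecOrchestrator:
        """Toy orchestrator that reuses existing services when possible."""

        def __init__(self) -> None:
            self.allocated: List[NetworkService] = []

        def request_service(self, service_type: str, demand: int, app_id: str) -> NetworkService:
            # Prefer reusing an existing service with enough spare capacity.
            for ns in self.allocated:
                if ns.service_type == service_type and ns.capacity >= demand:
                    ns.capacity -= demand
                    ns.users.append(app_id)
                    return ns
            # Otherwise instantiate a new service (assumed total capacity of 100 units).
            ns = NetworkService(service_type=service_type, capacity=100 - demand, users=[app_id])
            self.allocated.append(ns)
            return ns


    orchestrator = MecOrchestrator()
    orchestrator.request_service("video_analytics", demand=30, app_id="car_fleet_1")
    orchestrator.request_service("video_analytics", demand=30, app_id="robot_cell_2")  # reuses the first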

    IT and Multi-layer Online Resource Allocation and Offline Planning in Metropolitan Networks

    Metropolitan networks are undergoing a major technological breakthrough leveraging the capabilities of software-defined networking (SDN) and network function virtualization (NFV). NFV permits the deployment of virtualized network functions (VNFs) on commodity hardware appliances, which can be combined with the flexibility and programmability that SDN brings to the network infrastructure. SDN/NFV-enabled networks require decision-making on two time scales: short-term online resource allocation and mid-to-long-term offline planning. In this paper, we first tackle the dimensioning of SDN/NFV-enabled metropolitan networks, paying special attention to the role that latency plays in capacity planning. We focus on a specific use case: the metropolitan network that covers the Murcia and Alicante regions of Spain. Then, we propose a latency-aware multilayer service-chain allocation (LA-ML-SCA) algorithm to explore a range of maximum latency requirements and their impact on the resources needed to dimension the metropolitan network. We observe that design costs increase for low latency requirements, as more data center facilities need to be distributed closer to the network edge, reducing the economies of scale of the IT infrastructure. Subsequently, we review our recent joint computation of multi-site VNF placement and multilayer resource allocation in the deployment of a network service in a metro network. Specifically, a set of subroutines contained in LA-ML-SCA is experimentally validated in a network optimization-as-a-service architecture that assists an Open Source MANO instance, virtual infrastructure managers and WAN controllers in a metro network test-bed. Grant: Go2Edge - Engineering Future Edge Computing Networks, Systems and Services.
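
    As a simplified illustration of latency-aware service-chain allocation (not the paper's LA-ML-SCA algorithm; the latency model, data structures and parameter names are assumptions), the sketch below greedily maps each VNF of a chain onto the lowest-latency data center that still has capacity, while respecting the chain's latency budget.

    from typing import Dict, List, Optional


    def place_chain(chain: List[str],
                    dc_latency_ms: Dict[str, float],
                    dc_free_cpus: Dict[str, int],
                    cpu_per_vnf: int,
                    max_latency_ms: float) -> Optional[Dict[str, str]]:
        """Greedily map each VNF of a service chain onto the lowest-latency
        data center that still has CPU capacity, keeping the accumulated
        latency within the chain's budget. Returns VNF -> DC, or None."""
        placement: Dict[str, str] = {}
        used_latency = 0.0
        for vnf in chain:
            for dc in sorted(dc_latency_ms, key=dc_latency_ms.get):  # closest DCs first
                if (dc_free_cpus[dc] >= cpu_per_vnf
                        and used_latency + dc_latency_ms[dc] <= max_latency_ms):
                    dc_free_cpus[dc] -= cpu_per_vnf
                    placement[vnf] = dc
                    used_latency += dc_latency_ms[dc]
                    break
            else:
                return None  # the chain cannot be served within its latency budget
        return placement


    # Example: a 3-VNF chain with a 5 ms budget over edge, metro and core data centers.
    print(place_chain(chain=["firewall", "dpi", "cache"],
                      dc_latency_ms={"edge": 0.5, "metro": 2.0, "core": 6.0},
                      dc_free_cpus={"edge": 4, "metro": 16, "core": 64},
                      cpu_per_vnf=2,
                      max_latency_ms=5.0))

    Tightening max_latency_ms forces more VNFs onto the scarcer edge facilities, which mirrors the dimensioning trade-off discussed in the abstract.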

    Impact of Processing-Resource Sharing on the Placement of Chained Virtual Network Functions

    Network Function Virtualization (NFV) provides higher flexibility for network operators and reduces the complexity of network service deployment. Using NFV, Virtual Network Functions (VNFs) can be located in various network nodes and chained together in a Service Function Chain (SFC) to provide a specific service. Consolidating multiple VNFs in a smaller number of locations would reduce capital expenditures. However, excessive consolidation of VNFs might cause additional latency penalties due to processing-resource sharing, which is undesirable, as SFCs are bound by service-specific latency requirements. In this paper, we identify two different types of penalties (referred to as "costs") related to processing-resource sharing among multiple VNFs: context switching costs and upscaling costs. Context switching costs arise when multiple CPU processes (e.g., supporting different VNFs) share the same CPU, so their contexts must repeatedly be saved and reloaded. Upscaling costs are incurred by VNFs requiring multi-core implementations, since they suffer a penalty due to the load-balancing needs among CPU cores. These costs affect how the chained VNFs are placed in the network to meet the performance requirements of the SFCs. We evaluate their impact while considering SFCs with different bandwidth and latency requirements in a scenario of VNF consolidation. Comment: Accepted for publication in IEEE Transactions on Cloud Computing.
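
    A minimal sketch of how such sharing penalties could enter a placement model (the functional form and coefficients are illustrative assumptions, not the cost model from the paper): per-VNF processing latency grows with the number of co-located VNF processes on the same CPU and with the number of cores a VNF is upscaled to.

    def processing_latency_ms(base_latency_ms: float,
                              colocated_vnfs: int,
                              cores_used: int,
                              ctx_switch_penalty_ms: float = 0.05,
                              upscaling_penalty_ms: float = 0.2) -> float:
        """Illustrative per-VNF processing latency under resource sharing.

        - Context-switching cost grows with the number of VNF processes
          sharing the same CPU (each extra process adds a fixed penalty).
        - Upscaling cost is paid once per extra core, reflecting the
          load-balancing overhead of a multi-core VNF implementation.
        """
        context_switching = ctx_switch_penalty_ms * max(colocated_vnfs - 1, 0)
        upscaling = upscaling_penalty_ms * max(cores_used - 1, 0)
        return base_latency_ms + context_switching + upscaling


    # Example: consolidating 4 VNFs on one CPU while one of them spans 2 cores.
    print(processing_latency_ms(base_latency_ms=1.0, colocated_vnfs=4, cores_used=2))

    A placement algorithm would sum such per-VNF latencies along each SFC and reject consolidations that push the chain past its latency bound.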

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks, which connect all involved actors over a large geographical area, are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming (ILP) formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
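
    For concreteness, a toy advance-reservation ILP is sketched below with PuLP; it is far simpler than the formulation in the paper (a single shared link, hourly slots, made-up requests and capacity) and is only meant to show how start-time decision variables and per-slot capacity constraints fit together.

    import pulp

    # Toy data (assumptions): 3 transfer requests on a single shared link,
    # a planning horizon of 6 one-hour slots, and 10 Gb/s of link capacity.
    requests = {            # name: (bandwidth Gb/s, duration in slots, allowed start slots)
        "raw_video_A": (6, 2, [0, 1, 2]),
        "raw_video_B": (6, 2, [0, 1, 2, 3]),
        "audio_stems": (3, 1, [0, 1, 2, 3, 4]),
    }
    horizon = range(6)
    capacity = 10

    prob = pulp.LpProblem("advance_reservation", pulp.LpMaximize)

    # x[r, t] = 1 if request r starts its transfer in slot t.
    x = {(r, t): pulp.LpVariable(f"x_{r}_{t}", cat="Binary")
         for r, (_, _, starts) in requests.items() for t in starts}

    # Each request is scheduled at most once; objective: admit as many as possible.
    for r, (_, _, starts) in requests.items():
        prob += pulp.lpSum(x[r, t] for t in starts) <= 1
    prob += pulp.lpSum(x.values())

    # The link capacity must hold in every slot of the horizon.
    for s in horizon:
        prob += pulp.lpSum(bw * x[r, t]
                           for r, (bw, dur, starts) in requests.items()
                           for t in starts if t <= s < t + dur) <= capacity

    prob.solve()
    for (r, t), var in x.items():
        if var.value() == 1:
            print(f"{r} starts in slot {t}")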

    An Energy-driven Network Function Virtualization for Multi-domain Software Defined Networks

    Network Functions Virtualization (NFV) in Software Defined Networks (SDN) has emerged as a new technology for creating virtual instances for the smooth execution of multiple applications. Their amalgamation provides flexible and programmable platforms to utilize network resources for providing Quality of Service (QoS) to various applications. In SDN-enabled NFV setups, the underlying network services can be viewed as a series of virtual network functions (VNFs), and their optimal deployment on physical/virtual nodes is a challenging task. However, SDNs have evolved from single-domain to multi-domain setups in recent years, so the complexity of the underlying VNF deployment problem has increased manifold. Moreover, the energy utilization aspect is relatively unexplored with respect to an optimal mapping of VNFs across multiple SDN domains. Hence, in this work, the VNF deployment problem in multi-domain SDN setups is addressed with a primary emphasis on reducing the overall energy consumption while deploying the maximum number of VNFs with guaranteed QoS. The problem at hand is initially formulated as a multi-objective optimization problem based on Integer Linear Programming (ILP) to obtain an optimal solution. However, the formulated ILP becomes harder to solve as the number of decision variables and constraints grows with the size of the network. Thus, we leverage popular evolutionary optimization algorithms to solve the problem under consideration. To determine the most appropriate evolutionary optimization algorithm for the considered problem, it is solved with different variants of evolutionary algorithms using the widely used MOEA framework (an open-source Java framework for multi-objective evolutionary algorithms). Comment: Accepted for publication in IEEE INFOCOM 2019 Workshop on Intelligent Cloud Computing and Networking (ICCN 2019).
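
    The sketch below is a greedy, energy-aware baseline for this kind of placement (an illustration only; it is neither the paper's ILP nor one of the MOEA-framework algorithms, and the node/VNF attributes are assumptions): each VNF is placed on the feasible node with the lowest incremental energy cost, preferring nodes that are already powered on over booting a node in another domain.

    from typing import Dict, List


    def energy_aware_deploy(vnfs: List[Dict[str, float]],
                            nodes: Dict[str, Dict[str, float]]) -> Dict[int, str]:
        """Greedy baseline: place each VNF on the feasible node with the lowest
        incremental energy cost, preferring nodes whose idle power is already
        being paid (i.e., nodes that are already switched on)."""
        placement: Dict[int, str] = {}
        powered_on = set()
        for i, vnf in enumerate(vnfs):
            best_node, best_cost = None, float("inf")
            for name, node in nodes.items():
                # Skip nodes that violate the VNF's capacity or QoS (latency) requirement.
                if node["free_cpu"] < vnf["cpu"] or node["latency_ms"] > vnf["max_latency_ms"]:
                    continue
                # Incremental energy: dynamic power always, idle power only if the node is off.
                cost = vnf["cpu"] * node["watt_per_cpu"]
                if name not in powered_on:
                    cost += node["idle_watt"]
                if cost < best_cost:
                    best_node, best_cost = name, cost
            if best_node is None:
                continue  # this VNF cannot be deployed with guaranteed QoS
            nodes[best_node]["free_cpu"] -= vnf["cpu"]
            powered_on.add(best_node)
            placement[i] = best_node
        return placement


    # Example: two VNFs over one node in each of two SDN domains.
    print(energy_aware_deploy(
        vnfs=[{"cpu": 2, "max_latency_ms": 10}, {"cpu": 4, "max_latency_ms": 10}],
        nodes={"domainA_node1": {"free_cpu": 8, "latency_ms": 5, "watt_per_cpu": 12, "idle_watt": 80},
               "domainB_node1": {"free_cpu": 8, "latency_ms": 4, "watt_per_cpu": 10, "idle_watt": 120}}))

    An evolutionary approach, as used in the paper, would instead search over many candidate mappings and trade off total energy against the number of VNFs deployed with guaranteed QoS.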