
    System architecture and deployment scenarios for SESAME: small cEllS coordinAtion for Multi-tenancy and Edge services

    The surge of Internet traffic, with exabytes of data flowing over operators’ mobile networks, has created the need to rethink the paradigms behind the design of the mobile network architecture. The inadequacy of 4G UMTS Long Term Evolution (LTE), and even of its advanced version LTE-A, is evident considering that traffic in the near future will be extremely heterogeneous, ranging from 4K-resolution TV to machine-type communications. To keep up with these changes, academia, industry and EU institutions have now engaged in the quest for new 5G technology. In this paper we present the innovative system design, concepts and visions developed by the 5G PPP H2020 project SESAME (Small cEllS coordinAtion for Multi-tenancy and Edge services). The innovation of SESAME is manifold: i) it combines key 5G small cells with cloud technology, ii) it promotes and develops the concept of Small Cells-as-a-Service (SCaaS), iii) it brings computing and storage power to the mobile network edge through the development of ARM-based (non-x86) micro-servers, and iv) it addresses a large number of scenarios and use cases applying mobile edge computing.

    GNFC: Towards Network Function Cloudification

    There is increasing demand from enterprises to host and dynamically manage middlebox services in public clouds, in order to leverage the same benefits that network functions provide in traditional, in-house deployments. However, today's public clouds provide tenants with only a limited view and limited programmability, which hinders the flexible deployment of transparent, software-defined network functions. Moreover, current virtual network functions cannot take full advantage of a virtualized cloud environment, which limits scalability and fault tolerance. In this paper we review and evaluate the infrastructural limitations currently imposed by public cloud providers and present the design and implementation of GNFC, a cloud-based Network Function Virtualization (NFV) framework that gives tenants the ability to transparently attach stateless, container-based network functions to their services hosted in public clouds. We evaluate the proposed system on three public cloud providers (Amazon EC2, Microsoft Azure and Google Compute Engine) and show the effects on end-to-end latency and throughput for various instance types used as NFV hosts.
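
    As a rough illustration of the kind of stateless, container-deployable network function described above, the sketch below implements a transparent TCP forwarder that could sit in front of a tenant service. It is only a minimal sketch under assumed conventions: GNFC's actual interfaces, chaining mechanism and configuration are not given in the abstract, so every name here (UPSTREAM_HOST, LISTEN_PORT, and so on) is hypothetical.

        # Hypothetical illustration only: a stateless, container-friendly network
        # function (a transparent TCP forwarder) of the kind that could be attached
        # in front of a tenant service. It is not GNFC's actual implementation.
        import asyncio
        import os

        # Configuration via environment variables (names are assumptions)
        UPSTREAM_HOST = os.environ.get("UPSTREAM_HOST", "10.0.0.10")
        UPSTREAM_PORT = int(os.environ.get("UPSTREAM_PORT", "8080"))
        LISTEN_PORT = int(os.environ.get("LISTEN_PORT", "8080"))

        async def pipe(reader, writer):
            # Copy bytes in one direction until EOF; no per-flow state survives the
            # connection, so instances can be scaled out or replaced freely.
            try:
                while data := await reader.read(65536):
                    writer.write(data)
                    await writer.drain()
            finally:
                writer.close()

        async def handle(client_reader, client_writer):
            up_reader, up_writer = await asyncio.open_connection(UPSTREAM_HOST, UPSTREAM_PORT)
            await asyncio.gather(pipe(client_reader, up_writer),
                                 pipe(up_reader, client_writer))

        async def main():
            server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
            async with server:
                await server.serve_forever()

        if __name__ == "__main__":
            asyncio.run(main())

    Configuring such a function entirely through environment variables is one common way to keep container instances stateless and interchangeable, which matches the scalability goal described in the abstract.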

    On the Fly Orchestration of Unikernels: Tuning and Performance Evaluation of Virtual Infrastructure Managers

    Network operators are facing significant challenges in meeting the demand for more bandwidth, agile infrastructures and innovative services, while keeping costs low. Network Functions Virtualization (NFV) and Cloud Computing are emerging as key trends of 5G network architectures, providing flexibility, fast instantiation times, support for Commercial Off-The-Shelf hardware and significant cost savings. NFV leverages Cloud Computing principles to move data-plane network functions from expensive, closed and proprietary hardware to so-called Virtual Network Functions (VNFs). In this paper we deal with the management of virtual computing resources (Unikernels) for the execution of VNFs. This functionality is performed by the Virtual Infrastructure Manager (VIM) in the NFV MANagement and Orchestration (MANO) reference architecture. We discuss the instantiation process of virtual resources and propose a generic reference model, starting from the analysis of three open source VIMs, namely OpenStack, Nomad and OpenVIM. We extend the aforementioned VIMs, introducing support for special-purpose Unikernels and aiming to reduce the duration of the instantiation process. We evaluate several performance aspects of the VIMs, considering both stock and tuned versions. The VIM extensions and performance evaluation tools are available under a liberal open source licence.
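
    To make the idea of modelling and measuring the instantiation process concrete, the sketch below times the phases of a single instantiation against an abstract VIM driver. The phase names (schedule, prepare, boot) and the driver interface are assumptions chosen for illustration; they are not the paper's reference model and do not reflect the real OpenStack, Nomad or OpenVIM APIs.

        # Illustrative only: timing the phases of one instantiation against an
        # abstract VIM driver. Phase names and the Vim interface are assumptions,
        # not the paper's reference model or any real VIM API.
        import time
        from abc import ABC, abstractmethod

        class Vim(ABC):
            """Stands in for whichever VIM driver (e.g. OpenStack, Nomad, OpenVIM) is under test."""
            @abstractmethod
            def schedule(self, image: str) -> str: ...   # returns an instance id
            @abstractmethod
            def prepare(self, instance_id: str) -> None: ...
            @abstractmethod
            def boot(self, instance_id: str) -> None: ...

        def timed_instantiation(vim: Vim, image: str) -> dict:
            """Run one instantiation and return the duration of each phase in seconds."""
            timings = {}
            t0 = time.perf_counter()
            instance_id = vim.schedule(image)      # placement decision
            timings["schedule"] = time.perf_counter() - t0

            t1 = time.perf_counter()
            vim.prepare(instance_id)               # image and network preparation
            timings["prepare"] = time.perf_counter() - t1

            t2 = time.perf_counter()
            vim.boot(instance_id)                  # unikernel boot
            timings["boot"] = time.perf_counter() - t2

            timings["total"] = time.perf_counter() - t0
            return timings

    Breaking the total instantiation time into per-phase durations is what makes it possible to see where a tuned VIM actually saves time compared with its stock version.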

    ClouNS - A Cloud-native Application Reference Model for Enterprise Architects

    The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations, which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well-standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research and development processes for cloud-native applications and for vendor-lock-in-aware enterprise architecture engineering methodologies.
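
    The sketch below shows, in toy form, the kind of classification step such a reference model could guide: tagging the services an application depends on by whether they map to widely standardized IaaS primitives or to provider-specific offerings. The categories, flags and example entries are illustrative assumptions, not the paper's actual model or findings.

        # Toy illustration: the categories, flags and example entries below are
        # assumptions for demonstration, not the paper's actual classification.
        from dataclasses import dataclass

        @dataclass
        class ServiceDependency:
            name: str           # e.g. "object storage", "managed message queue"
            standardized: bool  # True if a portable, IaaS-level substitute exists

        def lock_in_report(deps: list[ServiceDependency]) -> list[str]:
            """Flag dependencies that would tie the architecture to one provider."""
            return [d.name for d in deps if not d.standardized]

        # Hypothetical usage
        deps = [
            ServiceDependency("virtual machines (IaaS compute)", True),
            ServiceDependency("provider-specific serverless runtime", False),
        ]
        print(lock_in_report(deps))  # -> ['provider-specific serverless runtime']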

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology that is fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: resources as services via the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of using Evolutionary Computation (EC) algorithms for this purpose is emerging rapidly. Through an analysis of the cloud computing architecture, this survey first presents a taxonomy of cloud resource scheduling at two levels. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are identified, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
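
    As a small, concrete instance of the EC-based scheduling the survey covers, the sketch below uses a basic genetic algorithm to assign independent tasks to heterogeneous VMs so as to minimize makespan. The task lengths, VM speeds, operators and parameters are illustrative assumptions and are not taken from the survey.

        # Illustrative only: task lengths, VM speeds, operators and parameters are
        # assumptions, not data or methods from the survey.
        import random

        TASK_LEN = [8, 3, 5, 9, 2, 7, 4, 6]   # task lengths (arbitrary units)
        VM_SPEED = [1.0, 2.0, 1.5]            # relative VM processing speeds
        POP, GENS, MUT = 30, 200, 0.1         # population size, generations, mutation rate

        def makespan(assign):
            # Completion time of the busiest VM under a task -> VM assignment.
            load = [0.0] * len(VM_SPEED)
            for task, vm in enumerate(assign):
                load[vm] += TASK_LEN[task] / VM_SPEED[vm]
            return max(load)

        def crossover(a, b):
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def mutate(assign):
            return [random.randrange(len(VM_SPEED)) if random.random() < MUT else vm
                    for vm in assign]

        def evolve():
            pop = [[random.randrange(len(VM_SPEED)) for _ in TASK_LEN] for _ in range(POP)]
            for _ in range(GENS):
                pop.sort(key=makespan)                  # keep the fittest half (elitism)
                parents = pop[:POP // 2]
                children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(POP - len(parents))]
                pop = parents + children
            return min(pop, key=makespan)

        best = evolve()
        print("best assignment:", best, "makespan:", round(makespan(best), 2))

    Encoding a schedule as a simple task-to-VM vector and evaluating it with a makespan fitness function is the typical starting point; the survey's more advanced directions (multiobjective, dynamic, large-scale scheduling) replace the fitness function and operators rather than this basic loop.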