Sl-EDGE: Network Slicing at the Edge
Network slicing of multi-access edge computing (MEC) resources is expected to
be a pivotal technology to the success of 5G networks and beyond. The key
challenge that sets MEC slicing apart from traditional resource allocation
problems is that edge nodes depend on tightly-intertwined and
strictly-constrained networking, computation and storage resources. Therefore,
instantiating MEC slices without incurring resource over-provisioning is
hardly addressable with existing slicing algorithms. The main innovation of
this paper is Sl-EDGE, a unified MEC slicing framework that allows network
operators to instantiate heterogeneous slice services (e.g., video streaming,
caching, 5G network access) on edge devices. We first describe the architecture
and operations of Sl-EDGE, and then show that the problem of optimally
instantiating joint network-MEC slices is NP-hard. Thus, we propose
near-optimal algorithms that leverage key similarities among edge nodes and
resource virtualization to instantiate heterogeneous slices 7.5x faster and
within 0.25 of the optimum. We first assess the performance of our algorithms
through extensive numerical analysis, and show that Sl-EDGE instantiates slices
6x more efficiently than state-of-the-art MEC slicing algorithms. Furthermore,
experimental results on a 24-radio testbed with 9 smartphones demonstrate that
Sl-EDGE provides at once highly-efficient slicing of joint LTE connectivity,
video streaming over WiFi, and ffmpeg video transcoding.
Software-Defined Cloud Computing: Architectural Elements and Open Challenges
The variety of existing cloud services creates a challenge for service
providers to enforce reasonable Service Level Agreements (SLAs) stating the
Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid
such penalties while keeping the infrastructure operating with minimal
energy and resource wastage, constant monitoring and adaptation of the
infrastructure are needed. We refer to Software-Defined Cloud Computing, or
simply Software-Defined Clouds (SDC), as an approach for automating the process
of optimal cloud configuration by extending the virtualization concept to all
resources in a data center. An SDC enables easy reconfiguration and adaptation
of physical resources in a cloud infrastructure, to better accommodate the
demand on QoS through software that can describe and manage various aspects
comprising the cloud environment. In this paper, we present an architecture for
SDCs in data centers with emphasis on mobile cloud applications. We present an
evaluation showcasing the potential of SDC in two use cases (QoS-aware
bandwidth allocation and bandwidth-aware, energy-efficient VM placement) and
discuss the research challenges and opportunities in this emerging area.
Comment: Keynote Paper, 3rd International Conference on Advances in Computing, Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi, India
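The QoS-aware bandwidth allocation use case from the abstract above can be illustrated with a minimal sketch (the tenant names and weights are hypothetical, not taken from the paper): a controller splits link capacity proportionally to per-tenant QoS weights.

```python
def allocate_bandwidth(link_mbps, tenants):
    """Split link capacity proportionally to each tenant's QoS weight.

    tenants: dict mapping tenant name -> QoS weight (higher = more bandwidth).
    Returns a dict mapping tenant name -> allocated Mbps.
    """
    total_weight = sum(tenants.values())
    return {name: link_mbps * w / total_weight for name, w in tenants.items()}

# Hypothetical example: a 1000 Mbps link shared by three service classes.
shares = allocate_bandwidth(1000, {"gold": 5, "silver": 3, "bronze": 2})
print(shares)  # {'gold': 500.0, 'silver': 300.0, 'bronze': 200.0}
```

A real SDC controller would recompute these shares continuously as the monitored QoS drifts, but the proportional-share core stays the same.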
OSMOSIS: Enabling Multi-Tenancy in Datacenter SmartNICs
Multi-tenancy is essential for unleashing SmartNIC's potential in
datacenters. Our systematic analysis in this work shows that existing on-path
SmartNICs have resource multiplexing limitations. For example, existing
solutions lack multi-tenancy capabilities such as performance isolation and QoS
provisioning for compute and IO resources. Compared to standard NIC data paths
with a well-defined set of offloaded functions, unpredictable execution times
of SmartNIC kernels make conventional approaches for multi-tenancy and QoS
insufficient. We fill this gap with OSMOSIS, a co-designed SmartNIC resource
manager.
co-design. OSMOSIS extends existing OS mechanisms to enable dynamic hardware
resource multiplexing on top of the on-path packet processing data plane. We
implement OSMOSIS within an open-source RISC-V-based 400Gbit/s SmartNIC. Our
performance results demonstrate that OSMOSIS fully supports multi-tenancy and
enables broader adoption of SmartNICs in datacenters with low overhead.
Comment: 12 pages, 14 figures, 103 references
Performance Analysis of VXLAN and NVGRE Tunneling Protocol on Virtual Network
Virtualization is a revolutionary approach in the networking industry: it makes it possible to run several virtual machines (VMs) on one physical host. In practice, one VM may be connected to others, but not every VM in an environment should be reachable, due to privacy and security concerns. One solution that addresses this issue is a tunneling protocol. A tunneling protocol is a layer-2-in-layer-3 protocol that can isolate tenant traffic in a virtualized environment. This research examines the performance of the VXLAN and NVGRE tunneling protocols in a virtualized environment and aims to determine throughput, delay, jitter, and vCPU usage for packet sizes in the range of 128-1514 bytes. From the results, it can be concluded that both tunneling protocols can isolate traffic between tenants. In terms of performance, NVGRE achieved the higher throughput of 771.02 Mbps, while VXLAN reached 753.62 Mbps. For delay, NVGRE measured 2.24 ms and VXLAN 2.29 ms. For jitter, NVGRE measured 0.361 ms and VXLAN 0.348 ms, and NVGRE's vCPU usage was the higher of the two at 60.57%. Overall, NVGRE performed better than VXLAN.
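One reason NVGRE can edge out VXLAN in throughput is per-packet header overhead: VXLAN adds a UDP header that NVGRE's GRE encapsulation does not carry. A rough back-of-the-envelope sketch (IPv4 outer headers and no VLAN tags assumed; these figures are not from the paper):

```python
# Per-packet encapsulation overhead in bytes (IPv4 outer headers, no VLAN tags).
VXLAN_OVERHEAD = 14 + 20 + 8 + 8   # outer Ethernet + IP + UDP + VXLAN = 50
NVGRE_OVERHEAD = 14 + 20 + 8       # outer Ethernet + IP + GRE        = 42

def goodput_fraction(payload_bytes, overhead_bytes):
    """Fraction of wire bytes that carry the inner (tenant) frame."""
    return payload_bytes / (payload_bytes + overhead_bytes)

# Endpoints of the packet-size range used in the measurements.
for size in (128, 1514):
    print(f"{size:>4} B  VXLAN {goodput_fraction(size, VXLAN_OVERHEAD):.3f}  "
          f"NVGRE {goodput_fraction(size, NVGRE_OVERHEAD):.3f}")
```

At every packet size NVGRE's goodput fraction is slightly higher, and the gap widens for small packets, which is consistent with the throughput ordering reported above.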
Resource management in a containerized cloud : status and challenges
Cloud computing heavily relies on virtualization, as with cloud computing virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is also no longer limited to the centrally hosted data center infrastructure. New deployment models have gained maturity, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in literature. We investigate how research is adapting to the recent evolutions within the cloud, being the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.
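The allocation strategies the survey discusses for VMs carry over naturally to containers; a classic baseline is first-fit placement. A minimal sketch (host names and resource figures are hypothetical):

```python
def first_fit(containers, hosts):
    """Place each container on the first host with enough free CPU and memory.

    containers: list of (name, cpu, mem) requests.
    hosts: dict mapping host name -> [free_cpu, free_mem], mutated in place.
    Returns a dict mapping container name -> host name (None if nothing fits).
    """
    placement = {}
    for name, cpu, mem in containers:
        placement[name] = None
        for host, free in hosts.items():
            if free[0] >= cpu and free[1] >= mem:
                free[0] -= cpu   # reserve CPU cores on the chosen host
                free[1] -= mem   # reserve memory (GiB) on the chosen host
                placement[name] = host
                break
    return placement

# Hypothetical cluster: two hosts (CPU cores, GiB RAM), three containers.
hosts = {"edge-1": [4, 8], "edge-2": [2, 4]}
plan = first_fit([("web", 2, 4), ("db", 2, 4), ("cache", 2, 2)], hosts)
print(plan)  # {'web': 'edge-1', 'db': 'edge-1', 'cache': 'edge-2'}
```

The edge and fog scenarios the survey raises mostly change the inputs (small, heterogeneous hosts) rather than this core bin-packing loop, which is one reason VM-era strategies remain a useful starting point for containers.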
Do we all really know what a fog node is? Current trends towards an open definition
Fog computing has emerged as a promising technology that can bring cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, how data centers can build the cloud infrastructure and how applications can make use of this infrastructure, there is no common picture of what fog computing, and particularly a fog node as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a “mini-cloud” located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety of, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, a specific use case or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts and the lessons learned from their implementation, and end up showing how a conceptual framework is emerging towards a unifying fog node definition. We focus on core functionalities of a fog node as well as on the accompanying opportunities and challenges towards their practical realization in the near future.
5G Multi-access Edge Computing: Security, Dependability, and Performance
The main innovation of the Fifth Generation (5G) of mobile networks is the
ability to provide novel services with new and stricter requirements. One of
the technologies that enable the new 5G services is the Multi-access Edge
Computing (MEC). MEC is a system composed of multiple devices with computing
and storage capabilities that are deployed at the edge of the network, i.e.,
close to the end users. MEC reduces latency and enables contextual information
and real-time awareness of the local environment. MEC also allows cloud
offloading and the reduction of traffic congestion. Performance is not the only
requirement that the new 5G services have. New mission-critical applications
also require high security and dependability. These three aspects (security,
dependability, and performance) are rarely addressed together. This survey
fills this gap and presents 5G MEC by addressing all these three aspects.
First, we overview the background knowledge on MEC by referring to the current
standardization efforts. Second, we individually present each aspect by
introducing the related taxonomy (important for readers who are not experts on
the aspect), the state of the art, and the challenges on 5G MEC. Finally, we
discuss the
challenges of jointly addressing the three aspects.
Comment: 33 pages, 11 figures, 15 tables. This paper is under review at IEEE Communications Surveys & Tutorials. Copyright IEEE 202