Practical service placement approach for microservices architecture
Community networks (CNs) have gained momentum in the last few years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. To reduce the complexity of service deployment, community micro-clouds have recently emerged as a promising enabler for the delivery of cloud services to community users. By putting services closer to consumers, micro-clouds pursue not only better service performance but also a low entry barrier for the deployment of mainstream Internet services within the CN. Unfortunately, the provisioning of these services is not so simple. Due to the large and irregular topology and the high software and hardware diversity of CNs, it requires …

Peer Reviewed. Postprint (author's final draft).
An Algorithm for Network and Data-aware Placement of Multi-Tier Applications in Cloud Data Centers
Today's Cloud applications are dominated by composite applications comprising multiple computing and data components with strong communication correlations among them. Although Cloud providers are deploying a large number of computing and storage devices to address the ever-increasing demand for computing and storage resources, network resource demands are emerging as a key performance bottleneck. This paper addresses network-aware placement of the virtual components (computing and data) of multi-tier applications in data centers and formally defines the placement as an optimization problem. The simultaneous placement of Virtual Machines and data blocks aims at reducing the network overhead of the data center network infrastructure. A greedy heuristic is proposed for on-demand application component placement that localizes network traffic in the data center interconnect. Such optimization helps reduce communication overhead in upper-layer network switches, which eventually reduces the overall traffic volume across the data center. This, in turn, helps reduce packet transmission delay, increase network performance, and minimize the energy consumption of network components. Experimental results demonstrate the performance superiority of the proposed algorithm, which outperforms the state-of-the-art network-aware application placement algorithm across all performance metrics, reducing the average network cost by up to 67% and the network usage at core switches by up to 84%, and increasing the average number of application deployments by up to 18%.

Comment: Submitted for publication consideration to the Journal of Network and Computer Applications (JNCA). Total pages: 28. Number of figures: 15.
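The traffic-localizing greedy idea in the abstract can be illustrated with a small sketch. This is a simplified illustration of the general approach, not the paper's actual algorithm: the pair-ordering rule, the hop-distance topology model, and all names (`greedy_place`, `components`, `traffic`) are assumptions made here for the example.

```python
def greedy_place(components, traffic, hosts, capacity, topology):
    """Greedy placement sketch: handle the most heavily communicating
    component pairs first, so correlated VMs and data blocks land on
    nearby hosts and their traffic stays low in the interconnect.

    components: {name: resource demand}
    traffic:    {(a, b): volume} with a < b (alphabetical key order)
    capacity:   {host: resource capacity}
    topology:   {host: {host: hop distance in the interconnect}}
    """
    placement = {}
    load = {h: 0 for h in hosts}
    for a, b in sorted(traffic, key=traffic.get, reverse=True):
        for c in (a, b):
            if c in placement:
                continue
            # feasible host that adds the least communication cost
            best = min(
                (h for h in hosts if load[h] + components[c] <= capacity[h]),
                key=lambda h: sum(
                    traffic.get((min(c, o), max(c, o)), 0) * topology[h][placement[o]]
                    for o in placement
                ),
            )
            placement[c] = best
            load[best] += components[c]
    # components with no recorded traffic go to the least-loaded feasible host
    for c in components:
        if c not in placement:
            best = min(
                (h for h in hosts if load[h] + components[c] <= capacity[h]),
                key=lambda h: load[h],
            )
            placement[c] = best
            load[best] += components[c]
    return placement
```

Because heavily communicating pairs are co-located first, cross-rack hops (large `topology` distances) are reserved for light flows, which is the traffic-localization effect the abstract describes.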
Cooperative Multi-Bitrate Video Caching and Transcoding in Multicarrier NOMA-Assisted Heterogeneous Virtualized MEC Networks
Cooperative video caching and transcoding in mobile edge computing (MEC) networks is a new paradigm for future wireless networks, e.g., 5G and beyond, to reduce scarce and expensive backhaul resource usage by prefetching video files within radio access networks (RANs). Integrating this technique with other emerging technologies, such as wireless network virtualization and multicarrier non-orthogonal multiple access (MC-NOMA), provides more flexible video delivery opportunities, which enhances both the network's revenue and the end-users' service experience. In this regard, we propose a two-phase resource allocation framework (RAF) for parallel cooperative joint multi-bitrate video caching and transcoding in heterogeneous virtualized MEC networks. In the cache placement phase, we propose novel proactive delivery-aware cache placement strategies (DACPSs) that jointly allocate physical and radio resources based on network stochastic information to exploit flexible delivery opportunities. Then, for the delivery phase, we propose a delivery policy based on the user requests and network channel conditions. The optimization problems corresponding to both phases aim to maximize the total revenue of network slices, i.e., virtual networks. Both problems are non-convex and suffer from high computational complexity. For each phase, we show how the problem can be solved efficiently. We also propose a low-complexity RAF in which the complexity of the delivery algorithm is significantly reduced. A delivery-aware cache refreshment strategy (DACRS) in the delivery phase is also proposed to tackle dynamic changes in network stochastic information. Extensive numerical assessments demonstrate a performance improvement of up to 30% for our proposed DACPSs and DACRS over traditional approaches.

Comment: 53 pages, 24 figures.
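The cache placement phase can be pictured, in a much-reduced form, as choosing which bitrate variants to prefetch into a finite edge cache. The sketch below is a plain knapsack-style greedy under assumed popularity and backhaul-saving inputs; it is not the paper's DACPS, which additionally allocates physical and radio resources and solves a non-convex problem. All names (`greedy_cache_placement`, `popularity`, `saving`) are illustrative.

```python
def greedy_cache_placement(variants, popularity, size, saving, cache_size):
    """Knapsack-style greedy sketch: prefetch the video/bitrate variants
    with the highest expected backhaul saving per cached byte until the
    edge cache is full.

    popularity: {variant: request probability}
    size:       {variant: storage footprint}
    saving:     {variant: backhaul cost avoided per cache hit}
    """
    chosen, used = [], 0
    # rank variants by expected saving density (saving per byte cached)
    ranked = sorted(
        variants,
        key=lambda v: popularity[v] * saving[v] / size[v],
        reverse=True,
    )
    for v in ranked:
        if used + size[v] <= cache_size:
            chosen.append(v)
            used += size[v]
    return chosen
```

A transcoding-aware strategy would also credit a cached high-bitrate variant for requests it can serve at lower bitrates after transcoding, which is part of what makes the joint problem in the paper harder.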
Impact of Processing-Resource Sharing on the Placement of Chained Virtual Network Functions
Network Function Virtualization (NFV) provides higher flexibility for network operators and reduces the complexity of network service deployment. Using NFV, Virtual Network Functions (VNFs) can be located in various network nodes and chained together in a Service Function Chain (SFC) to provide a specific service. Consolidating multiple VNFs in a smaller number of locations would allow decreasing capital expenditures. However, excessive consolidation of VNFs might cause additional latency penalties due to processing-resource sharing, and this is undesirable, as SFCs are bounded by service-specific latency requirements. In this paper, we identify two different types of penalties (referred to as "costs") related to processing-resource sharing among multiple VNFs: context switching costs and upscaling costs. Context switching costs arise when multiple CPU processes (e.g., supporting different VNFs) share the same CPU, so that repeated loading and saving of their contexts is required. Upscaling costs are incurred by VNFs requiring multi-core implementations, since they suffer a penalty due to the load-balancing needs among CPU cores. These costs affect how the chained VNFs are placed in the network to meet the performance requirements of the SFCs. We evaluate their impact while considering SFCs with different bandwidth and latency requirements in a VNF consolidation scenario.

Comment: Accepted for publication in IEEE Transactions on Cloud Computing.
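The two sharing costs can be captured in a toy latency model. This is a sketch under assumptions made here, not the paper's cost model: the linear penalty coefficients `ctx_cost` and `up_cost`, and the function names, are illustrative placeholders.

```python
def vnf_latency(base, colocated, cores, ctx_cost=0.1, up_cost=0.2):
    """Latency of one VNF on shared processing resources (toy model):
    base service time, plus a context-switching penalty growing with the
    number of co-located VNF processes on the same CPU, plus an upscaling
    penalty for VNFs load-balanced across multiple cores.
    ctx_cost and up_cost are assumed linear coefficients, not measured values."""
    ctx = ctx_cost * max(0, colocated - 1)   # context switching cost
    up = up_cost * max(0, cores - 1)         # upscaling cost
    return base + ctx + up

def sfc_latency(chain):
    """End-to-end processing latency of a service function chain, to be
    compared against the SFC's service-specific latency bound.
    chain: list of (base latency, co-located processes, cores used)."""
    return sum(vnf_latency(base, col, cores) for base, col, cores in chain)
```

Even in this toy form, the tension from the abstract is visible: consolidating VNFs raises `colocated` (more context switching) while spreading a heavy VNF over cores raises `cores` (upscaling), and a placement is only feasible if `sfc_latency` stays within the chain's latency bound.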