Towards delay-aware container-based Service Function Chaining in Fog Computing
Recently, the fifth-generation mobile network (5G) has been attracting significant attention. Empowered by Network Function Virtualization (NFV), 5G networks aim to support diverse services from different business verticals (e.g. Smart Cities, Automotive). To fully leverage NFV, services must be connected in a specific order, forming a Service Function Chain (SFC). SFCs allow mobile operators to benefit from the high flexibility and low operational costs introduced by network softwarization. Additionally, Cloud computing is evolving towards a distributed paradigm called Fog Computing, which aims to provide a distributed cloud infrastructure by placing computational resources close to end-users. However, most SFC research focuses only on Multi-access Edge Computing (MEC) use cases, where mobile operators aim to deploy services close to end-users. Bi-directional communication between Edges and the Cloud is not considered in MEC, yet it is highly important in a Fog environment, as in distributed anomaly detection services. Therefore, in this paper, we propose an SFC controller to optimize the placement of service chains in Fog environments, specifically tailored to Smart City use cases. Our approach has been validated on Kubernetes, an open-source orchestrator for the automatic deployment of micro-services. Our SFC controller has been implemented as an extension to the scheduling features available in Kubernetes, enabling the efficient provisioning of container-based SFCs while optimizing resource allocation and reducing end-to-end (E2E) latency. Results show that the proposed approach can lower network latency by up to 18% for the studied use case while conserving bandwidth, compared to the default scheduling mechanism.
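The placement idea behind such a delay-aware SFC controller can be illustrated with a minimal greedy heuristic: place each service of the chain on the node with the lowest link latency to the previously placed service, subject to resource fit. This is a sketch under assumed inputs (node CPU capacities and a pairwise latency map); the function and node names are invented and do not reflect the paper's actual algorithm.

```python
def place_chain(chain, free_cpu, latency, source):
    """Greedy, latency-aware SFC placement sketch.

    chain:    list of (service_name, cpu_request) in chain order
    free_cpu: dict node -> available CPU units (mutated as services are placed)
    latency:  dict (node_a, node_b) -> link latency in ms
    source:   node where the chain's traffic originates (e.g. a sensor gateway)
    """
    placement = {}
    prev = source
    for svc, cpu_req in chain:
        # Only nodes that can still fit the service's CPU request qualify.
        candidates = [n for n in free_cpu if free_cpu[n] >= cpu_req]
        if not candidates:
            raise RuntimeError(f"no node can host {svc}")
        # Pick the candidate closest (in latency) to the previous hop.
        best = min(candidates, key=lambda n: latency[(prev, n)])
        free_cpu[best] -= cpu_req
        placement[svc] = best
        prev = best
    return placement
```

With fog nodes 5 ms from the traffic source and the cloud 40 ms away, the heuristic keeps a two-service chain on a nearby fog node as long as CPU permits, mirroring the paper's goal of reducing E2E latency versus a placement that ignores link delay.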
A study on performance measures for auto-scaling CPU-intensive containerized applications
Autoscaling of containers can leverage performance measures from different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure to trigger auto-scaling actions that guarantee QoS constraints. First, the correlation between absolute and relative usage measures, and how resource allocation decisions can be influenced by them, is analyzed in different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share each container has of the resources used. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
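The divergence between absolute and relative measures can be made concrete with the standard horizontal-autoscaling rule that Kubernetes' HPA documents (desired = ceil(current × observed / target)). The scenario values below are illustrative, not taken from the paper's experiments.

```python
import math

def desired_replicas(current_replicas, observed_util, target_util):
    """Kubernetes-style horizontal autoscaling rule:
    desired = ceil(current * observed / target)."""
    return math.ceil(current_replicas * observed_util / target_util)

# Illustrative scenario: 2 replicas each burning 0.9 of a core on a
# saturated host. The *absolute* per-container utilisation is 0.9,
# but the *relative* share (fraction of CPU used that each container
# gets) is only 0.5 -- so the same rule yields different decisions:
scale_on_absolute = desired_replicas(2, 0.9, 0.6)  # observed = absolute usage
scale_on_relative = desired_replicas(2, 0.5, 0.6)  # observed = relative share
```

Against a 60% target, the absolute measure asks for a third replica while the relative measure leaves the deployment unchanged, which is exactly the kind of disagreement the paper's correlation analysis examines.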
funcX: A Federated Function Serving Fabric for Science
Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., the arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers.
Accepted to the ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020).
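The federated model described above — a central service that registers functions and routes invocations to endpoint software running on diverse resources — can be sketched as a toy in a few dozen lines. All names here (`Broker`, `Endpoint`, `submit`, etc.) are invented for illustration; this is not the funcX SDK or its API.

```python
import queue

class Endpoint:
    """Stand-in for endpoint software installed on a cluster or cloud:
    it pulls queued tasks and executes them locally."""
    def __init__(self, name):
        self.name = name
        self.tasks = queue.Queue()

    def run_pending(self):
        results = {}
        while not self.tasks.empty():
            task_id, fn, args = self.tasks.get()
            results[task_id] = fn(*args)
        return results

class Broker:
    """Stand-in for the cloud-hosted service: registers functions and
    endpoints, and routes task submissions to a chosen endpoint."""
    def __init__(self):
        self.functions = {}
        self.endpoints = {}
        self._next_task = 0

    def register_function(self, fn):
        fid = f"fn-{len(self.functions)}"
        self.functions[fid] = fn
        return fid

    def register_endpoint(self, endpoint):
        self.endpoints[endpoint.name] = endpoint

    def submit(self, fid, endpoint_name, *args):
        task_id = f"task-{self._next_task}"
        self._next_task += 1
        self.endpoints[endpoint_name].tasks.put(
            (task_id, self.functions[fid], args))
        return task_id
```

The real system adds the hard parts this toy omits: authentication, serialization of functions across sites, fault tolerance, and the batching/pipelining optimizations behind the reported million-functions-per-second throughput.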
Orchestration in the Cloud-to-Things Compute Continuum: Taxonomy, Survey and Future Directions
IoT systems are becoming an essential part of our environment. Smart cities, smart manufacturing, augmented reality, and self-driving cars are just some examples of the wide range of domains where the applicability of such systems has been increasing rapidly. These IoT use cases often require simultaneous access to geographically distributed arrays of sensors and to heterogeneous remote, local, and multi-cloud computational resources. This gives birth to the extended Cloud-to-Things computing paradigm. The emergence of this new paradigm has raised the quintessential need to extend orchestration requirements (i.e., the automated deployment and run-time management of applications) from the centralised cloud-only environment to the entire spectrum of resources in the Cloud-to-Things continuum. To cope with this requirement, the development of orchestration systems has attracted considerable attention in both industry and academia over the last few years. This paper gathers the research conducted in the orchestration landscape for the Cloud-to-Things continuum and proposes a detailed taxonomy, which is then used to critically review existing research work. We finally discuss the key challenges that require further attention and present a conceptual framework based on the conducted analysis.
Published in the Journal of Cloud Computing.
A Cloud Native Solution for Dynamic Auto Scaling of MME in LTE
Due to rapid growth in the use of mobile devices, and with mobile networks acting as a vital carrier of IoT traffic, the infrastructure needs to undergo wide-ranging revisions to meet explosive traffic demand. In addition to data traffic, there has been a significant rise in control signaling overhead due to the dense deployment of small cells and IoT devices. The adoption of technologies like cloud computing, Software Defined Networking (SDN), and Network Functions Virtualization (NFV) has been impressively successful in mitigating the existing challenges and driving the path towards 5G evolution. However, issues pertaining to scalability, ease of use, service resiliency, and high availability need considerable study for a successful roll-out of production-grade 5G solutions in the cloud. In this work, we propose a scalable Cloud Native Solution for the Mobility Management Entity (CNS-MME) of the mobile core in a production data center, based on a microservice architecture. The microservices are lightweight MME functionalities, in contrast to the monolithic MME in Long Term Evolution (LTE). The proposed architecture is highly available and supports auto-scaling to dynamically scale up and scale down the required microservices for load balancing. The performance of the proposed CNS-MME architecture is evaluated against a monolithic MME in terms of scalability, auto-scaling of the service, resource utilization of the MME, and efficient load balancing. We observed that, compared to the monolithic MME architecture, CNS-MME provides 7% higher MME throughput and reduces processing resource consumption by 26%.
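The dynamic scale-up/scale-down behaviour described above can be sketched as a simple threshold autoscaler for a stateless microservice: add a replica when average utilisation crosses an upper bound, remove one when it falls below a lower bound. The thresholds and capacity numbers are illustrative assumptions, not the paper's tuned parameters.

```python
def autoscale(replicas, capacity_per_replica, offered_load,
              scale_out_util=0.8, scale_in_util=0.3, min_replicas=1):
    """Threshold-based autoscaler sketch for an MME-like microservice.

    replicas:             current number of replicas
    capacity_per_replica: signaling messages/s one replica can handle
    offered_load:         current signaling messages/s across all replicas
    Returns the new replica count.
    """
    util = offered_load / (replicas * capacity_per_replica)
    if util > scale_out_util:
        return replicas + 1          # scale out: replicas are overloaded
    if util < scale_in_util and replicas > min_replicas:
        return replicas - 1          # scale in: capacity is mostly idle
    return replicas                  # within the hysteresis band: hold
```

The gap between the scale-out and scale-in thresholds provides hysteresis, preventing replica counts from oscillating under bursty control-plane load — a common design choice for this class of autoscaler.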