A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing
Edge computing is promoted to meet increasing performance needs of
data-driven services using computational and storage resources close to the end
devices, at the edge of the current network. To achieve higher performance in
this new paradigm, one has to consider how to combine efficient resource
usage at all three layers of the architecture: end devices, edge devices, and the
cloud. While cloud capacity is elastically extendable, end devices and edge
devices are to various degrees resource-constrained. Hence, efficient
resource management is essential to make edge computing a reality. In this
work, we first present terminology and architectures to characterize current
works within the field of edge computing. Then, we review a wide range of
recent articles and categorize relevant aspects in terms of four perspectives:
resource type, resource management objective, resource location, and resource
use. This taxonomy and the ensuing analysis are used to identify some gaps in
the existing research. Among several research gaps, we found that research is
less prevalent on data, storage, and energy as a resource, and less extensive
towards the estimation, discovery and sharing objectives. As for resource
types, the most well-studied resources are computation and communication
resources. Our analysis shows that resource management at the edge requires a
deeper understanding of how methods applied at different levels and geared
towards different resource types interact. Specifically, the impact of mobility
and of collaboration schemes requiring incentives is expected to be different in
edge architectures compared to classic cloud solutions. Finally, we find
that fewer works are dedicated to the study of non-functional properties or to
quantifying the footprint of resource management techniques, including
edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal
Towards delay-aware container-based Service Function Chaining in Fog Computing
Recently, the fifth-generation mobile network (5G) has been getting significant attention. Empowered by Network Function Virtualization (NFV), 5G networks aim to support diverse services coming from different business verticals (e.g. Smart Cities, Automotive, etc.). To fully leverage NFV, services must be connected in a specific order, forming a Service Function Chain (SFC). SFCs allow mobile operators to benefit from the high flexibility and low operational costs introduced by network softwarization. Additionally, Cloud computing is evolving towards a distributed paradigm called Fog Computing, which aims to provide a distributed cloud infrastructure by placing computational resources close to end-users. However, most SFC research focuses only on Multi-access Edge Computing (MEC) use cases, in which mobile operators aim to deploy services close to end-users. Bi-directional communication between Edges and Cloud is not considered in MEC, whereas it is highly important in a Fog environment, for instance in distributed anomaly detection services. Therefore, in this paper, we propose an SFC controller to optimize the placement of service chains in Fog environments, specifically tailored for Smart City use cases. Our approach has been validated on the Kubernetes platform, an open-source orchestrator for the automatic deployment of micro-services. Our SFC controller has been implemented as an extension to the scheduling features available in Kubernetes, enabling the efficient provisioning of container-based SFCs while optimizing resource allocation and reducing the end-to-end (E2E) latency. Results show that the proposed approach can lower the network latency by up to 18% for the studied use case while conserving bandwidth, compared to the default scheduling mechanism.
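The core idea of delay-aware chain placement can be illustrated with a toy greedy heuristic: place each function of the chain on the candidate node that adds the least network latency to the previously placed function. This is only a minimal sketch under assumed node names and latency figures, not the paper's Kubernetes scheduler extension:

```python
# Illustrative delay-aware SFC placement sketch. All node names and
# latency values below are assumptions for demonstration purposes.

# Pairwise network latency between nodes, in milliseconds (symmetric).
LATENCY_MS = {
    ("cloud", "cloud"): 0.0,
    ("cloud", "edge-a"): 40.0,
    ("cloud", "edge-b"): 45.0,
    ("edge-a", "edge-a"): 0.0,
    ("edge-a", "edge-b"): 10.0,
    ("edge-b", "edge-b"): 0.0,
}

def latency(a: str, b: str) -> float:
    """Symmetric lookup in the latency table."""
    return LATENCY_MS.get((a, b), LATENCY_MS.get((b, a), float("inf")))

def place_chain(chain, candidates, source="edge-a"):
    """Greedily place each function to minimize the latency of each hop."""
    placement, prev, total = {}, source, 0.0
    for fn in chain:
        # Pick the candidate node closest (in latency) to the previous stage.
        best = min(candidates[fn], key=lambda node: latency(prev, node))
        total += latency(prev, best)
        placement[fn] = best
        prev = best
    return placement, total

# A hypothetical three-function chain; each function lists its feasible hosts.
chain = ["firewall", "analytics", "storage"]
candidates = {
    "firewall": ["edge-a", "edge-b"],
    "analytics": ["edge-b", "cloud"],
    "storage": ["cloud"],
}
placement, e2e = place_chain(chain, candidates)
# placement keeps early functions at the edge and only the last stage in the
# cloud, which is the behaviour a delay-aware scheduler aims for.
```

A real controller would additionally account for CPU/memory fit and bandwidth, and a greedy pass is not guaranteed optimal; it merely conveys why latency-aware scoring beats the default placement.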
Fog Computing: A Taxonomy, Survey and Future Directions
In recent years, the number of Internet of Things (IoT) devices/sensors has
increased to a great extent. To support the computational demand of real-time
latency-sensitive applications of largely geo-distributed IoT devices/sensors,
a new computing paradigm named "Fog computing" has been introduced. Generally,
Fog computing resides closer to the IoT devices/sensors and extends the
Cloud-based computing, storage and networking facilities. In this chapter, we
comprehensively analyse the challenges in Fog computing acting as an intermediate
layer between IoT devices/sensors and Cloud datacentres and review the current
developments in this field. We present a taxonomy of Fog computing according to
the identified challenges and its key features. We also map the existing works
to the taxonomy in order to identify current research gaps in the area of Fog
computing. Moreover, based on the observations, we propose future directions
for research.
Edge Computing for Extreme Reliability and Scalability
The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all these collected data at the central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the task of processing the data is pushed to the network edges, introducing the concept of Edge Computing. Processing the information closer to the source of data (e.g., on gateways and on edge micro-servers) not only reduces the huge workload of the central cloud but also decreases the latency for real-time applications by avoiding the unreliable and unpredictable network latency of communicating with the central cloud.
Probabilistic QoS-aware Placement of VNF chains at the Edge
Deploying IoT-enabled Virtual Network Function (VNF) chains to Cloud-Edge
infrastructures requires determining a placement for each VNF that satisfies
all set deployment requirements as well as a software-defined routing of
traffic flows between consecutive functions that meets all set communication
requirements. In this article, we present a declarative solution, EdgeUsher, to
the problem of how to best place VNF chains to Cloud-Edge infrastructures.
EdgeUsher can determine all eligible placements for a set of VNF chains to a
Cloud-Edge infrastructure so as to satisfy all of their hardware, IoT, security,
bandwidth, and latency requirements. It exploits probability distributions to
model the dynamic variations in the available Cloud-Edge infrastructure, and to
assess output eligible placements against those variations
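The probabilistic assessment described above can be sketched with a small Monte Carlo estimate: sample per-hop latency from an assumed distribution and compute the probability that a candidate placement still meets the chain's end-to-end bound. The Gaussian parameters, the hop count, and the 20 ms bound are all illustrative assumptions, not EdgeUsher's actual declarative model:

```python
# Monte Carlo sketch of probabilistic QoS assessment for a placement.
# Distribution parameters and the latency bound are assumptions.
import random

def meets_bound_probability(mean_ms, stddev_ms, hops, bound_ms,
                            samples=10_000, seed=42):
    """Estimate P(sum of per-hop latencies <= bound) by sampling."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(samples):
        # Draw each hop's latency; clamp at zero since latency is non-negative.
        e2e = sum(max(0.0, rng.gauss(mean_ms, stddev_ms)) for _ in range(hops))
        if e2e <= bound_ms:
            ok += 1
    return ok / samples

# Two hops, each roughly N(8 ms, 2 ms), against a 20 ms end-to-end requirement.
p = meets_bound_probability(mean_ms=8.0, stddev_ms=2.0, hops=2, bound_ms=20.0)
```

A placement whose estimated probability falls below a chosen threshold would be rejected even though its mean latency satisfies the requirement, which is the advantage of assessing placements against variations rather than point estimates.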
Enabling Scalable and Sustainable Softwarized 5G Environments
The fifth generation of telecommunication systems (5G) is foreseen to play a fundamental
role in our socio-economic growth by supporting various and radically new vertical
applications (such as Industry 4.0, eHealth, Smart Cities/Electrical Grids, to name
a few), as a one-size-fits-all technology enabled by emerging softwarization solutions
– specifically, the Fog, Multi-access Edge Computing (MEC), Network Functions Virtualization
(NFV) and Software-Defined Networking (SDN) paradigms. Notwithstanding
the notable potential of the aforementioned technologies, a number of open issues
still need to be addressed to ensure their complete rollout. This thesis is particularly developed
towards addressing the scalability and sustainability issues in softwarized 5G
environments through contributions in three research axes: a) Infrastructure Modeling
and Analytics, b) Network Slicing and Mobility Management, and c) Network/Services Management
and Control. The main contributions include a model-based analytics approach
for real-time workload profiling and estimation of network key performance indicators
(KPIs) in NFV infrastructures (NFVIs), as well as a SDN-based multi-clustering approach
to scale geo-distributed virtual tenant networks (VTNs) and to support seamless
user/service mobility; building on these, solutions to the problems of resource consolidation,
service migration, and load balancing are also developed in the context of 5G.
All in all, this generally entails the adoption of Stochastic Models, Mathematical Programming,
Queueing Theory, Graph Theory and Team Theory principles, in the context
of Green Networking, NFV and SDN
A Fast and Scalable Authentication Scheme in IoT for Smart Living
Numerous resource-limited smart objects (SOs) such as sensors and actuators
have been widely deployed in smart environments, opening new attack surfaces to
intruders. The severe security flaw discourages the adoption of the Internet of
things in smart living. In this paper, we leverage fog computing and
microservice to push certificate authority (CA) functions to the proximity of
data sources. In this way, we minimize attack surfaces and authentication
latency, resulting in a fast and scalable scheme for authenticating a large
volume of resource-limited devices. Then, we design lightweight protocols to
implement the scheme, where both a high level of security and low computation
workloads on SOs (no bilinear pairing requirement on the client-side) are
accomplished. Evaluations demonstrate the efficiency and effectiveness of our
scheme in handling authentication and registration for a large number of nodes,
meanwhile protecting them against various threats to smart living. Finally, we
showcase the success of moving computing intelligence towards data sources in
handling complicated services. Comment: 15 pages, 7 figures, 3 tables, to appear in FGC
ROUTER: Fog-Enabled Cloud-Based Intelligent Resource Management Approach for Smart Home IoT Devices
There is a growing requirement for Internet of Things (IoT) infrastructure to ensure low response time when provisioning latency-sensitive real-time applications such as health monitoring, disaster management, and smart homes. Fog computing offers a means to meet such requirements, via a virtualized intermediate layer that provides data, computation, storage, and networking services between Cloud datacenters and end users. A key element within such Fog computing environments is resource management. While there are existing resource managers in Fog computing, they each focus only on a subset of the parameters important to Fog resource management, which encompass system response time, network bandwidth, energy consumption, and latency. To date, no existing Fog resource manager considers these parameters simultaneously for decision making, which in the context of smart homes will become increasingly key. In this paper, we propose a novel resource management technique (ROUTER) for fog-enabled Cloud computing environments, which leverages Particle Swarm Optimization to optimize these parameters simultaneously. The approach is validated within an IoT-based smart home automation scenario, and evaluated within the iFogSim toolkit driven by empirical models from a small-scale smart home experiment. Results demonstrate that our approach achieves a reduction of 12% in network bandwidth, 10% in response time, 14% in latency, and 12.35% in energy consumption.
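The multi-parameter optimization described above can be illustrated with a minimal Particle Swarm Optimization loop over a weighted cost. The cost function, weights, and PSO coefficients below are illustrative assumptions, not ROUTER's actual model of bandwidth, response time, latency, and energy:

```python
# Minimal PSO sketch in the spirit of simultaneously optimizing several
# resource-management parameters. All coefficients are assumptions.
import random

def cost(x):
    """Toy weighted cost over four normalized parameters
    (bandwidth, response time, latency, energy); minimum at all-ones."""
    weights = (0.25, 0.25, 0.25, 0.25)
    return sum(w * (xi - 1.0) ** 2 for w, xi in zip(weights, x))

def pso(dim=4, swarm=20, iters=100, seed=0):
    rng = random.Random(seed)
    # Random initial positions; zero initial velocities.
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]            # each particle's best position
    gbest = min(pbest, key=cost)[:]        # swarm-wide best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive + social terms (w=0.7, c1=c2=1.5).
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()  # converges near the all-ones optimum of the toy cost
```

The appeal of PSO here is that the cost can combine heterogeneous objectives (bandwidth, latency, energy) without requiring gradients, at the price of tuning the inertia and acceleration coefficients.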