Computation Offloading and Scheduling in Edge-Fog Cloud Computing
Resource allocation and task scheduling in the Cloud environment face many challenges, including time delay, energy consumption, and security. Executing the computational tasks of mobile applications on mobile devices (MDs) also demands substantial resources, so these tasks can be offloaded to the Cloud. However, the Cloud is far from MDs, which leads to high delay and power consumption. Edge computing, which processes data near Internet of Things (IoT) devices, reduces delay to some extent, but at the cost of distance from the Cloud. Fog computing (FC), positioned between the sensors and the Cloud, increases speed and reduces energy consumption, making it well suited to IoT applications. In this article, we review resource allocation and task scheduling methods in Cloud, Edge, and Fog environments, covering traditional, heuristic, and meta-heuristic approaches. We also categorize the research on task offloading in Mobile Cloud Computing (MCC), Mobile Edge Computing (MEC), and Mobile Fog Computing (MFC). Our categorization criteria include the problem addressed, the proposed strategy, objectives, framework, and test environment.
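The core offloading trade-off the survey describes can be sketched as a simple cost comparison: run a task locally, or pay a transfer cost to reach a faster fog or cloud node. The CPU speeds, bandwidths, and task sizes below are invented for illustration and do not come from the survey.

```python
# Hypothetical offloading decision: compare estimated completion time of
# running a task on the mobile device vs. offloading it to a fog or cloud
# node. All numeric parameters are illustrative assumptions.

def completion_time(cycles, cpu_hz, data_bits=0, bandwidth_bps=None):
    """Execution time plus (optional) transfer time for one task."""
    t = cycles / cpu_hz
    if bandwidth_bps:
        t += data_bits / bandwidth_bps
    return t

def choose_target(task):
    options = {
        # local execution: slow CPU, no transfer
        "local": completion_time(task["cycles"], cpu_hz=1e9),
        # fog: faster CPU, nearby so high bandwidth
        "fog":   completion_time(task["cycles"], cpu_hz=5e9,
                                 data_bits=task["bits"], bandwidth_bps=50e6),
        # cloud: fastest CPU, but distant so low bandwidth
        "cloud": completion_time(task["cycles"], cpu_hz=20e9,
                                 data_bits=task["bits"], bandwidth_bps=10e6),
    }
    best = min(options, key=options.get)
    return best, options

target, times = choose_target({"cycles": 4e9, "bits": 8e6})
```

For this workload the fog node wins: the cloud's faster CPU does not compensate for its transfer delay, which is exactly the tension between MCC, MEC, and MFC that the survey's categorization captures.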
Geo-distributed Edge and Cloud Resource Management for Low-latency Stream Processing
The proliferation of Internet-of-Things (IoT) devices is rapidly increasing the demand for efficient processing of low-latency stream data generated close to the edge of the network.
Edge Computing provides a layer of infrastructure to fill latency gaps between the IoT devices and the back-end cloud computing infrastructure.
A large number of IoT applications require continuous processing of data streams in real-time.
Edge computing-based stream processing techniques that carefully consider the heterogeneity of the computing and network resources available in the geo-distributed infrastructure provide significant benefits in optimizing the throughput and end-to-end latency of the data streams.
Managing geo-distributed resources operated by individual service providers raises new challenges in terms of effective global resource sharing and achieving global efficiency in the resource allocation process.
In this dissertation, we present a distributed stream processing framework that optimizes the performance of stream processing applications through a careful allocation of computing and network resources available at the edge of the network.
The proposed approach differentiates itself from the state-of-the-art through its careful consideration of data locality and resource constraints during physical plan generation and operator placement for the stream queries.
Additionally, it considers co-flow dependencies that exist between the data streams to optimize the network resource allocation through an application-level rate control mechanism.
The proposed framework incorporates resilience through a cost-aware partial active replication strategy that minimizes the recovery cost when applications incur failures.
The framework employs a reinforcement learning-based online learning model for dynamically determining the level of parallelism to adapt to changing workload conditions.
The second dimension of this dissertation proposes a novel model for allocating computing resources in edge and cloud computing environments.
In edge computing environments, it allows service providers to establish resource sharing contracts with infrastructure providers a priori in a latency-aware manner.
In geo-distributed cloud environments, it allows cloud service providers to establish resource sharing contracts with individual datacenters a priori for defined time intervals in a cost-aware manner.
Based on these mechanisms, we develop a decentralized implementation of the contract-based resource allocation model for geo-distributed resources using Smart Contracts in Ethereum.
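The latency-aware contract establishment described above can be illustrated with a minimal selection rule: among infrastructure providers advertising resources, pick the cheapest offer that still meets the service's latency bound. The provider names, prices, and latencies below are hypothetical; the dissertation's actual contract model is considerably richer.

```python
# Illustrative latency-aware contract selection (assumed schema, not the
# dissertation's actual data model): each offer advertises a latency and
# a price, and the service provider contracts the cheapest feasible one.

offers = [
    {"provider": "edge-site-a", "latency_ms": 8,  "price_per_hour": 0.12},
    {"provider": "edge-site-b", "latency_ms": 25, "price_per_hour": 0.05},
    {"provider": "dc-west",     "latency_ms": 60, "price_per_hour": 0.02},
]

def select_contract(offers, latency_bound_ms):
    """Cheapest offer whose latency satisfies the bound, or None."""
    feasible = [o for o in offers if o["latency_ms"] <= latency_bound_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda o: o["price_per_hour"])

best = select_contract(offers, latency_bound_ms=30)
```

With a 30 ms bound, the distant datacenter is excluded despite being cheapest, and the contract goes to the less expensive of the two feasible edge sites.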
Towards delay-aware container-based Service Function Chaining in Fog Computing
The fifth-generation mobile network (5G) has recently been attracting significant attention. Empowered by Network Function Virtualization (NFV), 5G networks aim to support diverse services coming from different business verticals (e.g. Smart Cities, Automotive, etc.). To fully leverage NFV, services must be connected in a specific order, forming a Service Function Chain (SFC). SFCs allow mobile operators to benefit from the high flexibility and low operational costs introduced by network softwarization. Additionally, Cloud computing is evolving towards a distributed paradigm called Fog Computing, which aims to provide a distributed cloud infrastructure by placing computational resources close to end-users. However, most SFC research only focuses on Multi-access Edge Computing (MEC) use cases, where mobile operators aim to deploy services close to end-users. Bi-directional communication between the Edge and the Cloud is not considered in MEC, yet it is highly important in a Fog environment, for instance in distributed anomaly detection services. Therefore, in this paper, we propose an SFC controller to optimize the placement of service chains in Fog environments, specifically tailored for Smart City use cases. Our approach has been validated on the Kubernetes platform, an open-source orchestrator for the automatic deployment of micro-services. Our SFC controller has been implemented as an extension to the scheduling features available in Kubernetes, enabling the efficient provisioning of container-based SFCs while optimizing resource allocation and reducing the end-to-end (E2E) latency. Results show that the proposed approach can lower the network latency by up to 18% for the studied use case while conserving bandwidth, compared to the default scheduling mechanism.
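The idea of delay-aware chain placement can be sketched with a toy greedy scheduler: each function in the chain is placed on the node that minimizes latency from the previously placed function, subject to free capacity. The node names, latency matrix, and capacities are invented; the paper's actual Kubernetes scheduler extension is far more elaborate.

```python
# Toy delay-aware SFC placement (illustrative only): greedily place each
# chained function on the free node closest, in latency, to the previous
# function's node. Topology values are assumptions, not from the paper.

latency = {  # one-way latency in ms between nodes
    ("edge1", "edge1"): 0,  ("edge1", "edge2"): 5,  ("edge1", "cloud"): 40,
    ("edge2", "edge2"): 0,  ("edge2", "edge1"): 5,  ("edge2", "cloud"): 35,
    ("cloud", "cloud"): 0,  ("cloud", "edge1"): 40, ("cloud", "edge2"): 35,
}
capacity = {"edge1": 2, "edge2": 1, "cloud": 10}  # free container slots

def place_chain(chain, start):
    """Return [(function, node), ...] for the chain, starting near `start`."""
    placement, prev = [], start
    for fn in chain:
        candidates = [n for n, free in capacity.items() if free > 0]
        node = min(candidates, key=lambda n: latency[(prev, n)])
        capacity[node] -= 1
        placement.append((fn, node))
        prev = node
    return placement

plan = place_chain(["firewall", "ids", "analytics"], start="edge1")
```

The first two functions land on the nearest edge node until its slots run out, and the chain then spills to the next-closest node rather than jumping straight to the cloud, which is the behavior a delay-aware controller wants from container placement.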
Adaptive and Resilient Revenue Maximizing Dynamic Resource Allocation and Pricing for Cloud-Enabled IoT Systems
Cloud computing is becoming an essential component of modern computer and communication systems. The available resources at the cloud, such as computing nodes, storage, and databases, are often packaged in the form of virtual machines (VMs) to be used by remotely located client applications for computational tasks. However, the cloud has a limited number of VMs available, which have to be utilized efficiently to generate higher productivity and, subsequently, maximum revenue. Client applications generate requests with computational tasks at random times and with random complexity to be processed by the cloud. The cloud service provider (CSP) has to decide whether to allocate a VM to the task at hand or to wait for a higher-complexity task in the future. We propose a threshold-based mechanism to optimally decide the allocation and pricing of VMs for sequentially arriving requests in order to maximize the revenue of the CSP over a finite time horizon. Moreover, we develop an adaptive and resilient framework that can counter the effect of real-time changes in the number of available VMs at the cloud server, and in the frequency and nature of arriving tasks, on the revenue of the CSP.
Comment: American Control Conference (ACC 2018)
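The flavor of such a threshold rule can be shown with a toy version: with limited VMs and a finite horizon, only accept an arriving task if its value exceeds a threshold that relaxes as fewer decision epochs remain. The threshold schedule below is an illustrative assumption, not the paper's optimal policy.

```python
# Toy threshold-based VM allocation (assumed threshold schedule): be
# pickier when many decision epochs remain, accept any positive-value
# task at the last epoch. Not the paper's derived optimal thresholds.

def threshold(remaining_epochs, base=10.0):
    """Acceptance threshold that decays as the horizon shrinks."""
    return base * (1 - 1 / max(remaining_epochs, 1))

def decide(task_value, remaining_epochs, free_vms):
    """Allocate a VM only if one is free and the task clears the bar."""
    return free_vms > 0 and task_value >= threshold(remaining_epochs)

# A value-6 task is rejected with 10 epochs left (threshold 9.0) but
# accepted at the final epoch (threshold 0.0).
early = decide(6.0, remaining_epochs=10, free_vms=3)
late = decide(6.0, remaining_epochs=1, free_vms=3)
```

The same mechanism sketches the paper's resilience angle: if VMs fail or arrivals change, recomputing the threshold from the new `remaining_epochs` and `free_vms` adapts the policy online.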
VIoLET: A Large-scale Virtual Environment for Internet of Things
IoT deployments have been growing manifold, encompassing sensors, networks, edge, fog, and cloud resources. Despite the intense interest from researchers and practitioners, most do not have access to large-scale IoT testbeds for validation. Simulation environments that allow analytical modeling are a poor substitute for evaluating software platforms or application workloads in realistic computing environments. Here, we propose VIoLET, a virtual environment for defining and launching large-scale IoT deployments within cloud VMs. It offers a declarative model to specify container-based compute resources that match the performance of the native edge, fog, and cloud devices using Docker. These can be inter-connected in complex topologies on which private/public networks and bandwidth and latency rules are enforced. Users can also configure synthetic sensors for data generation on these devices. We validate VIoLET for deployments with > 400 devices and > 1500 device-cores, and show that the virtual IoT environment closely matches the expected compute and network performance at modest costs. This fills an important gap between IoT simulators and real deployments.
Comment: To appear in the Proceedings of the 24th International European Conference on Parallel and Distributed Computing (EURO-PAR), August 27-31, 2018, Turin, Italy, europar2018.org. Selected as a Distinguished Paper for presentation at the Plenary Session of the conference.
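The kind of declarative model the abstract describes, devices with container resource shares plus network rules between them, can be sketched as a small specification. The schema, device names, and numbers below are invented for illustration; VIoLET's actual format differs.

```python
# Hypothetical declarative IoT-deployment spec in the spirit of VIoLET:
# device classes map to container resource shares, and links carry
# bandwidth/latency rules to be enforced. Schema and values are assumed.

deployment = {
    "devices": {
        "pi3-1": {"class": "edge",  "cpu_share": 0.25, "cores": 4},
        "fog-1": {"class": "fog",   "cpu_share": 1.0,  "cores": 8},
        "vm-1":  {"class": "cloud", "cpu_share": 2.0,  "cores": 16},
    },
    "links": [
        {"src": "pi3-1", "dst": "fog-1", "bw_mbps": 50,  "latency_ms": 10},
        {"src": "fog-1", "dst": "vm-1",  "bw_mbps": 100, "latency_ms": 40},
    ],
}

def total_cores(deploy):
    """Aggregate device-cores across the virtual deployment."""
    return sum(d["cores"] for d in deploy["devices"].values())

def validate_links(deploy):
    """Every link endpoint must name a declared device."""
    names = set(deploy["devices"])
    return all(l["src"] in names and l["dst"] in names
               for l in deploy["links"])
```

A launcher would translate each device entry into a Docker container with matching CPU limits and each link into network shaping rules; the paper's > 1500 device-cores figure is the `total_cores`-style aggregate at much larger scale.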