Addressing the Challenges in Federating Edge Resources
This book chapter considers how Edge deployments can be brought to bear in a
global context by federating them across multiple geographic regions to create
a global Edge-based fabric that decentralizes data center computation. This is
currently impractical not only because of technical challenges, but also
because of social, legal and geopolitical issues. In this chapter, we discuss
two key challenges in federating Edge deployments: networking and management.
Additionally, we consider resource and modeling challenges that will need to be
addressed for a federated Edge.
Comment: Book chapter accepted to Fog and Edge Computing: Principles and
Paradigms; Editors Buyya, Sriram
ENORM: A Framework For Edge NOde Resource Management
Current computing techniques that use the cloud as a centralised server will
become untenable as billions of devices get connected to the Internet. This
raises the need for fog computing, which leverages computing at the edge of the
network on nodes, such as routers, base stations and switches, along with the
cloud. However, to realise fog computing, the challenge of managing edge nodes
will need to be addressed. This paper is motivated by that resource management
challenge. We develop the first framework to manage edge nodes, namely the
Edge NOde Resource Management (ENORM) framework. Mechanisms for provisioning
and auto-scaling edge node resources are proposed. The feasibility of the
framework is demonstrated on a Pokémon Go-like online game use case. The
benefits of using ENORM are observed as application latency reduced by
20%-80%, and data transfer and communication frequency between the edge node
and the cloud reduced by up to 95%. These results highlight the potential of
fog computing for improving quality of service and experience.
Comment: 14 pages; accepted to IEEE Transactions on Services Computing on 12
September 201
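The provisioning and auto-scaling mechanisms described in the ENORM abstract can be illustrated with a minimal sketch. This is a hypothetical threshold-based policy, not a reproduction of ENORM's actual mechanisms; the `EdgeServer` type, the latency target, and the `slack` hysteresis factor are all illustrative assumptions.

```python
# Hypothetical sketch of threshold-based auto-scaling on an edge node,
# in the spirit of (but not identical to) ENORM's mechanisms.

from dataclasses import dataclass


@dataclass
class EdgeServer:
    cores: int      # cores currently allocated to the edge service
    max_cores: int  # hardware limit of the edge node


def autoscale(server: EdgeServer, avg_latency_ms: float,
              target_ms: float = 100.0, slack: float = 0.5) -> EdgeServer:
    """Scale allocated cores up when observed latency exceeds the target,
    and down when latency is well under the target (hysteresis via `slack`
    avoids oscillating around the threshold)."""
    if avg_latency_ms > target_ms and server.cores < server.max_cores:
        server.cores += 1      # scale up by one core
    elif avg_latency_ms < target_ms * slack and server.cores > 1:
        server.cores -= 1      # release a core no longer needed
    return server
```

A real edge manager would additionally decide when to offload work back to the cloud once the node's `max_cores` ceiling is reached.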
Resource management in a containerized cloud: status and challenges
Cloud computing heavily relies on virtualization, as virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud.

Apart from this, the cloud is also no longer limited to the centrally hosted data center infrastructure. New deployment models have gained maturity, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources.

In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to the recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.
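The allocation strategies the survey discusses are often adapted from bin-packing baselines for virtual machines. A common such baseline is first-fit decreasing, sketched below; the single-dimension container sizes and uniform host capacity are simplifying assumptions, whereas real schedulers weigh CPU, memory, affinity and migration cost together.

```python
# Illustrative first-fit-decreasing placement of containers onto hosts,
# a classic bin-packing baseline that VM-oriented allocators build on.
# Sizes are abstract single-dimension demands (an assumption for brevity).

def first_fit_decreasing(containers, host_capacity):
    """Place container demands onto identical hosts: try each open host
    in order, largest containers first; open a new host when none fits.
    Returns a list of hosts, each a list of placed container sizes."""
    hosts = []
    for c in sorted(containers, reverse=True):
        for h in hosts:
            if sum(h) + c <= host_capacity:
                h.append(c)    # fits on an already-open host
                break
        else:
            hosts.append([c])  # no host fits: open a new one
    return hosts
```

Sorting in decreasing order tends to leave small containers to fill the gaps, which is why this heuristic packs noticeably tighter than plain first-fit.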
Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures
One of the significant shifts in next-generation computing technologies will
certainly be the development of Big Data (BD) deployment architectures. Apache
Hadoop, the BD landmark, has evolved into a widely deployed BD operating
system. Its new features include a federation structure and many associated
frameworks, which provide Hadoop 3.x with the maturity to serve different
markets. This dissertation addresses two leading issues involved in exploiting
BD and the large-scale data analytics realm using the Hadoop platform:
(i) scalability, which directly affects system performance and overall
throughput, using portable Docker containers; and (ii) security, which spreads
the adoption of data protection practices among practitioners using access
controls. An Enhanced MapReduce Environment (EME), the OPportunistic and
Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker
(BDFAB), and a Secure Intelligent Transportation System (SITS) with a
multi-tier architecture for data streaming to the cloud are the main
contributions of this thesis.
Resource Allocation in Multi-analytics, Resource-Constrained Environments
The vast proliferation of monitoring and sensing devices equipped with Internet connectivity, commonly known as the Internet of Things (IoT), generates an unprecedented volume of data, which requires Big Data Analytics Systems (BDAS) to process it and extract actionable insights. The large diversity of IoT data processing applications requires the deployment of multiple processing frameworks under the coordination of a resource allocator. To enable prompt actuation, these applications must meet deadlines, and their processing takes place near where data is generated, in private clouds or edge computing clusters, which have limited resources.

In resource-constrained, multi-analytics settings there are unaddressed issues related to the combined use of open-source BDAS, which were originally designed for resource-rich, standalone clusters. Specifically, open-source BDAS exhibit unknown behavior when combined under the coordination of a cluster manager while the available resources are limited. Moreover, existing allocation policies are not suitable for meeting deadlines in resource-constrained settings without wasting resources or requiring particular repetitive job patterns. Lastly, in such settings fair-share policies cannot reliably preserve fairness.

To satisfy deadlines and achieve allocation fairness in resource-constrained clusters for multi-analytics, we employ predictive resource allocation and admission control. We evaluate the performance and behavior of BDAS in resource-constrained, multi-analytics clusters and identify the root causes of their interference. Moreover, we design admission control and resource allocation suitable for resource managers. Allocation decisions adapt to changing cluster conditions to satisfy deadlines and preserve fairness under resource-constrained, multi-analytics settings.
We evaluate our approach with trace-based simulations and production workloads and show that it satisfies more deadlines, preserves fairness, and utilizes the cluster more efficiently compared to existing fair-share allocators designed for resource managers.
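The combination of predictive allocation and admission control described above can be sketched minimally: admit a job only if its predicted runtime on the currently free resources meets its deadline. The linear runtime model (`work / cores`) is a stand-in assumption for illustration; the thesis's actual predictors and policies are not reproduced here.

```python
# Minimal sketch of deadline-aware admission control: reject up front any
# job whose predicted completion would miss its deadline, rather than
# admitting it and wasting constrained cluster resources on a late job.
# The perfectly parallel runtime model is an illustrative assumption.

def admit(job_work_units: float, free_cores: int,
          deadline_s: float, unit_time_s: float = 1.0) -> bool:
    """Admit the job iff its predicted runtime, with the work spread
    evenly over the currently free cores, fits within the deadline."""
    if free_cores <= 0:
        return False
    predicted_runtime_s = job_work_units * unit_time_s / free_cores
    return predicted_runtime_s <= deadline_s
```

Rejecting (or deferring) doomed jobs at admission time is what lets the freed capacity go to jobs that can still meet their deadlines, which is the intuition behind the deadline-satisfaction gains reported above.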