A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing
Edge computing is promoted to meet increasing performance needs of
data-driven services using computational and storage resources close to the end
devices, at the edge of the current network. To achieve higher performance in
this new paradigm one has to consider how to combine the efficiency of resource
usage at all three layers of architecture: end devices, edge devices, and the
cloud. While cloud capacity is elastically extendable, end devices and edge
devices are to various degrees resource-constrained. Hence, an efficient
resource management is essential to make edge computing a reality. In this
work, we first present terminology and architectures to characterize current
works within the field of edge computing. Then, we review a wide range of
recent articles and categorize relevant aspects in terms of four perspectives:
resource type, resource management objective, resource location, and resource
use. This taxonomy and the ensuing analysis are used to identify gaps in
the existing research. Among several research gaps, we found that research on
data, storage, and energy as resources is less prevalent, and that work on the
estimation, discovery, and sharing objectives is less extensive. As for resource
types, the most well-studied resources are computation and communication
resources. Our analysis shows that resource management at the edge requires a
deeper understanding of how methods applied at different levels and geared
towards different resource types interact. Specifically, the impacts of mobility
and of collaboration schemes requiring incentives are expected to differ in
edge architectures compared to classic cloud solutions. Finally, we find
that fewer works are dedicated to the study of non-functional properties or to
quantifying the footprint of resource management techniques, including
edge-specific means of migrating data and services.
Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
Little Boxes: A Dynamic Optimization Approach for Enhanced Cloud Infrastructures
The increasing demand for diverse, mobile applications with various degrees
of Quality of Service requirements meets the increasing elasticity of on-demand
resource provisioning in virtualized cloud computing infrastructures. This
paper provides a dynamic optimization approach for enhanced cloud
infrastructures, based on the concept of cloudlets, which are located at
hotspot areas throughout a metropolitan area. In conjunction, we consider
classical remote data centers that are rigid with respect to QoS but provide
nearly abundant computation resources. Given fluctuating user demands, we
optimize the cloudlet placement over a finite time horizon from a cloud
infrastructure provider's perspective. By means of a custom-tailored
heuristic approach, we are able to reduce the computational effort compared to
the exact approach by at least three orders of magnitude, while maintaining
high solution quality with a moderate cost increase of 5.8% or less.
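The finite-horizon placement idea can be illustrated with a toy greedy sketch. This is illustrative only: the function name, the demand format, and the greedy selection rule are assumptions for exposition, not the paper's actual heuristic, which trades off placement against cost over time.

```python
def greedy_cloudlet_placement(demand, k):
    """Greedily choose k candidate sites with the highest total demand
    over the planning horizon.

    demand: dict mapping site id -> list of per-time-slot demand values
    k: number of cloudlets the provider can deploy
    """
    totals = {site: sum(slots) for site, slots in demand.items()}
    # Rank sites by aggregate demand and keep the top k.
    return sorted(totals, key=totals.get, reverse=True)[:k]

# Three hotspot sites, three time slots, budget for two cloudlets:
demand = {"s1": [3, 1, 2], "s2": [5, 5, 5], "s3": [0, 4, 0]}
print(greedy_cloudlet_placement(demand, k=2))  # ['s2', 's1']
```

A real placement heuristic would also account for inter-slot migration costs and QoS constraints rather than aggregate demand alone.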
Mobile Edge Computing Empowers Internet of Things
In this paper, we propose a Mobile Edge Internet of Things (MEIoT)
architecture by leveraging the fiber-wireless access technology, the cloudlet
concept, and the software defined networking framework. The MEIoT architecture
brings computing and storage resources close to Internet of Things (IoT)
devices in order to speed up IoT data sharing and analytics. Specifically, the
IoT devices (belonging to the same user) are associated with a specific proxy
Virtual Machine (VM) in the nearby cloudlet. The proxy VM stores and analyzes
the IoT data (generated by its IoT devices) in real-time. Moreover, we
introduce the semantic and social IoT technology in the context of MEIoT to
solve the interoperability and inefficient access control problems in the IoT
system. In addition, we propose two dynamic proxy VM migration methods to
minimize the end-to-end delay between proxy VMs and their IoT devices and to
minimize the total on-grid energy consumption of the cloudlets, respectively.
The performance of the proposed methods is validated via extensive simulations.
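The delay-minimizing migration decision can be sketched as a simple placement choice. All names and the fixed migration penalty below are illustrative assumptions, not the paper's actual migration method, which balances delay against on-grid energy consumption.

```python
def best_cloudlet(current, delays, migration_penalty):
    """Pick the cloudlet hosting a user's proxy VM so as to minimize the
    average device-to-proxy end-to-end delay.

    current: cloudlet id currently hosting the proxy VM
    delays: dict mapping cloudlet id -> average end-to-end delay (ms)
    migration_penalty: extra cost (in ms-equivalent) charged for moving the VM
    """
    def cost(cloudlet):
        penalty = 0 if cloudlet == current else migration_penalty
        return delays[cloudlet] + penalty
    return min(delays, key=cost)

# Staying at "A" costs 12 ms; migrating to "B" costs 5 + 4 = 9 ms, so move:
delays = {"A": 12.0, "B": 5.0, "C": 9.0}
print(best_cloudlet("A", delays, migration_penalty=4.0))  # B
```

In practice the decision would be re-evaluated as devices move and delays fluctuate, so the penalty term prevents oscillating migrations.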
Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things
The number of connected sensors and devices is expected to increase to billions in the near
future. However, centralised cloud-computing data centres present various challenges to meet the
requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput
and bandwidth constraints. Edge computing is becoming the standard computing paradigm for
latency-sensitive real-time IoT workloads, since it addresses the aforementioned limitations related
to centralised cloud-computing models. Such a paradigm relies on bringing computation close to
the source of data, which presents serious operational challenges for large-scale cloud-computing
providers. In this work, we present an architecture composed of low-cost Single-Board-Computer
clusters deployed near data sources, together with centralised cloud-computing data centres. The
proposed cost-efficient model may be employed as an alternative to fog computing to meet real-time
IoT workload requirements while preserving scalability. We include an extensive empirical analysis to
assess the suitability of single-board-computer clusters as cost-effective edge-computing micro data
centres. Additionally, we compare the proposed architecture with traditional cloudlet and cloud
architectures, and evaluate them through extensive simulation. We finally show that acquisition costs
can be drastically reduced while keeping performance levels in data-intensive IoT use cases.
Funding: Ministerio de Economía y Competitividad TIN2017-82113-C2-1-R; Ministerio de Economía y Competitividad RTI2018-098062-A-I00; European Union’s Horizon 2020 No. 754489; Science Foundation Ireland grant 13/RC/209
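The acquisition-cost argument can be made concrete with a back-of-envelope comparison at equal target throughput. All figures below are illustrative placeholders, not values measured in the paper.

```python
import math

def cluster_cost_for_throughput(target_rps, node_rps, node_cost):
    """Acquisition cost of provisioning enough identical nodes to reach a
    target throughput: nodes needed (rounded up) times per-node cost."""
    nodes = math.ceil(target_rps / node_rps)
    return nodes * node_cost

# Hypothetical numbers: a single-board computer handling 50 req/s at $80,
# versus a conventional server handling 600 req/s at $4000.
sbc_cluster = cluster_cost_for_throughput(1000, node_rps=50, node_cost=80)
server_farm = cluster_cost_for_throughput(1000, node_rps=600, node_cost=4000)
print(sbc_cluster, server_farm)  # 1600 8000
```

The comparison only covers acquisition cost; a full evaluation, as in the paper's simulations, must also weigh per-node performance, energy, and operational overheads.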
Next Generation Cloud Computing: New Trends and Research Directions
The landscape of cloud computing has significantly changed over the last
decade. Not only have more providers and service offerings crowded the space,
but also cloud infrastructure that was traditionally limited to single provider
data centers is now evolving. In this paper, we first discuss the changing
cloud infrastructure and consider the use of infrastructure from multiple
providers and the benefit of decentralising computing away from data centers.
These trends have resulted in the need for a variety of new computing
architectures that will be offered by future cloud infrastructure. These
architectures are anticipated to impact areas, such as connecting people and
devices, data-intensive computing, the service space and self-learning systems.
Finally, we lay out a roadmap of challenges that will need to be addressed for
realising the potential of next generation cloud systems.
Comment: Accepted to Future Generation Computer Systems, 07 September 201