Managing Service-Heterogeneity using Osmotic Computing
Computational resource provisioning that is closer to a user is becoming
increasingly important, with a rise in the number of devices making continuous
service requests and with the significant recent take-up of latency-sensitive
applications, such as streaming and real-time data processing. Fog computing
provides a solution to such types of applications by bridging the gap between
the user and public/private cloud infrastructure via the inclusion of a "fog"
layer. Such an approach can reduce the overall processing latency, but issues
remain around redundancy, cost-effective utilization of the computing
infrastructure, and the handling of services that differ in their
characteristics. This difference in the characteristics of services, arising
from variations in their required computational resources and processes, is
termed service heterogeneity. A potential solution to these issues is the use
of Osmotic Computing -- a recently introduced paradigm that divides services
according to their resource usage, using parameters such as energy, load, and
processing time on a data center versus a network edge resource. Service
provisioning can then be distributed across the layers of a computational
infrastructure, from edge devices and in-transit nodes to a data center, and
supported through an Osmotic software layer. In this paper, a
fitness-based Osmosis algorithm is proposed to provide support for osmotic
computing by making more effective use of existing Fog server resources. The
proposed approach is capable of efficiently distributing and allocating
services by following the principle of osmosis. The results are presented using
numerical simulations demonstrating gains in terms of lower allocation time and
a higher probability of services being handled with high resource utilization.Comment: 7 pages, 4 Figures, International Conference on Communication,
Management and Information Technology (ICCMIT 2017), At Warsaw, Poland, 3-5
April 2017, http://www.iccmit.net/ (Best Paper Award
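As a rough illustration of the allocation idea, the sketch below scores fog servers with a weighted fitness over load, energy cost, and processing time, and greedily places each service on the currently fittest server. The weights, the 10% per-service load increment, and the `FogServer` fields are illustrative assumptions for this sketch, not the paper's actual fitness formulation:

```python
from dataclasses import dataclass

@dataclass
class FogServer:
    name: str
    load: float         # current utilization in [0, 1]
    energy_cost: float  # relative energy cost of hosting one service
    proc_time: float    # expected processing time per service (ms)

def fitness(server: FogServer,
            w_load: float = 0.5, w_energy: float = 0.3,
            w_time: float = 0.2) -> float:
    # Higher fitness = more attractive placement target.
    # Weights and scaling are illustrative, not from the paper.
    return (w_load * (1.0 - server.load)
            - w_energy * server.energy_cost
            - w_time * server.proc_time / 100.0)

def allocate(service_count: int, servers: list[FogServer]) -> list[str]:
    # Greedily place each service on the fittest server, then update its load.
    placements = []
    for _ in range(service_count):
        best = max(servers, key=fitness)
        placements.append(best.name)
        best.load = min(1.0, best.load + 0.1)  # assumed 10% load per service
    return placements
```

Updating the load after each placement is what spreads services across servers once the preferred one fills up, loosely mirroring the osmosis analogy of flow toward less saturated resources.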
Performance Analysis for Heterogeneous Cloud Servers Using Queueing Theory
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

In this article, we consider the problem of selecting appropriate heterogeneous servers in cloud centers for stochastically arriving requests, in order to obtain an optimal tradeoff between the expected response time and power consumption. Heterogeneous servers with uncertain setup times are far more common than homogeneous ones. The heterogeneity of servers and the stochastic requests pose great challenges for the tradeoff between the two conflicting objectives. Using a Markov decision process, the expected response time of requests is analyzed in terms of a given number of available candidate servers. For a given system availability, a binary search method is presented to determine the number of servers selected from the candidates. An iterative improvement method is proposed to determine the best servers to select for the considered objectives. After evaluating the effect of the system parameters on the performance of the algorithms using analysis of variance, the proposed algorithm and three of its variants are compared over a large number of random and real instances. The results indicate that the proposed algorithm is much more effective than the other four algorithms within acceptable CPU times.

This work is supported by the National Key Research and Development Program of China (Grant No. 2017YFB1400801), the National Natural Science Foundation of China (Grant Nos. 61572127, 61872077, and 61832004), and the Collaborative Innovation Center of Wireless Communications Technology. Rubén Ruiz is partly supported by the Spanish Ministry of Science, Innovation, and Universities under the project "OPTEP-Port Terminal Operations Optimization" (No. RTI2018-094940-BI00), financed with FEDER funds.

Wang, S.; Li, X.; Ruiz García, R. (2020). Performance Analysis for Heterogeneous Cloud Servers Using Queueing Theory. IEEE Transactions on Computers, 69(4), 563-576. https://doi.org/10.1109/TC.2019.2956505
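The binary-search step can be illustrated with a deliberately simplified availability model: assume each server is up independently with probability p, so a pool of c servers is available with probability 1 - (1 - p)^c, which is monotone non-decreasing in c. The paper's actual model (Markov decision process, uncertain setup times) is far richer; this sketch only shows the search structure for the smallest server count meeting an availability target:

```python
def system_availability(per_server: float, c: int) -> float:
    # Toy model: the system is available if at least one of c
    # independent servers is up. Monotone non-decreasing in c.
    return 1.0 - (1.0 - per_server) ** c

def min_servers(per_server: float, target: float, c_max: int = 1024) -> int:
    # Binary search over the server count: monotonicity guarantees the
    # feasible region is an upper interval, so the search is valid.
    lo, hi = 1, c_max
    while lo < hi:
        mid = (lo + hi) // 2
        if system_availability(per_server, mid) >= target:
            hi = mid  # feasible: try fewer servers
        else:
            lo = mid + 1  # infeasible: need more servers
    return lo
```

Any monotone availability function could be substituted for `system_availability` without changing the search logic.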
Hyperprofile-based Computation Offloading for Mobile Edge Networks
In recent studies, researchers have developed various computation offloading
frameworks for bringing cloud services closer to the user via edge networks.
Specifically, an edge device needs to offload computationally intensive tasks
because of energy and processing constraints. These constraints present the
challenge of identifying which edge nodes should receive tasks to reduce
overall resource consumption. We propose a unique solution to this problem
which incorporates elements from Knowledge-Defined Networking (KDN) to make
intelligent predictions about offloading costs based on historical data. Each
server instance can be represented in a multidimensional feature space where
each dimension corresponds to a predicted metric. We compute features for a
"hyperprofile" and position nodes based on the predicted costs of offloading a
particular task. We then perform a k-Nearest Neighbor (kNN) query within the
hyperprofile to select nodes for offloading computation. This paper formalizes
our hyperprofile-based solution and explores the viability of using machine
learning (ML) techniques to predict metrics useful for computation offloading.
We also investigate the effects of using different distance metrics for the
queries. Our results show that various network metrics can be modeled
accurately with regression, and that there are circumstances where kNN queries
using Euclidean distance, as opposed to rectilinear distance, are more
favorable.

Comment: 5 pages, NSF REU Site publication
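A minimal sketch of the query step, assuming each node's hyperprofile is a vector of predicted offloading costs and the query point is the ideal (zero-cost) profile. The node names, two-dimensional features, and cost values are made up for illustration; the point is how the choice of distance metric can change which node a kNN query selects:

```python
import math

def euclidean(a, b):
    # L2 distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rectilinear(a, b):
    # L1 (Manhattan) distance between two feature vectors.
    return sum(abs(x - y) for x, y in zip(a, b))

def knn(query, profiles, k, dist):
    # profiles: {node_name: vector of predicted offloading costs}.
    # Return the k nodes whose hyperprofiles lie closest to the query point.
    ranked = sorted(profiles, key=lambda n: dist(query, profiles[n]))
    return ranked[:k]
```

With profiles (3, 0) and (2, 2) queried from the origin, Euclidean distance prefers the second node (√8 < 3) while rectilinear distance prefers the first (3 < 4), which is exactly the kind of metric-dependent outcome the abstract refers to.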
Fog Computing: A Taxonomy, Survey and Future Directions
In recent years, the number of Internet of Things (IoT) devices/sensors has
increased to a great extent. To support the computational demand of real-time
latency-sensitive applications of largely geo-distributed IoT devices/sensors,
a new computing paradigm named "Fog computing" has been introduced. Generally,
Fog computing resides closer to the IoT devices/sensors and extends the
Cloud-based computing, storage and networking facilities. In this chapter, we
comprehensively analyse the challenges in Fog acting as an intermediate layer
between IoT devices/sensors and Cloud datacentres, and review the current
developments in this field. We present a taxonomy of Fog computing according to
the identified challenges and its key features. We also map the existing works
to the taxonomy in order to identify current research gaps in the area of Fog
computing. Moreover, based on the observations, we propose future directions
for research.