Maximizing Service Reliability in Distributed Computing Systems with Random Node Failures: Theory and Implementation
In distributed computing systems (DCSs) where server nodes can fail permanently with nonzero probability, the system performance can be assessed by means of the service reliability, defined as the probability of serving all the tasks queued in the DCS before all the nodes fail. This paper presents a rigorous probabilistic framework to analytically characterize the service reliability of a DCS in the presence of communication uncertainties and stochastic topological changes due to node deletions. The framework considers a system composed of heterogeneous nodes with stochastic service and failure times and a communication network imposing random tangible delays. The framework also permits arbitrarily specified, distributed load-balancing actions to be taken by the individual nodes in order to improve the service reliability. The presented analysis is based upon a novel use of the concept of stochastic regeneration, which is exploited to derive a system of difference-differential equations characterizing the service reliability. The theory is further utilized to optimize certain load-balancing policies for maximal service reliability; the optimization is carried out by means of an algorithm that scales linearly with the number of nodes in the system. The analytical model is validated using both Monte Carlo simulations and experimental data collected from a DCS testbed.
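The service-reliability metric described above can be illustrated with a minimal Monte Carlo sketch. The model below is a deliberate simplification of the paper's framework: each node serves only its own queue (no load balancing, no communication delays), and service and failure times are exponential with illustrative rates chosen here, not taken from the paper.

```python
import random

def simulate_service_reliability(num_nodes=4, tasks_per_node=10,
                                 service_rate=1.0, failure_rate=0.05,
                                 trials=20_000, seed=1):
    """Monte Carlo estimate of service reliability: the probability that
    every queued task is served before the serving node fails permanently.
    Simplified model: independent nodes, exponential service/failure times."""
    rng = random.Random(seed)
    served_all = 0
    for _ in range(trials):
        unserved = 0
        for _ in range(num_nodes):
            # Time until this node fails permanently.
            t_fail = rng.expovariate(failure_rate)
            # Serve tasks one by one until the node fails.
            t, done = 0.0, 0
            while done < tasks_per_node:
                t += rng.expovariate(service_rate)
                if t > t_fail:
                    break
                done += 1
            unserved += tasks_per_node - done
        if unserved == 0:
            served_all += 1
    return served_all / trials
```

As expected, the estimate increases as the failure rate decreases; the paper's contribution is to characterize this probability analytically (via stochastic regeneration) rather than by simulation, and to optimize load-balancing actions that simulation alone cannot efficiently search.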
Benchmarking Practical RRM Algorithms for D2D Communications in LTE Advanced
Device-to-device (D2D) communication integrated into cellular networks is a
means to take advantage of the proximity of devices and allow for reusing
cellular resources and thereby to increase the user bitrates and the system
capacity. However, when D2D (in the 3rd Generation Partnership Project also
called Long Term Evolution (LTE) Direct) communication in cellular spectrum is
supported, there is a need to revisit and modify the existing radio resource
management (RRM) and power control (PC) techniques to realize the potential of
the proximity and reuse gains and to limit the interference at the cellular
layer. In this paper, we examine the performance of the flexible LTE PC
toolbox and benchmark it against a utility optimal iterative scheme. We find that
the open loop PC scheme of LTE performs well for cellular users both in terms
of the used transmit power levels and the achieved
signal-to-interference-and-noise-ratio (SINR) distribution. However, the
performance of the D2D users as well as the overall system throughput can be
boosted by the utility optimal scheme, because the utility maximizing scheme
takes better advantage of both the proximity and the reuse gains. Therefore, in
this paper we propose a hybrid PC scheme, in which cellular users employ the
open loop path compensation method of LTE, while D2D users use the utility
optimizing distributed PC scheme. In order to protect the cellular layer, the
hybrid scheme allows for limiting the interference caused by the D2D layer at
the cost of having a small impact on the performance of the D2D layer. To
ensure feasibility, we limit the number of iterations to a practically feasible
level. We make the point that the hybrid scheme is not only near optimal, but
it also allows for a distributed implementation for the D2D users, while
preserving the LTE PC scheme for the cellular users.

Comment: 30 pages, submitted for review April-2013. See also: G. Fodor, M.
Johansson, D. P. Demia, B. Marco, and A. Abrardo, A joint power control and
resource allocation algorithm for D2D communications, KTH, Automatic Control,
Tech. Rep., 2012, qC 20120910,
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10205
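The open loop scheme the abstract refers to is LTE's fractional path-loss compensation, P = min(Pmax, P0 + alpha * PL) in dBm. The sketch below shows that rule with illustrative parameter values (the specific P0, alpha, and Pmax used in the paper's evaluation are not given in the abstract):

```python
def open_loop_tx_power(path_loss_db, p0_dbm=-78.0, alpha=0.8, p_max_dbm=23.0):
    """LTE open loop power control with fractional path-loss compensation:
        P = min(Pmax, P0 + alpha * PL)   [dBm]
    alpha = 1 fully compensates path loss; alpha < 1 trades cell-edge power
    (and SINR) for lower interference. Parameter values are illustrative."""
    return min(p_max_dbm, p0_dbm + alpha * path_loss_db)
```

In the hybrid scheme proposed above, cellular users would keep this rule, while D2D users run the iterative utility-optimizing PC, with the iteration count capped for practicality.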