Approximation Algorithms for Wireless Link Scheduling with Flexible Data Rates
We consider scheduling problems in wireless networks with flexible data
rates: the amount of data transmitted per unit of time depends on the signal
quality, which is determined by the
signal-to-interference-plus-noise ratio (SINR). Each wireless link has a
utility function mapping SINR values to the respective data rates. We have to
decide which transmissions are performed simultaneously and (depending on the
problem variant) which transmission powers are used.
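As a concrete illustration of this model (all names and values here are assumptions for illustration, not taken from the paper), the SINR of a link and the resulting data rate under a utility function can be sketched as:

```python
def sinr(gain, power, noise, i):
    """SINR of link i: received signal strength divided by noise plus
    the summed interference from all other active senders.
    gain[j][i] is the (assumed) channel gain from sender j to receiver i."""
    signal = power[i] * gain[i][i]
    interference = sum(power[j] * gain[j][i]
                       for j in range(len(power)) if j != i)
    return signal / (noise + interference)

def data_rate(utility, gain, power, noise, i):
    """Data rate of link i: its utility function applied to its SINR."""
    return utility(sinr(gain, power, noise, i))
```

A Shannon-style utility such as `lambda s: math.log2(1 + s)` maps SINR to rate smoothly; the paper explicitly allows arbitrary, even discontinuous, utility functions.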
In the capacity-maximization problem, one strives to maximize the overall
network throughput, i.e., the summed utility of all links. For arbitrary
utility functions (not necessarily continuous ones), we present an O(log
n)-approximation for n communication requests. This algorithm builds on a
constant-factor approximation for the special case in which utility functions
consist of a single step. In other words, each link has an individual
threshold, and we aim at maximizing the number of links whose threshold is
satisfied. Along the way, this improves the result of
[Kesselheim, SODA 2011] by not only extending it to individual thresholds but
also showing a constant approximation factor independent of assumptions on the
underlying metric space or the network parameters.
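The single-step special case can be illustrated with a simple greedy feasibility check (a simplified sketch for intuition, not the paper's constant-factor algorithm; all names and numbers are assumptions):

```python
def sinr_in_set(gain, power, noise, active, i):
    """SINR of link i when exactly the links in `active` transmit."""
    interference = sum(power[j] * gain[j][i] for j in active if j != i)
    return power[i] * gain[i][i] / (noise + interference)

def greedy_threshold_schedule(gain, power, noise, thresholds):
    """Add links one at a time; keep a link only if every link chosen so
    far still meets its own individual SINR threshold."""
    chosen = []
    for i in range(len(power)):
        trial = chosen + [i]
        if all(sinr_in_set(gain, power, noise, trial, j) >= thresholds[j]
               for j in trial):
            chosen = trial
    return chosen
```

Note that interference is additive, so adding one more link can push previously feasible links below their thresholds; this is what makes the selection problem non-trivial.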
In addition, we consider the latency-minimization problem. Here, each link
has a demand, e.g., representing an amount of data. We have to compute a
schedule of shortest possible length such that each link's demand is
fulfilled, that is, the summed utility (data transferred) over the schedule
is at least as large as its demand. Based on the capacity-maximization
algorithm, we show an O(log^2 n)-approximation for this problem.
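The reduction from latency minimization to capacity maximization can be sketched as follows (a hedged sketch; `capacity_round` stands in for the paper's capacity-maximization subroutine and is an assumption of this illustration):

```python
def latency_schedule(demands, capacity_round):
    """Build a schedule slot by slot: in each slot, run a
    capacity-maximization routine on the links with remaining demand and
    subtract the data each scheduled link delivers.

    demands: dict mapping link -> amount of data to deliver.
    capacity_round: callable taking a set of links and returning a dict
        link -> data delivered in this slot (hypothetical subroutine).
    Returns the number of slots used.
    """
    remaining = dict(demands)
    slots = 0
    while remaining:
        delivered = capacity_round(set(remaining))
        for link, amount in delivered.items():
            remaining[link] -= amount
            if remaining[link] <= 0:
                del remaining[link]
        slots += 1
    return slots
```

Each slot is one invocation of the capacity routine; the repeated invocations are where the extra logarithmic factor in the O(log^2 n) bound comes from.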
Beyond Geometry: Towards Fully Realistic Wireless Models
Signal-strength models of wireless communications capture the gradual fading
of signals and the additivity of interference. As such, they are closer to
reality than other models. However, nearly all theoretical work in the SINR
model depends on the assumption of smooth geometric decay, one that holds in
free space but is far off in real environments. The challenge is to model
realistic environments, including walls, obstacles, reflections and anisotropic
antennas, without making the models algorithmically impractical or analytically
intractable.
We present a simple solution that allows the modeling of arbitrary static
situations by moving from geometry to arbitrary decay spaces. The complexity of
a setting is captured by a metricity parameter Z that indicates how far the
decay space is from satisfying the triangle inequality. All results that hold
in the SINR model in general metrics carry over to decay spaces, with the
resulting time complexity and approximation depending on Z in the same way
that the original results depend on the path-loss term alpha. For distributed
algorithms, which to date have appeared to depend necessarily on planarity,
we indicate how they can be adapted to arbitrary decay spaces.
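For a finite decay space given as a matrix of pairwise decay distances, the metricity parameter Z can be estimated as the worst-case violation of the triangle inequality (one natural formalization, assumed here for illustration; the paper's exact definition may differ):

```python
from itertools import permutations

def metricity(dist):
    """Largest factor by which the triangle inequality is violated:
    max over ordered triples (x, y, w) of dist[x][w] / (dist[x][y] + dist[y][w]),
    floored at 1.0. In a genuine metric space this returns 1.0."""
    n = len(dist)
    z = 1.0
    for x, y, w in permutations(range(n), 3):
        z = max(z, dist[x][w] / (dist[x][y] + dist[y][w]))
    return z
```

A decay space with a wall between two nearby points, say, can have a large Z even though the points are geometrically close, which is exactly the kind of situation the geometric SINR model cannot express.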
Finally, we explore the dependence on Z in the approximability of core
problems. In particular, we observe that the capacity maximization problem has
exponential upper and lower bounds in terms of Z in general decay spaces. In
Euclidean metrics and related growth-bounded decay spaces, the performance
depends on the exact metricity definition, with a polynomial upper bound in
terms of Z, but an exponential lower bound in terms of a variant parameter phi.
On the plane, the upper bound result actually yields the first approximation of
a capacity-type SINR problem that is subexponential in alpha.
Optimal Schedules for Data Gathering in Wireless Sensor Networks
Wireless Sensor Networks (WSNs) are widely used for target monitoring: sensors monitor a set of targets, and forward the collected or aggregated data using multi-hop routing to the same location, called the sink. The resulting communication scheme is called ConvergeCast or Aggregated ConvergeCast.
Several researchers have studied ConvergeCast and Aggregated ConvergeCast with the aim of producing the shortest possible schedule that conveys all the packets, or their aggregation, to the sink. Nearly all proposed methods proceed in two steps: first the routing, and then the scheduling of the packets along the routes defined in the first step.
The thesis is organized around four contributions. The first one is an improvement of previous mathematical models: it outputs a (minimum-sized) multiset of transmission configurations (TCs), where a transmission configuration is defined as a set of links that can transmit concurrently. Our model allows the transmission of several packets per target, in both single-path and multi-path settings, and we give two new heuristics for generating improved transmission configurations. While such models go beyond the routing step, they do not specify an ordering of the configurations over time. Consequently, the second contribution consists of several algorithms, one exact and several heuristics, for ordering the configurations. Our results show that scheduling restricted to a tree generated by the first contribution significantly outperforms the TC-ordering approach for single-rate traffic with a single packet per sensor, whereas the TC approach gives better results for multi-rate traffic and for large numbers of packets per sensor.
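The ordering step can be illustrated with a small greedy sketch (an assumed simplification for intuition, not one of the thesis's algorithms): a configuration may only be scheduled when every sender in it currently holds a packet, which is why the order of configurations matters.

```python
def order_configurations(configs, initial_packets, sink):
    """Greedily order a multiset of transmission configurations.
    Each configuration is a list of (sender, receiver) links; applying it
    moves one packet along every link. A configuration is applicable only
    if each of its senders currently holds a packet."""
    packets = dict(initial_packets)  # node -> packet count
    order = []
    pending = list(configs)
    while pending:
        for idx, tc in enumerate(pending):
            if all(packets.get(s, 0) > 0 for s, _ in tc):
                for s, r in tc:
                    packets[s] -= 1
                    packets[r] = packets.get(r, 0) + 1
                order.append(tc)
                pending.pop(idx)
                break
        else:
            raise ValueError("no applicable configuration")
    return order, packets.get(sink, 0)
```

On a two-hop line network, the configuration carrying the packet over the first hop must precede the one carrying it over the second, even though both appear in the unordered multiset.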
In the last two contributions, we propose exact mathematical models that handle, in a single phase, both the routing and the scheduling, for the ConvergeCast and the Aggregated ConvergeCast problems. Both are decomposition models in which we generate not only transmission configurations but also an ordering of them.
We performed extensive simulations on networks with up to 70 sensors for both ConvergeCast and Aggregated ConvergeCast, and compared our one-phase results with one of the best heuristics in the literature.