10 research outputs found

    Beyond Geometry: Towards Fully Realistic Wireless Models

    Full text link
    Signal-strength models of wireless communications capture the gradual fading of signals and the additivity of interference. As such, they are closer to reality than other models. However, nearly all theoretical work in the SINR model depends on the assumption of smooth geometric decay, one that is true in free space but is far off in actual environments. The challenge is to model realistic environments, including walls, obstacles, reflections and anisotropic antennas, without making the models algorithmically impractical or analytically intractable. We present a simple solution that allows the modeling of arbitrary static situations by moving from geometry to arbitrary decay spaces. The complexity of a setting is captured by a metricity parameter Z that indicates how far the decay space is from satisfying the triangle inequality. All results that hold in the SINR model in general metrics carry over to decay spaces, with the resulting time complexity and approximation depending on Z in the same way that the original results depend on the path loss term alpha. For distributed algorithms, which to date have appeared to depend necessarily on planarity, we indicate how they can be adapted to arbitrary decay spaces. Finally, we explore the dependence on Z in the approximability of core problems. In particular, we observe that the capacity maximization problem has exponential upper and lower bounds in terms of Z in general decay spaces. In Euclidean metrics and related growth-bounded decay spaces, the performance depends on the exact metricity definition, with a polynomial upper bound in terms of Z, but an exponential lower bound in terms of a variant parameter phi. On the plane, the upper bound result actually yields the first approximation of a capacity-type SINR problem that is subexponential in alpha.
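
    For reference, the standard SINR condition underlying the abstract above can be stated as follows (the notation P_i, d, alpha, N, beta is assumed here for illustration and is not fixed by the abstract): a transmission on link i succeeds if

    \[
      \frac{P_i \,/\, d(s_i, r_i)^{\alpha}}{N + \sum_{j \neq i} P_j \,/\, d(s_j, r_i)^{\alpha}} \;\geq\; \beta .
    \]

    A decay space, as described above, replaces the geometric term d(\cdot,\cdot)^{\alpha} by an arbitrary decay function f(\cdot,\cdot); the metricity parameter Z then quantifies how far f^{1/\alpha} is from satisfying the triangle inequality, for instance via a relaxed inequality of the form f(u,w)^{1/\alpha} \le Z \, \big( f(u,v)^{1/\alpha} + f(v,w)^{1/\alpha} \big). The precise definitions of Z and of the variant parameter phi are the paper's own; the relaxed inequality shown is only one illustrative formalization.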

    An Online Approach to Dynamic Channel Access and Transmission Scheduling

    Full text link
    Making judicious channel access and transmission scheduling decisions is essential for improving performance as well as energy and spectral efficiency in multichannel wireless systems. This problem has been a subject of extensive study in the past decade, and the resulting dynamic and opportunistic channel access schemes can bring potentially significant improvement over traditional schemes. However, a common and severe limitation of these dynamic schemes is that they almost always require some form of a priori knowledge of the channel statistics. A natural remedy is a learning framework, which has also been extensively studied in the same context, but a typical learning algorithm in this literature seeks only the best static policy, with performance measured by weak regret, rather than learning a good dynamic channel access policy. There is thus a clear disconnect between what an optimal channel access policy can achieve with known channel statistics that actively exploits temporal, spatial and spectral diversity, and what a typical existing learning algorithm aims for, which is the static use of a single channel devoid of diversity gain. In this paper we bridge this gap by designing learning algorithms that track known optimal or sub-optimal dynamic channel access and transmission scheduling policies, thereby yielding performance measured by a form of strong regret, the accumulated difference between the reward returned by an optimal solution when a priori information is available and that obtained by our online algorithm. We do so in the context of two specific algorithms that appeared in [1] and [2], respectively, the former for a multiuser single-channel setting and the latter for a single-user multichannel setting. In both cases we show that our algorithms achieve sub-linear regret uniformly in time and outperform the standard weak-regret learning algorithms. Comment: 10 pages, to appear in MobiHoc 201
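
    As a toy illustration of the strong-regret objective described above (a simplified sketch under assumed i.i.d. two-state channels, not the algorithms from [1] or [2]): a genie that observes the per-step channel states plays the optimal dynamic policy, an epsilon-greedy learner does not, and strong regret is the accumulated reward gap.

    import random

    def simulate_strong_regret(T=10000, p=(0.3, 0.8), eps=0.1, seed=0):
        """Toy two-channel model: channel k is 'good' (reward 1) w.p. p[k] in each step."""
        rng = random.Random(seed)
        est = [0.0, 0.0]          # empirical mean reward per channel
        counts = [0, 0]
        regret = 0.0
        for _ in range(T):
            states = [1 if rng.random() < p[k] else 0 for k in range(2)]
            genie_reward = max(states)            # genie sees the states: optimal dynamic policy
            if rng.random() < eps or 0 in counts:
                k = rng.randrange(2)              # explore
            else:
                k = 0 if est[0] >= est[1] else 1  # exploit the current estimate
            reward = states[k]
            counts[k] += 1
            est[k] += (reward - est[k]) / counts[k]
            regret += genie_reward - reward       # strong regret: gap to the dynamic optimum
        return regret

    print(simulate_strong_regret())

    In this toy the genie observes information the learner never sees, so the gap grows linearly; the abstract's point is that its algorithms track a known dynamic policy from observed feedback, which is what yields regret that is sub-linear uniformly in time.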

    Jamming-Resistant Learning in Wireless Networks

    Full text link
    We consider capacity maximization in wireless networks under adversarial interference conditions. There are n links, each consisting of a sender and a receiver, which repeatedly try to perform a successful transmission. In each time step, the success of attempted transmissions depends on interference conditions, which are captured by an interference model (e.g. the SINR model). Additionally, an adversarial jammer can render a (1-delta)-fraction of time steps unsuccessful. For this scenario, we analyze a framework for distributed learning algorithms to maximize the number of successful transmissions. Our main result is an algorithm based on no-regret learning converging to an O(1/delta)-approximation. It even provides a constant-factor approximation when the jammer exactly blocks a (1-delta)-fraction of time steps. In addition, we consider a stochastic jammer, for which we obtain a constant-factor approximation after a polynomial number of time steps. We also consider more general settings, in which links arrive and depart dynamically, and where each sender tries to reach multiple receivers. Our algorithms perform favorably in simulations. Comment: 22 pages, 2 figures, typos removed
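
    A minimal sketch of the no-regret building block alluded to above, for a single link choosing between the two actions transmit and idle via multiplicative-weights updates (the full-information feedback and the gain values are simplifying assumptions; the paper's framework and approximation analysis are not reproduced here):

    import random

    def mw_link(gains, eta=0.1, seed=0):
        """gains[t] = (gain if transmitting, gain if idle) in [0, 1], revealed after step t."""
        rng = random.Random(seed)
        w = [1.0, 1.0]                      # weights for actions: 0 = transmit, 1 = idle
        total = 0.0
        for g in gains:
            s = w[0] + w[1]
            action = 0 if rng.random() < w[0] / s else 1   # sample action proportional to weight
            total += g[action]
            # multiplicative-weights update on the observed gain vector
            w = [w[a] * (1.0 + eta * g[a]) for a in range(2)]
        return total

    # Example: a jammer blocks the first 30% of steps, so transmitting there yields nothing.
    T = 1000
    gains = [(0.0, 0.0)] * (3 * T // 10) + [(1.0, 0.0)] * (7 * T // 10)
    print(mw_link(gains))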

    Approximation Algorithms for Wireless Link Scheduling with Flexible Data Rates

    Full text link
    We consider scheduling problems in wireless networks with respect to flexible data rates. That is, more or less data can be transmitted per unit of time, depending on the signal quality, which is determined by the signal-to-interference-plus-noise ratio (SINR). Each wireless link has a utility function mapping SINR values to the respective data rates. We have to decide which transmissions are performed simultaneously and (depending on the problem variant) also which transmission powers are used. In the capacity-maximization problem, one strives to maximize the overall network throughput, i.e., the summed utility of all links. For arbitrary utility functions (not necessarily continuous ones), we present an O(log n)-approximation when having n communication requests. This algorithm is built on a constant-factor approximation for the special case of the respective problem where utility functions only consist of a single step. In other words, each link has an individual threshold and we aim at maximizing the number of links whose threshold is satisfied. Along the way, this improves the result in [Kesselheim, SODA 2011] by not only extending it to individual thresholds but also showing a constant approximation factor independent of assumptions on the underlying metric space or the network parameters. In addition, we consider the latency-minimization problem. Here, each link has a demand, e.g., representing an amount of data. We have to compute a schedule of shortest possible length such that for each link the demand is fulfilled, that is, the overall summed utility (or data transferred) is at least as large as its demand. Based on the capacity-maximization algorithm, we show an O(log^2 n)-approximation for this problem.
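
    In symbols (assumed here for illustration), the flexible-rate capacity-maximization problem described above can be written as

    \[
      \max_{L' \subseteq L,\ \text{power assignment}} \;\; \sum_{\ell \in L'} u_\ell\big(\mathrm{SINR}_\ell(L')\big),
      \qquad u_\ell(x) = \mathbf{1}[x \ge \beta_\ell] \ \text{in the single-step special case,}
    \]

    i.e. in the single-step case each link \ell has an individual threshold \beta_\ell and one maximizes the number of links whose threshold is met.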

    Scheduling in Wireless Networks with Rayleigh-Fading Interference

    No full text
    We study algorithms for wireless spectrum access of n communication requests when interference conditions are given by the Rayleigh-fading model. This model extends the recently popular deterministic interference model based on the signal-to-interference-plus-noise ratio (SINR) using stochastic propagation to address fading effects observed in reality. We consider worst-case approximation guarantees for the two standard problems of capacity maximization (maximize the expected number of successful transmissions in a single slot) and latency minimization (minimize the expected number of slots until all transmissions have been successful). Our main result is a generic reduction of Rayleigh fading to the deterministic SINR model. It allows us to apply existing algorithms for the non-fading model in the Rayleigh-fading scenario while losing only a factor of O(log* n) in the approximation guarantee. This way, we obtain the first approximation guarantees for Rayleigh fading and, more fundamentally, show that non-trivial stochastic fading effects can be successfully handled using existing and future techniques for the non-fading model. Using a more detailed argument, a similar result applies even for distributed and game-theoretic capacity maximization approaches. For example, it allows us to show that regret learning yields an O(log* n)-approximation with uniform power assignments. Our analytical treatment is supported by simulations illustrating the performance of regret learning and, more generally, the relationship between both models.
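
    To illustrate why Rayleigh fading is amenable to such a reduction, recall the textbook closed form (notation assumed here, not taken from the abstract): if the received signal power S is exponentially distributed with mean \bar{S} (Rayleigh fading of the amplitude) and the interference powers I_v are independent exponentials with means \bar{I}_v, then

    \[
      \Pr\Big[ S \ge \beta \big( N + \textstyle\sum_v I_v \big) \Big]
      \;=\; e^{-\beta N / \bar{S}} \prod_v \frac{1}{1 + \beta \bar{I}_v / \bar{S}} ,
    \]

    so the success probability under fading is expressed entirely through the mean (non-fading) signal and interference powers. This is only the standard computation behind the model; the paper's reduction and its O(log* n) loss are as stated in the abstract.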