
    A Greedy Link Scheduler for Wireless Networks with Fading Channels

    We consider the problem of link scheduling for wireless networks with fading channels, where the link rates vary over time. Because the throughput-optimal scheduler has high computational complexity, we propose a low-complexity greedy link scheduler, GFS, with provable performance guarantees. We show that the performance of our greedy scheduler can be analyzed using the Local Pooling Factor (LPF) of a network graph, which has previously been used to characterize the stability of the Greedy Maximal Scheduling (GMS) policy for networks with static channels. We conjecture that the performance of GFS is a lower bound on the performance of GMS for wireless networks with fading channels.
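
    The sketch below illustrates, under assumed data structures, the greedy-maximal flavor of scheduling this abstract builds on: in each slot, links are considered in decreasing order of backlog times instantaneous (fading) rate, and a link is activated only if it does not conflict with one already chosen. It is a minimal illustration, not the GFS algorithm itself; the conflicts map and the example numbers are assumptions.

    # Minimal sketch of greedy link scheduling with fading (time-varying) rates.
    # NOT the paper's GFS policy; it only shows the greedy-maximal idea of
    # activating links in decreasing order of backlog * instantaneous rate,
    # subject to a conflict constraint.
    def greedy_schedule(queues, rates, conflicts):
        """
        queues:    dict link -> backlog (packets)
        rates:     dict link -> achievable rate in the current slot (fading)
        conflicts: dict link -> set of links that cannot be active at the same time
        returns:   set of links activated in this slot
        """
        order = sorted(queues, key=lambda l: queues[l] * rates[l], reverse=True)
        scheduled = set()
        for link in order:
            if queues[link] == 0 or rates[link] == 0:
                continue
            # Activate the link only if it conflicts with nothing already chosen.
            if conflicts.get(link, set()).isdisjoint(scheduled):
                scheduled.add(link)
        return scheduled

    # Example: links "a" and "b" interfere with each other, link "c" is free.
    queues = {"a": 10, "b": 4, "c": 7}
    rates = {"a": 2.0, "b": 6.0, "c": 1.0}
    conflicts = {"a": {"b"}, "b": {"a"}, "c": set()}
    print(greedy_schedule(queues, rates, conflicts))   # {'b', 'c'}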

    Multiuser Scheduling in a Markov-modeled Downlink using Randomly Delayed ARQ Feedback

    We focus on the downlink of a cellular system, which carries the bulk of the data transfer in such wireless systems. We address the problem of opportunistic multiuser scheduling under imperfect channel state information by exploiting the memory inherent in the channel. In our setting, the channel between the base station and each user is modeled by a two-state Markov chain, and the scheduled user sends back an ARQ feedback signal that arrives at the scheduler with a random delay that is i.i.d. across users and time. The scheduler indirectly estimates the channel via accumulated delayed ARQ feedback and uses this information to make scheduling decisions. We formulate a throughput maximization problem as a partially observable Markov decision process (POMDP). For the case of two users in the system, we show that a greedy policy is sum-throughput optimal for any distribution on the ARQ feedback delay. For more than two users, we prove that the greedy policy is suboptimal and demonstrate, via numerical studies, that it has near-optimal performance. We show that the greedy policy can be implemented by a simple algorithm that does not require the statistics of the underlying Markov channel or of the ARQ feedback delay, making it robust against errors in system parameter estimation. By establishing an equivalence between the two-user system and a genie-aided system, we obtain a simple closed-form expression for the sum capacity of the Markov-modeled downlink. We further derive inner and outer bounds on the capacity region of the Markov-modeled downlink and tighten these bounds for special cases of the system parameters.
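
    A toy sketch of the belief-based greedy rule this abstract analyzes, under simplifying assumptions: a two-state (Gilbert-Elliott) channel per user, a fixed one-slot ARQ delay, and the transition probabilities given below. It is meant only to show the policy's structure, not the paper's POMDP formulation.

    # Toy sketch of greedy belief-based downlink scheduling over two-state
    # Markov (Gilbert-Elliott) channels. Assumptions for the example: a fixed
    # one-slot ARQ delay and the transition probabilities below.
    P_STAY_GOOD = 0.8    # P(good -> good), assumed
    P_BAD_TO_GOOD = 0.2  # P(bad -> good), assumed

    def propagate(belief):
        """One-step Markov update of P(channel is good)."""
        return belief * P_STAY_GOOD + (1.0 - belief) * P_BAD_TO_GOOD

    def greedy_user(beliefs):
        """Greedy policy: serve the user believed most likely to have a good channel."""
        return max(beliefs, key=beliefs.get)

    def fold_in_ack(beliefs, user, ack, delay):
        """Turn a delayed ARQ bit (ACK = channel was good when served) into a
        fresh belief by re-propagating it `delay` slots forward."""
        b = 1.0 if ack else 0.0
        for _ in range(delay):
            b = propagate(b)
        beliefs[user] = b

    # One scheduling step with two users.
    beliefs = {"u1": 0.5, "u2": 0.6}
    served = greedy_user(beliefs)                            # slot t: serve "u2"
    beliefs = {u: propagate(b) for u, b in beliefs.items()}  # advance all beliefs
    fold_in_ack(beliefs, served, ack=True, delay=1)          # slot t+1: ACK arrives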

    Feedback Allocation For OFDMA Systems With Slow Frequency-domain Scheduling

    We study the problem of allocating limited feedback resources across multiple users in an orthogonal-frequency-division-multiple-access downlink system with slow frequency-domain scheduling. Many flavors of slow frequency-domain scheduling (e.g., persistent scheduling and semi-persistent scheduling), which adapt user-sub-band assignments on a slower time scale, are being considered in standards such as 3GPP Long-Term Evolution. In this paper, we develop a feedback allocation algorithm that operates in conjunction with any arbitrary slow frequency-domain scheduler with the goal of improving the throughput of the system. Given a user-sub-band assignment chosen by the scheduler, the feedback allocation algorithm solves a weighted sum-rate maximization at each (slow) scheduling instant. We first develop an optimal dynamic-programming-based algorithm that solves the feedback allocation problem with pseudo-polynomial complexity in the number of users and in the total feedback bit budget. We then propose two approximation algorithms with further reduced complexity for scenarios where the problem exhibits additional structure.
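
    A rough dynamic-programming sketch of the allocation step described above, under assumptions: per-user rate estimates are given as a table indexed by the number of feedback bits, and the goal is to maximize a weighted sum of these estimates subject to a total bit budget. This illustrates the pseudo-polynomial DP idea only; it is not the paper's algorithm.

    # Sketch: split a budget of feedback bits among users to maximize a weighted
    # sum of per-user rate estimates. The rate table is an assumed example.
    def allocate_feedback(rate, weights, budget):
        """
        rate[k][b] : estimated rate of user k when given b feedback bits
        weights[k] : scheduler weight of user k
        budget     : total number of feedback bits available
        returns    : (best weighted sum-rate, list of bits per user)
        """
        K = len(weights)
        best = [0.0] * (budget + 1)                  # best value using b bits so far
        choice = [[0] * (budget + 1) for _ in range(K)]
        for k in range(K):
            new = [0.0] * (budget + 1)
            for b in range(budget + 1):
                for give in range(b + 1):            # bits handed to user k
                    val = best[b - give] + weights[k] * rate[k][give]
                    if val >= new[b]:
                        new[b] = val
                        choice[k][b] = give
            best = new
        # Backtrack the per-user allocation.
        alloc, b = [0] * K, budget
        for k in reversed(range(K)):
            alloc[k] = choice[k][b]
            b -= alloc[k]
        return best[budget], alloc

    # Example: 2 users, 4 bits total, diminishing returns in the rate table.
    rate = [[0.0, 1.0, 1.6, 1.9, 2.0],
            [0.0, 0.8, 1.4, 1.8, 2.1]]
    print(allocate_feedback(rate, weights=[1.0, 1.0], budget=4))   # (3.0, [2, 2])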

    Low-feedback multiple-access and scheduling via location and geometry information


    Scheduling for next generation WLANs: filling the gap between offered and observed data rates

    In wireless networks, opportunistic scheduling is used to increase system throughput by exploiting multi-user diversity. Although recent advances have increased the physical-layer data rates supported in wireless local area networks (WLANs), the actual throughput realized is significantly lower due to overhead. Accordingly, the frame aggregation concept is used in next generation WLANs to improve efficiency. However, with frame aggregation, traditional opportunistic schemes are no longer optimal. In this paper, we propose schedulers that take queue and channel conditions into account jointly to maximize the throughput observed at the users in next generation WLANs. We also extend this work to design two schedulers that perform block scheduling to maximize network throughput over multiple transmission sequences. For these schedulers, which make decisions over long time durations, we model the system using queueing theory and determine users' temporal access proportions according to this model. Through detailed simulations, we show that all our proposed algorithms offer significant throughput improvement, better fairness, and much lower delay compared with traditional opportunistic schedulers, facilitating the practical use of the evolving standard for next generation wireless networks.
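
    The sketch below illustrates the core point with assumed numbers: under frame aggregation, the payload actually delivered in a transmission opportunity is capped by both the PHY rate and the queue backlog, so a scheduler should weigh the two jointly rather than pick the user with the highest raw rate. It is a simplified illustration, not one of the proposed schedulers.

    # With frame aggregation, delivered payload = min(what the TXOP can carry at
    # the user's PHY rate, what is queued for that user). Numbers are assumed.
    def delivered_bytes(rate_mbps, backlog_bytes, txop_ms, overhead_ms=0.1):
        """Bytes a user can actually receive in one aggregated transmission."""
        airtime_ms = max(txop_ms - overhead_ms, 0.0)
        capacity_bytes = rate_mbps * 1e6 / 8 * airtime_ms / 1e3
        return min(capacity_bytes, backlog_bytes)

    def pick_user(users, txop_ms=1.0):
        """users: dict name -> (PHY rate in Mbit/s, queued bytes)."""
        return max(users, key=lambda u: delivered_bytes(*users[u], txop_ms))

    # A high-rate user with an almost empty queue loses to a slower, backlogged one.
    users = {"fast_but_idle": (300.0, 1_000),
             "slow_but_backlogged": (120.0, 50_000)}
    print(pick_user(users))   # -> "slow_but_backlogged": it fills the TXOP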

    Spatial CSMA: A Distributed Scheduling Algorithm for the SIR Model with Time-varying Channels

    Recent work has shown that adaptive CSMA algorithms can achieve throughput optimality. However, these adaptive CSMA algorithms assume a rather simplistic model of the wireless medium: the interference is typically modelled by a conflict graph, and the channels are assumed to be static. In this work, we propose a distributed and adaptive CSMA algorithm under a more realistic signal-to-interference ratio (SIR) based interference model with time-varying channels. We prove that our algorithm is throughput optimal under this generalized model. Further, we augment the proposed algorithm with a parallel update technique. Numerical results show that our algorithm outperforms conflict-graph-based algorithms in terms of supportable throughput and the rate of convergence to steady state.
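
    A simplified, discrete-time illustration of adaptive CSMA under an SIR constraint, with assumed gains, noise, and threshold: in each slot one link re-decides its state with a queue-driven (fugacity) probability, and the new state is kept only if every active link still meets the SIR threshold. This is a conceptual sketch, not the proposed algorithm or its parallel-update variant.

    # Simplified adaptive CSMA under an SIR interference model (assumed numbers).
    import math
    import random

    SIR_THRESHOLD = 2.0
    NOISE = 0.1

    def sir_ok(active, gain):
        """gain[i][j] = gain from transmitter of link j to receiver of link i."""
        for i in active:
            interference = sum(gain[i][j] for j in active if j != i)
            if gain[i][i] / (interference + NOISE) < SIR_THRESHOLD:
                return False
        return True

    def csma_step(active, queues, gain):
        """One Glauber-dynamics-style update of a randomly chosen link."""
        link = random.randrange(len(queues))
        fugacity = math.exp(queues[link])        # larger queue -> more aggressive
        p_on = fugacity / (1.0 + fugacity)
        proposal = set(active)
        if random.random() < p_on:
            proposal.add(link)
        else:
            proposal.discard(link)
        # Keep the proposal only if it remains SIR-feasible; the gains could be
        # re-drawn each slot to model fading.
        return proposal if sir_ok(proposal, gain) else active

    # Example: two links whose mutual interference is too strong to coexist.
    gain = [[1.0, 0.9], [0.9, 1.0]]
    active = set()
    for _ in range(20):
        active = csma_step(active, queues=[2.0, 0.5], gain=gain)
    print(active)   # at most one link is active, since both together violate the SIR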

    Autonomous Algorithms for Centralized and Distributed Interference Coordination: A Virtual Layer Based Approach

    Interference mitigation techniques are essential for improving the performance of interference-limited wireless networks. In this paper, we introduce novel interference mitigation schemes for wireless cellular networks with space division multiple access (SDMA). The schemes are based on a virtual layer that captures and simplifies the complicated interference situation in the network and that is used for power control. We show how optimization in this virtual layer generates gradually adapting power control settings that lead to autonomous interference minimization. The granularity of control ranges from frequency sub-band power, through per-beam power, down to merely enforcing average power constraints per beam. In conjunction with suitable short-term scheduling, our algorithms gradually steer the network towards higher utility. We use extensive system-level simulations to compare three distributed algorithms and evaluate their applicability under different user mobility assumptions. In particular, it turns out that larger gains can be achieved by imposing only average power constraints and allowing instantaneous opportunistic scheduling, rather than controlling the power strictly. Furthermore, as a performance benchmark for the distributed solutions, we introduce a centralized algorithm that directly solves the underlying optimization and shows fast convergence. Moreover, we investigate the deviation from global optimality by comparing against a branch-and-bound-based solution.
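
    As a conceptual sketch of the gradual, per-beam power adaptation described above (with all prices, marginal utilities, and step sizes assumed rather than taken from the paper), each beam's power can be nudged in the direction of its own marginal utility minus a virtual-layer interference price, then clipped to its allowed range:

    # Conceptual per-beam power adaptation driven by virtual-layer interference
    # prices; all numbers are assumptions for the example.
    def adapt_powers(powers, marginal_utility, interference_price,
                     step=0.05, p_min=0.0, p_max=1.0):
        """One gradual update of per-beam transmit powers.
        powers             : current per-beam powers
        marginal_utility   : d(utility)/d(power) for each beam (assumed known)
        interference_price : virtual-layer cost of the interference each beam
                             causes to neighbouring cells (assumed known)"""
        new = []
        for p, mu, price in zip(powers, marginal_utility, interference_price):
            p = p + step * (mu - price)            # gradient-style step
            new.append(min(max(p, p_min), p_max))  # respect per-beam power limits
        return new

    # Example: beam 0 causes strong interference to neighbours, beam 1 does not.
    powers = [0.8, 0.4]
    for _ in range(10):
        powers = adapt_powers(powers,
                              marginal_utility=[0.5, 0.6],
                              interference_price=[1.2, 0.1])
    print(powers)   # beam 0's power drifts down, beam 1's drifts up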

    Performance analysis of SIMO space-time scheduling with convex utility function: Zero-forcing linear processing

    In a multiple-antenna system, an optimized design across the link and scheduling layers is crucial to fully exploiting the temporal and spatial dimensions of the communication channel. In this paper, based on discrete optimization techniques, we derive a novel analytical framework for designing optimal space-time scheduling algorithms with respect to general convex utility functions. We focus on the reverse link (i.e., client to base station) and assume that each mobile terminal has a single transmit antenna while the base station has nR receive antennas. To keep the proposed framework practicable and implementable at reasonable cost in a real environment, we further assume that the physical layer uses only linear-processing complexity to separate signals from different users. To illustrate the efficacy of the proposed analytical design framework, we apply it to two commonly used system utility functions, namely maximal throughput and proportional fairness. We then devise an optimal scheduling algorithm based on our design framework. However, in view of the formidable time complexity of the optimal algorithm, we propose two fast practical scheduling techniques, namely a greedy algorithm and a genetic algorithm (GA). The greedy algorithm, which is similar to the one widely used in 3G1X and Qualcomm high-data-rate (HDR) systems (and optimal when nR = 1), exhibits significantly inferior performance when nR > 1 compared with the optimal approach. On the other hand, the GA is quite promising in terms of the performance-complexity tradeoff, especially for a system with a large number of users and even a moderately large nR. As a case in point, for a system with 20 users and nR = 4, the GA is more than 36 times faster than the optimal algorithm while the performance degradation is less than 10%, making it an attractive choice for the practical implementation of real-time link scheduling.
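
    A rough numpy sketch of the greedy user-selection style the abstract compares against the optimal and GA approaches, for a SIMO reverse link with a zero-forcing receiver; the channel model, unit transmit powers, and noise level are assumptions, and the formulation is simplified relative to the paper.

    # Greedy user selection for a SIMO uplink with a zero-forcing (ZF) receiver.
    # Assumed: unit transmit power per user, i.i.d. Gaussian channels, unit noise.
    import numpy as np

    def zf_sum_rate(H, noise=1.0):
        """Sum rate of the users whose channels are the columns of H under ZF."""
        if H.shape[1] == 0:
            return 0.0
        G = np.linalg.pinv(H)                               # ZF combining rows
        post_zf_snr = 1.0 / (noise * np.sum(np.abs(G) ** 2, axis=1))
        return float(np.sum(np.log2(1.0 + post_zf_snr)))

    def greedy_select(channels, max_users):
        """channels: list of length-nR channel vectors, one per user.
        Greedily add the user whose inclusion most increases the ZF sum rate."""
        selected, current = [], 0.0
        while len(selected) < max_users:
            best_k, best_rate = None, current
            for k, h in enumerate(channels):
                if k in selected:
                    continue
                H = np.column_stack([channels[j] for j in selected] + [h])
                rate = zf_sum_rate(H)
                if rate > best_rate:
                    best_k, best_rate = k, rate
            if best_k is None:      # no remaining user improves the sum rate
                break
            selected.append(best_k)
            current = best_rate
        return selected, current

    # Example: 20 single-antenna users, nR = 4 receive antennas.
    rng = np.random.default_rng(0)
    users = [rng.standard_normal(4) for _ in range(20)]
    print(greedy_select(users, max_users=4))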