    Slow Adaptive OFDMA Systems Through Chance Constrained Programming

    Adaptive OFDMA has recently been recognized as a promising technique for providing high spectral efficiency in future broadband wireless systems. The research over the last decade on adaptive OFDMA systems has focused on adapting the allocation of radio resources, such as subcarriers and power, to the instantaneous channel conditions of all users. However, such "fast" adaptation requires high computational complexity and excessive signaling overhead. This hinders the deployment of adaptive OFDMA systems worldwide. This paper proposes a slow adaptive OFDMA scheme, in which the subcarrier allocation is updated on a much slower timescale than that of the fluctuation of instantaneous channel conditions. Meanwhile, the data rate requirements of individual users are accommodated on the fast timescale with high probability, so that the requirements are met except for occasional outages. Such an objective has a natural chance constrained programming formulation, which is known to be intractable. To circumvent this difficulty, we formulate safe tractable constraints for the problem based on recent advances in chance constrained programming. We then develop a polynomial-time algorithm for computing an optimal solution to the reformulated problem. Our results show that the proposed slow adaptation scheme drastically reduces both computational cost and control signaling overhead compared with conventional fast adaptive OFDMA. Our work can be viewed as an initial attempt to apply the chance constrained programming methodology to wireless system designs. Given that most wireless systems can tolerate an occasional dip in the quality of service, we hope that the proposed methodology will find further applications in wireless communications.
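    A minimal sketch of the flavor of such a formulation, substituting the simpler scenario approximation (sampled constraints) for the paper's Bernstein-type safe tractable constraints; the users, subcarriers, demands, and exponential fading model below are all invented for illustration. The LP picks fractional subcarrier shares that meet every user's demand under each sampled channel realization:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
K, S, N = 3, 32, 50             # users, subcarriers, sampled channel states
d = np.full(K, 8.0)             # per-user rate demands (hypothetical units)

# Sampled per-subcarrier rates for each channel realization; the exponential
# fading model and all numbers here are invented for illustration.
rates = rng.exponential(scale=2.0, size=(N, K, S))

nvar = K * S                    # x[k*S + s] = share of subcarrier s given to user k
c = np.ones(nvar)               # minimise total allocated share (proxy objective)

A_ub, b_ub = [], []
for s in range(S):              # each subcarrier is shared at most once
    row = np.zeros(nvar)
    row[s::S] = 1.0
    A_ub.append(row); b_ub.append(1.0)
for i in range(N):              # scenario constraints: demand met in every sample
    for k in range(K):
        row = np.zeros(nvar)
        row[k*S:(k+1)*S] = -rates[i, k]
        A_ub.append(row); b_ub.append(-d[k])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * nvar, method="highs")
print("feasible:", res.success, " total share used:", res.fun)
```

    Because the single allocation x must satisfy the demands under all N sampled realizations, it plays the role of the slowly adapted decision that must survive the fast channel fluctuations.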

    Random projections for linear programming

    Random projections are random linear maps, sampled from appropriate distributions, that approximately preserve certain geometrical invariants so that the approximation improves as the dimension of the space grows. The well-known Johnson-Lindenstrauss lemma states that there are random matrices with surprisingly few rows that approximately preserve pairwise Euclidean distances among a set of points. This is commonly used to speed up algorithms based on Euclidean distances. We prove that these matrices also preserve other quantities, such as the distance to a cone. We exploit this result to devise a probabilistic algorithm to solve linear programs approximately. We show that this algorithm can approximately solve very large randomly generated LP instances. We also showcase its application to an error correction coding problem. Comment: 26 pages, 1 figure.
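    A hedged sketch of the core idea under simple assumptions (standard-form LP, Gaussian sketching matrix, randomly generated problem data): replace the m equality constraints Ax = b with k << m projected constraints (TA)x = Tb and solve the smaller LP.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 500, 800, 60                  # original constraints, variables, sketch size

# Random feasible standard-form LP: min c'x  s.t.  Ax = b, x >= 0.
A = rng.standard_normal((m, n))
b = A @ rng.uniform(0.5, 1.5, n)        # b chosen so the LP is feasible
c = rng.uniform(0.1, 1.0, n)            # c >= 0 keeps the LP bounded below

# Gaussian projection T with k << m rows (Johnson-Lindenstrauss style).
T = rng.standard_normal((k, m)) / np.sqrt(k)

full = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * n, method="highs")
proj = linprog(c, A_eq=T @ A, b_eq=T @ b, bounds=[(0, None)] * n, method="highs")
print("full optimum:", full.fun, " projected optimum:", proj.fun)
```

    Any x with Ax = b also satisfies (TA)x = Tb, so the projected problem is a relaxation of the original: its optimum is a lower bound, which the paper's analysis shows is close to the true value with high probability.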

    The CONEstrip algorithm

    Uncertainty models such as sets of desirable gambles and (conditional) lower previsions can be represented as convex cones. Checking the consistency of, and drawing inferences from, such models requires solving feasibility and optimization problems. We consider such models that are finitely generated. For closed cones, we can use linear programming; for conditional lower prevision-based cones, there is an efficient algorithm using an iteration of linear programs. We present an efficient algorithm for general cones that also uses an iteration of linear programs.
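    For the closed-cone case mentioned above, membership reduces to a single LP: a gamble f lies in the cone finitely generated by the columns of R iff some nonnegative combination of those columns equals f. The sketch below implements only this LP test, with hypothetical gambles; the full CONEstrip algorithm additionally iterates such LPs to handle general, non-closed cones.

```python
import numpy as np
from scipy.optimize import linprog

def in_closed_cone(R, f):
    """Is gamble f in the closed convex cone {R @ lam : lam >= 0}
    generated by the columns of R? One LP feasibility problem."""
    n_gen = R.shape[1]
    res = linprog(np.zeros(n_gen), A_eq=R, b_eq=f,
                  bounds=[(0, None)] * n_gen, method="highs")
    return res.success

# Three generating gambles on a four-outcome space (hypothetical values).
R = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 1.0,  2.0],
              [ 2.0, 1.0,  0.0],
              [-1.0, 3.0,  1.0]])
print(in_closed_cone(R, R @ np.array([0.5, 2.0, 1.0])))   # True: conic combination
print(in_closed_cone(R, np.array([0.0, 0.0, 0.0, -1.0]))) # False: outside the cone
```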

    Probabilistic analysis of a differential equation for linear programming

    In this paper we address the complexity of solving linear programming problems with a set of differential equations that converge to a fixed point that represents the optimal solution. Assuming a probabilistic model, where the inputs are i.i.d. Gaussian variables, we compute the distribution of the convergence rate to the attracting fixed point. Using the framework of Random Matrix Theory, we derive a simple expression for this distribution in the asymptotic limit of large problem size. In this limit, we find that the distribution of the convergence rate is a scaling function, namely it is a function of one variable that is a combination of three parameters: the number of variables, the number of constraints and the convergence rate, rather than a function of these parameters separately. We also estimate numerically the distribution of computation times, namely the time required to reach a vicinity of the attracting fixed point, and find that it is also a scaling function. Using the problem size dependence of the distribution functions, we derive high probability bounds on the convergence rates and on the computation times. Comment: 1+37 pages, LaTeX, 5 EPS figures. Version accepted for publication in the Journal of Complexity. Changes made: presentation reorganized for clarity, expanded discussion of measure of complexity in the non-asymptotic regime (added a new section).
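    The specific dynamical system and its random-matrix analysis are the paper's contribution; purely as a generic illustration of the underlying idea (following an ODE to an attracting fixed point that approximates the LP optimum), the sketch below integrates a simple penalty gradient flow with Euler steps on an invented box-constrained LP, where the exact optimum is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
c = rng.standard_normal(n)

# Box constraints 0 <= x <= 1 written as A x <= b; the exact LP optimum of
# min c'x is then x_i = 1 where c_i < 0 and x_i = 0 where c_i > 0.
A = np.vstack([np.eye(n), -np.eye(n)])
b = np.concatenate([np.ones(n), np.zeros(n)])

mu, dt, steps = 50.0, 1e-3, 20_000      # penalty weight, Euler step, iterations
x = np.full(n, 0.5)                     # start in the interior
for _ in range(steps):
    viol = np.maximum(A @ x - b, 0.0)   # constraint violations
    x -= dt * (c + mu * A.T @ viol)     # Euler step of the penalty gradient flow

print("flow fixed point:", np.round(x, 3))   # within O(1/mu) of the optimum
print("exact LP optimum:", (c < 0).astype(float))
```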