
    Slow Adaptive OFDMA Systems Through Chance Constrained Programming

    Adaptive OFDMA has recently been recognized as a promising technique for providing high spectral efficiency in future broadband wireless systems. The research over the last decade on adaptive OFDMA systems has focused on adapting the allocation of radio resources, such as subcarriers and power, to the instantaneous channel conditions of all users. However, such "fast" adaptation requires high computational complexity and excessive signaling overhead, which hinders the widespread deployment of adaptive OFDMA systems. This paper proposes a slow adaptive OFDMA scheme, in which the subcarrier allocation is updated on a much slower timescale than that of the fluctuation of instantaneous channel conditions. Meanwhile, the data rate requirements of individual users are accommodated on the fast timescale with high probability, so that the requirements are met except for occasional outages. Such an objective has a natural chance constrained programming formulation, which is known to be intractable. To circumvent this difficulty, we formulate safe tractable constraints for the problem based on recent advances in chance constrained programming. We then develop a polynomial-time algorithm for computing an optimal solution to the reformulated problem. Our results show that the proposed slow adaptation scheme drastically reduces both computational cost and control signaling overhead compared with conventional fast adaptive OFDMA. Our work can be viewed as an initial attempt to apply the chance constrained programming methodology to wireless system design. Given that most wireless systems can tolerate an occasional dip in the quality of service, we hope that the proposed methodology will find further applications in wireless communications.
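    A rough sketch of the kind of chance constraint involved (generic notation, not necessarily the paper's exact formulation): with a slowly updated subcarrier-allocation variable x_{u,k} and instantaneous per-subcarrier rate r_{u,k}(t), requiring that user u receives its rate R_u with outage probability at most epsilon can be written in LaTeX as

        \Pr\Big\{ \textstyle\sum_{k} x_{u,k}\, r_{u,k}(t) \ge R_u \Big\} \ge 1 - \epsilon \qquad \text{for every user } u,

    and a safe tractable reformulation replaces this probabilistic condition with a convex restriction (for example a Bernstein-type bound) that implies it.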

    Constraint Programming and Safe Global Optimization

    We investigate the capabilities of constraint programming techniques in rigorous global optimization methods. We introduce different constraint programming techniques to reduce the gap between efficient but unsafe systems like Baron, and safe but slow global optimization approaches. We show how constraint programming filtering techniques can be used to implement optimality-based reduction in a safe and efficient way, and thus to take advantage of the known bounds of the objective function to reduce the domains of the variables and to speed up the search for a global optimum. We describe an efficient strategy to compute very accurate approximations of feasible points. This strategy takes advantage of the Newton method for under-constrained systems of equalities and inequalities to efficiently compute a promising upper bound. Experiments on the COCONUT benchmarks demonstrate that these techniques drastically improve performance.
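    A toy illustration of the optimality-based-reduction idea (a one-dimensional quadratic with hypothetical names; the paper's actual filtering runs inside a constraint programming solver): once an incumbent upper bound on the global minimum is known, any part of a variable's domain where the objective must exceed that bound can be safely discarded.

        import math

        def optimality_based_reduction(lo, hi, c, f_best):
            """Shrink the box [lo, hi] for f(x) = (x - c)**2 given an incumbent f_best.

            No global optimum can have objective value above f_best, so every x
            with (x - c)**2 > f_best can be removed; the surviving set is
            [c - sqrt(f_best), c + sqrt(f_best)] intersected with [lo, hi]."""
            r = math.sqrt(max(f_best, 0.0))
            new_lo, new_hi = max(lo, c - r), min(hi, c + r)
            if new_lo > new_hi:
                return None  # the box cannot contain a global optimum
            return new_lo, new_hi

        # Example: box [-10, 10], objective (x - 1)**2, incumbent value 4.0
        print(optimality_based_reduction(-10.0, 10.0, 1.0, 4.0))  # -> (-1.0, 3.0)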

    Survey on Combinatorial Register Allocation and Instruction Scheduling

    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can deliver optimal solutions according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible, at the expense of increased compilation time. This paper provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques most commonly applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.
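    A minimal sketch of the kind of constraint model these approaches build on (hypothetical names, with exhaustive search standing in for a real IP/CP solver): interfering variables, i.e. variables that are live at the same time, must not share a register; realistic models add spill costs, coalescing, and scheduling decisions on top.

        from itertools import product

        def allocate_registers(variables, interference, num_regs):
            """Assign each variable a register so that interfering variables differ.

            Brute-force search over all assignments; combinatorial approaches
            encode the same constraints for an IP/CP solver instead."""
            for assignment in product(range(num_regs), repeat=len(variables)):
                regs = dict(zip(variables, assignment))
                if all(regs[a] != regs[b] for a, b in interference):
                    return regs
            return None  # no allocation exists: some variable would have to be spilled

        # Example: a, b and c are pairwise live at the same time, so 3 registers are needed
        print(allocate_registers(["a", "b", "c"], [("a", "b"), ("a", "c"), ("b", "c")], 3))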

    Towards Fast-Convergence, Low-Delay and Low-Complexity Network Optimization

    Distributed network optimization has been studied for well over a decade. However, we still do not have a good idea of how to design schemes that can simultaneously provide good performance across the dimensions of utility optimality, convergence speed, and delay. To address these challenges, in this paper we propose a new algorithmic framework in which all of these metrics approach optimality. The salient features of our new algorithm are three-fold: (i) fast convergence: it converges in only O(log(1/ε)) iterations, the fastest rate among all existing algorithms; (ii) low delay: it guarantees optimal utility with finite queue length; (iii) simple implementation: the control variables of this algorithm are based on virtual queues that do not require maintaining per-flow information. The new technique builds on a kind of inexact Uzawa method in the Alternating Direction Method of Multipliers (ADMM), and provides a new theoretical path to proving the global and linear convergence rate of such a method without requiring the full-rank assumption on the constraint matrix.
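    A generic sketch of the virtual-queue mechanism such schemes rely on (not the paper's exact update rule): a virtual queue accumulates constraint violation slot by slot, and the control decisions are driven by its current value rather than by per-flow state.

        def virtual_queue_update(q, arrival, service):
            """Generic recursion Q(t+1) = max(Q(t) + arrival - service, 0).

            The queue tracks accumulated violation of an average constraint
            (arrival rate <= service rate), not real packets, so no per-flow
            bookkeeping is required."""
            return max(q + arrival - service, 0.0)

        # Example: track the constraint over five time slots
        q = 0.0
        for arrival, service in [(1.2, 1.0), (0.8, 1.0), (1.5, 1.0), (0.9, 1.0), (1.0, 1.0)]:
            q = virtual_queue_update(q, arrival, service)
            print(round(q, 2))  # 0.2, 0.0, 0.5, 0.4, 0.4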

    Progressive construction of a parametric reduced-order model for PDE-constrained optimization

    An adaptive approach to using reduced-order models as surrogates in PDE-constrained optimization is introduced that breaks the traditional offline-online framework of model order reduction. A sequence of optimization problems constrained by a given Reduced-Order Model (ROM) is defined with the goal of converging to the solution of a given PDE-constrained optimization problem. For each reduced optimization problem, the constraining ROM is trained by sampling the High-Dimensional Model (HDM) at the solutions of some of the previous problems in the sequence. The reduced optimization problems are equipped with a nonlinear trust region based on a residual error indicator to keep the optimization trajectory in a region of the parameter space where the ROM is accurate. A technique for incorporating sensitivities into a Reduced-Order Basis (ROB) is also presented, along with a methodology for computing sensitivities of the reduced-order model that minimize the distance to the corresponding HDM sensitivities, in a suitable norm. The proposed reduced optimization framework is applied to subsonic aerodynamic shape optimization and shown to reduce the number of queries to the HDM by a factor of 4-5 compared with solving the optimization problem using only the HDM, with errors in the optimal solution well below 0.1%.
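    A schematic version of such a surrogate trust-region loop (a cheap finite-difference quadratic stands in for the ROM/HDM pair; names and tolerances are illustrative, not the paper's): the surrogate is minimized inside the trust region, and the region grows or shrinks according to how well the surrogate predicted the true decrease.

        def trust_region_surrogate_opt(f, x0, radius=1.0, max_iter=20, h=1e-4):
            """Minimize an expensive 1-D model f with a local quadratic surrogate."""
            x = x0
            for _ in range(max_iter):
                f0 = f(x)
                g = (f(x + h) - f(x - h)) / (2 * h)            # surrogate gradient
                H = (f(x + h) - 2 * f0 + f(x - h)) / (h * h)   # surrogate curvature
                step = -g / H if H > 0 else -radius * (1.0 if g > 0 else -1.0)
                step = max(-radius, min(radius, step))         # stay inside the trust region
                predicted = -(g * step + 0.5 * H * step * step)
                actual = f0 - f(x + step)
                rho = actual / predicted if predicted > 0 else 0.0
                if rho > 0.25:            # surrogate was trustworthy: accept the step
                    x += step
                    if rho > 0.75:
                        radius = min(2.0 * radius, 10.0)
                else:                     # poor prediction: shrink the region and retry
                    radius *= 0.5
            return x

        # Example: an "expensive" model with its optimum at x = 3, started far away
        print(round(trust_region_surrogate_opt(lambda x: (x - 3.0) ** 2 + 1.0, 0.0), 3))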