
    A stochastic approximation algorithm for stochastic semidefinite programming

    Motivated by applications to multi-antenna wireless networks, we propose a distributed and asynchronous algorithm for stochastic semidefinite programming. This algorithm is a stochastic approximation of a continuous-time matrix exponential scheme regularized by the addition of an entropy-like term to the problem's objective function. We show that the resulting algorithm converges almost surely to an ε-approximation of the optimal solution, requiring only an unbiased estimate of the gradient of the problem's stochastic objective. When applied to throughput maximization in wireless multiple-input multiple-output (MIMO) systems, the proposed algorithm retains its convergence properties under a wide array of mobility impediments such as user update asynchronicities, random delays, and/or ergodically changing channels. Our theoretical analysis is complemented by extensive numerical simulations which illustrate the robustness and scalability of the proposed method in realistic network conditions. Comment: 25 pages, 4 figures
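    To make the kind of update the abstract describes concrete, the sketch below shows one entropy-regularized matrix exponential step for a stochastic semidefinite program over the spectrahedron {X ⪰ 0, tr X ≤ P}. The function name, the trace-budget parameter, and the use of NumPy/SciPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm

def matrix_exp_step(Y, grad_estimate, step_size, trace_budget=1.0):
    """One matrix exponential learning update (hypothetical helper).

    Y             -- accumulated gradient ("dual") matrix, Hermitian
    grad_estimate -- unbiased estimate of the stochastic objective's gradient
    step_size     -- stochastic approximation step size gamma_t
    trace_budget  -- trace bound P defining the feasible spectrahedron
    """
    Y_next = Y + step_size * grad_estimate     # accumulate the noisy gradient
    E = expm(Y_next)                           # entropy regularization yields an exponential map
    X_next = trace_budget * E / np.trace(E)    # map back to {X >= 0, tr(X) = P}
    return Y_next, X_next

# Toy usage: a 4x4 problem with a random symmetric gradient estimate.
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4)); G = 0.5 * (G + G.T)
Y, X = matrix_exp_step(np.zeros((4, 4)), G, step_size=0.1)
```

    In a stochastic approximation scheme of this type, the step sizes would typically be taken square-summable but not summable (e.g. γ_t = 1/t).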

    Preconditioners for Krylov subspace methods: An overview

    When simulating a mechanism from science or engineering, or an industrial process, one is frequently required to construct a mathematical model and then solve this model numerically. If accurate numerical solutions are necessary or desirable, this can involve solving large-scale systems of equations. One major class of solution methods is that of preconditioned iterative methods, involving preconditioners which are computationally cheap to apply while also capturing information contained in the linear system. In this article, we give a short survey of the field of preconditioning. We introduce a range of preconditioners for partial differential equations, followed by optimization problems, before discussing preconditioners constructed with less standard objectives in mind.
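    As a small illustration of the idea (not taken from the survey itself), here is a minimal preconditioned Krylov solve in SciPy: a 1-D Laplacian stands in for a discretized PDE and a simple Jacobi preconditioner is applied through a LinearOperator; the matrix, the preconditioner choice, and the variable names are assumptions for the sketch.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in system: a 1-D Laplacian playing the role of a discretized PDE operator.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# A simple (Jacobi) preconditioner: apply the inverse of diag(A) to the residual.
inv_diag = 1.0 / A.diagonal()
M = spla.LinearOperator(A.shape, matvec=lambda r: inv_diag * r)

# Preconditioned conjugate gradients; omitting M gives the unpreconditioned solve.
x, info = spla.cg(A, b, M=M)
print("converged" if info == 0 else f"cg returned info={info}")
```

    For nonsymmetric systems one would swap `cg` for a method such as GMRES, and in practice the Jacobi preconditioner would be replaced by something that captures more of the operator, e.g. an incomplete factorization or a problem-specific block preconditioner.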

    Maximum block improvement and polynomial optimization


    Author index for volumes 101–200


    Numerical solution of saddle point problems


    Parallel alternating iterative algorithms with and without overlapping on multicore architectures

    We consider the problem of solving large sparse linear systems where the coefficient matrix is possibly singular but the equations are consistent. Block two-stage methods in which the inner iterations are performed using alternating methods are studied. These methods are ideal for parallel processing and provide a very general setting to study parallel block methods, including overlapping. Convergence properties of these methods are established when the matrix in question is either an M-matrix or a symmetric matrix. Different parallel versions of these methods and implementation strategies, with and without overlapping blocks, are explored. The reported experiments show the behavior and effectiveness of the designed parallel algorithms by exploiting the benefits of shared memory inside the nodes of current SMP supercomputers. This research was partially supported by the Spanish Ministry of Science and Innovation under grant number TIN2011-26254, and by the European Union FEDER (CAPAP-H5 network TIN2014-53522-REDT).
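    A minimal serial sketch of the inner/outer structure described above is given below, assuming a dense NumPy matrix, nonoverlapping blocks, and Gauss-Seidel inner sweeps; the function names and parameters are hypothetical, and the actual parallel (and possibly overlapping) implementations studied in the paper are more involved.

```python
import numpy as np

def gauss_seidel(A, b, x, sweeps):
    """Inner stage: a few Gauss-Seidel sweeps on a small dense block."""
    for _ in range(sweeps):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def block_two_stage(A, b, blocks, outer_iters=50, inner_sweeps=3):
    """Minimal sketch of a block two-stage (inner/outer) iteration.

    `blocks` is a list of index arrays partitioning the unknowns.  Each outer
    sweep updates every block from the previous iterate (block Jacobi style),
    so the block updates are independent and could be assigned to different
    cores; the exact block solve is replaced by a few inner sweeps.
    """
    x = np.zeros_like(b, dtype=float)
    for _ in range(outer_iters):
        x_new = x.copy()
        for idx in blocks:                      # candidate for parallel execution
            A_bb = A[np.ix_(idx, idx)]
            # Right-hand side couples to the other blocks through the old iterate.
            rhs = b[idx] - A[idx, :] @ x + A_bb @ x[idx]
            x_new[idx] = gauss_seidel(A_bb, rhs, x[idx].copy(), inner_sweeps)
        x = x_new
    return x

# Toy usage: a 6x6 diagonally dominant system split into two blocks.
A = np.diag(4.0 * np.ones(6)) - np.eye(6, k=1) - np.eye(6, k=-1)
b = np.ones(6)
x = block_two_stage(A, b, blocks=[np.arange(0, 3), np.arange(3, 6)])
```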