
    Robust Smoothed Analysis of a Condition Number for Linear Programming

    We perform a smoothed analysis of the GCC condition number C(A) of the linear programming feasibility problem \exists x \in \mathbb{R}^{m+1}: Ax < 0. Suppose that \bar{A} is any matrix with rows \bar{a}_i of Euclidean norm 1 and, independently for all i, let a_i be a random perturbation of \bar{a}_i following the uniform distribution on the spherical disk in S^m of angular radius \arcsin\sigma centered at \bar{a}_i. We prove that E(\ln C(A)) = O(mn/\sigma). A similar result was shown for Renegar's condition number and Gaussian perturbations by Dunagan, Spielman, and Teng [arXiv:cs.DS/0302011]. Our result is robust in the sense that it easily extends to radially symmetric probability distributions supported on a spherical disk of radius \arcsin\sigma, whose density may even have a singularity at the center of the perturbation. Our proofs combine ideas from a recent paper of Bürgisser, Cucker, and Lotz (Math. Comp. 77, No. 263, 2008) with techniques of Dunagan et al.
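    The perturbation model above is straightforward to simulate. Below is a minimal Python sketch (NumPy assumed; the helper name and the rejection-sampling approach are our own illustration, not a construction from the paper) that draws a row a_i uniformly from the spherical disk of angular radius arcsin σ around \bar{a}_i:

```python
import numpy as np

def sample_cap(center, sigma, rng=None):
    """Draw a point uniformly from the spherical cap (disk) on the unit
    sphere S^m of angular radius arcsin(sigma) centered at `center`.
    Simple rejection sampling: exact, but slow when sigma is small."""
    rng = rng or np.random.default_rng()
    alpha = np.arcsin(sigma)              # angular radius of the cap
    while True:
        v = rng.standard_normal(center.shape[0])
        v /= np.linalg.norm(v)            # uniform point on the sphere
        if np.arccos(np.clip(v @ center, -1.0, 1.0)) <= alpha:
            return v

# Perturb each unit row of a given matrix A_bar independently.
A_bar = np.eye(4)[:3]                     # three unit rows in R^4, for illustration
A = np.array([sample_cap(row, sigma=0.1) for row in A_bar])
```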

    A cell-based smoothed finite element method for kinematic limit analysis

    This paper presents a new numerical procedure for kinematic limit analysis problems, which combines the cell-based smoothed finite element method with second-order cone programming. Applying a strain smoothing technique to the standard displacement finite element both rules out volumetric locking and yields an efficient method that provides accurate solutions with minimal computational effort. The non-smooth optimization problem is formulated as the minimization of a sum of Euclidean norms, so the resulting problem can be solved by an efficient second-order cone programming algorithm. Plane stress and plane strain problems governed by the von Mises criterion are considered, but extensions to problems with other yield criteria of a similar conic quadratic form, or to 3D problems, can be envisaged.
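    As a concrete illustration of the sum-of-Euclidean-norms formulation, here is a minimal CVXPY sketch. The data (the B_k matrices and the external-work vector c) and the solver choice are hypothetical stand-ins, not the paper's actual discretization:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_dof = 20, 8
# B[k] maps nodal displacements u to the smoothed strain of cell k (stand-in data).
B = [rng.standard_normal((3, n_dof)) for _ in range(n_cells)]
c = rng.standard_normal(n_dof)   # stand-in for the external work rate functional

u = cp.Variable(n_dof)
# Minimizing a sum of Euclidean norms is directly SOCP-representable.
objective = cp.Minimize(sum(cp.norm(Bk @ u, 2) for Bk in B))
constraints = [c @ u == 1]       # normalize the external work rate
problem = cp.Problem(objective, constraints)
problem.solve()                  # CVXPY dispatches to a conic (SOCP) solver
print("collapse load multiplier (sketch):", problem.value)
```

    Each norm term becomes one second-order cone constraint t_k >= ||B_k u||, so the solver's effort scales with the number of smoothing cells.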

    MAGMA: Multi-level accelerated gradient mirror descent algorithm for large-scale convex composite minimization

    Composite convex optimization models arise in several applications, and are especially prevalent in inverse problems with a sparsity-inducing norm and in general convex optimization with simple constraints. The most widely used algorithms for convex composite models are accelerated first-order methods; however, they can take a large number of iterations to compute an acceptable solution for large-scale problems. In this paper we propose to speed up first-order methods by taking advantage of the structure present in many applications, and in image processing in particular. Our method is based on multi-level optimization methods and exploits the fact that many applications giving rise to large-scale models can be modelled with varying degrees of fidelity. We use Nesterov's acceleration techniques together with the multi-level approach to achieve an \mathcal{O}(1/\sqrt{\epsilon}) convergence rate, where \epsilon denotes the desired accuracy. The proposed method has a better convergence rate than any other existing multi-level method for convex problems and, in addition, has the same rate as accelerated methods, which is known to be optimal for first-order methods. Moreover, as our numerical experiments show, on large-scale face recognition problems our algorithm is several times faster than the state of the art.
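    For reference, the accelerated first-order building block has a short generic form. The sketch below is plain Nesterov-style acceleration on an L-smooth objective; it is not MAGMA's multi-level scheme, and all names and the toy problem are illustrative:

```python
import numpy as np

def nesterov(grad, x0, L, iters=500):
    """Nesterov's accelerated gradient for an L-smooth convex objective:
    error decays as O(1/k^2), i.e. O(1/sqrt(eps)) iterations to accuracy eps."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L          # gradient step from extrapolated point y
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

# Toy least-squares instance: f(x) = 0.5 * ||Ax - b||^2, with L = ||A||_2^2.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
x_hat = nesterov(lambda x: A.T @ (A @ x - b), np.zeros(20), L=np.linalg.norm(A, 2) ** 2)
```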

    Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization

    In this paper, we study a class of stochastic optimization problems, referred to as Conditional Stochastic Optimization (CSO), in the form \min_{x \in \mathcal{X}} \mathbb{E}_{\xi} f_\xi\big(\mathbb{E}_{\eta|\xi}[g_\eta(x,\xi)]\big), which finds a wide spectrum of applications including portfolio selection, reinforcement learning, robust learning, causal inference, and so on. Assuming availability of samples from the distribution \mathbb{P}(\xi) and samples from the conditional distribution \mathbb{P}(\eta|\xi), we establish the sample complexity of the sample average approximation (SAA) for CSO under a variety of structural assumptions, such as Lipschitz continuity, smoothness, and error bound conditions. We show that the total sample complexity improves from \mathcal{O}(d/\epsilon^4) to \mathcal{O}(d/\epsilon^3) when assuming smoothness of the outer function, and further to \mathcal{O}(1/\epsilon^2) when the empirical function satisfies the quadratic growth condition. We also establish the sample complexity of a modified SAA when \xi and \eta are independent. Several numerical experiments further support our theoretical findings. Keywords: stochastic optimization, sample average approximation, large deviations theory
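    To make the nested estimator concrete, here is a small Python sketch of the SAA objective for CSO. The functions f and g and the samplers are hypothetical placeholders matching the abstract's notation; the point is that n outer times m inner samples are consumed, which is what the total sample complexity counts:

```python
import numpy as np

def saa_objective(x, xi_samples, eta_sampler, f, g, m_inner):
    """Nested SAA estimate of F(x) = E_xi[ f_xi( E_{eta|xi}[ g_eta(x, xi) ] ) ].
    `eta_sampler(xi, m)` must return m draws from P(eta | xi); f and g are
    user-supplied placeholders for the outer and inner functions."""
    outer_vals = []
    for xi in xi_samples:                     # n outer samples from P(xi)
        etas = eta_sampler(xi, m_inner)       # m inner samples from P(eta | xi)
        inner_avg = np.mean([g(eta, x, xi) for eta in etas], axis=0)
        outer_vals.append(f(inner_avg, xi))   # plug-in: f of the inner average
    return float(np.mean(outer_vals))         # outer empirical average
```

    Intuitively, because f is applied to an inner empirical mean rather than the true conditional expectation, the estimator is biased; smoothness of the outer function controls this bias, which is the lever behind the improvement from \epsilon^{-4} to \epsilon^{-3}.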