
    Structural Optimization Using the Newton Modified Barrier Method

    The Newton Modified Barrier Method (NMBM) is applied to structural optimization problems with a large number of design variables and constraints. This nonlinear mathematical programming algorithm is based on Modified Barrier Function (MBF) theory and Newton's method for unconstrained optimization. The distinctive feature of the NMBM is its rate of convergence, which stems from the fact that the design remains in the Newton area after each Lagrange multiplier update. This convergence characteristic is illustrated by application to structural problems with a varying number of design variables and constraints. The results are compared with those obtained by optimality criteria (OC) methods and by the ASTROS program.
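
    As a hedged illustration of the method's structure (not the paper's implementation), the sketch below shows the MBF outer loop for constraints written as c_j(x) >= 0: an inner unconstrained minimization of the modified barrier function, followed by the multiplier update lambda_j <- lambda_j / (1 + c_j(x)/mu). All names are illustrative, and scipy's BFGS stands in for the Newton solver.

        # Sketch of the Modified Barrier Function (MBF) outer loop for
        #   min f(x)  s.t.  c_j(x) >= 0.
        # Assumes iterates stay in the barrier's domain, c_j(x) > -mu.
        import numpy as np
        from scipy.optimize import minimize

        def mbf_solve(f, grad_f, cons, jac_cons, x0, mu=1.0, outer_iters=20):
            lam = np.ones(len(cons(x0)))              # Lagrange multiplier estimates
            x = np.asarray(x0, dtype=float)
            for _ in range(outer_iters):
                def F(z):                             # modified barrier function
                    return f(z) - mu * np.sum(lam * np.log(1.0 + cons(z) / mu))
                def gF(z):
                    c, J = cons(z), jac_cons(z)       # J has shape (m, n)
                    return grad_f(z) - J.T @ (lam / (1.0 + c / mu))
                x = minimize(F, x, jac=gF, method="BFGS").x  # Newton-like inner solve
                lam = lam / (1.0 + cons(x) / mu)      # MBF multiplier update
            return x, lam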

    On representations of the feasible set in convex optimization

    We consider the convex optimization problem $\min \{f(x) : g_j(x) \leq 0,\ j = 1,\dots,m\}$ where $f$ is convex, the feasible set $K$ is convex and Slater's condition holds, but the functions $g_j$ are not necessarily convex. We show that for any representation of $K$ that satisfies a mild nondegeneracy assumption, every minimizer is a Karush-Kuhn-Tucker (KKT) point, and conversely every KKT point is a minimizer. That is, the KKT optimality conditions are necessary and sufficient, as in convex programming where one assumes that the $g_j$ are convex. So in convex optimization, as far as one is concerned with KKT points, what really matters is the geometry of $K$ and not so much its representation. Comment: to appear in Optimization Letters
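
    For reference, the KKT conditions in question, written for this problem in standard form (a textbook statement, not a quotation from the paper):

        % KKT conditions at x* for  min { f(x) : g_j(x) <= 0, j = 1,...,m }
        \exists\,\lambda_1,\dots,\lambda_m \ge 0:\qquad
        \nabla f(x^*) + \sum_{j=1}^{m} \lambda_j \nabla g_j(x^*) = 0,
        \qquad \lambda_j\, g_j(x^*) = 0,\quad g_j(x^*) \le 0,\ \ j = 1,\dots,m.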

    Fast Primal-Dual Gradient Method for Strongly Convex Minimization Problems with Linear Constraints

    In this paper we consider a class of optimization problems with a strongly convex objective function and a feasible set given by the intersection of a simple convex set with a set defined by a number of linear equality and inequality constraints. Many applied optimization problems can be stated in this form, examples being entropy-linear programming, ridge regression, the elastic net, regularized optimal transport, etc. We extend the Fast Gradient Method applied to the dual problem in order to make it primal-dual, so that it allows one not only to solve the dual problem but also to construct a nearly optimal and nearly feasible solution of the primal problem. We also prove a theorem about the convergence rate of the proposed algorithm in terms of the objective function and the infeasibility of the linear constraints. Comment: Submitted for DOOR 2016
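
    A minimal sketch of the primal-dual idea on the simplest instance, min 0.5*||x - c||^2 subject to Ax = b, where the inner minimization has a closed form. The unweighted averaging below stands in for the paper's weighted reconstruction scheme, and all names are illustrative.

        # Fast Gradient Method on the dual of  min 0.5*||x - c||^2  s.t.  Ax = b,
        # recovering an approximate primal solution by averaging x(y_k).
        import numpy as np

        def primal_dual_fgm(A, b, c, iters=1000):
            c = np.asarray(c, dtype=float)
            L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the dual gradient
            lam = y = np.zeros(A.shape[0])
            x_sum, t = np.zeros_like(c), 1.0
            for _ in range(iters):
                x = c - A.T @ y                 # argmin_x of the Lagrangian, closed form
                x_sum += x
                grad = A @ x - b                # dual gradient = primal residual
                lam_new = y + grad / L          # gradient ascent step on the dual
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = lam_new + ((t - 1.0) / t_new) * (lam_new - lam)
                lam, t = lam_new, t_new
            return x_sum / iters, lam           # nearly optimal/feasible primal, dual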

    Sums over Graphs and Integration over Discrete Groupoids

    We show that sums over graphs, such as those that appear in the theory of Feynman diagrams, can be seen as integrals over discrete groupoids. From this point of view, basic combinatorial formulas of the theory of Feynman diagrams can be interpreted as pull-back or push-forward formulas for integrals over suitable groupoids. Comment: 27 pages, 4 eps figures; LaTeX2e; uses Xy-Pic. Some ambiguities fixed, and several proofs simplified
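
    The basic mechanism, stated in standard groupoid-cardinality form (a summary in common notation, not the paper's own): integrating a weight w over a groupoid of graphs sums over isomorphism classes with the familiar symmetry factors.

        % Integration over a discrete groupoid G: each isomorphism class [Gamma]
        % contributes with the Feynman symmetry factor 1/|Aut(Gamma)|.
        \int_{\mathcal{G}} w \;=\; \sum_{[\Gamma]\in\pi_0(\mathcal{G})} \frac{w(\Gamma)}{|\mathrm{Aut}(\Gamma)|}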

    Advances in low-memory subgradient optimization

    One of the main goals in the development of non-smooth optimization is to cope with high-dimensional problems by decomposition, duality, or Lagrangian relaxation, which greatly reduces the number of variables at the cost of worsening the differentiability of the objective or constraints. The small or medium dimensionality of the resulting non-smooth problems allows bundle-type algorithms to achieve higher rates of convergence and higher accuracy, which of course comes at the cost of additional memory requirements, typically of the order of n^2, where n is the number of variables of the non-smooth problem. However, with the rapid development of more and more sophisticated models in industry, economics, finance, etc., such memory requirements are becoming too hard to satisfy. This raised interest in subgradient-based low-memory algorithms, and later developments in this area significantly improved over the early variants while still preserving O(n) memory requirements. To review these developments, this chapter is devoted to black-box subgradient algorithms with minimal requirements for the storage of the auxiliary results necessary to execute them. To provide historical perspective, the survey starts with the original result of N.Z. Shor, which opened this field with an application to the classical transportation problem. The theoretical complexity bounds for smooth and non-smooth convex and quasi-convex optimization problems are then briefly reviewed to introduce the relevant fundamentals of non-smooth optimization. Special attention in this section is given to the adaptive step-size policy, which aims to attain the lowest complexity bounds. Unfortunately, the non-differentiability of the objective function in convex optimization essentially worsens the theoretical lower bounds on the rate of convergence compared to the smooth case, but there are modern techniques that allow one to solve non-smooth convex optimization problems faster than the lower complexity bounds dictate. In this work particular attention is given to Nesterov's smoothing technique, Nesterov's universal approach, and the Legendre (saddle-point) representation approach. The new results on universal Mirror Prox algorithms represent the original part of the survey. To demonstrate the application of non-smooth convex optimization algorithms to the solution of huge-scale extremal problems, we consider convex optimization problems with non-smooth functional constraints and propose two adaptive Mirror Descent methods; a sketch of the common scheme appears below. The first method is of the primal-dual variety and is proved to be optimal in terms of lower oracle bounds for the class of Lipschitz-continuous convex objectives and constraints. The advantages of applying this method to the sparse Truss Topology Design problem are discussed in some detail. The second method can be applied to the solution of convex and quasi-convex optimization problems and is optimal in the sense of complexity bounds. The concluding part of the survey contains important references characterizing recent developments in non-smooth convex optimization.
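
    The common scheme behind such constrained Mirror Descent methods, in a hedged Euclidean-prox sketch (convex Lipschitz f, a single aggregated constraint g, all names illustrative): steps where the constraint is nearly satisfied use a subgradient of f and count toward the answer; the rest use a subgradient of g to restore feasibility.

        # Adaptive Mirror Descent sketch (Euclidean prox) for
        #   min f(x)  s.t.  g(x) <= 0,  with f and g convex and Lipschitz.
        import numpy as np

        def md_constrained(sub_f, sub_g, g, proj, x0, eps, iters=100000):
            x = np.asarray(x0, dtype=float)
            x_sum, n_prod = np.zeros_like(x), 0
            for _ in range(iters):
                if g(x) <= eps:                   # "productive" step: work on f
                    s = sub_f(x)
                    x_sum, n_prod = x_sum + x, n_prod + 1
                else:                             # "non-productive": reduce g
                    s = sub_g(x)
                x = proj(x - (eps / (s @ s)) * s) # adaptive step size eps/||s||^2
            return x_sum / max(n_prod, 1)         # average of productive iterates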

    Molecular-orbital-free algorithm for excited states in time-dependent perturbation theory

    A non-linear conjugate gradient optimization scheme is used to obtain excitation energies within the Random Phase Approximation (RPA). The solutions to the RPA eigenvalue equation are located through a variational characterization using a modified Thouless functional, which is based upon an asymmetric Rayleigh quotient, in an orthogonalized atomic orbital representation. In this way, the computational bottleneck of calculating molecular orbitals is avoided. The variational space is reduced to the physically-relevant transitions by projections. The feasibility of an RPA implementation scaling linearly with system size, N, is investigated by monitoring convergence behavior with respect to the quality of initial guess and sensitivity to noise under thresholding, both for well- and ill-conditioned problems. The molecular- orbital-free algorithm is found to be robust and computationally efficient providing a first step toward a large-scale, reduced complexity calculation of time-dependent optical properties and linear response. The algorithm is extensible to other forms of time-dependent perturbation theory including, but not limited to, time-dependent Density Functional theory.Comment: 9 pages, 7 figure
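
    As a generic analogue of the approach (a symmetric stand-in for the paper's asymmetric Thouless-type quotient; all names illustrative), nonlinear conjugate gradient minimization of a Rayleigh quotient locates the lowest eigenpair without diagonalizing the full matrix:

        # Nonlinear CG (Fletcher-Reeves) on the symmetric Rayleigh quotient
        #   r(v) = (v @ A @ v) / (v @ v),  minimized over the unit sphere.
        import numpy as np

        def rayleigh_cg(A, v0, iters=500, tol=1e-10):
            v = v0 / np.linalg.norm(v0)
            g_prev = d = None
            for _ in range(iters):
                Av = A @ v
                g = 2.0 * (Av - (v @ Av) * v)       # gradient tangent to the sphere
                if np.linalg.norm(g) < tol:
                    break
                beta = 0.0 if g_prev is None else (g @ g) / (g_prev @ g_prev)
                d = -g if d is None else -g + beta * d  # Fletcher-Reeves direction
                q = d - (v @ d) * v                 # tangent part of the direction
                q = q / np.linalg.norm(q)
                Aq = A @ q
                # exact "line search" = Rayleigh-Ritz on the 2-D subspace span{v, q}
                B = np.array([[v @ Av, v @ Aq],
                              [q @ Av, q @ Aq]])
                _, U = np.linalg.eigh(B)
                v = U[0, 0] * v + U[1, 0] * q       # lowest Ritz vector
                v = v / np.linalg.norm(v)
                g_prev = g
            return v @ (A @ v), v                   # eigenvalue estimate, eigenvector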

    The Hopf Algebra of Renormalization, Normal Coordinates and Kontsevich Deformation Quantization

    Using normal coordinates in a Poincaré-Birkhoff-Witt basis for the Hopf algebra of renormalization in perturbative quantum field theory, we investigate the relation between the twisted antipode axiom in that formalism, the algebraic Birkhoff decomposition, and Kontsevich's universal formula for quantum deformation. Comment: 21 pages, 15 figures
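
    For orientation, the algebraic Birkhoff decomposition in its standard Connes-Kreimer form (a textbook statement under the usual conventions, not the paper's notation), with T the projection onto the pole part and Sweedler notation for the reduced coproduct:

        % The character phi splits as phi = phi_-^{*-1} * phi_+ with, recursively,
        \phi_-(x) = -T\!\Big(\phi(x) + \sum \phi_-(x')\,\phi(x'')\Big),
        \qquad
        \phi_+(x) = (\mathrm{id} - T)\!\Big(\phi(x) + \sum \phi_-(x')\,\phi(x'')\Big).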

    The Pure Virtual Braid Group Is Quadratic

    If an augmented algebra K over Q is filtered by powers of its augmentation ideal I, the associated graded algebra gr K need not in general be quadratic: although it is generated in degree 1, its relations may not be generated by homogeneous relations of degree 2. In this paper we give a sufficient criterion (called the PVH Criterion) for gr K to be quadratic. When K is the group algebra of a group G, quadraticity is known to be equivalent to the existence of a (not necessarily homomorphic) universal finite type invariant for G. Thus the PVH Criterion also implies the existence of such a universal finite type invariant for the group G. We apply the PVH Criterion to the group algebra of the pure virtual braid group (also known as the quasi-triangular group) and show that the corresponding associated graded algebra is quadratic, and hence that these groups have a (not necessarily homomorphic) universal finite type invariant. Comment: 53 pages, 15 figures. Some clarifications added and inaccuracies corrected, reflecting suggestions made by the referee of the published version of the paper
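
    The objects involved, in standard notation (a paraphrase of the usual definitions, not the paper's statement): the associated graded algebra of the I-adic filtration, and what quadraticity asks of it.

        % Associated graded of the I-adic filtration on K, and quadraticity:
        \mathrm{gr}\,K = \bigoplus_{n\ge 0} I^n/I^{n+1},
        \qquad
        \mathrm{gr}\,K \ \text{quadratic} \iff \mathrm{gr}\,K \cong T(V)/\langle R\rangle,
        \quad V = I/I^2,\ \ R \subseteq V\otimes V.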

    Gradient methods for problems with inexact model of the objective

    We consider optimization methods for convex minimization problems under inexact information on the objective function. We introduce an inexact model of the objective, which as particular cases includes the inexact oracle [19] and the relative smoothness condition [43]. We analyze a gradient method which uses this inexact model and obtain convergence rates for convex and strongly convex problems. To show potential applications of our general framework we consider three particular problems. The first one is clustering by an electoral model introduced in [49]. The second one is approximating the optimal transport distance, for which we propose a Proximal Sinkhorn algorithm. The third one is approximating the optimal transport barycenter, for which we propose a Proximal Iterative Bregman Projections algorithm. We also illustrate the practical performance of our algorithms by numerical experiments.
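
    The notion at the center of the paper can be stated roughly as follows (a hedged reconstruction of the standard (delta, L)-model definition, not a quotation): a function psi(y, x), convex in y with psi(x, x) = 0, is a model of f at x if

        % (delta, L)-model of the objective f at the point x:
        0 \;\le\; f(y) - f(x) - \psi(y, x) \;\le\; \frac{L}{2}\,\|y - x\|^2 + \delta
        \qquad \text{for all } y.

    Taking psi(y, x) = <grad f(x), y - x> recovers the inexact oracle setting of [19], while Bregman-type choices of psi cover relative smoothness as in [43].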