
    Quadratic Optimization for Nonsmooth Optimization Algorithms: Theory and Numerical Experiments

    Nonsmooth optimization arises in many scientific and engineering applications, such as optimal control and neural network training. Gradient sampling and bundle methods are two efficient classes of algorithms for solving nonsmooth optimization problems, and quadratic optimization (QP) problems arise as subproblems in both. This thesis introduces an algorithm for solving the types of QP problems that arise in such methods. The proposed algorithm builds on one introduced by Krzysztof C. Kiwiel in the 1980s, extended so that it can handle additional bound constraints, which are often required in practice, as well as general quadratic terms in the objective. The QP solver has been implemented in C++. The thesis covers the theoretical background of the solver and reports the results of numerical experiments on a wide range of randomly generated test problems.
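    To make the shape of these subproblems concrete, the following is a minimal sketch of a bundle-style dual QP, minimizing (1/2)||Gλ||² + bᵀλ over the simplex with extra upper bounds. It is solved here with a general-purpose SLSQP routine rather than the specialized Kiwiel-type method the thesis develops, and the data G, b, and the bounds u are illustrative placeholders.

```python
# Sketch of a bundle-style QP subproblem with bound constraints,
# solved with a generic solver (not the thesis's specialized method).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
k, n = 5, 10                      # bundle size, problem dimension (assumed)
G = rng.standard_normal((n, k))   # columns: subgradients g_1..g_k
b = rng.random(k)                 # linearization errors (nonnegative)
u = 0.8 * np.ones(k)              # extra upper bounds on the multipliers

Q = G.T @ G                       # dual objective: (1/2) lam' Q lam + b' lam

def obj(lam):
    return 0.5 * lam @ Q @ lam + b @ lam

def grad(lam):
    return Q @ lam + b

res = minimize(
    obj, x0=np.full(k, 1.0 / k), jac=grad, method="SLSQP",
    bounds=[(0.0, ui) for ui in u],                     # 0 <= lam_i <= u_i
    constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
)
d = -G @ res.x                    # aggregate search direction for the outer method
print(res.x, d)
```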

    Constrained Deep Networks: Lagrangian Optimization via Log-Barrier Extensions

    This study investigates the optimization aspects of imposing hard inequality constraints on the outputs of CNNs. In the context of deep networks, constraints are commonly handled with penalties because of their simplicity, despite their well-known limitations. Lagrangian-dual optimization has been largely avoided, except in a few recent works, mainly because of the computational complexity and the stability/convergence issues caused by alternating explicit dual updates/projections with stochastic optimization. Several studies have shown that, surprisingly, for deep CNNs the theoretical and practical advantages of Lagrangian optimization over penalties do not materialize in practice. We propose log-barrier extensions, which approximate Lagrangian optimization of constrained-CNN problems with a sequence of unconstrained losses. Unlike standard interior-point and log-barrier methods, our formulation does not require an initial feasible solution. Furthermore, we provide a new technical result showing that the proposed extensions yield an upper bound on the duality gap. This generalizes the duality-gap result of standard log-barriers, yielding sub-optimality certificates for feasible solutions. While sub-optimality is not guaranteed for non-convex problems, our result shows that log-barrier extensions are a principled way to approximate Lagrangian optimization for constrained CNNs via implicit dual variables. We report comprehensive weakly supervised segmentation experiments with various constraints, showing that our formulation substantially outperforms existing constrained-CNN methods in accuracy, constraint satisfaction, and training stability, more so when dealing with a large number of constraints.
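    As an illustration of the idea, here is a minimal sketch of an extended log-barrier penalty for a constraint value z = f(x) ≤ 0: the standard barrier -(1/t)·log(-z) is kept where z is safely negative and continued by its tangent line elsewhere, so no feasible starting point is needed. The switch point z = -1/t² and the matching constants are assumptions chosen so that value and slope agree there, not formulas copied from the paper.

```python
import numpy as np

def log_barrier_ext(z, t=10.0):
    """Extended log-barrier for a constraint value z (want z <= 0).

    Standard barrier -(1/t)*log(-z) for z <= -1/t**2, continued by the
    tangent line at that point so the penalty stays defined and smooth
    for infeasible z as well. The constants below make the two branches
    agree in value and slope at z = -1/t**2 (an assumption, not taken
    verbatim from the paper).
    """
    z = np.asarray(z, dtype=float)
    thresh = -1.0 / t**2
    inside = z <= thresh
    safe = np.where(inside, z, thresh)            # keep log's argument positive
    barrier = -np.log(-safe) / t                  # standard log-barrier branch
    linear = t * z + (2.0 * np.log(t) + 1.0) / t  # tangent-line extension
    return np.where(inside, barrier, linear)

# The penalty grows smoothly as the constraint is approached or violated,
# so it can be added to a stochastic training loss without a feasible start.
print(log_barrier_ext(np.array([-1.0, -0.05, 0.0, 0.5])))
```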

    Design optimization applied in structural dynamics

    This paper introduces design optimization strategies, especially for structures with dynamic constraints. Design optimization involves first modeling and then optimizing the problem. Using the Finite Element (FE) model of a structure directly in an optimization process requires a long computation time, so Backpropagation Neural Networks (NNs) are introduced as a surrogate model for the FE model. The optimization techniques covered in this study are the Genetic Algorithm (GA) and Sequential Quadratic Programming (SQP) methods. As an application of the introduced techniques, a multi-segment cantilever beam problem under constraints on its first and second natural frequencies is solved using four different approaches, as sketched below.
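    The following is a minimal sketch of the surrogate-based loop described above: sample the design space, evaluate the expensive FE model once per sample, train an NN regressor on the responses, and then run SQP against the cheap surrogate. The fe_analysis function is a toy stand-in for a real FE frequency computation, and all dimensions, bounds, and the 20 Hz limit are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def fe_analysis(x):
    # Toy placeholder for the expensive FE solve: returns a fake
    # "first natural frequency" of a 3-segment cantilever beam.
    return 10.0 + 8.0 * np.sqrt(x).sum()

# 1) Sample the design space and evaluate the (expensive) FE model once.
X = rng.uniform(0.1, 1.0, size=(200, 3))          # segment thicknesses
y = np.array([fe_analysis(x) for x in X])

# 2) Train a backpropagation NN as the surrogate for the FE model.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

# 3) Run SQP on the cheap surrogate: minimize mass (here, the sum of
#    thicknesses as a proxy) subject to a lower bound on the predicted
#    first natural frequency.
res = minimize(
    lambda x: x.sum(), x0=np.full(3, 0.5), method="SLSQP",
    bounds=[(0.1, 1.0)] * 3,
    constraints=[{"type": "ineq",
                  "fun": lambda x: surrogate.predict(x[None, :])[0] - 20.0}],
)
print(res.x, surrogate.predict(res.x[None, :])[0])
```

    The same surrogate can be handed to a GA in place of SQP; the point of the construction is that every optimizer query hits the NN rather than the FE solver.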