13 research outputs found

    Optimizing condition numbers

    In this paper we study the problem of minimizing condition numbers over a compact convex subset of the cone of symmetric positive semidefinite n×n matrices. We show that the condition number is a Clarke regular strongly pseudoconvex function. We prove that a global solution of the problem can be approximated by an exact or an inexact solution of a nonsmooth convex program. This asymptotic analysis provides a valuable tool for designing an implementable algorithm for solving the problem of minimizing condition numbers.
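    The problem can be stated compactly; in standard notation (our gloss, not the paper's exact statement), with Ω a compact convex subset of the positive semidefinite cone:

    ```latex
    % Condition-number minimization (our gloss of the problem statement):
    % \Omega \subset \mathcal{S}^n_+ compact and convex.
    \min_{X \in \Omega} \; \kappa(X),
    \qquad
    \kappa(X) \;=\; \frac{\lambda_{\max}(X)}{\lambda_{\min}(X)} .
    ```

    Since the extreme eigenvalue maps are nonsmooth wherever the largest or smallest eigenvalue has multiplicity greater than one, κ itself is nonsmooth, which is where the Clarke-regularity analysis enters.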

    A descent subgradient method using Mifflin line search for nonsmooth nonconvex optimization

    We propose a descent subgradient algorithm for minimizing a real function, assumed to be locally Lipschitz, but not necessarily smooth or convex. To find an effective descent direction, the Goldstein subdifferential is approximated through an iterative process. The method enjoys a new two-point variant of Mifflin line search in which the subgradients are arbitrary. Thus, the line search procedure is easy to implement. Moreover, in comparison to bundle methods, the quadratic subproblems have a simple structure, and to handle nonconvexity the proposed method requires no algorithmic modification. We study the global convergence of the method and prove that any accumulation point of the generated sequence is Clarke stationary, assuming that the objective f is weakly upper semismooth. We illustrate the efficiency and effectiveness of the proposed algorithm on a collection of academic and semi-academic test problems.
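    The descent direction in methods of this family comes from the minimum-norm element of the convex hull of sampled subgradients, which approximates the shortest element of the Goldstein subdifferential. A minimal sketch of that quadratic subproblem, assuming scipy is available and using hypothetical names (not the authors' code):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def min_norm_subgradient(G):
        """Minimum-norm element of conv{g_1, ..., g_m}.

        G is an (m, n) array whose rows are subgradients sampled near the
        current point; the result approximates the shortest vector in the
        Goldstein subdifferential, and -g/||g|| serves as the descent
        direction when ||g|| is not small.
        """
        m = G.shape[0]
        w0 = np.full(m, 1.0 / m)                      # start at the barycenter
        res = minimize(
            lambda w: float((G.T @ w) @ (G.T @ w)),   # ||sum_i w_i g_i||^2
            w0,
            method="SLSQP",
            bounds=[(0.0, 1.0)] * m,                  # w_i >= 0
            constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        )
        return G.T @ res.x

    # Example: subgradients of f(x) = |x| near 0 give a near-zero vector,
    # signalling approximate stationarity.
    G = np.array([[1.0], [-1.0]])
    print(min_norm_subgradient(G))                    # close to [0.]
    ```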

    Minimizing the condition number of a Gram matrix


    On the performance of sampling methods for unconstrained minimization

    Advisors: Sandra Augusta Santos, Lucas Eduardo Azevedo Simões. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Abstract: Nonsmooth optimization is the branch of optimization that deals with objective functions that are non-differentiable on a subset of the domain. In this work, we present computational results for the unconstrained minimization of problems in which the objective functions are non-differentiable on a subset of the domain of measure zero. The Gradient Sampling (GS) algorithm was recently proposed; it minimizes the objective function based on gradients computed at points sampled uniformly in a neighborhood of the current point. Variations of this method involving different directions and different parameter values are explored. Problems known from the literature are used to comparatively analyze the behavior of some variants of the method and their dependence on the number of sampled points. The number of iterations and the optimal value obtained are the efficiency measures used and, given the random nature of the method, each problem is solved several times to guarantee the statistical relevance of the results.
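    For orientation, one GS-style iteration samples gradients in an ε-ball around the iterate, takes the minimum-norm element of their convex hull as the negative search direction, and backtracks on the step size. A compact sketch under those assumptions (illustrative, not the dissertation's implementation):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def gs_step(f, grad, x, eps=0.1, m=10, beta=1e-4,
                rng=np.random.default_rng(0)):
        """One Gradient Sampling step (illustrative sketch)."""
        # Sample m points uniformly in the eps-ball around x.
        U = rng.normal(size=(m, x.size))
        U = eps * (U / np.linalg.norm(U, axis=1, keepdims=True))
        U *= rng.random((m, 1)) ** (1.0 / x.size)
        G = np.array([grad(x + u) for u in np.vstack([np.zeros_like(x), U])])

        # Minimum-norm element of conv{gradients}: a small QP on the simplex.
        w0 = np.full(len(G), 1.0 / len(G))
        res = minimize(lambda w: float((G.T @ w) @ (G.T @ w)), w0,
                       method="SLSQP", bounds=[(0, 1)] * len(G),
                       constraints=[{"type": "eq",
                                     "fun": lambda w: w.sum() - 1.0}])
        g = G.T @ res.x
        d = -g / (np.linalg.norm(g) + 1e-12)

        # Backtrack until an Armijo-type decrease holds (one common variant).
        t = 1.0
        while f(x + t * d) > f(x) - beta * t * np.linalg.norm(g):
            t *= 0.5
            if t < 1e-10:
                break
        return x + t * d

    # Example on the nonsmooth function f(x) = |x_1| + 2|x_2|.
    f = lambda x: abs(x[0]) + 2 * abs(x[1])
    grad = lambda x: np.array([np.sign(x[0]), 2 * np.sign(x[1])])
    x = np.array([1.0, -1.0])
    for _ in range(30):
        x = gs_step(f, grad, x)
    print(x, f(x))                 # iterates approach the minimizer at 0
    ```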

    Expedition in Data and Harmonic Analysis on Graphs

    The graph Laplacian operator is widely studied in spectral graph theory, largely due to its importance in modern data analysis. Recently, the Fourier transform and other time-frequency operators have been defined on graphs using Laplacian eigenvalues and eigenvectors. We extend these results and prove that the translation operator to the i-th node is invertible if and only if all eigenvectors are nonzero on the i-th node. Because of this dependency on the support of eigenvectors, we study the characteristic set of Laplacian eigenvectors. We prove that the Fiedler vector of a planar graph cannot vanish on large neighborhoods and then explicitly construct a family of non-planar graphs that do exhibit this property. We then prove original results in modern analysis on graphs. We extend results on spectral graph wavelets to create vertex-dynamic spectral graph wavelets whose support depends on both scale and translation parameters. We prove that Spielman's Twice-Ramanujan graph sparsifying algorithm cannot outperform his conjectured optimal sparsification constant. Finally, we present numerical results on graph conditioning, in which edges of a graph are rescaled to best approximate the complete graph and reduce average commute time.
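    The invertibility criterion stated above is easy to test numerically: the translation operator to node i is invertible exactly when no Laplacian eigenvector vanishes at i. A small sketch of that check, assuming the combinatorial Laplacian L = D - A (function names are ours):

    ```python
    import numpy as np

    def translation_invertible(A, i, tol=1e-10):
        """Check whether every Laplacian eigenvector is nonzero at node i.

        A is a symmetric adjacency matrix; L = D - A is the combinatorial
        graph Laplacian. Per the result above, the graph translation
        operator to node i is invertible iff no eigenvector vanishes there.
        Note: with repeated eigenvalues the eigenvector basis is not
        unique, so this numerical check is basis-dependent.
        """
        L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
        _, U = np.linalg.eigh(L)                # columns are eigenvectors
        return bool(np.all(np.abs(U[i, :]) > tol))

    # Path graph on 4 nodes: adjacency of the chain 0-1-2-3 (simple
    # spectrum, so the check is unambiguous here).
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], float)
    print([translation_invertible(A, i) for i in range(4)])
    ```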

    Evolvability-guided Optimization of Linear Deformation Setups for Evolutionary Design Optimization

    Richter A. Evolvability-guided Optimization of Linear Deformation Setups for Evolutionary Design Optimization. Bielefeld: Universität Bielefeld; 2019. Andreas Richter gratefully acknowledges the financial support from Honda Research Institute Europe (HRI-EU). This thesis targets efficient solutions for optimal representation setups for evolutionary design optimization problems. The representation maps the abstract parameters of an optimizer to a meaningful variation of the design model, e.g., the shape of a car; it thereby determines the speed of convergence to, and the quality of, the final result. Thus, engineers are eager to employ well-tuned representations to achieve high-quality design solutions. However, setting up an optimal representation is a cumbersome process, because the setup procedure requires detailed knowledge of the objective functions, e.g., a fluid-dynamics simulation, and of the parameters of the employed representation itself. We therefore target efficient routines that set up representations automatically, relieving engineers of this tedious, partly manual work. Inspired by the concept of evolvability, we present novel quality criteria for the evaluation of the linear deformations commonly applied as representations. We define and analyze the criteria of variability, regularity, and improvement potential, which measure the expected quality and convergence speed of an evolutionary design optimization process based on the linear deformation setup, and we target the efficient optimization of deformation setups with respect to these three criteria. In dynamic design optimization scenarios, a suitable compromise between exploration and exploitation is crucial for efficient solutions; we discuss the construction of optimal compromises for these dynamic scenarios with our criteria, because the criteria characterize exploration and exploitation. As a result, an engineer can use our methods to initialize and adjust the deformation setup for improved convergence speed of the design process and enhanced quality of the design solutions.
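    As context for what a "linear deformation setup" is: the design is varied linearly through a basis of deformation vectors, so the optimizer's abstract parameters enter as coefficients. A toy sketch, under our simplified reading (the regularity proxy below is our illustration, not the thesis's precise criterion definitions):

    ```python
    import numpy as np

    def deform(base, B, p):
        """Linear deformation: vertices move along basis directions.

        base: (n, 3) design vertices; B: (3n, k) deformation basis whose
        columns are per-vertex displacement fields; p: (k,) abstract
        optimizer parameters.
        """
        return base + (B @ p).reshape(base.shape)

    def regularity_proxy(B):
        """Rough evolvability-style indicator (our proxy): a
        well-conditioned basis maps parameter changes to design changes
        evenly in all directions."""
        s = np.linalg.svd(B, compute_uv=False)
        return s[-1] / s[0]            # in (0, 1]; 1 = perfectly even

    rng = np.random.default_rng(1)
    base = rng.random((5, 3))          # toy design with 5 control points
    B = rng.normal(size=(15, 2))       # two deformation modes
    p = np.array([0.3, -0.1])
    print(deform(base, B, p).shape, regularity_proxy(B))
    ```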

    Técnicas amostrais para otimização não suave (Sampling techniques for nonsmooth optimization)

    Advisors: Sandra Augusta Santos, Elias Salomão Helou Neto. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Abstract: The Gradient Sampling (GS) method is a recently developed tool for solving unconstrained nonsmooth optimization problems. Using only first-order information about the objective function, it generalizes the steepest descent method, one of the most classical methods for minimizing a smooth function. This study aims at developing and exploring different sampling algorithms for the numerical optimization of nonsmooth functions. First, we prove that a global convergence result for the GS method still holds in the absence of the differentiability-check procedure. Second, we establish the circumstances under which one can expect the GS method to have a linear convergence rate. Lastly, a new sampling algorithm with superlinear convergence is presented, which rests not only upon the gradients but also on the objective function values at the sampled points.

    Méthodes de réduction de modèle pour les équations paramétrées -- Applications à la quantification d'incertitude (Model order reduction methods for parameter-dependent equations -- applications to uncertainty quantification)

    Model order reduction has become an indispensable tool for the solution of high-dimensional parameter-dependent equations arising in uncertainty quantification, optimization, or inverse problems. In this thesis we focus on low-rank approximation methods, in particular on reduced basis methods and on tensor approximation methods. The approximation obtained by Galerkin projection may be inaccurate when the operator is ill-conditioned. For projection-based methods, we propose preconditioners built by interpolation of the operator inverse, relying on randomized linear algebra for their efficient computation. Adaptive interpolation strategies are proposed in order to improve either the error estimates or the projection onto reduced spaces. For tensor approximation methods, we propose a minimal-residual formulation with ideal residual norms. The proposed algorithm, which can be interpreted as a gradient algorithm with an implicit preconditioner, yields a quasi-optimal approximation of the solution. Finally, we address the approximation of vector-valued or functional-valued quantities of interest. For this purpose we generalize 'primal-dual' approaches to the non-scalar case and propose new methods for the projection onto reduced spaces. In the context of tensor approximation, we consider a norm that depends on the error in the quantity of interest, which yields approximations of the solution that take into account the objective of the numerical simulation.
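    For readers new to reduced basis methods, the core operation referenced above is a Galerkin projection of a parameter-dependent system onto a small snapshot subspace. A minimal sketch in generic textbook form (not this thesis's algorithms):

    ```python
    import numpy as np

    def galerkin_reduced_solve(A_mu, b, V):
        """Galerkin projection of A(mu) u = b onto span(V).

        A_mu: (n, n) operator assembled at one parameter value;
        b: (n,) right-hand side; V: (n, r) reduced basis with r << n.
        Returns the reduced coordinates and the lifted approximation.
        """
        Ar = V.T @ A_mu @ V                 # (r, r) reduced operator
        br = V.T @ b                        # (r,) reduced right-hand side
        ur = np.linalg.solve(Ar, br)
        return ur, V @ ur                   # lift back to the full space

    # Toy parameter-dependent operator A(mu) = A0 + mu * A1.
    rng = np.random.default_rng(2)
    n = 50
    A0 = np.eye(n)
    A1 = rng.normal(size=(n, n)); A1 = A1 @ A1.T / n   # SPD perturbation
    b = rng.normal(size=n)
    # Reduced basis from snapshots at a few parameter values, orthonormalized.
    snapshots = np.column_stack(
        [np.linalg.solve(A0 + mu * A1, b) for mu in (0.1, 0.5, 1.0, 2.0, 5.0)])
    V, _ = np.linalg.qr(snapshots)
    ur, u_rb = galerkin_reduced_solve(A0 + 0.7 * A1, b, V)
    u_ref = np.linalg.solve(A0 + 0.7 * A1, b)
    print(np.linalg.norm(u_rb - u_ref) / np.linalg.norm(u_ref))  # small error
    ```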