10 research outputs found

    Lagrangeano aumentado exponencial aplicado ao problema de equilíbrio

    Get PDF
    Advisor: Luiz Carlos Matioli. Doctoral thesis, Universidade Federal do Paraná, Setor de Tecnologia, Programa de Pós-Graduação em Métodos Numéricos em Engenharia. Defended in Curitiba, 11/09/2015. Includes references: f. 65-67. Area of concentration: mathematical programming.
    Abstract: In this study we propose an exponential augmented Lagrangian method for solving the general equilibrium problem. The method extends the algorithms presented in (NASRI, 2010) by replacing the classical Rockafellar penalty used there with the exponential penalty, and the general theory is then rebuilt around the new algorithm. The equivalence theory is established via the proximal point method, i.e., through the duality relationship between the augmented Lagrangian and the proximal point, but the regularization term used here is a Bregman quasi-distance rather than the quadratic term employed in (NASRI, 2010). To mitigate the ill-conditioning that the exponential penalty can cause, a quadratic adjustment of the penalty is also introduced. The new methodology is tested in numerical experiments, first with the pure exponential penalty and then with the adjusted quadratic version, on equilibrium problems and on generalized Nash equilibrium problems (GNEPs), a particular class of equilibrium problems. The pure method solved small problems, while on larger instances the adjusted quadratic version performed better. The numerical results are then discussed, final remarks are given, and perspectives for future work are listed together with the references that supported this research.
    Keywords: equilibrium problem, augmented Lagrangian, exponential penalty.
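
    For orientation, the display below contrasts, in plain LaTeX, the classical Rockafellar (PHR) penalty with the exponential penalty for the familiar inequality-constrained problem of minimizing f(x) subject to g_i(x) <= 0. It is only an illustrative analogue: the thesis works with the more general equilibrium-problem formulation and with a Bregman quasi-distance regularization, which this sketch does not capture.

        % Classical PHR augmented Lagrangian and its multiplier update
        \[
        L_\rho(x,\lambda) = f(x) + \frac{1}{2\rho}\sum_{i=1}^{m}\Big(\max\{0,\,\lambda_i + \rho\,g_i(x)\}^2 - \lambda_i^2\Big),
        \qquad
        \lambda_i^{+} = \max\{0,\,\lambda_i + \rho\,g_i(x^{+})\}.
        \]
        % Exponential-penalty augmented Lagrangian and its multiplicative update
        \[
        L_\rho(x,\lambda) = f(x) + \frac{1}{\rho}\sum_{i=1}^{m}\lambda_i\Big(e^{\rho\,g_i(x)} - 1\Big),
        \qquad
        \lambda_i^{+} = \lambda_i\,e^{\rho\,g_i(x^{+})}.
        \]

    The exponential term grows very quickly on strongly violated constraints, which is the source of the ill-conditioning that the quadratic adjustment in the thesis is meant to tame.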

    Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization

    No full text
    At each outer iteration of standard augmented Lagrangian methods one tries to solve a box-constrained optimization problem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable, so the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. In practice, however, one might not be able to solve the subproblem up to the required precision. This may happen for different reasons, one of them being that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper a practical strategy for decreasing the penalty parameter in situations like the one mentioned above is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the augmented Lagrangian subproblem are discussed. As a result, an improved augmented Lagrangian method is presented, which takes these numerical difficulties into account in a satisfactory way while preserving a suitable convergence theory. Numerical experiments involving all the CUTEr collection test problems are presented.
    Vol. 51(3), pp. 941-965. Funding: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP).
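
    The control flow behind the proposal is easy to sketch. The Python fragment below is our own loose illustration, not the authors' algorithm: a plain augmented Lagrangian outer loop for constraints g(x) <= 0 in which the penalty parameter may also be decreased whenever the box-constrained inner solver reports that it could not reach the requested tolerance. All names, constants and the solver interface are assumptions made for the sketch.

        import numpy as np

        def augmented_lagrangian(g, solve_box_subproblem, x0, lam0,
                                 rho=10.0, gamma=10.0, shrink=0.5,
                                 tol=1e-6, max_outer=50):
            """Outer loop with a nonmonotone (possibly decreasing) penalty parameter."""
            x, lam = x0, lam0
            infeas_prev = np.inf
            for _ in range(max_outer):
                # The inner solver returns its best iterate and whether it met its tolerance.
                x, inner_ok = solve_box_subproblem(lam, rho, x, tol)
                gval = g(x)
                infeas = np.linalg.norm(np.maximum(gval, 0.0), np.inf)
                if inner_ok and infeas <= tol:
                    return x, lam
                lam = np.maximum(0.0, lam + rho * gval)   # first-order multiplier update
                if not inner_ok:
                    rho *= shrink                         # nonmonotone step: relax the penalty
                elif infeas > 0.5 * infeas_prev:
                    rho *= gamma                          # usual increase when feasibility stalls
                infeas_prev = infeas
            return x, lam

    The published method decides when and by how much to decrease the parameter far more carefully than a fixed shrinking factor; the sketch only shows where such a decision enters the loop.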

    Uma classe de métodos de lagrangiano aumentado

    Get PDF
    Abstract: We study a class of augmented Lagrangian methods for minimization problems over a convex and compact set subject to inequality constraints. This class of methods is built around a penalty function. Under suitable hypotheses, we prove global convergence of the methods provided the penalty functions involved satisfy certain properties. For the computational tests we chose three penalty functions that satisfy the required properties, one of them being the classical quadratic penalty function of Powell-Hestenes-Rockafellar. Numerical results are presented comparing the computational performance of this class of augmented Lagrangian methods with the three penalty functions on problems from the CUTEr collection.
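
    To make the "class of methods" concrete, the fragment below shows, in simplified form and with our own naming, how the subproblem objective is assembled from an interchangeable penalty function P(t, lambda, rho). The PHR choice is the classical Powell-Hestenes-Rockafellar quadratic penalty and the exponential one is a well-known alternative; this is illustrative only and does not claim to reproduce the three penalty functions tested in the thesis.

        import numpy as np

        def phr_penalty(t, lam, rho):
            # Powell-Hestenes-Rockafellar term for a single constraint value t = g_i(x).
            return (max(0.0, lam + rho * t) ** 2 - lam ** 2) / (2.0 * rho)

        def exponential_penalty(t, lam, rho):
            # Exponential-multiplier term for the same constraint value.
            return lam * (np.exp(rho * t) - 1.0) / rho

        def augmented_objective(f, g, lam, rho, penalty):
            # Subproblem objective x -> f(x) + sum_i penalty(g_i(x), lam_i, rho),
            # minimized over the convex compact set at each outer iteration.
            def L(x):
                return f(x) + sum(penalty(gi, li, rho) for gi, li in zip(g(x), lam))
            return L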

    Efficient Optimization Algorithms for Nonlinear Data Analysis

    Get PDF
    Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels, which allows application of ridge-based methods with only mild assumptions on the underlying structure of the data.
    The statistical model and the ridge finding methods are adapted to two different applications. The first one is extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate; examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and reconstruction of periodic patterns from noisy time series data are also demonstrated.
    Other contributions of the thesis include development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. Asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
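
    As a rough, self-contained illustration of the quantities such ridge methods work with (our construction, not code from the thesis), the fragment below evaluates the gradient and Hessian of a Gaussian kernel density estimate and the one-dimensional ridge condition at a point: the gradient should lie in the span of the Hessian eigenvector with the largest eigenvalue, while the remaining eigenvalues are negative.

        import numpy as np

        def kde_grad_hess(x, data, h):
            # Gradient and Hessian of a Gaussian KDE with bandwidth h at the point x.
            n, d = data.shape
            diffs = data - x                                       # rows are x_i - x
            w = np.exp(-0.5 * np.sum(diffs ** 2, axis=1) / h ** 2)
            norm = n * (2.0 * np.pi * h ** 2) ** (d / 2.0)
            grad = (diffs * w[:, None]).sum(axis=0) / (norm * h ** 2)
            outer = np.einsum('ni,nj,n->ij', diffs, diffs, w)
            hess = (outer / h ** 2 - np.eye(d) * w.sum()) / (norm * h ** 2)
            return grad, hess

        def ridge_residual(x, data, h):
            # Residual of the 1-D ridge condition: the gradient projected onto the
            # span of the d-1 eigenvectors with the smallest Hessian eigenvalues.
            grad, hess = kde_grad_hess(x, data, h)
            eigval, eigvec = np.linalg.eigh(hess)                  # ascending eigenvalues
            V = eigvec[:, :-1]                                     # transverse directions
            return np.linalg.norm(V.T @ grad), eigval[:-1]         # ~0 and negative on a ridge

    A trust region Newton method of the kind developed in the thesis would then drive this residual to zero, moving the point onto a ridge of the estimated density.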

    Trajectory-based methods for solving nonlinear and mixed integer nonlinear programming problems

    Get PDF
    A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. Johannesburg, 2015. I would like to acknowledge a number of people who contributed towards the completion of this thesis. Firstly, I thank my supervisor Professor Montaz Ali for his patience, enthusiasm, guidance and teachings. The skills I have acquired during this process have infiltrated every aspect of my life. I remain forever grateful. Secondly, I would like to say a special thank you to Professor Jan Snyman for his assistance, which contributed immensely towards this thesis. I would also like to thank Professor Dominique Orban for his willingness to assist me for countless hours with the installation of CUTEr, as well as Professor Jose Mario Martinez for his email correspondence. A heartfelt thanks goes out to my family and friends at large, for their prayers, support and faith in me when I had little faith in myself. Thank you also to my colleagues who kept me sane and motivated, as well as all the support staff who played a pivotal role in this process. Above all, I would like to thank God, without whom none of this would have been possible.

    Infeasibility in augmented Lagrangian methods

    Get PDF
    Advisor: José Mario Martínez Pérez. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Degree: Doutor em Matemática Aplicada.
    Abstract: Practical nonlinear programming algorithms may converge to infeasible points even when the problem to be solved is feasible. When this occurs, it is natural for the user to change the starting point and/or the algorithmic parameters and reapply the method in an attempt to find a feasible and optimal solution. Ideally, then, an algorithm should not only be efficient in finding feasible solutions, but should also quickly detect when it is fated to converge to an infeasible point. In pursuit of this goal, we present modifications of an algorithm based on augmented Lagrangians so that, in the case of convergence to an infeasible point, the subproblems are solved with moderate tolerances and, even then, the global convergence properties are maintained. Numerical experiments are presented.
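
    As a loose illustration of the idea (our own simplification, not the algorithm of the thesis), the fragment below sketches an augmented Lagrangian outer loop that stops tightening the inner tolerance once the iterates appear to be heading to an infeasible point, i.e. once the infeasibility measure stalls while the penalty parameter has already grown large. All thresholds and the solver interface are assumptions made for the sketch.

        import numpy as np

        def al_with_infeasibility_watch(solve_subproblem, g, x0, lam0,
                                        rho=10.0, gamma=10.0, eps_opt=1e-8,
                                        eps_moderate=1e-4, rho_big=1e8, max_outer=100):
            x, lam = x0, lam0
            infeas_prev, eps_inner = np.inf, eps_moderate
            likely_infeasible = False
            for _ in range(max_outer):
                x = solve_subproblem(lam, rho, x, eps_inner)
                infeas = np.linalg.norm(np.maximum(g(x), 0.0), np.inf)
                if infeas <= eps_opt:
                    return x, lam, False                  # feasible point reached
                # Heuristic flag: feasibility no longer improves and rho is already huge.
                likely_infeasible = (infeas > 0.9 * infeas_prev) and (rho > rho_big)
                # Keep the inner tolerance moderate when convergence to an infeasible
                # point looks likely; only tighten it while feasibility still improves.
                eps_inner = eps_moderate if likely_infeasible else max(eps_opt, 0.1 * eps_inner)
                lam = np.maximum(0.0, lam + rho * g(x))
                if infeas > 0.5 * infeas_prev:
                    rho *= gamma
                infeas_prev = infeas
            return x, lam, likely_infeasible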