
    On numerical testing of the regularity of semidefinite problems

    This paper is devoted to the study of regularity of Semidefinite Programming (SDP) problems. Current methods for SDP rely on regularity assumptions such as constraint qualifications and well-posedness. In the absence of regularity, the characterization of optimality may fail and algorithms may run into numerical difficulties. Before solving a problem, one should therefore evaluate the expected efficiency of algorithms, so it is important to have simple procedures that verify regularity. Here we use an algorithm to test the regularity of linear SDP problems in terms of Slater's condition. We present numerical tests on problems from SDPLIB and compare our results with others available in the literature.
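    As a toy illustration of the kind of test involved (a sketch under simplifying assumptions, not the algorithm of the paper): for a linear matrix inequality F(x) = F0 + Σ x_i F_i ⪰ 0, Slater's condition holds exactly when the smallest eigenvalue of F(x) can be made strictly positive. The crude grid-sampling check below (all names hypothetical) distinguishes a regular from a nonregular 2×2 example; a real procedure would solve an auxiliary SDP instead of sampling.

```python
import numpy as np

def slater_margin(F0, Fs, samples):
    """Estimate sup over sampled x of lambda_min(F0 + sum_i x_i * F_i).

    A positive margin certifies Slater's condition (strict feasibility)
    for the linear matrix inequality F0 + sum_i x_i F_i >= 0.
    """
    best = -np.inf
    for x in samples:
        M = F0 + sum(xi * Fi for xi, Fi in zip(x, Fs))
        best = max(best, float(np.linalg.eigvalsh(M).min()))
    return best

F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
grid = [[t] for t in np.linspace(-1.0, 1.0, 41)]

# Regular problem: I + x*F1 is positive definite at x = 0.
margin_reg = slater_margin(np.eye(2), [F1], grid)

# Nonregular problem: [[1, x], [x, 0]] is never positive definite,
# so the margin is at best zero and Slater's condition fails.
F0_non = np.array([[1.0, 0.0], [0.0, 0.0]])
margin_non = slater_margin(F0_non, [F1], grid)
```

Grid sampling is only a heuristic in higher dimensions; it serves here to make the eigenvalue criterion concrete.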

    Advances in Optimization and Nonlinear Analysis

    The present book focuses on the part of the calculus of variations, optimization, nonlinear analysis and related applications that combines tools and methods from partial differential equations with geometrical techniques. More precisely, this work is devoted to nonlinear problems coming from different areas, with particular attention to those introducing new techniques capable of solving a wide range of problems. The book is a valuable guide for researchers, engineers and students in the fields of mathematics, operations research, optimal control, artificial intelligence, management science and economics.

    Optimal Control of the Landau-de Gennes Model of Nematic Liquid Crystals

    We present an analysis and numerical study of an optimal control problem for the Landau-de Gennes (LdG) model of nematic liquid crystals (LCs), which are a crucial component of modern technology. LCs exhibit long-range orientational order in their nematic phase, which is represented by a tensor-valued (spatial) order parameter Q = Q(x). Equilibrium LC states correspond to functions Q that (locally) minimize an LdG energy functional. We therefore consider an L^2-gradient flow of the LdG energy, which allows for finding local minimizers and leads to a semilinear parabolic PDE, for which we develop an optimal control framework. We then derive several a priori estimates for the forward problem, including continuity in space-time, that allow us to prove existence of optimal boundary and external ``force'' controls and to derive optimality conditions through the use of an adjoint equation. Next, we present a simple finite element scheme for the LdG model and a straightforward optimization algorithm. We illustrate the optimization of LC states through numerical experiments in two and three dimensions that seek to place LC defects (where Q(x) = 0) in desired locations, which is desirable in applications. Comment: 26 pages, 9 figures
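    The tensor-valued LdG problem is beyond a short snippet, but the core mechanism of the forward problem (an L^2-gradient flow decreasing an energy, discretized in time) can be sketched with a hedged scalar analogue: the Allen-Cahn-type double-well energy E(q) = ∫ ½|q_x|² + ¼(q² − 1)² dx, whose L^2-gradient flow is q_t = q_xx − q(q² − 1). All names below are illustrative, not from the paper.

```python
import numpy as np

def energy(q, dx):
    # Discrete double-well energy: 0.5*|q_x|^2 + 0.25*(q^2 - 1)^2, integrated.
    qx = np.diff(q) / dx
    return 0.5 * dx * np.sum(qx**2) + 0.25 * dx * np.sum((q**2 - 1.0)**2)

def gradient_flow(q, dx, dt, steps):
    # Explicit Euler for q_t = q_xx - q*(q^2 - 1) with Neumann boundaries;
    # each step follows the negative L^2 gradient, so the energy decreases.
    for _ in range(steps):
        lap = np.zeros_like(q)
        lap[1:-1] = (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dx**2
        lap[0] = (q[1] - q[0]) / dx**2
        lap[-1] = (q[-2] - q[-1]) / dx**2
        q = q + dt * (lap - q * (q**2 - 1.0))
    return q

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
q0 = 0.5 * np.sin(2.0 * np.pi * x)              # initial state
q1 = gradient_flow(q0, dx, dt=4e-5, steps=500)  # dt below the CFL bound dx^2/2
```

The same energy-descent property is what makes a gradient flow a practical route to local minimizers in the paper's setting.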

    Linear convergence of accelerated conditional gradient algorithms in spaces of measures

    A class of generalized conditional gradient algorithms for the solution of optimization problems in spaces of Radon measures is presented. The method iteratively inserts additional Dirac-delta functions and optimizes the corresponding coefficients. Under general assumptions, a sublinear O(1/k) rate in the objective functional is obtained, which is sharp in most cases. To improve efficiency, one can fully resolve the finite-dimensional subproblems occurring in each iteration of the method. We provide an analysis of the resulting procedure: under a structural assumption on the optimal solution, a linear O(ζ^k) convergence rate is obtained locally. Comment: 30 pages, 7 figures
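    The insert-then-reoptimize mechanism can be sketched on a grid-discretized toy problem (the paper works directly in the continuous space of measures; everything below is a hypothetical illustration): minimize ½‖Aw − y‖² where each column of A is a point-source response, insert the grid point with the most negative gradient, then fully re-fit all active coefficients by least squares.

```python
import numpy as np

# Toy fully-corrective conditional gradient for sparse deconvolution.
grid = np.linspace(0.0, 1.0, 50)       # candidate Dirac locations
samples = np.linspace(0.0, 1.0, 60)    # measurement points
A = np.exp(-((samples[:, None] - grid[None, :]) ** 2) / 0.005)

w_true = np.zeros(50)
w_true[[10, 35]] = [1.0, 0.7]          # two well-separated spikes
y = A @ w_true

support, w = [], np.zeros(50)
for _ in range(10):
    grad = A.T @ (A @ w - y)           # gradient of the objective
    j = int(np.argmin(grad))           # insertion: most negative dual value
    if j not in support:
        support.append(j)
    # fully corrective step: re-optimize every active coefficient
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    w = np.zeros(50)
    w[support] = coef

residual = 0.5 * np.sum((A @ w - y) ** 2)  # small once the spikes are found
```

The fully corrective step is what the paper's local linear-rate analysis concerns; with only the plain insertion step the rate stays sublinear.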

    Numerical study of regularity in semidefinite programming and applications (Estudo numérico de regularidade em programação semidefinida e aplicações)

    Doctoral thesis in Mathematics. This thesis is devoted to the study of regularity in semidefinite programming (SDP), an important area of convex optimization with a wide range of applications. Duality theory, optimality conditions and methods for SDP rely on certain regularity assumptions that are not always satisfied. Absence of regularity, i.e., nonregularity, may affect the characterization of optimality of solutions, and SDP solvers may run into numerical difficulties, leading to unreliable results. There exist different notions associated with regularity. In this thesis we study, in particular, well-posedness, good behaviour and constraint qualifications (CQs), as well as the relations among them. A widely used CQ in SDP is the Slater condition. This condition guarantees that the first-order necessary optimality conditions, in the Karush-Kuhn-Tucker formulation, are satisfied. Current SDP solvers do not check whether a problem satisfies the Slater condition, but work under the assumption that it holds. We develop and implement in MATLAB numerical procedures to verify whether a given SDP problem is regular in terms of the Slater condition and to determine the irregularity degree in the case of nonregularity. Numerical experiments presented in this work show that the proposed procedures are quite efficient and confirm the conclusions obtained about the relationship between the Slater condition and other regularity notions. Another contribution of the thesis consists in the development and MATLAB implementation of an algorithm for generating nonregular SDP problems with a prescribed irregularity degree. The database of nonregular problems constructed with this generator is publicly available and can be used for testing new SDP methods and solvers. A further contribution of this thesis concerns an SDP application to data analysis. We consider a nonlinear SDP model and linear SDP relaxations for clustering problems and study their regularity. We show that the nonlinear SDP model is nonregular, while its relaxations are regular. We propose an SDP-based algorithm for solving clustering and dimensionality reduction problems and implement it in R. Numerical tests on various real-life data sets confirm the speed and efficiency of this numerical procedure.

    Proceedings of the XVI Congress of the Associação Portuguesa de Investigação Operacional

    Fundação para a Ciência e Tecnologia - FC

    Learning Probabilistic Graphical Models for Image Segmentation

    Probabilistic graphical models provide a powerful framework for representing image structures, and many inference and learning algorithms have been developed on this basis. However, both inference and learning are NP-hard combinatorial problems in the general case. As a consequence, relaxation methods were developed that approximate the original problems while being computationally efficient. In this work we consider the learning problem on binary graphical models and their relaxations. We propose two novel methods for determining the model parameters of discrete energy functions from training data; learning the model parameters avoids having to set them heuristically. Motivated by common learning methods that minimize the training error measured by a loss function, we develop a new learning method similar in spirit to structured SVMs but computationally more efficient. We term this method the linearized approach (LA), as it is restricted to linearly dependent potentials. The linearity of LA is crucial for obtaining a tight convex relaxation, which allows us to use off-the-shelf inference solvers for the subproblems that emerge from solving the overall problem. However, this type of learning method almost never yields optimal solutions or perfect performance on the training data set. So what happens if the learned graphical model produced exact ground-truth segmentations on the training data? Would this give a benefit when predicting? Motivated by the idea of inverse optimization, we take advantage of inverse linear programming to develop a learning approach, referred to as the inverse linear programming approach (invLPA). It further refines graphical models trained with the previously introduced methods and is capable of perfectly predicting the ground truth on training data. The empirical results from implementing invLPA answer the questions posed above. 
    LA is able to learn unary and pairwise potentials jointly, while with invLPA this is not possible due to the representation we use. On the other hand, invLPA does not rely on a particular form of the potentials and is thus flexible in the choice of the fitting method. Although the potentials corrected with invLPA always yield the ground-truth segmentation of the training data, invLPA finds corrections only on the foreground segments. Due to the relaxed problem formulation, this does not affect the final segmentation result. Moreover, as long as we initialize invLPA with the model parameters of a learning method that performs sufficiently well, this drawback of invLPA does not significantly affect the final prediction result. The performance of the proposed learning methods is evaluated on both synthetic and real-world datasets. We demonstrate that LA is competitive with other parameter learning methods that use loss functions based on Maximum a Posteriori Marginal (MPM) and Maximum Likelihood Estimation (MLE). Moreover, we illustrate the benefits of learning with inverse linear programming. In a further experiment we demonstrate the versatility of our learning methods by applying LA to learning motion segmentation in video sequences and comparing it to state-of-the-art segmentation algorithms.
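    To make the object being learned concrete: a binary pairwise model assigns each labeling y an energy E(y) = Σ_i u[i, y_i] + Σ_i p[i, y_i, y_{i+1}]. Inference (energy minimization) is NP-hard on general graphs, which is what motivates the relaxations above, but on a chain it is exact dynamic programming. The toy below (hypothetical names, random potentials, not the thesis's models) checks the DP against brute force.

```python
import itertools
import numpy as np

def min_energy_dp(u, p):
    # Viterbi-style DP: cost[s] = best energy of a prefix ending in state s.
    n = len(u)
    cost = u[0].copy()
    back = []
    for i in range(1, n):
        trans = cost[:, None] + p[i - 1]      # (prev state) x (current state)
        back.append(np.argmin(trans, axis=0)) # best predecessor per state
        cost = u[i] + np.min(trans, axis=0)
    y = [int(np.argmin(cost))]
    for b in reversed(back):                  # backtrack the optimal labeling
        y.append(int(b[y[-1]]))
    return list(reversed(y)), float(np.min(cost))

rng = np.random.default_rng(1)
u = rng.normal(size=(6, 2))                   # unary potentials
p = rng.normal(size=(5, 2, 2))                # pairwise potentials

y_dp, e_dp = min_energy_dp(u, p)

def energy(y):
    return sum(u[i, y[i]] for i in range(6)) + \
           sum(p[i, y[i], y[i + 1]] for i in range(5))

e_bf = min(energy(y) for y in itertools.product((0, 1), repeat=6))
```

Learning, in the sense of the thesis, means choosing u and p from data; here they are random placeholders.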

    Fixed Point Iterations for Finite Sum Monotone Inclusions

    This thesis studies two families of methods for finding zeros of finite sums of monotone operators, the first being variance-reduced stochastic gradient (VRSG) methods. This is a large family of algorithms that use random sampling to improve the convergence rate compared to more traditional approaches. We examine the optimal sampling distributions and their interaction with the epoch length. Specifically, we show that in methods like SAGA, where the epoch length is directly tied to the random sampling, the optimal sampling becomes more complex than in, for instance, L-SVRG, where the epoch length can be chosen independently. We also show that biased VRSG estimates in the style of SAG are sensitive to the problem setting. More precisely, a significantly larger step-size can be used when the monotone operators are cocoercive gradients than when they are merely cocoercive. This is noteworthy since standard gradient descent is not affected by this change, and since the sensitivity to the problem assumptions vanishes when the estimates are unbiased. The second family of methods we examine are deterministic operator splitting methods, and we focus on frameworks for constructing and analyzing such splitting methods. One such framework is based on what we call nonlinear resolvents, and we present a novel way of ensuring convergence of iterations of nonlinear resolvents by means of a momentum term. In many cases this approach leads to a cheaper per-iteration cost than a previously established projection approach. The framework covers many existing methods, and we provide a new primal-dual method that uses an extra resolvent step, as well as a general approach for adding momentum to any special case of our nonlinear resolvent method.
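    For readers unfamiliar with VRSG methods such as SAGA, mentioned above, here is a minimal sketch (assumed problem and names are illustrative, not from the thesis): minimize (1/n) Σ_i f_i(x) with f_i(x) = ½(x − b_i)², whose gradients x − b_i are cocoercive; SAGA keeps a table of past component gradients to build an unbiased, variance-reduced update.

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.normal(size=5)
n, step = len(b), 1.0 / 3.0              # gamma = 1/(3L), here L = 1

x = 0.0
table = np.array([x - bi for bi in b])   # stored gradient of each f_i
avg = table.mean()
for _ in range(2000):
    j = rng.integers(n)                  # sample one component uniformly
    g_new = x - b[j]                     # fresh gradient of f_j at x
    estimate = g_new - table[j] + avg    # unbiased variance-reduced estimate
    x -= step * estimate
    avg += (g_new - table[j]) / n        # keep the running table mean exact
    table[j] = g_new
# x approaches the minimizer mean(b); the stored gradients shrink the
# variance of the estimate to zero, unlike plain SGD.
```

The thesis's results concern how such sampling interacts with epoch length and bias; this sketch shows only the basic mechanism.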
    We use a concept similar to the nonlinear resolvent to derive a representation of the entire class of frugal splitting operators, i.e., splitting operators that use exactly one direct or resolvent evaluation of each operator in the monotone inclusion problem. The representation reveals several new results regarding lifting numbers, the existence of solution maps, and the parallelizability of the forward/backward evaluations. We show that the minimal lifting is n − 1 − f, where n is the number of monotone operators and f is the number of direct evaluations in the splitting. A new convergent and parallelizable frugal splitting operator with minimal lifting is also presented.
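    A classical instance consistent with the lifting formula above is Douglas-Rachford splitting: n = 2 resolvent evaluations, f = 0 direct evaluations, and a single lifted variable, matching n − 1 − f = 1. The sketch below (a standard textbook example, not the new operator of the thesis) uses projections as the resolvents to find a point in the intersection of a line and a disk.

```python
import numpy as np

def proj_line(p):
    # Projection onto the line {x = y}: resolvent of the first operator.
    t = (p[0] + p[1]) / 2.0
    return np.array([t, t])

def proj_disk(p, c=np.array([1.0, 1.0]), r=0.5):
    # Projection onto a disk: resolvent of the second operator.
    d = p - c
    nrm = np.linalg.norm(d)
    return p if nrm <= r else c + r * d / nrm

z = np.array([5.0, -2.0])                 # the single lifted variable
for _ in range(200):
    x = proj_line(z)                      # first resolvent evaluation
    y = proj_disk(2.0 * x - z)            # second resolvent, at the reflection
    z = z + y - x                         # Douglas-Rachford update

solution = proj_line(z)                   # shadow point lies in both sets
```

Each iteration touches each operator exactly once, which is what makes the scheme frugal in the sense defined above.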