
    Counting Unique-Sink Orientations

    Unique-sink orientations (USOs) are an abstract class of orientations of the n-cube graph. We consider some classes of USOs that are of interest in connection with the linear complementarity problem. We summarise old and show new lower and upper bounds on the sizes of some such classes. Furthermore, we provide a characterisation of K-matrices in terms of their corresponding USOs.
    Comment: 13 pages; v2: proof of main theorem expanded, plus various other corrections, now 16 pages; v3: minor correction
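    For background, the standard definition of a USO (general, and not specific to the classes or the K-matrix characterisation studied in this paper) can be written as:

```latex
% Standard background definition: an orientation of the n-cube is a
% unique-sink orientation (USO) if every nonempty face has exactly one sink.
\[
\phi \text{ is a USO of } Q_n
\iff
\text{for every nonempty face } F \subseteq Q_n:\;
\exists!\, v \in F \text{ with no edge of } F \text{ oriented away from } v \text{ under } \phi .
\]
```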

    Author index for volumes 101–200


    New optimization methods in predictive control

    This thesis is mainly concerned with the efficient solution of a linear discrete-time finite-horizon optimal control problem (FHOCP) with quadratic cost and linear constraints on the states and inputs. In predictive control, such a FHOCP needs to be solved online at each sampling instant, which requires the solution of a quadratic programming (QP) problem. Interior-point methods (IPMs) have proven to be an efficient way of solving QPs. A linear system of equations must be solved in each iteration of an IPM. The ill-conditioning of this linear system in the later IPM iterations prevents the use of an iterative method for its solution, owing to a very slow rate of convergence; in some cases the solution never reaches the desired accuracy. A new well-conditioned IPM, which increases the rate of convergence of the iterative method, is proposed. The computational advantage is obtained by using an inexact Newton method together with novel preconditioners. A new warm-start strategy is also presented for solving, with an interior-point method, a QP whose data is slightly perturbed from that of the previous QP. The effectiveness of this warm-start strategy is demonstrated on a number of available online benchmark problems. Numerical results indicate that the benefit of the proposed technique depends on the size of the perturbation, and that it leads to a reduction of 30-74% in floating-point operations compared with a cold-start interior-point method. Following the main theme of this thesis, namely improving the computational efficiency of algorithms, an efficient algorithm is also presented for solving the coupled Sylvester equation that arises in converting a system of linear differential-algebraic equations (DAEs) to ordinary differential equations. A significant computational advantage is obtained by exploiting the structure of the matrices involved. The proposed algorithm removes the need to solve a standard Sylvester equation or to invert a matrix. The improved performance of this new method over existing techniques is demonstrated by comparing the number of floating-point operations and via numerical examples.
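    As a rough, self-contained illustration of how such a FHOCP collapses to a QP before an interior-point method is applied, the sketch below condenses a linear system with quadratic cost over a finite horizon into a QP in the stacked input vector; the system matrices, weights and horizon are illustrative assumptions, and constraints and the IPM itself are omitted.

```python
import numpy as np

# Minimal sketch of condensing a linear FHOCP with quadratic cost into a QP in
# the stacked input vector. All matrices, dimensions and weights below are
# illustrative assumptions, not data from the thesis.
def condensed_qp(A, B, Q, R, P, N, x0):
    nx, nu = B.shape
    # Prediction: [x_1; ...; x_N] = Phi x0 + Gamma [u_0; ...; u_{N-1}]
    Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    Gamma = np.zeros((N * nx, N * nu))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.zeros((N * nx, N * nx))
    Qbar[:-nx, :-nx] = np.kron(np.eye(N - 1), Q)    # stage costs
    Qbar[-nx:, -nx:] = P                            # terminal cost
    Rbar = np.kron(np.eye(N), R)
    H = Gamma.T @ Qbar @ Gamma + Rbar               # condensed Hessian
    g = Gamma.T @ Qbar @ Phi @ x0                   # condensed linear term
    return H, g

# Usage: double-integrator-like system, horizon of 10 steps.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, P = np.eye(2), 0.1 * np.eye(1), np.eye(2)
H, g = condensed_qp(A, B, Q, R, P, N=10, x0=np.array([1.0, 0.0]))
u_opt = np.linalg.solve(H, -g)    # unconstrained minimiser of 0.5 u'Hu + g'u
```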

    Application of domain decomposition methods to problems in topology optimisation

    Determination of the optimal layout of structures can be seen in everyday life, from nature to industry, with research dating back to the eighteenth century. The focus of this thesis is the relatively modern field of topology optimisation, where the aim is to determine both the optimal shape and the optimal topology of structures. However, the inherent large-scale nature of these problems means that even problems defined using a relatively coarse finite element discretisation can be computationally demanding. This thesis aims to describe alternative approaches allowing the practical use of topology optimisation on a large scale. Commonly used solution methods will be compared and scrutinised, and the resulting observations will inform the application of a novel substructuring domain decomposition method to the subsequent large-scale linear systems. Numerical and analytical investigations involving the governing equations of linear elasticity will lead to the development of three different algorithms for compliance minimisation problems in topology optimisation. Each algorithm will involve an appropriate preconditioning strategy incorporating a matrix representation of a discrete interpolation norm, with numerical results indicating mesh-independent performance.
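    For context, a generic compliance minimisation problem with SIMP material interpolation can be stated as follows; this is standard background only and does not reproduce the thesis's exact formulation, interpolation norm or preconditioners.

```latex
% Generic SIMP-penalised compliance minimisation; illustrative background only.
\begin{align*}
\min_{\rho}\quad & c(\rho) = \mathbf{f}^{\top}\mathbf{u}(\rho)
    = \sum_{e} (\rho_e)^{p}\,\mathbf{u}_e^{\top}\mathbf{k}_0\,\mathbf{u}_e \\
\text{s.t.}\quad & \mathbf{K}(\rho)\,\mathbf{u} = \mathbf{f}, \\
& \sum_{e} v_e\,\rho_e \le V_{\max}, \qquad 0 < \rho_{\min} \le \rho_e \le 1 ,
\end{align*}
% where rho_e are element densities, p > 1 the penalisation exponent, and
% K(rho) the assembled stiffness matrix.
```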

    Multigrid Methods for Elliptic Optimal Control Problems

    In this dissertation we study multigrid methods for linear-quadratic elliptic distributed optimal control problems. For optimal control problems constrained by general second order elliptic partial differential equations, we design and analyze a $P_1$ finite element method based on a saddle point formulation. We construct a $W$-cycle algorithm for the discrete problem and show that it is uniformly convergent in the energy norm for convex domains. Moreover, the contraction number decays at the optimal rate of $m^{-1}$, where $m$ is the number of smoothing steps. We also prove that the convergence is robust with respect to a regularization parameter. The robust convergence of $V$-cycle and $W$-cycle algorithms on general domains is demonstrated by numerical results. For optimal control problems constrained by symmetric second order elliptic partial differential equations together with pointwise constraints on the state variable, we design and analyze symmetric positive definite $P_1$ finite element methods based on a reformulation of the optimal control problem as a fourth order variational inequality. We develop a multigrid algorithm for the reduced systems that appear in a primal-dual active set method for the discrete variational inequalities. The performance of the algorithm is demonstrated by numerical results.
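    The prototypical problem of this class (stated here with the Laplacian as constraint; the dissertation treats general second order elliptic operators) is:

```latex
% Model linear-quadratic elliptic distributed optimal control problem.
\begin{align*}
\min_{y,u}\quad & \tfrac12\|y - y_d\|_{L^2(\Omega)}^2
                 + \tfrac{\beta}{2}\|u\|_{L^2(\Omega)}^2 \\
\text{s.t.}\quad & -\Delta y = u \ \text{in } \Omega, \qquad y = 0 \ \text{on } \partial\Omega .
\end{align*}
% Introducing the adjoint state p, the first-order optimality conditions form
% the saddle-point system
\[
-\Delta p = y_d - y, \qquad \beta u = p, \qquad -\Delta y = u .
\]
```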

    Preconditioned iterative methods for optimal control problems with time-dependent PDEs as constraints

    In this work, we study fast and robust solvers for optimal control problems with Partial Differential Equations (PDEs) as constraints. Specifically, we devise preconditioned iterative methods for time-dependent PDE-constrained optimization problems, usually when a higher-order discretization method in time is employed, as opposed to most previous solvers. We also consider the control of stationary problems arising in fluid dynamics, as well as that of unsteady Fractional Differential Equations (FDEs). The preconditioners we derive are employed within an appropriate Krylov subspace method.

    The first key contribution of this thesis involves the study of fast and robust preconditioned iterative solution strategies for the all-at-once solution of optimal control problems with time-dependent PDEs as constraints, when a higher-order discretization method in time is employed. In fact, as opposed to most work on preconditioning this class of problems, where a (first-order accurate) backward Euler method is used for the discretization of the time derivative, we employ a (second-order accurate) Crank-Nicolson method in time. By applying a carefully tailored invertible transformation, we symmetrize the system obtained, and then derive a preconditioner for the resulting matrix. We prove optimality of the preconditioner through bounds on the eigenvalues, and test our solver against a widely-used preconditioner for the linear system arising from a backward Euler discretization. These theoretical and numerical results demonstrate the effectiveness and robustness of our solver with respect to mesh-sizes and the regularization parameter. The optimal preconditioner so derived is then generalized from the heat control problem to time-dependent convection-diffusion control with Crank-Nicolson discretization in time. Again, we prove optimality of the approximations of the main blocks of the preconditioner through bounds on the eigenvalues, and, through a range of numerical experiments, show the effectiveness and robustness of our approach with respect to all the parameters involved in the problem.

    For the next substantial contribution of this work, we focus our attention on the control of problems arising in fluid dynamics, specifically the Stokes and the Navier-Stokes equations. We first derive fast and effective preconditioned iterative methods for the stationary and time-dependent Stokes control problems, then generalize those methods to the corresponding Navier-Stokes control problems when employing an Oseen approximation to the non-linear term. The key ingredients of the solvers are a saddle-point type approximation for the linear systems, an inner iteration for the (1,1)-block accelerated by a preconditioner for convection-diffusion control problems, and an approximation to the Schur complement based on a potent commutator argument applied to an appropriate block matrix. Through a range of numerical experiments, we show the effectiveness of our approximations, and observe their considerable parameter-robustness.

    The final chapter of this work is devoted to the derivation of efficient and robust solvers for convex quadratic FDE-constrained optimization problems, with box constraints on the state and/or control variables. By employing an Alternating Direction Method of Multipliers for solving the non-linear problem, one can separate the equality from the inequality constraints, solving the equality constraints and then updating the current approximation of the solution. In order to solve the equality constraints, a preconditioner based on multilevel circulant matrices is derived and employed within an appropriate preconditioned Krylov subspace method. Numerical results show the efficiency and scalability of the strategy, with the cost of the overall process being proportional to N log N, where N is the dimension of the problem under examination. Moreover, the strategy presented allows a highly dense system to be handled, as the memory required is proportional to N.
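    As a toy illustration of the general idea of pairing a symmetric all-at-once KKT system with a block preconditioner inside a Krylov method, the sketch below solves a 1D stationary Poisson control problem with MINRES and a standard block-diagonal preconditioner; the matrices, parameters and Schur-complement approximation are assumptions made for illustration and are not the Crank-Nicolson, Stokes/Navier-Stokes or FDE preconditioners developed in the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 1D Poisson-control KKT system  [M 0 K; 0 beta*M -M; K -M 0] solved with
# MINRES and the block-diagonal preconditioner diag(M, beta*M, K M^{-1} K).
# Illustrative only: problem data and parameters are assumptions.
n, beta, h = 200, 1e-4, 1.0 / 201
K = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc") / h      # stiffness
M = sp.diags([1, 4, 1], [-1, 0, 1], shape=(n, n), format="csc") * (h / 6)  # mass

A = sp.bmat([[M,    None,     K],
             [None, beta * M, -M],
             [K,    -M,       None]], format="csc")

yd = np.sin(np.pi * np.linspace(h, 1 - h, n))          # desired state at grid points
b = np.concatenate([M @ yd, np.zeros(n), np.zeros(n)])

# Block-diagonal preconditioner: exact solves with M, beta*M and S = K M^{-1} K.
Minv = spla.factorized(M)
Sinv = spla.factorized((K @ spla.inv(M) @ K).tocsc())
def apply_prec(r):
    r1, r2, r3 = np.split(r, 3)
    return np.concatenate([Minv(r1), Minv(r2) / beta, Sinv(r3)])

P = spla.LinearOperator(A.shape, matvec=apply_prec, dtype=float)
x, info = spla.minres(A, b, M=P)
print("MINRES converged:", info == 0)
```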

    Variable metric line-search based methods for nonconvex optimization

    The aim of this thesis is to propose novel iterative first-order methods tailored to a wide class of nonconvex nondifferentiable optimization problems, in which the objective function is given by the sum of a differentiable, possibly nonconvex function and a convex, possibly nondifferentiable term. Such problems have become ubiquitous in scientific applications such as image and signal processing, where the first term plays the role of the fit-to-data term, describing the relation between the desired object and the measured data, whereas the second one is the penalty term, aimed at restricting the search for the object itself to those satisfying specific properties. Our approach is twofold: on the one hand, we accelerate the proposed methods by making use of suitable adaptive strategies to choose the involved parameters; on the other hand, we ensure convergence by imposing a sufficient decrease condition on the objective function at each iteration.

    Our first contribution is the development of a novel proximal-gradient method named the Variable Metric Inexact Line-search based Algorithm (VMILA). The proposed approach is innovative from several points of view. First of all, VMILA allows a variable metric to be adopted in the computation of the proximal point with considerable freedom of choice: the only assumption we make is that the parameters involved belong to bounded sets. This is unusual with respect to state-of-the-art proximal-gradient methods, where the parameters are usually chosen by means of a fixed rule or are tightly related to the Lipschitz constant of the problem. Second, we introduce an inexactness criterion for computing the proximal point which can be practically implemented in some cases of interest. This aspect assumes a relevant importance whenever the proximal operator is not available in closed form, which is often the case. Third, the VMILA iterates are computed by performing a line search along the feasible direction and according to a specific Armijo-like condition, which can be considered an extension of the classical Armijo rule proposed in the context of differentiable optimization.

    The second contribution is given for a special instance of the previously considered optimization problem, where the convex term is assumed to be a finite sum of the indicator functions of closed, convex sets. In other words, we consider a problem of constrained differentiable optimization in which the constraints have a separable structure. The most suited method to deal with this problem is undoubtedly the nonlinear Gauss-Seidel (GS) or block coordinate descent method, where the minimization of the objective function is cyclically alternated over each block of variables of the problem. In this thesis, we propose an inexact version of the GS scheme, named the Cyclic Block Generalized Gradient Projection (CBGGP) method, in which the partial minimization over each block of variables is performed inexactly by means of a fixed number of gradient projection steps. The novelty of the proposed approach consists in the introduction of non-Euclidean metrics in the computation of the gradient projection. As in VMILA, the sufficient decrease of the function is imposed by means of a block version of the Armijo line search.

    For both methods, we prove that each limit point of the sequence of iterates is stationary, without any convexity assumption. In the case of VMILA, strong convergence of the iterates to a stationary point is also proved when the objective function satisfies the Kurdyka-Łojasiewicz property. Extensive numerical experience in image processing applications, such as image deblurring and denoising in the presence of non-Gaussian noise, image compression, phase estimation and image blind deconvolution, shows the flexibility of our methods in addressing different nonconvex problems, as well as their ability to effectively accelerate the progress towards the solution of the treated problem.
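    A minimal sketch of the underlying proximal-gradient iteration with an Armijo-type backtracking line search along the feasible direction is given below; it uses a fixed Euclidean metric and an exact proximal operator, so VMILA's variable metric and inexactness criterion are not represented, and the toy problem data are assumptions.

```python
import numpy as np

# One proximal-gradient iteration with an Armijo-type backtracking line search
# along the feasible direction d = y - x, where y is the proximal point.
# Fixed Euclidean metric and exact prox: a sketch, not VMILA itself.
def prox_grad_armijo_step(x, f, grad_f, g, prox_g, alpha, beta=1e-4, delta=0.5):
    y = prox_g(x - alpha * grad_f(x), alpha)      # proximal point
    d = y - x                                     # feasible direction
    F = lambda z: f(z) + g(z)
    pred = grad_f(x) @ d + g(y) - g(x)            # predicted decrease (negative)
    lam = 1.0
    while F(x + lam * d) > F(x) + beta * lam * pred and lam > 1e-10:
        lam *= delta                              # backtrack until sufficient decrease
    return x + lam * d

# Toy l1-regularised least-squares instance (illustrative assumptions).
rng = np.random.default_rng(0)
A, b, mu = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.1
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
g = lambda x: mu * np.linalg.norm(x, 1)
prox_g = lambda z, a: np.sign(z) * np.maximum(np.abs(z) - a * mu, 0.0)  # prox of a*g

x = np.zeros(10)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of grad f
for _ in range(200):
    x = prox_grad_armijo_step(x, f, grad_f, g, prox_g, alpha)
```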

    Adaptive Multiple Shooting for Boundary Value Problems and Constrained Parabolic Optimization Problems

    The subject of this thesis is the development of adaptive techniques for multiple shooting methods. The focus is on the application to optimal control problems governed by parabolic partial differential equations. In order to retain as much freedom as possible in the later choice of discretization schemes, the details of both direct and indirect multiple shooting variants are worked out on an abstract function space level; in this setting, shooting techniques therefore do not constitute a way of discretizing the problem. A thorough examination of the connections between the approaches provides an overview of the different shooting formulations and enables their comparison for both linear and nonlinear problems. We extend current research by considering additional constraints on the control variable in the multiple shooting context: an optimization problem is developed which includes so-called box constraints. Several modern algorithms for treating control constraints are adapted to the requirements of shooting methods. The modified algorithms permit an extended comparison of the different shooting approaches. The efficiency of numerical methods can often be increased by developing grid adaptation techniques. While adaptive discretization schemes can be readily transferred to the multiple shooting context, questions of conditioning and stability make it difficult to develop adaptive features for the distribution of shooting points in multiple shooting processes. We concentrate on the design and comparison of two different approaches to shooting grid adaptation in the framework of ordinary differential equations. A residual-based adaptive algorithm is then transferred to parabolic optimization problems with control constraints. The presented concepts and methods are verified by means of several examples, whereby the theoretical results are confirmed numerically. We choose the test problems so that the simple shooting method becomes unstable and a genuine multiple shooting technique is therefore required.
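    As a simple illustration of the multiple shooting idea itself (for an ordinary differential equation boundary value problem, not the parabolic optimal control setting of the thesis), the sketch below matches sub-trajectories at a small set of shooting points; the test problem, node placement and solver choices are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

# Multiple-shooting sketch for the classical two-point BVP
#   y'' = 1.5 y^2,  y(0) = 4,  y(1) = 1   (one solution is y = 4 / (1 + t)^2),
# written as a first-order system. Illustrative only.
def rhs(t, y):
    return [y[1], 1.5 * y[0] ** 2]

t_nodes = np.linspace(0.0, 1.0, 5)                 # shooting points

def residual(s_flat):
    s = s_flat.reshape(len(t_nodes) - 1, 2)        # unknown states at the shooting points
    res = [s[0, 0] - 4.0]                          # left boundary condition y(0) = 4
    y_end = None
    for i in range(len(t_nodes) - 1):
        sol = solve_ivp(rhs, (t_nodes[i], t_nodes[i + 1]), s[i], rtol=1e-8, atol=1e-10)
        y_end = sol.y[:, -1]
        if i < len(t_nodes) - 2:
            res.extend(y_end - s[i + 1])           # matching (continuity) conditions
    res.append(y_end[0] - 1.0)                     # right boundary condition y(1) = 1
    return np.array(res)

# Initial guess: linear interpolation between the boundary values, slope -3.
s0 = np.column_stack([4.0 - 3.0 * t_nodes[:-1], -3.0 * np.ones(len(t_nodes) - 1)])
sol = root(residual, s0.ravel(), method="hybr")
print("converged:", sol.success)
```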