
    Algorithms for Inverse Optimization Problems

    We study inverse optimization problems, wherein the goal is to map given solutions to an underlying optimization problem to a cost vector for which the given solutions are the (unique) optimal solutions. Inverse optimization problems find diverse applications and have been widely studied. A prominent problem in this field is the inverse shortest path (ISP) problem [D. Burton and Ph.L. Toint, 1992; W. Ben-Ameur and E. Gourdin, 2004; A. Bley, 2007], which finds applications in shortest-path routing protocols used in telecommunications. Here we seek a cost vector that is positive, integral, induces a set of given paths as the unique shortest paths, and has minimum l_infty norm. Although this setting has been studied extensively, very few algorithmic results are known for inverse optimization problems involving integrality constraints on the desired cost vector whose norm has to be minimized. Motivated by ISP, we initiate a systematic study of such integral inverse optimization problems from the perspective of designing polynomial time approximation algorithms. For ISP, our main result is an additive 1-approximation algorithm for multicommodity ISP with node-disjoint commodities, which we show is tight assuming P != NP. We then consider the integral-cost inverse versions of various other fundamental combinatorial optimization problems, including min-cost flow, max/min-cost bipartite matching, and max/min-cost basis in a matroid, and obtain tight or nearly tight approximation guarantees for these. Our guarantees for the first two problems are based on results for a broad generalization, namely integral inverse polyhedral optimization, for which we also give approximation guarantees. Our techniques also give similar results for variants, including l_p-norm minimization of the integral cost vector and distance minimization from an initial cost vector.
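
    The inverse requirement above can be written as a linear program once the integrality and uniqueness constraints are dropped: node potentials certify that a prescribed path is a shortest path, and an auxiliary variable bounds the l_infty norm of the cost vector. A minimal sketch on a hypothetical toy instance (the graph, the path, and all variable names are illustrative, not taken from the paper):

        # Sketch: LP relaxation of inverse shortest path with an l_infty objective.
        # Drops the integrality and uniqueness requirements and only enforces that
        # the given path is *a* shortest path on a hypothetical toy graph.
        import numpy as np
        from scipy.optimize import linprog

        nodes = [0, 1, 2, 3]
        edges = [(0, 1), (1, 3), (0, 2), (2, 3), (0, 3)]   # directed edges
        path = [(0, 1), (1, 3)]                            # path to make shortest (s=0, t=3)

        m, n = len(edges), len(nodes)
        # Variable layout: [c_0 .. c_{m-1}, d_0 .. d_{n-1}, M]
        num_vars = m + n + 1
        c_idx = {e: i for i, e in enumerate(edges)}
        d_idx = {v: m + i for i, v in enumerate(nodes)}
        M_idx = m + n

        A_ub, b_ub = [], []
        for e, (u, v) in enumerate(edges):
            # Shortest-path potentials: d[v] - d[u] - c[e] <= 0
            row = np.zeros(num_vars); row[d_idx[v]] = 1; row[d_idx[u]] = -1; row[e] = -1
            A_ub.append(row); b_ub.append(0.0)
            # l_infty bound: c[e] - M <= 0
            row = np.zeros(num_vars); row[e] = 1; row[M_idx] = -1
            A_ub.append(row); b_ub.append(0.0)

        A_eq, b_eq = [], []
        for (u, v) in path:
            # Tightness along the prescribed path: d[v] - d[u] - c[e] = 0
            row = np.zeros(num_vars); row[d_idx[v]] = 1; row[d_idx[u]] = -1; row[c_idx[(u, v)]] = -1
            A_eq.append(row); b_eq.append(0.0)
        # Anchor the source potential: d[0] = 0
        row = np.zeros(num_vars); row[d_idx[0]] = 1
        A_eq.append(row); b_eq.append(0.0)

        obj = np.zeros(num_vars); obj[M_idx] = 1.0          # minimize M = max_e c[e]
        bounds = [(1, None)] * m + [(None, None)] * n + [(0, None)]

        res = linprog(obj, A_ub=np.array(A_ub), b_ub=b_ub,
                      A_eq=np.array(A_eq), b_eq=b_eq, bounds=bounds, method="highs")
        print("l_infty value:", res.x[M_idx], "costs:", res.x[:m])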

    Setting Parameters by Example

    We introduce a class of "inverse parametric optimization" problems, in which one is given both a parametric optimization problem and a desired optimal solution; the task is to determine parameter values that lead to the given solution. We describe algorithms for solving such problems for minimum spanning trees, shortest paths, and other "optimal subgraph" problems, and discuss applications in multicast routing, vehicle path planning, resource allocation, and board game programming. Comment: 13 pages, 3 figures. To be presented at 40th IEEE Symp. Foundations of Computer Science (FOCS '99).
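
    For minimum spanning trees with edge weights that are affine in a single parameter, the cycle property turns the inverse parametric question into an intersection of linear inequalities in that parameter. A small sketch under that assumption (the graph, tree, and coefficients are hypothetical, and this is one simple special case rather than the paper's general algorithm):

        # Sketch: inverse parametric MST. Edge weights are affine in a parameter lam,
        # w_e(lam) = a_e + b_e*lam. We compute the interval of lam for which the
        # desired tree T satisfies the MST cycle property (w_e <= w_f for every tree
        # edge e on the cycle closed by a non-tree edge f).
        from collections import defaultdict

        weights = {  # edge -> (a, b), i.e. weight a + b*lam
            (0, 1): (1.0, 0.0), (1, 2): (2.0, 1.0), (2, 3): (1.0, 0.0),
            (0, 2): (4.0, -1.0), (1, 3): (3.0, 0.5),
        }
        tree = {(0, 1), (1, 2), (2, 3)}            # desired spanning tree T
        nontree = set(weights) - tree

        adj = defaultdict(list)
        for (u, v) in tree:
            adj[u].append(v); adj[v].append(u)

        def tree_path(u, v):
            """Edges of the unique u-v path in T (DFS on the tree)."""
            stack, parent = [u], {u: None}
            while stack:
                x = stack.pop()
                if x == v:
                    break
                for y in adj[x]:
                    if y not in parent:
                        parent[y] = x; stack.append(y)
            path, x = [], v
            while parent[x] is not None:
                path.append(tuple(sorted((x, parent[x])))); x = parent[x]
            return path

        lo, hi = float("-inf"), float("inf")
        for f in nontree:
            af, bf = weights[f]
            for e in tree_path(*f):
                ae, be = weights[e]
                # (a_e + b_e*lam) <= (a_f + b_f*lam)  <=>  (b_e - b_f)*lam <= a_f - a_e
                k, rhs = be - bf, af - ae
                if k > 0:
                    hi = min(hi, rhs / k)
                elif k < 0:
                    lo = max(lo, rhs / k)
                elif rhs < 0:
                    lo, hi = float("inf"), float("-inf")   # infeasible for every lam

        if lo <= hi:
            print("T is a minimum spanning tree for lam in [%g, %g]" % (lo, hi))
        else:
            print("no parameter value makes T a minimum spanning tree")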

    Probabilistic Interpretation of Linear Solvers

    This manuscript proposes a probabilistic framework for algorithms that iteratively solve unconstrained linear problems Bx = b with positive definite B for x. The goal is to replace the point estimates returned by existing methods with a Gaussian posterior belief over the elements of the inverse of B, which can be used to estimate errors. Recent probabilistic interpretations of the secant family of quasi-Newton optimization algorithms are extended. Combined with properties of the conjugate gradient algorithm, this leads to uncertainty-calibrated methods with very limited cost overhead over conjugate gradients, a self-contained novel interpretation of the quasi-Newton and conjugate gradient algorithms, and a foundation for new nonlinear optimization methods. Comment: final version, in press at SIAM J. Optimization.
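
    The point-estimate iteration the framework builds on is plain conjugate gradients. The sketch below shows only that baseline solver on a hypothetical symmetric positive definite system; the Gaussian posterior over the inverse of B described in the abstract is not reproduced here.

        # Minimal conjugate-gradient sketch for B x = b with symmetric positive definite B.
        import numpy as np

        def conjugate_gradient(B, b, tol=1e-10, maxiter=None):
            n = b.size
            maxiter = maxiter or n
            x = np.zeros(n)
            r = b - B @ x          # residual
            p = r.copy()           # search direction
            rs = r @ r
            for _ in range(maxiter):
                Bp = B @ p
                alpha = rs / (p @ Bp)        # exact line search along p
                x += alpha * p
                r -= alpha * Bp
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p    # B-conjugate update of the direction
                rs = rs_new
            return x

        # Hypothetical SPD test system
        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 5))
        B = A @ A.T + 5 * np.eye(5)
        b = rng.standard_normal(5)
        x = conjugate_gradient(B, b)
        print("residual norm:", np.linalg.norm(B @ x - b))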

    Inverse Problems with Poisson noise: Primal and Primal-Dual Splitting

    In this paper, we propose two algorithms for solving linear inverse problems when the observations are corrupted by Poisson noise. A proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms. Piecing together the data fidelity and the prior terms, the solution to the inverse problem is cast as the minimization of a non-smooth convex functional. We establish the well-posedness of the optimization problem, characterize the corresponding minimizers, and solve it by means of primal and primal-dual proximal splitting algorithms originating from the field of non-smooth convex optimization theory. Experimental results on deconvolution and comparison to prior methods are also reported.
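
    Splitting schemes of this kind are assembled from proximal operators. A minimal sketch of two such building blocks, assuming an element-wise fidelity term: the closed-form prox of the Poisson negative log-likelihood u - y*log(u), and the soft-thresholding prox of the l1 sparsity prior. The full primal and primal-dual iterations of the paper are not reproduced here.

        import numpy as np

        def prox_poisson_nll(v, y, tau):
            """prox_{tau*f}(v) with f(u) = u - y*log(u), u > 0:
            the positive root of u^2 + (tau - v)*u - tau*y = 0."""
            return 0.5 * ((v - tau) + np.sqrt((v - tau) ** 2 + 4.0 * tau * y))

        def prox_l1(v, tau):
            """Soft thresholding: prox of tau*||.||_1."""
            return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

        # Quick check on hypothetical data
        y = np.array([3.0, 0.0, 7.0])      # Poisson counts
        v = np.array([2.5, -1.0, 8.0])
        print(prox_poisson_nll(v, y, tau=0.5))   # nonnegative by construction
        print(prox_l1(v, tau=0.5))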

    Linear inverse problems with noise: primal and primal-dual splitting

    In this paper, we propose two algorithms for solving linear inverse problems when the observations are corrupted by noise. A proper data fidelity term (log-likelihood) is introduced to reflect the statistics of the noise (e.g. Gaussian, Poisson). On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms. Piecing together the data fidelity and the prior terms, the solution to the inverse problem is cast as the minimization of a non-smooth convex functional. We establish the well-posedness of the optimization problem, characterize the corresponding minimizers, and solve it by means of primal and primal-dual proximal splitting algorithms originating from the field of non-smooth convex optimization theory. Experimental results on deconvolution, inpainting and denoising with some comparison to prior methods are also reported.
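
    A generic primal-dual proximal splitting of the Chambolle-Pock type handles objectives of the form F(Kx) + G(x). The sketch below instantiates it, purely as an illustration, with a Gaussian (least-squares) fidelity, an l1 prior, and K equal to the identity, so the exact minimizer is known in closed form and the iteration can be checked against it. This is one standard primal-dual splitting, not necessarily the exact scheme of the paper.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def primal_dual(b, lam, K, Kt, norm_K, iters=200):
            """Chambolle-Pock style iteration for min_x 0.5*||Kx - b||^2 + lam*||x||_1."""
            sigma = tau = 0.9 / norm_K            # step sizes with sigma*tau*||K||^2 < 1
            x = np.zeros_like(b); x_bar = x.copy(); y = np.zeros_like(b)
            for _ in range(iters):
                # Dual step: prox of the convex conjugate of F(z) = 0.5*||z - b||^2
                y = (y + sigma * K(x_bar) - sigma * b) / (1.0 + sigma)
                # Primal step: prox of G(x) = lam*||x||_1
                x_new = soft_threshold(x - tau * Kt(y), tau * lam)
                x_bar = 2.0 * x_new - x           # over-relaxation with theta = 1
                x = x_new
            return x

        b = np.array([3.0, -0.2, 0.7, -2.5])
        lam = 0.5
        x = primal_dual(b, lam, K=lambda z: z, Kt=lambda z: z, norm_K=1.0)
        print("primal-dual:", x)
        print("closed form:", soft_threshold(b, lam))   # exact minimizer when K is the identity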

    Second order adjoints for solving PDE-constrained optimization problems

    Inverse problems are of utmost importance in many fields of science and engineering. In the variational approach inverse problems are formulated as PDE-constrained optimization problems, where the optimal estimate of the uncertain parameters is the minimizer of a certain cost functional subject to the constraints posed by the model equations. The numerical solution of such optimization problems requires the computation of derivatives of the model output with respect to model parameters. The first order derivatives of a cost functional (defined on the model output) with respect to a large number of model parameters can be calculated efficiently through first order adjoint sensitivity analysis. Second order adjoint models give second derivative information in the form of matrix-vector products between the Hessian of the cost functional and user-defined vectors. Traditionally, the construction of second order derivatives for large scale models has been considered too costly. Consequently, data assimilation applications employ optimization algorithms that use only first order derivative information, like nonlinear conjugate gradients and quasi-Newton methods. In this paper we discuss the mathematical foundations of second order adjoint sensitivity analysis and show that it provides an efficient approach to obtain Hessian-vector products. We study the benefits of using second order information in the numerical optimization process for data assimilation applications. The numerical studies are performed in a twin experiment setting with a two-dimensional shallow water model. Different scenarios are considered with different discretization approaches, observation sets, and noise levels. Optimization algorithms that employ second order derivatives are tested against widely used methods that require only first order derivatives. Conclusions are drawn regarding the potential benefits and the limitations of using high-order information in large scale data assimilation problems.
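
    The quantity a second order adjoint model delivers is a Hessian-vector product without ever forming the Hessian. The sketch below illustrates that object on a hypothetical small cost function, comparing an exact Hessian-vector product against the directional finite difference of the gradient; it is a toy consistency check, not an adjoint model derived from a PDE.

        # For J(x) = 0.5*||M x - d||^2 + alpha*sum(x^4), compare the exact product H(x) v
        # with (grad J(x + eps*v) - grad J(x - eps*v)) / (2*eps).
        import numpy as np

        def grad_J(x, M, d, alpha):
            return M.T @ (M @ x - d) + 4.0 * alpha * x**3

        def hess_vec(x, v, M, alpha):
            # Exact Hessian-vector product: (M^T M) v + 12*alpha*diag(x^2) v
            return M.T @ (M @ v) + 12.0 * alpha * x**2 * v

        rng = np.random.default_rng(1)
        n = 6
        M = rng.standard_normal((8, n)); d = rng.standard_normal(8); alpha = 0.1
        x = rng.standard_normal(n); v = rng.standard_normal(n)

        eps = 1e-5
        hv_fd = (grad_J(x + eps * v, M, d, alpha) - grad_J(x - eps * v, M, d, alpha)) / (2 * eps)
        hv = hess_vec(x, v, M, alpha)
        print("max difference:", np.max(np.abs(hv - hv_fd)))   # should agree to roughly 1e-8 or better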