
    Inverse polynomial optimization

    We consider the inverse optimization problem associated with the polynomial program $f^*=\min \{f(x): x\in K\}$ and a given current feasible solution $y\in K$. We provide a systematic numerical scheme to compute an inverse optimal solution. That is, we compute a polynomial $\tilde{f}$ (which may be of the same degree as $f$ if desired) with the following properties: (a) $y$ is a global minimizer of $\tilde{f}$ on $K$ with a Putinar's certificate with an a priori degree bound $d$ fixed, and (b) $\tilde{f}$ minimizes $\Vert f-\tilde{f}\Vert$ (which can be the $\ell_1$-, $\ell_2$- or $\ell_\infty$-norm of the coefficients) over all polynomials with such properties. Computing $\tilde{f}_d$ reduces to solving a semidefinite program whose optimal value also provides a bound on how far $f(y)$ is from the unknown optimal value $f^*$. The size of the semidefinite program can be adapted to the computational capabilities available. Moreover, if one uses the $\ell_1$-norm, then $\tilde{f}$ takes a simple and explicit canonical form. Some variations are also discussed. Comment: 25 pages; to appear in Math. Oper. Res.; Rapport LAAS no. 1114
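    For orientation, the problem described above has, in an assumed schematic form (the exact constraint set and notation are not taken from the abstract), the shape

$$
\tilde{f}_d \;\in\; \arg\min_{\tilde{f}} \Bigl\{ \Vert f-\tilde{f}\Vert \;:\; \tilde{f}-\tilde{f}(y) \;=\; \sigma_0 + \sum_{j=1}^{m} \sigma_j\, g_j, \;\; \sigma_j \text{ SOS}, \;\; \deg(\sigma_j g_j) \le d \Bigr\},
$$

    where $K=\{x : g_j(x)\ge 0,\ j=1,\dots,m\}$ and the $\sigma_j$ are sum-of-squares polynomials. The equality constraint is a Putinar-type certificate that $y$ globally minimizes $\tilde{f}$ on $K$; fixing the degree bound $d$ makes the admissible coefficient vectors of $\tilde{f}$ and the $\sigma_j$ a spectrahedral set, which is why minimizing $\Vert f-\tilde{f}\Vert$ over them is a semidefinite program.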

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as a solution of an optimization problem. In this light, the mere non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and the machine learning communities, we hope that this survey can serve as a bridge in bringing together these two communities and encourage cross-fertilization of ideas. Comment: 13 pages

    Inverse Optimization with Noisy Data

    Inverse optimization refers to the inference of unknown parameters of an optimization problem based on knowledge of its optimal solutions. This paper considers inverse optimization in the setting where measurements of the optimal solutions of a convex optimization problem are corrupted by noise. We first provide a formulation for inverse optimization and prove it to be NP-hard. In contrast to existing methods, we show that the parameter estimates produced by our formulation are statistically consistent. Our approach involves combining a new duality-based reformulation for bilevel programs with a regularization scheme that smooths discontinuities in the formulation. Using epi-convergence theory, we show the regularization parameter can be adjusted to approximate the original inverse optimization problem to arbitrary accuracy, which we use to prove our consistency results. Next, we propose two solution algorithms based on our duality-based formulation. The first is an enumeration algorithm that is applicable to settings where the dimensionality of the parameter space is modest, and the second is a semiparametric approach that combines nonparametric statistics with a modified version of our formulation. These numerical algorithms are shown to maintain the statistical consistency of the underlying formulation. Lastly, using both synthetic and real data, we demonstrate that our approach performs competitively when compared with existing heuristics.
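    As a rough illustration of the enumeration idea for a low-dimensional parameter space (synthetic data; the candidate grid, the squared-error score, and the forward LP are assumptions, not the paper's algorithm), one can score each candidate cost vector by how well the forward problem's solution reproduces the noisy observations:

```python
# Hedged sketch: enumerate candidate cost vectors for a forward LP
#   min_x  c'x   s.t.  A x >= b,  x >= 0
# and keep the candidate whose optimal solution best matches noisy observations.
# All data below are synthetic placeholders, not from the paper.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 1.0]])   # assumed feasible region: A x >= b, x >= 0
b = np.array([4.0, 6.0])
observations = np.array([[1.6, 1.3], [1.7, 1.1], [1.5, 1.4]])  # noisy optimal solutions

def forward_solution(c):
    """Solve the forward LP for a candidate cost vector c."""
    # linprog expects A_ub x <= b_ub, so flip the sign of A x >= b.
    res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * len(c))
    return res.x

# Enumerate candidates on a coarse grid of normalized cost directions.
angles = np.linspace(0.05, np.pi / 2 - 0.05, 50)
candidates = np.column_stack([np.cos(angles), np.sin(angles)])

def score(c):
    """Sum of squared distances between the observations and the predicted solution."""
    x_hat = forward_solution(c)
    return float(np.sum((observations - x_hat) ** 2))

best = min(candidates, key=score)
print("imputed cost direction:", best)
```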

    Data-driven Inverse Optimization with Imperfect Information

    In data-driven inverse optimization an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent's objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent's true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the predicted decision (i.e., the decision implied by a particular candidate objective) differs from the agent's actual response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used. We show through extensive numerical tests that the proposed distributionally robust approach to inverse optimization often attains better out-of-sample performance than the state-of-the-art approaches.
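    To make the suboptimality loss concrete, here is a minimal sketch of its plain empirical-risk version for a linear forward problem (synthetic data and the LP-duality folding are assumptions; the paper's distributionally robust formulation is not reproduced here):

```python
# Hedged sketch of a suboptimality-loss estimate for a forward LP
#   min_x  c'x   s.t.  A x >= b.
# The loss of an observed response y under a candidate c is c'y - min_x{c'x : A x >= b};
# LP duality (min_x c'x = max{b'lam : A'lam = c, lam >= 0}) lets everything fold into one LP.
# All data are synthetic placeholders, not from the paper.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, 0.0], [0.0, 1.0]])  # assumed: A x >= b
b = np.array([4.0, 6.0, 0.0, 0.0])
Y = np.array([[1.7, 1.2], [1.5, 1.3], [1.6, 1.1]])               # observed responses

n, m, N = A.shape[1], A.shape[0], Y.shape[0]
c = cp.Variable(n, nonneg=True)                 # candidate cost vector
lam = cp.Variable((N, m), nonneg=True)          # one dual vector per observation

constraints = [cp.sum(c) == 1]                  # normalization to rule out c = 0
losses = []
for i in range(N):
    constraints.append(A.T @ lam[i] == c)       # dual feasibility ties lam[i] to c
    losses.append(Y[i] @ c - b @ lam[i])        # upper bound on the suboptimality of Y[i]

prob = cp.Problem(cp.Minimize(cp.sum(cp.hstack(losses))), constraints)
prob.solve()
print("imputed cost vector:", c.value)
```

    Minimizing over the dual variables makes each term equal to the true suboptimality of the corresponding observation, so the program returns the cost vector under which the observed responses are, on average, closest to optimal.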

    Optimizing the Post Sandvik Nanoflex material model using inverse optimization and the finite element method

    This article describes an inverse optimization method for the Sandvik Nanoflex steel in cold forming processes. The optimization revolves around measured samples and calculations using the Finite Element Method. Sandvik Nanoflex is part of the group of meta-stable stainless steels. These materials are characterized by good corrosion resistance, high strength, good formability and crack resistance. In addition, Sandvik Nanoflex has a strain-induced transformation and, depending on austenitising conditions and chemical composition, a stress-assisted transformation can occur. The martensite phase of this material shows a substantial aging response. Inverse optimization is a sub-category of optimization techniques. As the name implies, the inverse optimization method uses a top-down approach. The starting point is a prototype state towards which the current state is to converge. In our experiment the test specimen is used as the prototype and a calculation result as the current state. The calculation is then adapted so that the result converges towards the test example. An iterative numerical optimization algorithm controls the adaptation. For the inverse optimization method two parameters are defined: the shape of the product and the martensite profile. These parameters are extracted from both the calculation and the test specimen, using Fourier analysis and integrals. An optimization parameter is then formulated from the extracted parameters. The method uses this optimization parameter to increase the accuracy of “The Post” material model for Sandvik Nanoflex [1]. The article describes a method to optimize material models using a combination of practical experiments, the Finite Element Method and parameter extraction.
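    As an illustration of the parameter-extraction step (the profile data, the number of Fourier coefficients kept, and the simulator call are all assumptions, not taken from the article), a mismatch objective between a measured and a simulated profile could look like this:

```python
# Hedged sketch: compare measured and simulated profiles through a few
# low-order Fourier coefficients and a summed (integral-like) term, and combine
# the differences into a single scalar objective for an iterative optimizer.
# run_fe_simulation() is a hypothetical stand-in for the Finite Element calculation.
import numpy as np

def descriptors(profile, n_coeffs=5):
    """Low-order Fourier magnitudes plus the sum of the profile (integral stand-in)."""
    spectrum = np.abs(np.fft.rfft(profile))[:n_coeffs]
    return np.concatenate([spectrum, [np.sum(profile)]])

def mismatch(material_params, measured_profile, run_fe_simulation):
    """Scalar objective: squared distance between measured and simulated descriptors."""
    simulated_profile = run_fe_simulation(material_params)   # hypothetical FE call
    d_meas = descriptors(measured_profile)
    d_sim = descriptors(simulated_profile)
    return float(np.sum((d_meas - d_sim) ** 2))

# Tiny usage example with a mock simulator; an iterative optimizer
# (e.g. scipy.optimize.minimize) would adjust material_params to reduce this value.
measured = np.sin(np.linspace(0, np.pi, 64))
mock_sim = lambda p: p[0] * np.sin(np.linspace(0, np.pi, 64))
print(mismatch(np.array([0.9]), measured, mock_sim))
```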

    Inverse Optimization of Convex Risk Functions

    The theory of convex risk functions has now been well established as the basis for identifying the families of risk functions that should be used in risk-averse optimization problems. Despite its theoretical appeal, the implementation of a convex risk function remains difficult, as there is little guidance regarding how a convex risk function should be chosen so that it also well represents one's own risk preferences. In this paper, we address this issue through the lens of inverse optimization. Specifically, given solution data from some (forward) risk-averse optimization problems, we develop an inverse optimization framework that generates a risk function that renders the solutions optimal for the forward problems. The framework incorporates the well-known properties of convex risk functions, namely, monotonicity, convexity, translation invariance, and law invariance, as the general information about candidate risk functions, and also the feedback from individuals, which includes an initial estimate of the risk function and pairwise comparisons among random losses, as the more specific information. Our framework is particularly novel in that, unlike classical inverse optimization, no parametric assumption is made about the risk function, i.e., it is non-parametric. We show how the resulting inverse optimization problems can be reformulated as convex programs and are polynomially solvable if the corresponding forward problems are polynomially solvable. We illustrate the imputed risk functions in a portfolio selection problem and demonstrate their practical value using real-life data.

    Inverse Optimization: Closed-form Solutions, Geometry and Goodness of fit

    In classical inverse linear optimization, one assumes a given solution is a candidate to be optimal. Real data is imperfect and noisy, so there is no guarantee this assumption is satisfied. Inspired by regression, this paper presents a unified framework for cost function estimation in linear optimization comprising a general inverse optimization model and a corresponding goodness-of-fit metric. Although our inverse optimization model is nonconvex, we derive a closed-form solution and present the geometric intuition. Our goodness-of-fit metric, $\rho$, the coefficient of complementarity, has similar properties to $R^2$ from regression and is quasiconvex in the input data, leading to an intuitive geometric interpretation. While $\rho$ is computable in polynomial time, we derive a lower bound that possesses the same properties, is tight for several important model variations, and is even easier to compute. We demonstrate the application of our framework for model estimation and evaluation in production planning and cancer therapy.
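    A rough numerical sketch of the geometric intuition (an assumed simplification, not the paper's closed form): for a feasible region $\{x : Ax \ge b\}$ and an observed decision $x^0$, one can impute a cost direction from the constraint whose hyperplane has the smallest normalized slack at $x^0$, since making that constraint's normal the objective renders any point on that hyperplane optimal.

```python
# Hedged sketch of the nearest-constraint intuition for inverse LP.
# Given A x >= b and an observed (possibly suboptimal) point x0, pick the
# constraint with the smallest normalized slack and use its normal as the
# imputed cost direction. Synthetic data; a simplification, not the paper's result.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0])
x0 = np.array([1.8, 1.3])                          # observed decision

slack = (A @ x0 - b) / np.linalg.norm(A, axis=1)   # distance of x0 to each hyperplane
i_star = int(np.argmin(slack))
c_imputed = A[i_star] / np.linalg.norm(A[i_star])  # normal of the nearest constraint
print("nearest constraint:", i_star, "imputed cost direction:", c_imputed)
```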