
    Inverse Optimization with Noisy Data

    Inverse optimization refers to the inference of unknown parameters of an optimization problem based on knowledge of its optimal solutions. This paper considers inverse optimization in the setting where measurements of the optimal solutions of a convex optimization problem are corrupted by noise. We first provide a formulation for inverse optimization and prove it to be NP-hard. In contrast to existing methods, we show that the parameter estimates produced by our formulation are statistically consistent. Our approach involves combining a new duality-based reformulation for bilevel programs with a regularization scheme that smooths discontinuities in the formulation. Using epi-convergence theory, we show that the regularization parameter can be adjusted to approximate the original inverse optimization problem to arbitrary accuracy, which we use to prove our consistency results. Next, we propose two solution algorithms based on our duality-based formulation. The first is an enumeration algorithm that is applicable to settings where the dimensionality of the parameter space is modest, and the second is a semiparametric approach that combines nonparametric statistics with a modified version of our formulation. These numerical algorithms are shown to maintain the statistical consistency of the underlying formulation. Lastly, using both synthetic and real data, we demonstrate that our approach performs competitively when compared with existing heuristics.
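    To make the setting concrete (this is a toy illustration, not the paper's duality-based formulation), the sketch below observes noisy minimizers of an unconstrained quadratic program and recovers the unknown linear cost term by averaging; the matrix Q and the recovery rule are assumptions made for the example.

```python
import numpy as np

# Toy inverse optimization: x*(c) = argmin_x 0.5 x^T Q x - c^T x, so x* = Q^{-1} c.
# We observe noisy copies of x* and estimate c; the noise averages out, so the
# estimate becomes accurate as the number of observations grows.
rng = np.random.default_rng(0)
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])                  # known positive-definite matrix (assumed)
c_true = np.array([1.0, -0.5])              # unknown parameter to be inferred

x_star = np.linalg.solve(Q, c_true)         # noiseless optimal solution
observations = x_star + 0.05 * rng.standard_normal((500, 2))

c_hat = Q @ observations.mean(axis=0)       # simple consistent estimate for this toy case
print("true c:", c_true)
print("estimated c:", c_hat)
```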

    Solving an inverse elliptic coefficient problem by convex non-linear semidefinite programming

    Several applications in medical imaging and non-destructive material testing lead to inverse elliptic coefficient problems, where an unknown coefficient function in an elliptic PDE is to be determined from partial knowledge of its solutions. This is usually a highly non-linear, ill-posed inverse problem for which unique reconstructability results, stability estimates and global convergence of numerical methods are very hard to achieve. The aim of this note is to point out a new connection between inverse coefficient problems and semidefinite programming that may help address these challenges. We show that an inverse elliptic Robin transmission problem with finitely many measurements can be equivalently rewritten as a uniquely solvable convex non-linear semidefinite optimization problem. This allows us to explicitly estimate the number of measurements required to achieve a desired resolution, to derive an error estimate for noisy data, and to overcome the problem of local minima that usually appears in optimization-based approaches for inverse coefficient problems.
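    As a generic reminder of what a convex semidefinite program looks like in code (a toy eigenvalue problem, not the paper's reformulation of the Robin transmission problem), here is a minimal sketch using cvxpy; the data matrix M is an illustrative assumption.

```python
import cvxpy as cp
import numpy as np

# Smallest t such that t*I - M is positive semidefinite, i.e. the largest
# eigenvalue of M, written as a convex semidefinite program.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
M = 0.5 * (M + M.T)                         # symmetric data matrix (assumed)

t = cp.Variable()
constraints = [t * np.eye(4) - M >> 0]      # linear matrix inequality constraint
problem = cp.Problem(cp.Minimize(t), constraints)
problem.solve()

print("SDP optimal value:", problem.value)
print("largest eigenvalue:", np.linalg.eigvalsh(M).max())   # should agree
```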

    Multifrequency mismatch functions for nonlinear parametric identifications

    This investigation is concerned with the ill-posed nature of (non-linear) inverse problems in the frequency domain, for which the unknown parameters are determined, in an iterative manner, by seeking the minima of a mismatch function quantifying the discrepancy between given data and the output of a numerical model. One might think that the identification can be done solely by searching for the minima of the mismatch function, but this problem is mathematically ill-posed, and it is illusory to try to identify a satisfactory solution until the problems of stability (noisy data) and non-uniqueness (local minima) are resolved. This paper shows that a mismatch function obtained by summing single-frequency mismatch functions corresponding to several different frequencies turns the inverse problem into a global optimization problem and overcomes the problem of the sensitivity of solutions to noisy data, without requiring a priori information on the probed medium.
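    A minimal sketch of this idea under illustrative assumptions (a scalar damped-oscillator frequency response standing in for the numerical model): the multifrequency mismatch is simply the sum of single-frequency mismatch terms, minimized over the unknown parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed forward model: amplitude response of a unit-mass damped oscillator at
# frequency w, with unknown stiffness k and damping c.
def response(theta, w):
    k, c = theta
    return 1.0 / np.sqrt((k - w**2) ** 2 + (c * w) ** 2)

frequencies = np.linspace(0.5, 3.0, 12)     # several probing frequencies
theta_true = np.array([4.0, 0.3])
rng = np.random.default_rng(0)
data = response(theta_true, frequencies) + 0.01 * rng.standard_normal(frequencies.size)

def multifrequency_mismatch(theta):
    # Sum of single-frequency mismatch functions over all probed frequencies.
    return float(np.sum((response(theta, frequencies) - data) ** 2))

result = minimize(multifrequency_mismatch, x0=np.array([3.0, 0.5]), method="Nelder-Mead")
print("estimated (k, c):", result.x)
```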

    Conditioning of extreme learning machine for noisy data using heuristic optimization

    This article provides a tool that can be used in the exact sciences to obtain good approximations to reality when noisy data are unavoidable. Two heuristic optimization algorithms, Simulated Annealing and Particle Swarm Optimization, are implemented to determine the output weights of an extreme learning machine. The first operates in a large search space and, at each iteration, probabilistically decides between staying in its current state or moving to another. The second optimizes the problem with a population of candidate solutions (particles), moving them through the search space according to their positions and velocities. The methodology consists of building data sets around a polynomial function, implementing the heuristic algorithms, and comparing the errors with the traditional computation method based on the Moore–Penrose inverse. The results show that the implemented heuristic optimization algorithms improve the estimation of the output weights when the input data are highly noisy.
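    For context, here is a minimal extreme learning machine in NumPy with the output weights computed by the Moore–Penrose pseudoinverse, the traditional baseline mentioned above; the network size, activation, and polynomial target are illustrative assumptions, and the article's approach replaces this final step with Simulated Annealing or Particle Swarm Optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples around a polynomial target (assumed illustrative data set).
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = 2.0 * X**3 - X + 0.1 * rng.standard_normal(X.shape)

# Extreme learning machine: random hidden layer, fixed after initialization.
n_hidden = 30
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                      # hidden-layer output matrix

# Output weights via the Moore-Penrose pseudoinverse; the article instead tunes
# these weights with heuristic optimization when the data are very noisy.
beta = np.linalg.pinv(H) @ y
print("training MSE:", float(np.mean((H @ beta - y) ** 2)))
```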

    Biochemical systems identification by a random drift particle swarm optimization approach

    Get PDF
    BACKGROUND: Finding an efficient method to solve the parameter estimation problem (inverse problem) for nonlinear biochemical dynamical systems could help promote functional understanding of signalling pathways at the system level. The problem is stated as a data-driven nonlinear regression problem, which is converted into a nonlinear programming problem with many nonlinear differential and algebraic constraints. Due to the typically ill-conditioned and multimodal nature of the problem, it is in general difficult for gradient-based local optimization methods to obtain satisfactory solutions. To surmount this limitation, many stochastic optimization methods have been employed to find the global solution of the problem. RESULTS: This paper presents an effective search strategy for a particle swarm optimization (PSO) algorithm that enhances the ability of the algorithm to estimate the parameters of complex dynamic biochemical pathways. The proposed algorithm is a new variant of random drift particle swarm optimization (RDPSO), which is used to solve the above-mentioned inverse problem and is compared with other well-known stochastic optimization methods. Two case studies on estimating the parameters of two nonlinear biochemical dynamic models are taken as benchmarks, under both noise-free and noisy simulation data scenarios. CONCLUSIONS: The experimental results show that the novel RDPSO variant is able to successfully solve the problem and obtain solutions of better quality than the other global optimization methods used for finding the solution to the inverse problems in this study.
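    To make the regression setup concrete, here is a minimal sketch under illustrative assumptions: a one-state Michaelis-Menten decay model stands in for the biochemical pathway, SciPy's differential evolution is used as a stand-in for the RDPSO variant, and the sum of squared residuals against noisy simulated data is the objective.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

# Assumed toy kinetics: dS/dt = -Vmax * S / (Km + S)  (Michaelis-Menten decay).
def simulate(params, t_eval):
    vmax, km = params
    sol = solve_ivp(lambda t, s: -vmax * s / (km + s),
                    (t_eval[0], t_eval[-1]), [10.0], t_eval=t_eval)
    return sol.y[0]

t = np.linspace(0.0, 10.0, 25)
true_params = (1.5, 2.0)
rng = np.random.default_rng(0)
data = simulate(true_params, t) + 0.05 * rng.standard_normal(t.size)   # noisy measurements

def sse(params):
    # Data-driven nonlinear regression objective: sum of squared residuals.
    return float(np.sum((simulate(params, t) - data) ** 2))

# Stochastic global search (a stand-in for the paper's RDPSO algorithm).
result = differential_evolution(sse, bounds=[(0.1, 5.0), (0.1, 5.0)], seed=0)
print("estimated (Vmax, Km):", result.x)
```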