
    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves, or is fully cast as, the solution of an optimization problem. In this light, the non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as the machine learning community, have developed, almost in parallel, interesting alternative methods that may have stayed under the radar of the inverse problem community. In this survey, we aim to change that. We first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and machine learning communities, we hope that this survey can serve as a bridge between the two and encourage cross-fertilization of ideas. (Comment: 13 pages.)
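
    As a minimal illustration of casting inversion as optimization, the sketch below sets up a small linear inverse problem as a regularized least-squares objective and minimizes it by gradient descent. The operator F, data d, and weight alpha are hypothetical placeholders, not taken from the survey.

        import numpy as np

        # A linear inverse problem d = F m + noise, cast as the optimization
        #   min_m  (1/2) ||F m - d||^2 + (alpha/2) ||m||^2,
        # solved by plain gradient descent. All quantities are illustrative.
        rng = np.random.default_rng(0)
        F = rng.standard_normal((50, 20))      # forward operator (assumed linear)
        m_true = rng.standard_normal(20)
        d = F @ m_true + 0.01 * rng.standard_normal(50)

        alpha = 1e-2                           # regularization weight
        step = 1.0 / (np.linalg.norm(F, 2) ** 2 + alpha)  # 1/L for this gradient
        m = np.zeros(20)
        for _ in range(500):
            grad = F.T @ (F @ m - d) + alpha * m
            m -= step * grad

        print("recovery error:", np.linalg.norm(m - m_true))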

    Chebyshev Interpolation Polynomial-based Tools for Rigorous Computing

    Performing numerical computations, yet being able to provide rigorous mathematical statements about the obtained result, is required in many domains such as global optimization, ODE solving, or integration. Taylor models, which associate with a function a pair made of a Taylor approximation polynomial and a rigorous remainder bound, are a widely used rigorous computation tool. This approach benefits from the advantages of numerical methods, but also gives the ability to make reliable statements about the approximated function. Although approximation polynomials based on interpolation at Chebyshev nodes offer a quasi-optimal approximation to a function, together with several other useful features, an analogue of Taylor models based on such polynomials has not yet been well established in the field of validated numerics. This paper presents preliminary work on obtaining such interpolation polynomials together with validated interval bounds for approximating univariate functions. We propose two methods that make this approach practical: one is based on a representation in the Newton basis and the other uses the Chebyshev polynomial basis. We compare the quality of the obtained remainders and the performance of the approaches to those provided by Taylor models. (17 pages.)
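
    As a small, non-rigorous sketch of the interpolation half of this construction (an illustration under stated assumptions, not the paper's validated method), the code below builds a Chebyshev interpolant with NumPy and estimates the remainder by dense sampling; the paper's contribution is replacing such empirical estimates with rigorous interval bounds.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        # Interpolate f at Chebyshev points on [-1, 1] and estimate the
        # remainder empirically. Floating-point sampling gives only an
        # estimate, not the validated enclosure the paper constructs.
        f = np.exp                            # example univariate function
        coef = C.chebinterpolate(f, 10)       # degree-10 interpolant

        xs = np.linspace(-1.0, 1.0, 10_001)
        remainder = np.max(np.abs(f(xs) - C.chebval(xs, coef)))
        print(f"empirical remainder bound: {remainder:.3e}")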

    An application of a linear programing technique to nonlinear minimax problems

    A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programming algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that, for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
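
    The linear minimax subproblem at the heart of the technique can be written as a small LP. The sketch below, with a placeholder linearized system (A, b) standing in for the equations linearized about a nominal solution, solves min_x max_i |(A x - b)_i| via scipy.optimize.linprog.

        import numpy as np
        from scipy.optimize import linprog

        # Linear minimax as an LP:  min t  s.t.  -t <= A x - b <= t.
        # A and b are illustrative; in the differential correction scheme
        # they come from linearizing the nonlinear equations.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((30, 3))
        b = rng.standard_normal(30)
        m, n = A.shape

        c = np.zeros(n + 1)
        c[-1] = 1.0                                 # objective: minimize t
        ones = np.ones((m, 1))
        A_ub = np.block([[A, -ones], [-A, -ones]])  # A x - t <= b, -A x - t <= -b
        b_ub = np.concatenate([b, -b])
        bounds = [(None, None)] * n + [(0, None)]   # x free, t >= 0

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        print("minimax residual:", res.x[-1])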

    Certification of Real Inequalities -- Templates and Sums of Squares

    We consider the problem of certifying lower bounds for real-valued multivariate transcendental functions. The functions we are dealing with are nonlinear and involve semialgebraic operations as well as some transcendental functions such as cos, arctan, exp, etc. Our general framework is to use different approximation methods to relax the original problem into polynomial optimization problems, which we solve by sparse sums of squares relaxations. In particular, we combine the ideas of maxplus estimators (originally introduced in optimal control) and of linear templates (originally introduced in static analysis by abstract interpretation). The nonlinear templates control the complexity of the semialgebraic relaxations at the price of coarsening the maxplus approximations. In this way, we arrive at a new, template-based, certified global optimization method, which exploits both the precision of sums of squares relaxations and the scalability of abstraction methods. We analyze the performance of the method on problems from the global optimization literature, as well as medium-size inequalities arising from the Flyspeck project. (Comment: 27 pages, 3 figures, 4 tables.)
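
    A toy instance of the sums of squares step (illustrative only, using cvxpy as an assumed dependency, and without the templates, sparsity, or maxplus machinery of the paper): certify a global lower bound for a univariate quartic by searching for a positive semidefinite Gram matrix.

        import cvxpy as cp

        # Find the largest lam with p(x) - lam a sum of squares, where
        # p(x) = x^4 - 3 x^2 + 1 (an arbitrary example polynomial).
        # Write p(x) - lam = m(x)^T Q m(x) with m(x) = (1, x, x^2), Q >= 0.
        Q = cp.Variable((3, 3), PSD=True)
        lam = cp.Variable()
        constraints = [
            Q[0, 0] == 1 - lam,             # constant coefficient
            2 * Q[0, 1] == 0,               # x
            2 * Q[0, 2] + Q[1, 1] == -3,    # x^2
            2 * Q[1, 2] == 0,               # x^3
            Q[2, 2] == 1,                   # x^4
        ]
        prob = cp.Problem(cp.Maximize(lam), constraints)
        prob.solve()
        print("certified lower bound:", lam.value)   # about -1.25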

    Convergence rates of general regularization methods for statistical inverse problems and applications

    In the past, the convergence analysis for linear statistical inverse problems has mainly focused on spectral cut-off and Tikhonov-type estimators. Spectral cut-off estimators achieve minimax rates for a broad range of smoothness classes and operators, but their practical usefulness is limited by the fact that they require a complete spectral decomposition of the operator. Tikhonov estimators are simpler to compute, but still involve the inversion of an operator and achieve minimax rates only in restricted smoothness classes. In this paper we introduce a unifying technique to study the mean square error of a large class of regularization methods (spectral methods), including the aforementioned estimators as well as many iterative methods, such as ν-methods and the Landweber iteration. The latter estimators converge at the same rate as spectral cut-off, but require only matrix-vector products. Our results are applied to various problems; in particular we obtain precise convergence rates for satellite gradiometry, L2-boosting, and errors-in-variables problems.
    Keywords: statistical inverse problems, iterative regularization methods, Tikhonov regularization, nonparametric regression, minimax convergence rates, satellite gradiometry, Hilbert scales, boosting, errors-in-variables
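
    A minimal sketch of the Landweber iteration mentioned above (all quantities illustrative): each step needs only matrix-vector products with A and its adjoint, and the number of iterations acts as the regularization parameter, so one stops early on noisy data.

        import numpy as np

        # Landweber iteration x_{k+1} = x_k + w A^T (y - A x_k), 0 < w < 2/||A||^2.
        rng = np.random.default_rng(2)
        A = rng.standard_normal((40, 40)) / 40
        x_true = rng.standard_normal(40)
        y = A @ x_true + 1e-3 * rng.standard_normal(40)

        w = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(40)
        for _ in range(200):                 # iteration count = regularization
            x += w * (A.T @ (y - A @ x))

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))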