
    Global optimization in Hilbert space

    We propose a complete-search algorithm for solving a class of non-convex, possibly infinite-dimensional, optimization problems to global optimality. We assume that the optimization variables are in a bounded subset of a Hilbert space, and we determine worst-case run-time bounds for the algorithm under certain regularity conditions on the cost functional and the constraint set. Because these run-time bounds are independent of the number of optimization variables and, in particular, are valid for optimization problems with infinitely many optimization variables, we prove that the algorithm converges to an $\varepsilon$-suboptimal global solution within finite run-time for any given termination tolerance $\varepsilon > 0$. Finally, we illustrate these results for a problem from the calculus of variations.
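
    The abstract does not spell out the algorithm, but the complete-search idea can be illustrated in one dimension. Below is a minimal branch-and-bound sketch, assuming a Lipschitz-continuous cost on a bounded interval; the test function, its Lipschitz constant, and the tolerance are illustrative choices, not taken from the paper. The loop stops exactly when the incumbent value is within eps of the smallest remaining lower bound, mirroring the finite-run-time $\varepsilon$-suboptimality guarantee.

```python
import heapq
import math

def branch_and_bound(f, lo, hi, lipschitz, eps):
    """Minimize a Lipschitz-continuous f on [lo, hi] to eps-suboptimality.

    Keeps a priority queue of subintervals keyed by a Lipschitz lower
    bound; stops once the best value found is within eps of the smallest
    lower bound, which certifies an eps-suboptimal global solution.
    """
    mid = 0.5 * (lo + hi)
    best_x, best_val = mid, f(mid)
    heap = [(best_val - lipschitz * 0.5 * (hi - lo), lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if best_val - bound <= eps:          # eps-suboptimality certificate
            break
        m = 0.5 * (a + b)
        for a2, b2 in ((a, m), (m, b)):      # bisect and re-bound each half
            m2 = 0.5 * (a2 + b2)
            val = f(m2)
            if val < best_val:
                best_x, best_val = m2, val
            heapq.heappush(heap, (val - lipschitz * 0.5 * (b2 - a2), a2, b2))
    return best_x, best_val

# Illustrative non-convex cost with several local minima on [-3, 3];
# |f'(x)| <= 0.4*3 + 3 = 4.2 gives a valid Lipschitz constant.
f = lambda x: 0.2 * x**2 + math.sin(3 * x)
print(branch_and_bound(f, -3.0, 3.0, lipschitz=4.2, eps=1e-3))
```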

    Mutually Unbiased Bases and Semi-definite Programming

    A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Gröbner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.
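
    The defining property behind these results is easy to state and check numerically: two orthonormal bases of $\mathbb{C}^d$ are mutually unbiased when every squared overlap between their elements equals $1/d$. The following sketch (just the definition, not the paper's semidefinite program) verifies this for the standard and Fourier bases, a standard example of an unbiased pair in dimension six.

```python
import numpy as np

def are_mutually_unbiased(B1, B2, tol=1e-10):
    """Columns of B1, B2 are orthonormal bases of C^d; check that all
    squared overlaps |<b1_i, b2_j>|^2 equal 1/d."""
    d = B1.shape[0]
    overlaps = np.abs(B1.conj().T @ B2) ** 2
    return np.allclose(overlaps, 1.0 / d, atol=tol)

d = 6
standard = np.eye(d, dtype=complex)
fourier = np.array([[np.exp(2j * np.pi * j * k / d) for k in range(d)]
                    for j in range(d)]) / np.sqrt(d)
print(are_mutually_unbiased(standard, fourier))  # True
```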

    An extension of the projected gradient method to a Banach space setting with application in structural topology optimization

    For the minimization of a nonlinear cost functional $j$ under convex constraints, the relaxed projected gradient process $\varphi_{k+1} = \varphi_k + \alpha_k(P_H(\varphi_k - \lambda_k \nabla_H j(\varphi_k)) - \varphi_k)$ is a well-known method. The analysis is classically performed in a Hilbert space $H$. We generalize this method to functionals $j$ which are differentiable in a Banach space, so that it is possible to perform, e.g., an $L^2$ gradient method if $j$ is only differentiable in $L^\infty$. We show global convergence using Armijo backtracking in $\alpha_k$ and allow the inner product and the scaling $\lambda_k$ to change in every iteration. As an application we present a structural topology optimization problem based on a phase field model, where the reduced cost functional $j$ is differentiable in $H^1 \cap L^\infty$. The numerical results using the $H^1$ inner product and a pointwise chosen metric including second-order information show the expected mesh independence in the iteration numbers; the latter yields an additional, drastic decrease in iteration numbers as well as in computation time. Moreover, we present numerical results using a BFGS update of the $H^1$ inner product for further optimization problems based on phase field models.
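
    As a concrete finite-dimensional sketch of the iteration in the abstract, the following implements $\varphi_{k+1} = \varphi_k + \alpha_k(P(\varphi_k - \lambda_k \nabla j(\varphi_k)) - \varphi_k)$ with Armijo backtracking in $\alpha_k$, using the Euclidean inner product and projection onto box constraints. The quadratic test problem and all parameter values are illustrative assumptions, not the paper's topology optimization setting.

```python
import numpy as np

def relaxed_projected_gradient(j, grad, phi0, lower, upper, lam=1.0,
                               beta=0.5, c=1e-4, tol=1e-8, max_iter=500):
    """phi_{k+1} = phi_k + alpha_k * (P(phi_k - lam * grad(phi_k)) - phi_k),
    with alpha_k chosen by Armijo backtracking and P the box projection."""
    project = lambda v: np.clip(v, lower, upper)
    phi = phi0.astype(float).copy()
    for _ in range(max_iter):
        g = grad(phi)
        d = project(phi - lam * g) - phi        # feasible descent direction
        if np.linalg.norm(d) < tol:
            break
        alpha, j0, slope = 1.0, j(phi), g @ d
        while j(phi + alpha * d) > j0 + c * alpha * slope:  # Armijo test
            alpha *= beta
        phi = phi + alpha * d
    return phi

# Illustrative: descend toward a target lying outside the box [0, 1]^3.
target = np.array([1.5, -0.3, 0.7])
j = lambda p: np.sum((p - target) ** 2)
grad = lambda p: 2.0 * (p - target)
print(relaxed_projected_gradient(j, grad, np.zeros(3), 0.0, 1.0))
# -> approximately [1.0, 0.0, 0.7]
```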

    Numerical optimization in Hilbert space using inexact function and gradient evaluations

    Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations there involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as to the finite-dimensional problems normally seen in the trust region literature. The conditions on the allowable error are remarkably relaxed: relative errors in the gradient are permitted, and the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating the gradient error and improving the approximation is also presented.
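
    The setting can be illustrated with a bare-bones trust-region loop. The sketch below uses the simplest possible step (the steepest-descent, or Cauchy, step for a linear model) and the standard ratio test; it is the ratio test that lets such methods tolerate inexact function and gradient values. Everything here, including the test problem, is an illustrative assumption rather than the paper's algorithm.

```python
import numpy as np

def trust_region(f, grad, x0, delta=1.0, delta_max=10.0,
                 eta=0.1, tol=1e-8, max_iter=500):
    """Basic trust-region iteration; f and grad may be inexact."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        g = grad(x)                       # possibly inexact gradient
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        p = -(delta / gnorm) * g          # Cauchy step for a linear model
        pred = delta * gnorm              # predicted decrease of the model
        ared = f(x) - f(x + p)            # actual decrease (possibly inexact)
        rho = ared / pred                 # model/function agreement ratio
        if rho < 0.25:
            delta *= 0.25                 # poor agreement: shrink the region
        elif rho > 0.75:
            delta = min(2.0 * delta, delta_max)
        if rho > eta:                     # accept only on sufficient decrease
            x = x + p
    return x

# Illustrative ill-conditioned quadratic.
f = lambda x: x[0]**2 + 5.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 10.0 * x[1]])
print(trust_region(f, grad, np.array([3.0, 1.0])))  # -> near [0, 0]
```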

    A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization

    We present a general approach for collaborative filtering (CF) using spectral regularization to learn linear operators from "users" to the "objects" they rate. Recent low-rank-type matrix completion approaches to CF are shown to be special cases. Unlike existing regularization-based CF methods, however, our approach can also incorporate side information such as attributes of the users or the objects. We then provide novel representer theorems that we use to develop new estimation methods. We provide learning algorithms based on low-rank decompositions and test them on a standard CF dataset. The experiments indicate the advantages of generalizing the existing regularization-based CF methods to incorporate related information about users and objects. Finally, we show that certain multi-task learning methods can also be seen as special cases of our proposed approach.
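
    Since the paper frames low-rank matrix completion as a special case of its operator-estimation view, a minimal sketch of that special case may help fix ideas: alternating least squares for a rank-k factorization fitted only on observed ratings. The toy matrix, rank, and regularization weight below are illustrative assumptions, not the paper's method or dataset.

```python
import numpy as np

def als_complete(R, mask, rank=2, reg=0.1, iters=50):
    """Low-rank matrix completion by alternating least squares:
    approximate R ~= U @ V.T using only the entries flagged in `mask`."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n_users, rank))
    V = rng.standard_normal((n_items, rank))
    I = reg * np.eye(rank)
    for _ in range(iters):
        for u in range(n_users):           # ridge solve for each user's factors
            obs = mask[u]
            if obs.any():
                Vo = V[obs]
                U[u] = np.linalg.solve(Vo.T @ Vo + I, Vo.T @ R[u, obs])
        for i in range(n_items):           # ridge solve for each item's factors
            obs = mask[:, i]
            if obs.any():
                Uo = U[obs]
                V[i] = np.linalg.solve(Uo.T @ Uo + I, Uo.T @ R[obs, i])
    return U @ V.T

# Toy 4x5 ratings matrix; zeros mark missing entries.
R = np.array([[5, 4, 0, 1, 0],
              [4, 0, 0, 1, 1],
              [1, 1, 0, 4, 5],
              [0, 1, 5, 4, 0]], dtype=float)
mask = R > 0
print(np.round(als_complete(R, mask), 1))
```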