Global optimization in Hilbert space
We propose a complete-search algorithm for solving a class of non-convex, possibly infinite-dimensional, optimization problems to global optimality. We assume that the optimization variables are in a bounded subset of a Hilbert space, and we determine worst-case run-time bounds for the algorithm under certain regularity conditions on the cost functional and the constraint set. Because these run-time bounds are independent of the number of optimization variables and, in particular, are valid for optimization problems with infinitely many optimization variables, we prove that the algorithm converges to an ε-suboptimal global solution within finite run-time for any given termination tolerance ε > 0. Finally, we illustrate these results for a problem from the calculus of variations.
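The paper's setting is infinite-dimensional, but the complete-search idea can be illustrated in one dimension. Below is a minimal sketch (our own illustration, not the paper's algorithm): branch-and-bound on [0, 1] that returns an ε-suboptimal point, assuming a known Lipschitz constant for the cost.

```python
# Minimal sketch (our illustration, not the paper's algorithm): complete search
# for an eps-suboptimal minimizer of a Lipschitz function on [0, 1] by
# branch-and-bound with interval subdivision.

def global_minimize(f, lip, eps, lo=0.0, hi=1.0):
    """Return (x, f(x)) with f(x) <= min f + eps, assuming
    |f(x) - f(y)| <= lip * |x - y| on [lo, hi]."""
    boxes = [(lo, hi)]
    best_x, best_f = lo, f(lo)
    while boxes:
        a, b = boxes.pop()
        mid = 0.5 * (a + b)
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm
        # Lower bound on f over [a, b] from the Lipschitz constant.
        lower = fm - lip * 0.5 * (b - a)
        if lower < best_f - eps:      # box may still contain a better point
            boxes.append((a, mid))
            boxes.append((mid, b))
    return best_x, best_f
```

A box is discarded once its Lipschitz lower bound cannot improve on the incumbent by more than ε, which is what yields the finite-run-time guarantee for any ε > 0.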
Mutually Unbiased Bases and Semi-definite Programming
A complex Hilbert space of dimension six supports at least three but not more
than seven mutually unbiased bases. Two computer-aided analytical methods to
tighten these bounds are reviewed, based on a discretization of parameter space
and on Gröbner bases. A third algorithmic approach is presented: the
non-existence of more than three mutually unbiased bases in composite
dimensions can be decided by a global optimization method known as semidefinite
programming. The method is used to confirm that the spectral matrix cannot be
part of a complete set of seven mutually unbiased bases in dimension six.
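The defining condition is simple to state: two orthonormal bases {e_i}, {f_j} of C^d are mutually unbiased iff |⟨e_i, f_j⟩|² = 1/d for all i, j. A small sketch (our own illustration, not the paper's semidefinite program) checking this for the standard and Fourier bases in dimension six:

```python
# Sketch (our illustration): verify the mutual-unbiasedness condition
# |<e_i, f_j>|^2 = 1/d for the standard basis and the Fourier basis of C^d.
import cmath

def fourier_basis(d):
    """Rows of the d x d discrete Fourier matrix, normalized to unit vectors."""
    w = cmath.exp(2j * cmath.pi / d)
    return [[w ** (j * k) / d ** 0.5 for k in range(d)] for j in range(d)]

def mutually_unbiased(B1, B2, tol=1e-12):
    d = len(B1)
    for u in B1:
        for v in B2:
            inner = sum(a.conjugate() * b for a, b in zip(u, v))
            if abs(abs(inner) ** 2 - 1.0 / d) > tol:
                return False
    return True
```

The standard and Fourier bases pass this check in every dimension; the hard part, which the semidefinite-programming method addresses, is deciding how many bases can be pairwise unbiased simultaneously.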
An extension of the projected gradient method to a Banach space setting with application in structural topology optimization
For the minimization of a nonlinear cost functional under convex
constraints, the relaxed projected gradient process is a well-known method.
The analysis is classically performed in a Hilbert space. We generalize this
method to functionals that are differentiable in a Banach space; thus it is
possible to perform, e.g., a gradient method with respect to a stronger inner
product even if the functional is only differentiable in a weaker space. We
show global convergence using Armijo backtracking and allow the inner product
and the scaling to change in every iteration. As an application we present a
structural topology optimization problem based on a phase-field model, where
the reduced cost functional is differentiable only in a Banach space. The
presented numerical results, using a suitably chosen inner product and a
pointwise chosen metric including second-order information, show the expected
mesh independence of the iteration numbers. The latter yields an additional,
drastic decrease in iteration numbers as well as in computation time.
Moreover, we present numerical results using a BFGS update of the inner
product for further optimization problems based on phase-field models.
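In the finite-dimensional special case, the projected gradient method with Armijo backtracking takes a simple form. Below is a minimal sketch (our own illustration, not the paper's Banach-space method, and with a fixed Euclidean inner product rather than an iteration-dependent one), using the box [0, 1]^n as the convex constraint set:

```python
# Sketch (our illustration): projected gradient method with Armijo
# backtracking for min f(x) s.t. x in C, where C = [0, 1]^n and the
# projection onto C is componentwise clipping.

def project(x):                      # projection onto the box [0, 1]^n
    return [min(1.0, max(0.0, xi)) for xi in x]

def projected_gradient(f, grad, x0, sigma=1e-4, beta=0.5,
                       tol=1e-10, max_iter=500):
    x = project(x0)
    for _ in range(max_iter):
        g = grad(x)
        t = 1.0
        while True:                  # Armijo backtracking on the step length
            y = project([xi - t * gi for xi, gi in zip(x, g)])
            decrease = sum(gi * (xi - yi) for gi, xi, yi in zip(g, x, y))
            if f(y) <= f(x) - sigma * decrease or t < 1e-12:
                break
            t *= beta
        if sum((xi - yi) ** 2 for xi, yi in zip(x, y)) < tol:
            return y                 # projected-gradient step is stationary
        x = y
    return x
```

The paper's contribution is to carry this scheme, including the Armijo line search, into a Banach-space setting and to let the metric defining the gradient change from iteration to iteration.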
Numerical optimization in Hilbert space using inexact function and gradient evaluations
Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite-dimensional problems normally seen in the trust region literature. The conditions on the allowable errors are remarkably relaxed: in particular, the gradient error condition is automatically satisfied whenever the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
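The core mechanism such convergence theories build on is the trust-region ratio test: compare the actual reduction of f against the reduction predicted by a local model, then grow or shrink the region accordingly. A minimal finite-dimensional sketch (our own illustration, not the paper's algorithm), using a Cauchy-style step along the possibly inexact gradient and a linear model:

```python
# Sketch (our illustration): basic trust-region iteration with a ratio test.
# The step moves along -grad(x), truncated to the trust-region radius delta.

def trust_region_minimize(f, grad, x0, delta=1.0, eta=0.1,
                          tol=1e-8, max_iter=500):
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = sum(gi * gi for gi in g) ** 0.5
        if gnorm < tol:
            break
        s = min(delta, gnorm)                    # step length along -g
        trial = [xi - (s / gnorm) * gi for xi, gi in zip(x, g)]
        predicted = s * gnorm                    # linear-model decrease
        rho = (f(x) - f(trial)) / predicted      # actual vs. predicted
        if rho >= eta:                           # accept step, grow region
            x, delta = trial, 2.0 * delta
        else:                                    # reject step, shrink region
            delta *= 0.5
    return x
```

Because acceptance depends only on the ratio rho, the scheme tolerates inexact function and gradient values as long as the model decrease remains a reliable predictor, which is the kind of relaxed error condition the abstract describes.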
A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization
We present a general approach for collaborative filtering (CF) using spectral
regularization to learn linear operators from "users" to the "objects" they
rate. Recent low-rank type matrix completion approaches to CF are shown to be
special cases. However, unlike existing regularization-based CF methods, our
approach can also incorporate side information such as attributes of the
users or the objects. We then provide novel representer theorems that we use
to develop new
estimation methods. We provide learning algorithms based on low-rank
decompositions, and test them on a standard CF dataset. The experiments
indicate the advantages of generalizing the existing regularization based CF
methods to incorporate related information about users and objects. Finally, we
show that certain multi-task learning methods can be also seen as special cases
of our proposed approach.
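The low-rank matrix-completion special case mentioned above can be sketched directly. The following is our own illustration (not the paper's spectral-regularization estimator): fit observed ratings with a rank-k factorization U Vᵀ by stochastic gradient descent, with a Frobenius penalty on the factors as a standard surrogate for trace-norm regularization. All names and constants here are ours.

```python
# Sketch (our illustration): rank-k matrix completion for collaborative
# filtering, fitting observed (user, item, rating) triples with U V^T and
# an L2 penalty on the factors (a common trace-norm surrogate).
import random

def factorize(ratings, n_users, n_items, k=2, lam=0.01, lr=0.05, epochs=1000):
    rng = random.Random(0)                       # fixed seed for repeatability
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for i, j, r in ratings:                  # observed entries only
            pred = sum(U[i][f] * V[j][f] for f in range(k))
            err = r - pred
            for f in range(k):                   # SGD step with L2 penalty
                u, v = U[i][f], V[j][f]
                U[i][f] += lr * (err * v - lam * u)
                V[j][f] += lr * (err * u - lam * v)
    return U, V
```

A prediction for user i on item j is then the inner product of U[i] and V[j]; the paper's operator-estimation view generalizes this by letting user and object attributes enter through the choice of kernels.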