A Greedy Algorithm for Subspace Approximation Problem
In the subspace approximation problem, given m points in R^{n} and an integer k <= n, the goal is to find a k-dimensional subspace of R^{n} that minimizes the l_{p}-norm of the Euclidean distances of the given points to the subspace. This problem generalizes several well-studied problems and has applications in statistics, machine learning, signal processing, and biology. Deshpande et al. [Deshpande et al., 2011] gave a randomized O(sqrt{p})-approximation, and this bound was proved to be tight, assuming P != NP, by Guruswami et al. [Guruswami et al., 2016]. Determining the performance guarantee achievable by deterministic algorithms has remained an intriguing question. In this paper, we present a simple deterministic O(sqrt{p})-approximation algorithm with an equally simple analysis. This settles the approximability of the problem up to a constant factor. Moreover, the simplicity of the algorithm makes it practically appealing.
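The paper's own deterministic algorithm is not reproduced here. As a hedged illustration of the greedy principle only, the sketch below builds a k-dimensional subspace one direction at a time by taking the top singular vector of the deflated point matrix; for p = 2 this recovers the PCA subspace, and it is not claimed to match the paper's O(sqrt{p}) guarantee for general p. The function names are hypothetical.

```python
import numpy as np

def greedy_subspace(A, k):
    """Greedily build a k-dimensional subspace: repeatedly take the top
    right singular vector of the residual (deflated) point matrix.
    Illustrative sketch only; for p = 2 this coincides with PCA."""
    R = A.astype(float).copy()
    basis = []
    for _ in range(k):
        # top right singular vector of the current residual
        _, _, Vt = np.linalg.svd(R, full_matrices=False)
        v = Vt[0]
        basis.append(v)
        # deflate: remove the component along v from every point
        R = R - np.outer(R @ v, v)
    return np.array(basis)  # rows form an orthonormal basis

def lp_cost(A, V, p=2):
    """l_p-norm of the Euclidean distances from the rows of A to span(V),
    where the rows of V are an orthonormal basis of the subspace."""
    proj = A @ V.T @ V
    d = np.linalg.norm(A - proj, axis=1)
    return np.sum(d ** p) ** (1.0 / p)
```

Because each deflation leaves the residual orthogonal to the directions already chosen, the returned basis is orthonormal, and on a point set of rank k the cost drops to zero.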
Stochastic subspace correction in Hilbert space
We consider an incremental approximation method for solving variational
problems in infinite-dimensional Hilbert spaces, where in each step a randomly
and independently selected subproblem from an infinite collection of
subproblems is solved. We show that convergence rates for the expectation of
the squared error can be guaranteed under weaker conditions than previously
established in [Constr. Approx. 44:1 (2016), 121-139]. A connection to the
theory of learning algorithms in reproducing kernel Hilbert spaces is revealed.
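A toy finite-dimensional analogue may help fix the scheme: at each step a subspace is drawn at random and the current error is corrected by its orthogonal projection onto that subspace. This is only a caricature under strong simplifying assumptions (finite dimension, a finite subspace collection, exact subproblem solves); the paper's setting is an infinite-dimensional Hilbert space with infinitely many subproblems.

```python
import numpy as np

def stochastic_correction(f, subspaces, steps, seed=0):
    """Toy stochastic subspace correction in R^n: each step projects the
    current error f - u onto a randomly chosen subspace and adds that
    correction (i.e., solves the subproblem exactly). `subspaces` is a
    list of matrices whose columns span the respective subspaces."""
    rng = np.random.default_rng(seed)
    u = np.zeros_like(f)
    for _ in range(steps):
        V = subspaces[rng.integers(len(subspaces))]
        P = V @ np.linalg.pinv(V)      # orthogonal projector onto span(V)
        u = u + P @ (f - u)            # exact solve of the random subproblem
    return u
```

With subspaces that together span the whole space, the expected squared error contracts; in the toy case of two complementary coordinate blocks, the iterate is exact once each block has been drawn at least once.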
Approximation of Eigenfunctions in Kernel-based Spaces
Kernel-based methods in Numerical Analysis have the advantage of yielding
optimal recovery processes in the "native" Hilbert space \calh in which they
are reproducing. Continuous kernels on compact domains have an expansion into
eigenfunctions that are both L_2-orthonormal and orthogonal in \calh
(Mercer expansion). This paper examines the corresponding eigenspaces and
proves that they have optimality properties among all other subspaces of
\calh. These results have strong connections to n-widths in Approximation
Theory, and they establish that errors of optimal approximations are closely
related to the decay of the eigenvalues.
Though the eigenspaces and eigenvalues are not readily available, they can be
well approximated using the standard n-dimensional subspaces spanned by
translates of the kernel with respect to nodes or centers. We give error
bounds for the numerical approximation of the eigensystem via such subspaces. A
series of examples shows that our numerical technique, based on a greedy point
selection strategy, calculates the eigensystems with good accuracy.
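The discretization step can be sketched in a few lines: the Mercer eigensystem is approximated from the subspace of kernel translates at n nodes by eigendecomposing the scaled kernel matrix (a Nystroem-type approximation). This is only a minimal sketch with uniform quadrature weights standing in for the L_2 inner product; the paper's error bounds and greedy point selection are not reproduced, and the function name is hypothetical.

```python
import numpy as np

def nystrom_eigs(kernel, nodes):
    """Approximate the Mercer eigenvalues of a continuous symmetric kernel
    from the n-dimensional subspace spanned by kernel translates at the
    given nodes: eigendecompose the kernel matrix scaled by the uniform
    quadrature weight 1/n. Returns eigenvalues in decreasing order."""
    X = np.asarray(nodes)
    n = len(X)
    K = np.array([[kernel(x, y) for y in X] for x in X])
    vals = np.linalg.eigvalsh(K / n)   # symmetric matrix: real spectrum
    return np.sort(vals)[::-1]
```

As a sanity check, for the Brownian-motion kernel min(x, y) on [0, 1] the exact Mercer eigenvalues are 1/((k - 1/2) pi)^2, so the top approximate eigenvalue should be close to 4/pi^2.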
Low-rank approximate inverse for preconditioning tensor-structured linear systems
In this paper, we propose an algorithm for the construction of low-rank
approximations of the inverse of an operator given in low-rank tensor format.
The construction relies on an updated greedy algorithm for the minimization of
a suitable distance to the inverse operator. It provides a sequence of
approximations that are defined as the projections of the inverse operator in
an increasing sequence of linear subspaces of operators. These subspaces are
obtained by the tensorization of bases of operators that are constructed from
successive rank-one corrections. In order to handle high-order tensors,
approximate projections are computed in low-rank Hierarchical Tucker subsets of
the successive subspaces of operators. Some desired properties such as symmetry
or sparsity can be imposed on the approximate inverse operator during the
correction step, where an optimal rank-one correction is searched as the tensor
product of operators with the desired properties. Numerical examples illustrate
the ability of this algorithm to provide efficient preconditioners for linear
systems in tensor format that improve the convergence of iterative solvers and
also the quality of the resulting low-rank approximations of the solution.
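A dense-matrix caricature of the greedy construction may clarify the idea: the approximate inverse M of A is built as a sum of rank-one corrections x y^T, each obtained by alternating least squares on the Frobenius distance ||I - A M||_F. This sketch ignores everything tensor-specific (low-rank operator formats, Hierarchical Tucker projections, imposed symmetry or sparsity) and even forms A^T A explicitly, which is affordable only in this toy dense setting; the function name is hypothetical.

```python
import numpy as np

def greedy_rank_one_inverse(A, n_terms, als_iters=50, seed=0):
    """Build M ~ A^{-1} as a sum of rank-one corrections x y^T. Each
    correction minimizes ||R - A x y^T||_F for the current residual
    R = I - A M, by alternating least squares in x and y."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    M = np.zeros_like(A, dtype=float)
    AtA = A.T @ A
    for _ in range(n_terms):
        R = np.eye(n) - A @ M                  # current residual
        x = rng.standard_normal(n)
        y = rng.standard_normal(n)
        for _ in range(als_iters):
            # fix y: normal equations for x in ||R - A x y^T||_F
            x = np.linalg.solve(AtA, A.T @ R @ y) / (y @ y)
            # fix x: closed-form minimizer for y
            Ax = A @ x
            y = R.T @ Ax / (Ax @ Ax)
        M = M + np.outer(x, y)                 # greedy rank-one update
    return M
```

Since A is invertible, each ALS solve finds the best rank-one approximation of the residual, so on an n-by-n matrix the residual norm drops at every step and n corrections suffice to recover the inverse up to numerical error.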