Image Restoration using Total Variation with Overlapping Group Sparsity
Image restoration is one of the most fundamental issues in imaging science.
Total variation (TV) regularization is widely used in image restoration
problems for its capability to preserve edges. In the literature, however, it
is also well known for producing staircase-like artifacts. Usually, the
high-order total variation (HTV) regularizer is a good alternative, despite its
over-smoothing tendency. In this work, we study a minimization problem whose
objective combines the usual data-fidelity term with an overlapping group
sparsity total variation regularizer, which avoids the staircase effect and
preserves edges in the restored image. We also propose a fast algorithm
for solving the corresponding minimization problem and compare our method with
state-of-the-art TV-based and HTV-based methods. The numerical
experiments illustrate the efficiency and effectiveness of the proposed method
in terms of PSNR, relative error, and computing time. Comment: 11 pages, 37 figures
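To make the model concrete, a schematic form of the kind of objective described above is shown below; the operator and group notation are illustrative assumptions, not taken from the abstract:

\[
\min_{u} \; \tfrac{1}{2}\|Ku - f\|_2^2 \;+\; \lambda\,\varphi(\nabla u),
\qquad
\varphi(v) \;=\; \sum_{i} \|v_{g_i}\|_2 ,
\]

where K is a blur (or identity) operator, f is the observed image, and each g_i is an overlapping group of neighboring gradient entries; shrinking every group to a single entry recovers standard anisotropic TV.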
A fast algorithm for globally solving Tikhonov regularized total least squares problem
The total least squares problem with the general Tikhonov regularization can
be reformulated as a one-dimensional parametric minimization problem (PM),
where each parameterized function evaluation corresponds to solving an
n-dimensional trust region subproblem. Under a mild assumption, the parametric
function is differentiable and then an efficient bisection method has been
proposed in the literature for solving (PM). In the first part of this paper, we
show that the bisection algorithm can be greatly improved by reducing the
initially estimated interval covering the optimal parameter. It is observed
that the bisection method is not guaranteed to find the globally optimal
solution, since the nonconvex (PM) may have a local non-global minimizer. The
main contribution of this paper is to propose an efficient branch-and-bound
algorithm for globally solving (PM), based on a novel underestimation of the
parametric function over any given interval using only the information of the
parametric function evaluations at the two endpoints. We can show that the new
algorithm (BTD Algorithm) returns a global \epsilon-approximate solution with a
computational effort of at most O(n^3/\epsilon), under the same assumption as in
the bisection method. The numerical results demonstrate that our new global
optimization algorithm runs much faster than even the improved version of the
bisection heuristic algorithm. Comment: 26 pages, 1 figure
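The branch-and-bound idea can be sketched generically; in the code below the interval underestimator lower_bound is a placeholder assumption standing in for the paper's two-endpoint underestimation, and evaluate stands for one (expensive) trust-region subproblem solve:

```python
import heapq

def branch_and_bound_1d(evaluate, lower_bound, a, b, eps=1e-3):
    """Globally minimize a 1-D parametric function on [a, b].

    evaluate(t)               -- parametric function value at t (expensive)
    lower_bound(a, fa, b, fb) -- a valid underestimate of the minimum over
                                 [a, b] built from the two endpoint values
    Returns a point whose value is within eps of the global minimum,
    assuming lower_bound is indeed a valid underestimator."""
    fa, fb = evaluate(a), evaluate(b)
    best_t, best_f = (a, fa) if fa <= fb else (b, fb)
    heap = [(lower_bound(a, fa, b, fb), a, fa, b, fb)]   # intervals keyed by lower bound
    while heap:
        lb, lo, flo, hi, fhi = heapq.heappop(heap)
        if best_f - lb <= eps:            # certificate of global eps-optimality
            break
        m = 0.5 * (lo + hi)
        fm = evaluate(m)
        if fm < best_f:
            best_t, best_f = m, fm
        for l, fl, r, fr in ((lo, flo, m, fm), (m, fm, hi, fhi)):
            child_lb = lower_bound(l, fl, r, fr)
            if child_lb < best_f - eps:   # prune intervals that cannot improve
                heapq.heappush(heap, (child_lb, l, fl, r, fr))
    return best_t, best_f
```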
Non-smooth Variable Projection
Variable projection solves structured optimization problems by completely
minimizing over a subset of the variables while iterating over the remaining
variables. Over the last 30 years, the technique has been widely used, with
empirical and theoretical results demonstrating both greater efficacy and
greater stability compared to competing approaches. Classic examples have
exploited closed-form projections and smoothness of the objective function. We
extend the approach to problems that include non-smooth terms, and where the
projection subproblems can only be solved inexactly by iterative methods. We
propose an inexact adaptive algorithm for solving such problems and analyze
its computational complexity. Finally, we show how the theory can be used to
design methods for selected problems that occur frequently in machine learning
and inverse problems.
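As a reminder of what the classic (smooth, closed-form) technique looks like, here is a minimal variable-projection sketch for the separable model y ≈ Σ_j c_j exp(-λ_j t); the model, function names, and use of a derivative-free outer solver are illustrative assumptions and do not reflect the non-smooth, inexact setting of the paper:

```python
import numpy as np
from scipy.optimize import minimize

def varpro_exp_fit(t, y, lam0):
    """Fit y ~ sum_j c_j * exp(-lam_j * t) by variable projection:
    the linear coefficients c are eliminated in closed form (least squares),
    so the outer iteration runs only over the nonlinear parameters lam."""
    def design(lam):
        return np.exp(-np.outer(t, lam))            # column j is exp(-lam_j * t)

    def reduced_objective(lam):
        A = design(lam)
        c, *_ = np.linalg.lstsq(A, y, rcond=None)   # inner "projection" step
        r = y - A @ c
        return 0.5 * float(r @ r)                   # objective after eliminating c

    lam = minimize(reduced_objective, lam0, method="Nelder-Mead").x
    c, *_ = np.linalg.lstsq(design(lam), y, rcond=None)
    return lam, c

# usage sketch on synthetic data
t = np.linspace(0.0, 5.0, 200)
y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-0.2 * t)
lam, c = varpro_exp_fit(t, y, lam0=np.array([1.0, 0.1]))
```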
A Semidefinite Program Solver for the Conformal Bootstrap
We introduce SDPB: an open-source, parallelized, arbitrary-precision
semidefinite program solver, designed for the conformal bootstrap. SDPB
significantly outperforms less specialized solvers and should enable many new
computations. As an example application, we compute a new rigorous
high-precision bound on operator dimensions in the 3d Ising CFT,
\Delta_\sigma and \Delta_\epsilon. Comment: 34 pages, 3 figures, and 3500 lines of C++
Fast methods for denoising matrix completion formulations, with applications to robust seismic data interpolation
Recent SVD-free matrix factorization formulations have enabled rank
minimization for systems with millions of rows and columns, paving the way for
matrix completion in extremely large-scale applications, such as seismic data
interpolation.
In this paper, we consider matrix completion formulations designed to hit a
target data-fitting error level provided by the user, and propose an algorithm
called LR-BPDN that is able to exploit factorized formulations to solve the
corresponding optimization problem. Since practitioners typically have strong
prior knowledge about the target error level, this innovation makes it easy to
apply the algorithm in practice, leaving only the factor rank to be determined.
Within the established framework, we propose two extensions that are highly
relevant to solving practical challenges of data interpolation. First, we
propose a weighted extension that allows known subspace information to improve
the results of matrix completion formulations. We show how this weighting can
be used in the context of frequency continuation, an essential aspect of
seismic data interpolation. Second, we propose matrix completion formulations
that are robust to large measurement errors in the available data.
We illustrate the advantages of LR-BPDN on the collaborative filtering
problem using the MovieLens 1M, 10M, and Netflix 100M datasets. Then, we use
the new method, along with its robust and subspace re-weighted extensions, to
obtain high-quality reconstructions for large scale seismic interpolation
problems with real data, even in the presence of data contamination. Comment: 26 pages, 13 figures
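Schematically, the factorized, error-level-targeted formulation described above can be written as follows (the notation is an assumption for illustration):

\[
\min_{L,\,R}\;\tfrac{1}{2}\big(\|L\|_F^2 + \|R\|_F^2\big)
\quad\text{subject to}\quad
\big\|\mathcal{A}(LR^{\top}) - b\big\|_2 \le \sigma,
\]

where \mathcal{A} samples the observed entries, b collects the available data, \sigma is the user-supplied target misfit, and the number of columns of L and R is the factor rank left to the user; the Frobenius terms are the usual SVD-free surrogate for the nuclear norm of LR^{\top}.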
Fast Approximate Dynamic Programming for Input-Affine Dynamics
We propose two novel numerical schemes for approximate implementation of the
Dynamic Programming (DP) operation concerned with finite-horizon optimal
control of discrete-time, stochastic systems with input-affine dynamics. The
proposed algorithms involve discretization of the state and input spaces, and
are based on an alternative path that solves the dual problem corresponding to
the DP operation. We provide error bounds for the proposed algorithms, along
with a detailed analyses of their computational complexity. In particular, for
a specific class of problems with separable data in the state and input
variables, the proposed approach can reduce the typical time complexity of the
DP operation from O(XU) to O(X+U), where X and U denote the sizes of the
discrete state and input spaces, respectively. From a broader perspective, the key
contribution here can be viewed as an algorithmic transformation of the
minimization in the DP operation into addition via discrete conjugation. This bridge
enables us to utilize any complexity reduction on the discrete conjugation
front within the proposed algorithms. In particular, motivated by the recent
development of quantum algorithms for computing the discrete conjugate
transform, we discuss the possibility of a quantum mechanical implementation of
the proposed algorithms.
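The "minimization to addition" bridge rests on a standard convex-duality identity (stated here generically, not in the paper's notation): the conjugate of an infimal convolution is the sum of the conjugates,

\[
(f_1 \,\square\, f_2)(x) \;:=\; \inf_{u}\,\big\{ f_1(u) + f_2(x - u) \big\},
\qquad
(f_1 \,\square\, f_2)^{*} \;=\; f_1^{*} + f_2^{*},
\]

so a DP-type minimization of this form can be carried out by adding (discrete) conjugates and transforming back, which is what exposes the O(X+U) complexity.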
Fast Approximate Dynamic Programming for Infinite-Horizon Markov Decision Processes
In this study, we consider the infinite-horizon, discounted cost, optimal
control of stochastic nonlinear systems with separable cost and constraints in
the state and input variables. Using the linear-time Legendre transform, we
propose a novel numerical scheme for implementation of the corresponding value
iteration (VI) algorithm in the conjugate domain. Detailed analyses of the
convergence, time complexity, and error of the proposed algorithm are provided.
In particular, with discretizations of size X and U for the state and
input spaces, respectively, the proposed approach reduces the time complexity
of each iteration in the VI algorithm from O(XU) to O(X+U), by replacing
the minimization operation in the primal domain with a simple addition in the
conjugate domain.
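A minimal sketch of the linear-time discrete Legendre (conjugate) transform mentioned above, for data sampled from a convex function; the grid names and the convexity and monotonicity assumptions are mine, not the paper's:

```python
import numpy as np

def discrete_conjugate(x, f, s):
    """Compute f*(s_j) = max_i (s_j * x_i - f_i) for all slopes s_j.

    Assumes x is increasing, the points (x_i, f_i) lie on a convex function,
    and s is increasing; the maximizing index is then nondecreasing in j, so
    one forward sweep costs O(len(x) + len(s)) instead of O(len(x) * len(s))."""
    fstar = np.empty(len(s))
    i = 0
    for j, sj in enumerate(s):
        # advance while the next grid point improves s*x - f
        while i + 1 < len(x) and sj * x[i + 1] - f[i + 1] >= sj * x[i] - f[i]:
            i += 1
        fstar[j] = sj * x[i] - f[i]
    return fstar

# usage sketch: the conjugate of f(x) = 0.5*x^2 is f*(s) = 0.5*s^2
x = np.linspace(-3.0, 3.0, 601)
s = np.linspace(-2.0, 2.0, 401)
fstar = discrete_conjugate(x, 0.5 * x**2, s)
```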
Convex Optimization without Projection Steps
For the general problem of minimizing a convex function over a compact convex
domain, we will investigate a simple iterative approximation algorithm based on
the method by Frank & Wolfe 1956, which does not need projection steps in order
to stay inside the optimization domain. Instead of a projection step, the
linearized problem defined by a current subgradient is solved, which gives a
step direction that will naturally stay in the domain. Our framework
generalizes the sparse greedy algorithm of Frank & Wolfe and its primal-dual
analysis by Clarkson 2010 (and the low-rank SDP approach by Hazan 2008) to
arbitrary convex domains. We give a convergence proof guaranteeing
{\epsilon}-small duality gap after O(1/{\epsilon}) iterations.
The method allows us to understand the sparsity of approximate solutions for
any l1-regularized convex optimization problem (and for optimization over the
simplex), expressed as a function of the approximation quality. We obtain
matching upper and lower bounds of {\Theta}(1/{\epsilon}) for the sparsity for
l1-problems. The same bounds apply to low-rank semidefinite optimization with
bounded trace, showing that rank O(1/{\epsilon}) is best possible here as well.
As another application, we obtain sparse matrices of O(1/{\epsilon}) non-zero
entries as {\epsilon}-approximate solutions when optimizing any convex function
over a class of diagonally dominant symmetric matrices.
We show that our proposed first-order method also applies to nuclear norm and
max-norm matrix optimization problems. For nuclear norm regularized
optimization, such as matrix completion and low-rank recovery, we demonstrate
the practical efficiency and scalability of our algorithm for large matrix
problems, such as the Netflix dataset. For general convex optimization over
bounded matrix max-norm, our algorithm is the first with a convergence
guarantee, to the best of our knowledge.
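For concreteness, a minimal sketch of the projection-free iteration on the probability simplex is given below; the quadratic objective is only a placeholder, not one of the applications above:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=500, eps=1e-6):
    """Minimize a smooth convex f over {x >= 0, sum(x) = 1} without projections:
    each step solves the linearized problem, whose minimizer over the simplex
    is a single vertex, and moves toward that vertex."""
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        i = int(np.argmin(g))            # linear minimization oracle: best vertex e_i
        d = -x
        d[i] += 1.0                      # direction e_i - x keeps iterates feasible
        gap = -float(g @ d)              # Frank-Wolfe duality gap <g, x - e_i>
        if gap <= eps:
            break
        x = x + (2.0 / (k + 2.0)) * d    # classic step size 2/(k+2)
    return x

# usage sketch: least squares over the simplex (placeholder problem)
rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
x = frank_wolfe_simplex(lambda x: A.T @ (A @ x - b), np.full(10, 0.1))
```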
A model reduction approach to numerical inversion for a parabolic partial differential equation
We propose a novel numerical inversion algorithm for the coefficients of
parabolic partial differential equations, based on model reduction. The study
is motivated by the application of controlled source electromagnetic
exploration, where the unknown is the subsurface electrical resistivity and the
data are time resolved surface measurements of the magnetic field. The
algorithm presented in this paper considers inversion in one and two
dimensions. The reduced model is obtained with rational interpolation in the
frequency (Laplace) domain and a rational Krylov subspace projection method. It
amounts to a nonlinear mapping from the function space of the unknown
resistivity to the low-dimensional space of the parameters of the reduced
model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton
iterative solution of the inverse problem. The advantage of the inversion
algorithm is twofold. First, the nonlinear preconditioner resolves most of the
nonlinearity of the problem. Thus the iterations are less likely to get stuck
in local minima and the convergence is fast. Second, the inversion is
computationally efficient because it avoids repeated accurate simulations of
the time-domain response. We study the stability of the inversion algorithm for
various rational Krylov subspaces, and assess its performance with numerical
experiments. Comment: 31 pages, 9 figures, 2 tables
Stable Cosparse Recovery via \ell_p-analysis Optimization
In this paper we study the \ell_p-analysis optimization
problem for cosparse signal recovery. We establish a bound on the recovery error
via the restricted \ell_p-isometry property over any subspace. We further prove
that the nonconvex \ell_p-analysis optimization can perform recovery with a lower
sample complexity and in a wider range of cosparsity than its convex
counterpart. In addition, we develop an iteratively reweighted method to solve
the optimization problem under a variational framework. Empirical results of
preliminary computational experiments illustrate that the nonconvex method
outperforms its convex counterpart.
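A minimal iteratively reweighted least-squares sketch for a penalized \ell_p-analysis problem is shown below; the penalized (rather than constrained) form, the smoothing parameter, and all variable names are assumptions, and the paper's exact variational scheme is not reproduced:

```python
import numpy as np

def irls_lp_analysis(Phi, Omega, y, p=0.7, lam=0.1, n_iters=50, delta=1e-6):
    """Approximately solve  min_x 0.5*||Phi x - y||^2 + lam * sum_i |(Omega x)_i|^p
    (0 < p <= 1) by majorizing each |t|^p term with a weighted quadratic, so
    that every iteration reduces to a single linear solve."""
    x = np.linalg.lstsq(Phi, y, rcond=None)[0]      # least-squares initialization
    for _ in range(n_iters):
        z = Omega @ x
        w = (z**2 + delta) ** (p / 2.0 - 1.0)       # reweighting of analysis coefficients
        H = Phi.T @ Phi + lam * p * (Omega.T @ (w[:, None] * Omega))
        x = np.linalg.solve(H, Phi.T @ y)
    return x

# usage sketch with a random measurement matrix and a finite-difference analysis operator
rng = np.random.default_rng(0)
n, m = 100, 60
Phi = rng.normal(size=(m, n))
Omega = np.eye(n) - np.eye(n, k=1)                  # first-order differences (last row acts as identity)
x_true = np.cumsum(rng.normal(size=n) * (rng.random(n) < 0.05))  # piecewise-constant signal
x_rec = irls_lp_analysis(Phi, Omega, Phi @ x_true, p=0.7, lam=0.05)
```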