FAASTA: A fast solver for total-variation regularization of ill-conditioned problems with application to brain imaging
The total variation (TV) penalty, like many other analysis-sparsity problems,
does not lead to separable factors or a proximal operator with a closed-form
expression, such as soft thresholding for the ℓ1 penalty. As a result,
in a variational formulation of an inverse problem or statistical learning
estimation, it leads to challenging non-smooth optimization problems that are
often solved with elaborate single-step first-order methods. When the data-fit
term arises from empirical measurements, as in brain imaging, it is often very
ill-conditioned and without simple structure. In this situation, in proximal
splitting methods, the computation cost of the gradient step can easily dominate
each iteration. Thus it is beneficial to minimize the number of gradient
steps. We present fAASTA, a variant of FISTA, that relies on an internal solver
for the TV proximal operator, and refines its tolerance to balance the
computational cost of the gradient and the proximal steps. We give benchmarks
and illustrations on "brain decoding": recovering brain maps from
noisy measurements to predict observed behavior. The algorithm as well as
the empirical study of convergence speed are valuable for any non-exact
proximal operator, in particular analysis-sparsity problems.
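The pairing of FISTA with an inexact, tolerance-controlled inner prox solver can be sketched in a few lines. The sketch below is a minimal 1D illustration under assumed names (`inexact_prox_tv1d`, `faasta_like`), not the authors' implementation: the TV proximal operator is approximated by projected gradient on its dual, and the outer loop tightens the inner tolerance as iterations proceed.

```python
import numpy as np

def inexact_prox_tv1d(y, lam, tol, max_inner=5000):
    """Approximate prox of lam * TV(y), TV(x) = sum |x[i+1] - x[i]|,
    by projected gradient on the dual problem
        min_{|u| <= lam} 0.5 * ||y - D^T u||^2,  prox = y - D^T u.
    Hypothetical helper; the paper uses a more refined inner solver."""
    u = np.zeros(len(y) - 1)
    step = 0.25  # 1/L with L = ||D D^T||_2 <= 4 for 1D differences
    for _ in range(max_inner):
        x = y + np.diff(u, prepend=0, append=0)  # x = y - D^T u
        u_new = np.clip(u + step * np.diff(x), -lam, lam)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return y + np.diff(u, prepend=0, append=0)

def faasta_like(A, b, lam, n_iter=100):
    """FISTA for 0.5*||Ax - b||^2 + lam*TV(x) where the TV prox is
    solved inexactly, with a tolerance tightened across iterations
    (the balancing idea behind fAASTA, in simplified form)."""
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    tol = 1e-2
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = inexact_prox_tv1d(z - grad / L, lam / L, tol)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
        tol *= 0.9  # refine the inner tolerance as iterations proceed
    return x
```

Because each outer iteration pays for one gradient evaluation, spending extra inner iterations on the prox (cheap here, as no `A` is involved) to avoid wasted gradient steps is exactly the trade-off the abstract describes.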
EIT Reconstruction Algorithms: Pitfalls, Challenges and Recent Developments
We review developments, issues and challenges in Electrical Impedance
Tomography (EIT), for the 4th Workshop on Biomedical Applications of EIT,
Manchester 2003. We focus on the necessity for three dimensional data
collection and reconstruction, efficient solution of the forward problem and
present and future reconstruction algorithms. We also suggest common pitfalls
or "inverse crimes" to avoid. Comment: A review paper for the 4th Workshop on
Biomedical Applications of EIT, Manchester, UK, 2003.
Projected Newton Method for noise constrained Tikhonov regularization
Tikhonov regularization is a popular approach to obtain a meaningful solution
for ill-conditioned linear least squares problems. A relatively simple way of
choosing a good regularization parameter is given by Morozov's discrepancy
principle. However, most approaches require the solution of the Tikhonov
problem for many different values of the regularization parameter, which is
computationally demanding for large scale problems. We propose a new and
efficient algorithm which simultaneously solves the Tikhonov problem and finds
the corresponding regularization parameter such that the discrepancy principle
is satisfied. We achieve this by formulating the problem as a nonlinear system
of equations and solving this system using a line search method. We obtain a
good search direction by projecting the problem onto a low dimensional Krylov
subspace and computing the Newton direction for the projected problem. This
projected Newton direction, which is significantly less computationally
expensive to calculate than the true Newton direction, is then combined with a
backtracking line search to obtain a globally convergent algorithm, which we
refer to as the Projected Newton method. We prove convergence of the algorithm
and illustrate the improved performance over current state-of-the-art solvers
with some numerical experiments.
Parametric Level Set Methods for Inverse Problems
In this paper, a parametric level set method for reconstruction of obstacles
in general inverse problems is considered. General evolution equations for the
reconstruction of unknown obstacles are derived in terms of the underlying
level set parameters. We show that using the appropriate form of parameterizing
the level set function results in a significantly lower-dimensional problem,
which bypasses many difficulties with traditional level set methods, such as
regularization, re-initialization and the use of signed distance functions.
Moreover, we show that from a computational point of view, this low-order
representation of the problem paves the path for easier use of Newton and
quasi-Newton methods. Specifically for the purposes of this paper, we
parameterize the level set function in terms of adaptive compactly supported
radial basis functions, which, used in the proposed manner, provide flexibility
in representing a larger class of shapes with fewer terms. They also provide a
"narrow-banding" advantage, which can further reduce the number of active
unknowns at each step of the evolution. The performance of the proposed
approach is examined in three examples of inverse problems: electrical
resistance tomography, X-ray computed tomography and diffuse optical
tomography.
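To make the parameterization concrete, here is a minimal sketch (assumed names, not the paper's code) of a level set function built from compactly supported Wendland C2 radial basis functions; the region where phi exceeds a threshold (here 0) defines the shape, and the compact support is what gives the "narrow-banding" effect, since each basis function influences only nearby grid points.

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 compactly supported RBF: (1 - r)_+^4 (4r + 1).
    Identically zero for r >= 1, hence the narrow-banding effect."""
    return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

def level_set(points, centers, alphas, scale):
    """phi(p) = sum_i alphas[i] * psi(||p - centers[i]|| / scale).
    The unknowns of the inverse problem are the few (alphas, centers,
    scale) parameters rather than a dense level set grid."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return wendland_c2(d / scale) @ alphas
```

Because phi depends smoothly on a handful of parameters, the shape evolution can be driven by Newton or quasi-Newton updates on those parameters, which is the computational advantage the abstract highlights.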
Regularized Optimal Transport and the Rot Mover's Distance
This paper presents a unified framework for smooth convex regularization of
discrete optimal transport problems. In this context, the regularized optimal
transport turns out to be equivalent to a matrix nearness problem with respect
to Bregman divergences. Our framework thus naturally generalizes a previously
proposed regularization based on the Boltzmann-Shannon entropy related to the
Kullback-Leibler divergence, and solved with the Sinkhorn-Knopp algorithm. We
call the regularized optimal transport distance the rot mover's distance in
reference to the classical earth mover's distance. We develop two generic
schemes that we respectively call the alternate scaling algorithm and the
non-negative alternate scaling algorithm, to efficiently compute the
regularized optimal plans depending on whether the domain of the regularizer
lies within the non-negative orthant or not. These schemes are based on
Dykstra's algorithm with alternate Bregman projections, and further exploit the
Newton-Raphson method when applied to separable divergences. We enhance the
separable case with a sparse extension to deal with high data dimensions. We
also instantiate our proposed framework and discuss the inherent specificities
for well-known regularizers and statistical divergences in the machine learning
and information geometry communities. Finally, we demonstrate the merits of our
methods with experiments using synthetic data to illustrate the effect of
different regularizers and penalties on the solutions, as well as real-world
data for a pattern recognition application to audio scene classification.
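For the special case this framework generalizes, Boltzmann-Shannon entropy regularization solved with the Sinkhorn-Knopp algorithm, a minimal sketch looks as follows (variable names are our own; `eps` is the regularization strength, and a fixed iteration count stands in for a proper convergence test):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=500):
    """Entropic regularized OT: minimize <P, C> - eps * H(P) subject to
    P @ 1 = mu and P.T @ 1 = nu. Sinkhorn-Knopp alternately rescales the
    rows and columns of the Gibbs kernel K = exp(-C / eps)."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)  # match column marginals
        u = mu / (K @ v)    # match row marginals
    return u[:, None] * K * v[None, :]  # transport plan diag(u) K diag(v)
```

The paper's alternate scaling schemes reduce to these two multiplicative updates when the regularizer is the Boltzmann-Shannon entropy; for other separable divergences the scaling steps no longer have closed forms, which is where the Bregman projections and Newton-Raphson refinements come in.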