On Algorithms Based on Joint Estimation of Currents and Contrast in Microwave Tomography
This paper deals with improvements to the contrast source inversion method
which is widely used in microwave tomography. First, the method is reviewed and
weaknesses of both the criterion form and the optimization strategy are
underlined. Then, two new algorithms are proposed. Both of them are based on
the same criterion, which is similar to but more robust than the one used in contrast
source inversion. The first technique keeps the main characteristics of the
contrast source inversion optimization scheme but is based on a better
exploitation of the conjugate gradient algorithm. The second technique is based
on a preconditioned conjugate gradient algorithm and performs simultaneous
updates of sets of unknowns that are normally processed sequentially. Both
techniques are shown to be more efficient than the original contrast source
inversion.
Comment: 12 pages, 12 figures, 5 tables
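The second technique rests on a preconditioned conjugate gradient iteration. As a point of reference, here is a minimal sketch of generic PCG for a symmetric positive definite system with a simple Jacobi (diagonal) preconditioner; this only illustrates the PCG building block the abstract refers to, not the paper's criterion, which couples currents and contrast and is considerably more involved.

```python
import numpy as np

def pcg(M, f, iters=100, tol=1e-12):
    """Preconditioned conjugate gradients for SPD M x = f (Jacobi preconditioner)."""
    x = np.zeros_like(f)
    r = f - M @ x                         # initial residual
    d_inv = 1.0 / np.diag(M)              # Jacobi preconditioner: inverse diagonal
    z = d_inv * r                         # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        if np.sqrt(r @ r) < tol:          # stop once the residual is tiny
            break
        Mp = M @ p
        alpha = rz / (p @ Mp)             # exact line search along p
        x += alpha * p
        r -= alpha * Mp
        z = d_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p         # preconditioned conjugate direction
        rz = rz_new
    return x

rng = np.random.default_rng(2)
B = rng.standard_normal((30, 30))
M = B @ B.T + 30 * np.eye(30)             # SPD test matrix (assumption: toy data)
f = rng.standard_normal(30)
x = pcg(M, f)
print(np.linalg.norm(M @ x - f))          # residual norm, near zero
```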
Computation of Ground States of the Gross-Pitaevskii Functional via Riemannian Optimization
In this paper we combine concepts from Riemannian Optimization and the theory
of Sobolev gradients to derive a new conjugate gradient method for direct
minimization of the Gross-Pitaevskii energy functional with rotation. The
conservation of the number of particles constrains the minimizers to lie on a
manifold corresponding to the unit norm. The idea developed here is to
transform the original constrained optimization problem to an unconstrained
problem on this (spherical) Riemannian manifold, so that fast minimization
algorithms can be applied as alternatives to more standard constrained
formulations. First, we obtain Sobolev gradients using an equivalent definition
of an inner product which takes into account rotation. Then, the
Riemannian gradient (RG) steepest descent method is derived based on projected
gradients and retraction of an intermediate solution back to the constraint
manifold. Finally, we use the concept of the Riemannian vector transport to
propose a Riemannian conjugate gradient (RCG) method for this problem. It is
derived at the continuous level based on the "optimize-then-discretize"
paradigm instead of the usual "discretize-then-optimize" approach, as this
ensures robustness of the method when adaptive mesh refinement is performed in
computations. We evaluate various design choices inherent in the formulation of
the method and conclude with recommendations concerning selection of the best
options. Numerical tests demonstrate that the proposed RCG method outperforms
the simple gradient descent (RG) method in terms of rate of convergence. While
on simple problems a Newton-type method implemented in the {\tt Ipopt} library
exhibits faster convergence than the RCG approach, the two methods perform
similarly on more complex problems requiring the use of mesh adaptation. At the
same time, the RCG approach has far fewer tunable parameters.
Comment: 28 pages, 13 figures
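The projected-gradient and retraction steps described above can be sketched on a toy problem. The following is a minimal illustration of Riemannian gradient descent on the unit sphere, using the Rayleigh quotient E(x) = xᵀAx as a stand-in energy rather than the Gross-Pitaevskii functional; its constrained minimizer is the eigenvector of A with the smallest eigenvalue.

```python
import numpy as np

def riemannian_gd(A, x0, lr=0.1, iters=500):
    """Steepest descent on the unit sphere via tangent projection + retraction."""
    x = x0 / np.linalg.norm(x0)           # start on the constraint manifold
    for _ in range(iters):
        g = 2.0 * A @ x                   # Euclidean gradient of x^T A x
        rg = g - (x @ g) * x              # project onto the tangent space at x
        x = x - lr * rg                   # step along the Riemannian gradient
        x = x / np.linalg.norm(x)         # retraction: renormalize to the sphere
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T                               # symmetric test matrix (toy assumption)
x = riemannian_gd(A, rng.standard_normal(5))
print(x @ A @ x)                          # approaches the smallest eigenvalue of A
```

The renormalization plays the role of the retraction; a Riemannian conjugate gradient variant would additionally transport the previous search direction to the new tangent space before combining it with the projected gradient.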
A conjugate gradient algorithm for the astrometric core solution of Gaia
The ESA space astrometry mission Gaia, planned to be launched in 2013, has
been designed to make angular measurements on a global scale with
micro-arcsecond accuracy. A key component of the data processing for Gaia is
the astrometric core solution, which must implement an efficient and accurate
numerical algorithm to solve the resulting, extremely large least-squares
problem. The Astrometric Global Iterative Solution (AGIS) is a framework that
allows one to implement a range of different iterative solution schemes suitable
for a scanning astrometric satellite. In order to find a computationally
efficient and numerically accurate iteration scheme for the astrometric
solution, compatible with the AGIS framework, we study an adaptation of the
classical conjugate gradient (CG) algorithm, and compare it to the so-called
simple iteration (SI) scheme that was previously known to converge for this
problem, although very slowly. The different schemes are implemented within a
software test bed for AGIS known as AGISLab, which allows one to define, simulate
and study scaled astrometric core solutions. After successful testing in
AGISLab, the CG scheme has also been implemented in AGIS. The two algorithms CG
and SI eventually converge to identical solutions, to within the numerical
noise (of the order of 0.00001 micro-arcsec). These solutions are independent
of the starting values (initial star catalogue), and we conclude that they are
equivalent to a rigorous least-squares estimation of the astrometric
parameters. The CG scheme converges up to a factor of four faster than SI in the
tested cases, and in particular spatially correlated truncation errors are much
more efficiently damped out with the CG scheme.
Comment: 24 pages, 16 figures. Accepted for publication in Astronomy & Astrophysics
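The classical CG scheme that the paper adapts can be sketched on a small overdetermined least-squares problem. The following is an illustrative toy (not the AGIS implementation): conjugate gradients applied implicitly to the normal equations AᵀAx = Aᵀb, often called CGLS.

```python
import numpy as np

def cgls(A, b, iters=None):
    """Conjugate gradients on the normal equations A^T A x = A^T b."""
    iters = iters if iters is not None else A.shape[1]
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)                 # gradient residual A^T(b - Ax)
    p = r.copy()
    rr = r @ r
    for _ in range(iters):
        if np.sqrt(rr) < 1e-14:           # stop once effectively converged
            break
        Ap = A @ p
        alpha = rr / (Ap @ Ap)            # exact line search step
        x += alpha * p
        r -= alpha * (A.T @ Ap)
        rr_new = r @ r
        p = r + (rr_new / rr) * p         # conjugate search direction
        rr = rr_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 10))        # toy overdetermined system (assumption)
b = rng.standard_normal(100)
x_cg = cgls(A, b)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_cg - x_ls))        # small: matches the direct solution
```

For a well-conditioned problem, CG reaches the least-squares solution in at most as many iterations as there are unknowns, which is the source of its advantage over simple fixed-point iteration on slowly converging problems.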
Weighted maximal regularity estimates and solvability of non-smooth elliptic systems II
We continue the development, by reduction to a first order system for the
conormal gradient, of \textit{a priori} estimates and solvability for
boundary value problems of Dirichlet, regularity, Neumann type for divergence
form second order, complex, elliptic systems. We work here on the unit ball and
more generally its bi-Lipschitz images, assuming a Carleson condition as
introduced by Dahlberg which measures the discrepancy of the coefficients to
their boundary trace near the boundary. We sharpen our estimates by proving a
general result concerning \textit{a priori} almost everywhere non-tangential
convergence at the boundary. Also, compactness of the boundary yields more
solvability results using Fredholm theory. Comparisons between classes of
solutions and uniqueness issues are discussed. As a consequence, we are able to
solve a long standing regularity problem for real equations, which may not be
true on the upper half-space, justifying \textit{a posteriori} a separate work
on bounded domains.
Comment: 76 pages, new abstract and a few typos corrected. The second author has changed name
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
The scale of modern datasets necessitates the development of efficient
distributed optimization methods for machine learning. We present a
general-purpose framework for distributed computing environments, CoCoA, that
has an efficient communication scheme and is applicable to a wide variety of
problems in machine learning and signal processing. We extend the framework to
cover general non-strongly-convex regularizers, including L1-regularized
problems like lasso, sparse logistic regression, and elastic net
regularization, and show how earlier work can be derived as a special case. We
provide convergence guarantees for the class of convex regularized loss
minimization objectives, leveraging a novel approach in handling
non-strongly-convex regularizers and non-smooth loss functions. The resulting
framework has markedly improved performance over state-of-the-art methods, as
we illustrate with an extensive set of experiments on real distributed
datasets.
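The communication pattern at the heart of such frameworks can be illustrated on a toy distributed lasso. The sketch below is plain distributed proximal gradient, not the CoCoA scheme itself (which lets each worker run an arbitrary local solver); workers hold disjoint blocks of data points, each computes a local gradient, and a single reduce step aggregates them before a soft-thresholding update. All names and data here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_lasso(blocks, lam, lr=0.1, iters=500):
    """Proximal gradient for (1/2n)||Aw - y||^2 + lam*||w||_1 over data blocks."""
    d = blocks[0][0].shape[1]
    n = sum(A.shape[0] for A, _ in blocks)
    w = np.zeros(d)
    for _ in range(iters):
        # "map": each worker computes its local least-squares gradient
        local = [A.T @ (A @ w - y) for A, y in blocks]
        # "reduce": one d-vector per worker is communicated and summed
        g = sum(local) / n
        w = soft_threshold(w - lr * g, lr * lam)
    return w

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 20))
w_true = np.zeros(20)
w_true[:3] = [1.0, -2.0, 0.5]             # sparse ground truth (toy assumption)
y = A @ w_true + 0.01 * rng.standard_normal(200)
blocks = [(A[i::4], y[i::4]) for i in range(4)]   # 4 "workers"
w = distributed_lasso(blocks, lam=0.01)
print(np.round(w[:3], 2))                 # close to [1, -2, 0.5]
```

The point of the pattern is that each round communicates only one d-dimensional vector per worker, independent of the local data size, which is what makes such schemes communication-efficient on large distributed datasets.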