Robust Dropping Criteria for F-norm Minimization Based Sparse Approximate Inverse Preconditioning
Dropping tolerance criteria play a central role in sparse approximate inverse
preconditioning. Such criteria have, however, received little attention and
have been treated heuristically in the following manner: if the magnitude of an
entry is below some empirically chosen small positive quantity, the entry is
set to zero. The meaning of "small" is vague and has not been considered
rigorously, and it has been unclear how dropping tolerances affect the quality
and effectiveness of a preconditioner. In this paper, we focus on the adaptive
Power Sparse Approximate Inverse algorithm and establish a mathematical theory
on robust selection criteria for dropping tolerances. Using the theory, we
derive an adaptive dropping criterion that is used to drop entries of small
magnitude dynamically during the setup process. The proposed criterion
enables us to make the preconditioner as sparse as possible while keeping it of
comparable quality to the potentially denser matrix which is obtained without
dropping. As a byproduct, the theory applies to static F-norm minimization
based preconditioning procedures, and a similar dropping criterion is given
that can be used to sparsify a matrix after it has been computed by a static
sparse approximate inverse procedure. In contrast to the adaptive procedure,
dropping in the static procedure does not reduce the setup time of the matrix
but makes the application of the sparsified preconditioner within Krylov
iterations cheaper.
Numerical experiments confirm the theory and illustrate the robustness
and effectiveness of the dropping criteria.

Comment: 27 pages, 2 figures
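The abstract does not spell out the paper's adaptive criterion, which is derived from its theory. As a minimal sketch of the heuristic practice it criticizes, relative-threshold dropping can be illustrated as follows; the function name and the choice of scaling by the column's largest entry are my own assumptions, not the paper's rule.

```python
import numpy as np

def drop_small_entries(m_col, tol):
    """Zero out entries of a preconditioner column whose magnitude is
    below tol times the column's largest entry.

    This is the generic heuristic the abstract describes; the paper's
    adaptive criterion chooses the threshold from a mathematical theory
    rather than an empirical tol.
    """
    threshold = tol * np.max(np.abs(m_col))
    out = m_col.copy()
    out[np.abs(out) < threshold] = 0.0
    return out

# Entries small relative to the largest one are dropped:
col = np.array([1.0, 0.01, 0.5])
print(drop_small_entries(col, 0.1))  # → [1.   0.   0.5]
```

The resulting sparser column is cheaper to apply in each Krylov iteration, which is precisely the trade-off between sparsity and preconditioner quality that the paper's theory quantifies.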
Fast minimum variance wavefront reconstruction for extremely large telescopes
We present a new algorithm, FRiM (FRactal Iterative Method), aiming at the
reconstruction of the optical wavefront from measurements provided by a
wavefront sensor. As our application is adaptive optics on extremely large
telescopes, our algorithm was designed with speed and best quality in mind. The
latter is achieved thanks to a regularization which enforces prior statistics.
To solve the regularized problem, we use the conjugate gradient method which
takes advantage of the sparsity of the wavefront sensor model matrix and avoids
the storage and inversion of a huge matrix. The prior covariance matrix is
however non-sparse and we derive a fractal approximation to the Karhunen-Loeve
basis thanks to which the regularization by Kolmogorov statistics can be
computed in O(N) operations, N being the number of phase samples to estimate.
Finally, we propose an effective preconditioning which also scales as O(N) and
yields the solution in 5-10 conjugate gradient iterations for any N. The
resulting algorithm is therefore O(N). As an example, for a 128 x 128
Shack-Hartmann wavefront sensor, FRiM appears to be more than 100 times faster
than the classical vector-matrix multiplication method.

Comment: to appear in the Journal of the Optical Society of America
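The key structural point in the abstract is that the conjugate gradient method never forms or stores the huge system matrix: it only needs a routine that applies the matrix to a vector. A minimal matrix-free CG sketch is below; it is a generic textbook solver, not FRiM itself (the fractal Karhunen-Loeve approximation and the O(N) preconditioner are the paper's contributions and are not reproduced here).

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive definite A, given only a
    function apply_A(v) returning A @ v. The matrix is never stored,
    which is what makes the approach viable for extremely large N."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # initial residual
    p = r.copy()                # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small dense example standing in for the sparse wavefront system:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = conjugate_gradient(lambda v: A @ v, np.array([1.0, 2.0]))
```

In the paper's setting, `apply_A` would combine the sparse wavefront-sensor model matrix with the O(N) fractal regularization operator, and a good preconditioner keeps the iteration count at the reported 5-10 regardless of N.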