Contraction and optimality properties of an adaptive Legendre-Galerkin method: the multi-dimensional case
We analyze the theoretical properties of an adaptive Legendre-Galerkin method
in the multidimensional case. After the recent investigations for
Fourier-Galerkin methods in a periodic box and for Legendre-Galerkin methods in
the one-dimensional setting, the present study represents a further step
towards a mathematically rigorous understanding of adaptive spectral
discretizations of elliptic boundary-value problems. The main contribution of
the paper is a careful construction of a multidimensional Riesz basis,
based on a quasi-orthonormalization procedure. This allows us to design an
adaptive algorithm, to prove its convergence by a contraction argument, and to
discuss its optimality properties (in the sense of non-linear approximation
theory) in certain sparsity classes of Gevrey type
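As an illustration only (not the paper's algorithm or basis), the SOLVE / ESTIMATE / MARK / ENRICH loop whose contraction such analyses establish can be sketched on a toy periodic problem -u'' + u = f, where the Galerkin system is diagonal in the Fourier basis; the Dörfler parameter θ = 0.5 and the data coefficients are our own choices:

```python
import numpy as np

K = 64                                  # truncation of the "infinite" index set
k = np.arange(-K, K + 1)
f_hat = np.exp(-0.3 * np.abs(k))        # smooth, Gevrey-like data coefficients
u_exact = f_hat / (k ** 2 + 1)          # exact coefficients for -u'' + u = f

active = {0}                            # current active index set
errors = []
for _ in range(10):
    mask = np.isin(k, list(active))
    # SOLVE: Galerkin solution on the active set (diagonal system here)
    u = np.where(mask, u_exact, 0.0)
    # ESTIMATE: the residual lives on the inactive indices
    res = np.where(mask, 0.0, f_hat)
    errors.append(np.linalg.norm(u - u_exact))
    # MARK: Doerfler marking with theta = 0.5 -- collect the largest
    # residual coefficients until half of the residual mass is captured
    goal, acc = 0.5 * np.sum(res ** 2), 0.0
    for idx in np.argsort(-np.abs(res)):
        if acc >= goal:
            break
        acc += res[idx] ** 2
        active.add(int(k[idx]))         # ENRICH the index set
```

At each sweep the error can only shrink, mimicking the contraction property proved in the paper for the genuinely multidimensional Legendre setting.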
Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity
Dense kernel matrices Θ∈R^(N×N) obtained from point evaluations of a covariance function G at locations {x_i}1≤i≤N arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset S⊂{1,…,N}×{1,…,N}, with #S=O(Nlog(N)log^d(N/ϵ)), such that the zero fill-in block-incomplete Cholesky decomposition of Θ_(i,j)1_((i,j)∈S) is an ϵ-approximation of Θ. This block factorization can provably be obtained in O(Nlog^2(N)(log(1/ϵ)+log^2(N))^(4d+1)) complexity in time. Numerical evidence further suggests that element-wise Cholesky decomposition with the same ordering constitutes an O(Nlog^2(N)log^(2d)(N/ϵ)) solver. The algorithm only needs to know the spatial configuration of the x_i and does not require an analytic representation of G. Furthermore, an approximate PCA with optimal rate of convergence in the operator norm can be easily read off from this decomposition. Hence, by using only subsampling and the incomplete Cholesky decomposition, we obtain, at nearly linear complexity, the compression, inversion, and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky decomposition we also obtain a near-linear-time solver for elliptic PDEs
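A minimal sketch of the core primitive, zero fill-in incomplete Cholesky restricted to a prescribed pattern S, applied to a 1-D exponential covariance kernel. The banded pattern used here is a simplified illustrative stand-in, not the ordering/pattern construction of the paper:

```python
import numpy as np

n, bandwidth, length_scale = 200, 15, 0.02
x = np.linspace(0.0, 1.0, n)
Theta = np.exp(-np.abs(x[:, None] - x[None, :]) / length_scale)  # covariance matrix

# Sparsity pattern S: a simple band (illustrative surrogate for the paper's S)
S = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= bandwidth

# Zero fill-in incomplete Cholesky: entries outside S are never computed
L = np.zeros_like(Theta)
for j in range(n):
    for i in range(j, n):
        if not S[i, j]:
            continue
        s = Theta[i, j] - L[i, :j] @ L[j, :j]
        L[i, j] = np.sqrt(s) if i == j else s / L[j, j]

err = np.linalg.norm(Theta - L @ L.T) / np.linalg.norm(Theta)
```

Only O(n · bandwidth) entries of L are ever touched, yet for this Markovian kernel (a Green's function of a 1-D elliptic operator) the factor remains an accurate approximation of Θ, which is the phenomenon the paper makes rigorous.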
Preconditioned fast solvers for large linear systems with specific sparse and/or Toeplitz-like structures and applications
In this thesis, the design of the preconditioners we propose starts from applications instead of treating the problem in a completely general way. The reason is that not all types of linear systems can be addressed with the same tools. In this sense, the techniques for designing efficient iterative solvers depend mostly on properties inherited from the continuous problem that originated the discretized sequence of matrices. Classical examples are locality and isotropy in the PDE context, whose discrete counterparts are sparsity and matrices constant along the diagonals, respectively.
Therefore, it is often important to take into account the properties of the originating continuous model to obtain better performance and to provide an accurate convergence analysis. We consider linear systems that arise in the solution of both linear and nonlinear partial differential equations of both integer and fractional type. For the latter case, an introduction to both the theory and the numerical treatment is given.
All the algorithms and strategies presented in this thesis are developed with their parallel implementation in mind. In particular, we consider the processor-co-processor framework, in which the main part of the computation is performed on a Graphics Processing Unit (GPU) accelerator.
In Part I we introduce our proposal for sparse approximate inverse preconditioners for the solution of both time-dependent Partial Differential Equations (PDEs) (Chapter 3) and Fractional Differential Equations (FDEs) containing both classical and fractional terms (Chapter 5). More precisely, we propose a new technique for updating preconditioners when dealing with sequences of linear systems for PDEs and FDEs, which can also be used to compute matrix functions of large matrices via quadrature formulas in Chapter 4 and for optimal control of FDEs in Chapter 6. Finally, in Part II, we consider structured preconditioners for quasi-Toeplitz systems. The focus is on the numerical treatment of discretized convection-diffusion equations in Chapter 7 and on the solution of FDEs with linear multistep formulas in boundary value form in Chapter 8
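To illustrate the kind of structured preconditioning the second part builds on (a textbook building block, not one of the thesis's own preconditioners), here is a classical Strang circulant preconditioner for a symmetric Toeplitz system, applied via FFT inside a plain preconditioned conjugate gradient iteration; the matrix entries are our own illustrative choice:

```python
import numpy as np

n = 256
t = 1.0 / (1.0 + np.arange(n)) ** 1.5           # first column of a symmetric Toeplitz matrix
A = t[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

# Strang preconditioner: keep the central diagonals of A and wrap them around
c = t.copy()
c[n // 2 + 1:] = t[1:n - n // 2][::-1]
eig = np.fft.fft(c).real                        # eigenvalues of the circulant C

def apply_Minv(r):
    # Solve C z = r in O(n log n) with two FFTs
    return np.fft.ifft(np.fft.fft(r) / eig).real

# Plain preconditioned conjugate gradient iteration
b = np.ones(n)
x = np.zeros(n)
r = b - A @ x
z = apply_Minv(r)
p = z.copy()
for its in range(1, 101):
    Ap = A @ p
    alpha = (r @ z) / (p @ Ap)
    x += alpha * p
    r_next = r - alpha * Ap
    if np.linalg.norm(r_next) < 1e-10 * np.linalg.norm(b):
        break
    z_next = apply_Minv(r_next)
    beta = (r_next @ z_next) / (r @ z)
    p = z_next + beta * p
    r, z = r_next, z_next
```

Each PCG step costs one Toeplitz product plus two FFTs, which is the pattern exploited, with considerably more sophisticated preconditioners, for the quasi-Toeplitz systems arising from convection-diffusion and fractional-diffusion discretizations.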