Localized bases for finite dimensional homogenization approximations with non-separated scales and high-contrast
We construct finite-dimensional approximations of solution spaces of
divergence-form operators with $L^\infty$-coefficients. Our method does not
rely on concepts of ergodicity or scale separation, but on the property that
the solution space of these operators is compactly embedded in $H^1$ if source
terms are in the unit ball of $L^2$ instead of the unit ball of $H^{-1}$.
Approximation spaces are generated by solving elliptic PDEs on localized
sub-domains with source terms corresponding to approximation bases for $L^2$.
The $H^1$-error estimates show that $\mathcal{O}(h^{-d})$-dimensional spaces
with basis elements localized to sub-domains of diameter
$\mathcal{O}(h^\alpha \ln(1/h))$ (with $\alpha \in [1/2, 1)$) result in an
$\mathcal{O}(h^{2-2\alpha})$ accuracy for elliptic, parabolic and hyperbolic
problems. For high-contrast media, the accuracy of the method is preserved
provided that localized sub-domains contain buffer zones of width
$\mathcal{O}(h^\alpha \ln(1/h))$ in which the contrast of the medium
remains bounded. The proposed method can naturally be generalized to vectorial
equations (such as elasto-dynamics).
Comment: Accepted for publication in SIAM MM
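To make the construction concrete, here is a toy one-dimensional sketch. It is entirely illustrative: the grid sizes, the rough coefficient, the hat-function sources, and the sub-domain width below are our own choices, and no claim is made that this reproduces the paper's convergence rates. Localized basis elements are obtained by solving local Dirichlet problems with coarse hat-function sources, and the global rough-coefficient problem is then solved by Galerkin projection onto their span.

```python
import numpy as np

# Toy 1D sketch (illustrative only): localized approximation space
# for -(a u')' = f with a rough coefficient a(x) on [0, 1].
N = 200                                   # fine cells
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
xm = 0.5 * (x[:-1] + x[1:])               # cell midpoints
a = 1.0 + 0.5 * np.cos(40 * np.pi * xm)   # rough, bounded coefficient

# Fine-scale stiffness matrix on interior nodes, zero Dirichlet BCs.
n = N - 1
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = (a[i] + a[i + 1]) / h**2
    if i > 0:
        A[i, i - 1] = -a[i] / h**2
    if i < n - 1:
        A[i, i + 1] = -a[i + 1] / h**2
f = np.ones(n)

# Coarse hat-function sources at nodes spaced H = 20h apart; each basis
# element solves a local Dirichlet problem on a sub-domain of width ~6H.
Hn, buf = 20, 60
coarse = range(Hn, N, Hn)
Phi = np.zeros((n, len(coarse)))
for k, c in enumerate(coarse):
    hat = np.maximum(0.0, 1.0 - np.abs(np.arange(1, N) - c) / Hn)
    idx = np.arange(max(0, c - 1 - buf), min(n, c - 1 + buf + 1))
    Phi[idx, k] = np.linalg.solve(A[np.ix_(idx, idx)], hat[idx])

# Galerkin projection onto the localized space vs. the full fine solve.
c_h = np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ f)
u_coarse = Phi @ c_h
u_fine = np.linalg.solve(A, f)
rel_err = np.linalg.norm(u_coarse - u_fine) / np.linalg.norm(u_fine)
```

The coarse space has only 9 degrees of freedom against 199 fine unknowns, yet the Galerkin solution tracks the fine solution because each basis element already carries the fine-scale behavior of the coefficient on its sub-domain.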
Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions
In this paper, we study the problem of compressed sensing using binary
measurement matrices and $\ell_1$-norm minimization (basis pursuit) as the
recovery algorithm. We derive new upper and lower bounds on the number of
measurements needed to achieve robust sparse recovery with binary matrices. We
establish sufficient conditions for a column-regular binary matrix to satisfy
the robust null space property (RNSP) and show that the associated sparsity
bounds for robust sparse recovery are better, by a constant factor, than the
sufficient conditions obtained using the restricted isometry property (RIP).
Next we derive universal lower bounds on the number of measurements
that any binary matrix needs to have in order to satisfy the weaker sufficient
condition based on the RNSP and show that bipartite graphs of girth six are
optimal. Then we display two classes of binary matrices, namely parity check
matrices of array codes and Euler squares, which have girth six and are nearly
optimal in the sense of almost satisfying the lower bound. In principle,
randomly generated Gaussian measurement matrices are "order-optimal". So we
compare the phase transition behavior of the basis pursuit formulation using
binary array codes and Gaussian matrices and show that (i) there is essentially
no difference between the phase transition boundaries in the two cases and (ii)
the CPU time of basis pursuit with binary matrices is hundreds of times faster
than with Gaussian matrices, and the storage requirements are lower. It is
therefore suggested that binary matrices are a viable alternative to Gaussian
matrices for compressed sensing using basis pursuit.
Comment: 28 pages, 3 figures, 5 tables
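As a quick illustration of the setting (a hedged sketch, not the paper's experiments: the `basis_pursuit` helper, the random column-regular matrix, and the problem sizes are our own choices, whereas the paper's nearly optimal matrices are parity-check matrices of array codes or Euler squares), basis pursuit over a binary measurement matrix reduces to an ordinary linear program:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1  s.t.  A x = b, via the standard LP split x = u - v."""
    n = A.shape[1]
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
m, n, d = 20, 40, 5                      # illustrative sizes, not the paper's
A = np.zeros((m, n))
for j in range(n):                       # column-regular binary matrix:
    A[rng.choice(m, size=d, replace=False), j] = 1.0   # d ones per column

x0 = np.zeros(n)                         # sparse ground truth
x0[[3, 17]] = [2.0, -1.5]
b = A @ x0
x_hat = basis_pursuit(A, b)              # feasible, with minimal l1 norm
```

The split `x = u - v` with `u, v >= 0` is the textbook reformulation of $\ell_1$ minimization as an LP; since the binary matrix is sparse and integer-valued, the solver's constraint matrix stays cheap to store and factor, which is the source of the speed advantage reported in the abstract.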
Expanded mixed multiscale finite element methods and their applications for flows in porous media
We develop a family of expanded mixed Multiscale Finite Element Methods
(MsFEMs) and their hybridizations for second-order elliptic equations. This
formulation expands the standard mixed Multiscale Finite Element formulation in
the sense that four unknowns (hybrid formulation) are solved simultaneously:
pressure, gradient of pressure, velocity and Lagrange multipliers. We use
multiscale basis functions for both the velocity and the gradient of pressure. In
the expanded mixed MsFEM framework, we consider both cases of separable-scale
and non-separable spatial scales. We specifically analyze the methods in three
categories: periodic separable scales, $G$-convergence separable scales, and
continuum scales. When there is no scale separation, using some global
information can improve accuracy for the expanded mixed MsFEMs. We present
rigorous convergence analysis for expanded mixed MsFEMs. The analysis includes
both conforming and nonconforming expanded mixed MsFEMs. Numerical results are
presented for various multiscale models and flows in porous media with shales
to illustrate the efficiency of the expanded mixed MsFEMs.
Comment: 33 pages
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or
implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k))
floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
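The two-stage framework described above (a randomized range finder followed by a deterministic factorization of the reduced matrix) can be sketched in a few lines; the function name and the oversampling default below are our own illustrative choices:

```python
import numpy as np

def randomized_svd(A, k, p=10, rng=None):
    """HMT-style sketch: randomized range finder, then an exact SVD of the
    small reduced matrix. k = target rank, p = oversampling parameter."""
    rng = np.random.default_rng(rng)
    # Stage A: sample the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((A.shape[1], k + p))
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis, A ~ Q Q^T A
    # Stage B: compress to the subspace, factor deterministically.
    B = Q.T @ A                          # small (k+p) x n reduced matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Usage: a 200 x 100 matrix of exact rank 5 is captured by the sampled
# subspace, so the truncated factorization reproduces it almost exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, k=5)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

Note that Stage A touches the input matrix only through the product `A @ Omega`, which is what makes the pass-efficient and parallel variants discussed in the abstract possible.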