
    Localized bases for finite dimensional homogenization approximations with non-separated scales and high-contrast

    We construct finite-dimensional approximations of solution spaces of divergence-form operators with L^\infty-coefficients. Our method relies not on concepts of ergodicity or scale separation, but on the property that the solution space of these operators is compactly embedded in H^1 if source terms are in the unit ball of L^2 instead of the unit ball of H^{-1}. Approximation spaces are generated by solving elliptic PDEs on localized sub-domains with source terms corresponding to approximation bases for H^2. The H^1-error estimates show that \mathcal{O}(h^{-d})-dimensional spaces with basis elements localized to sub-domains of diameter \mathcal{O}(h^\alpha \ln \frac{1}{h}) (with \alpha \in [1/2,1)) achieve \mathcal{O}(h^{2-2\alpha}) accuracy for elliptic, parabolic and hyperbolic problems. For high-contrast media, the accuracy of the method is preserved provided that localized sub-domains contain buffer zones of width \mathcal{O}(h^\alpha \ln \frac{1}{h}) in which the contrast of the medium remains bounded. The proposed method generalizes naturally to vectorial equations (such as elasto-dynamics).
    Comment: Accepted for publication in SIAM MM
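    To fix ideas, the headline estimate can be restated schematically. The abstract states only the rates; the constant C and the exact pairing of an H^1 error with an L^2 source term g are assumptions here:

```latex
% Schematic form of the claimed bound: an O(h^{-d})-dimensional space,
% with basis elements supported on sub-domains of diameter
% O(h^\alpha \ln(1/h)), yields (assuming the natural norm pairing)
\| u - u_h \|_{H^1} \;\le\; C \, h^{2-2\alpha} \, \| g \|_{L^2},
\qquad \alpha \in [1/2, 1).
```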

    Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions

    In this paper, we study the problem of compressed sensing using binary measurement matrices and \ell_1-norm minimization (basis pursuit) as the recovery algorithm. We derive new upper and lower bounds on the number of measurements needed to achieve robust sparse recovery with binary matrices. We establish sufficient conditions for a column-regular binary matrix to satisfy the robust null space property (RNSP) and show that the associated sufficient conditions for robust sparse recovery obtained using the RNSP are better by a factor of (3\sqrt{3})/2 \approx 2.6 than the sufficient conditions obtained using the restricted isometry property (RIP). Next, we derive universal lower bounds on the number of measurements that any binary matrix needs in order to satisfy the weaker sufficient condition based on the RNSP, and show that bipartite graphs of girth six are optimal. We then display two classes of binary matrices, namely parity check matrices of array codes and Euler squares, which have girth six and are nearly optimal in the sense of almost satisfying the lower bound. Randomly generated Gaussian measurement matrices are, in principle, "order-optimal", so we compare the phase-transition behavior of basis pursuit using binary array-code matrices and Gaussian matrices and show that (i) there is essentially no difference between the phase-transition boundaries in the two cases, and (ii) basis pursuit with binary matrices is hundreds of times faster in CPU time than with Gaussian matrices and has smaller storage requirements. We therefore suggest that binary matrices are a viable alternative to Gaussian matrices for compressed sensing using basis pursuit.
    Comment: 28 pages, 3 figures, 5 tables
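    Since the recovery algorithm throughout is plain basis pursuit, a small self-contained sketch is easy to give. This uses an ad hoc random sparse 0/1 matrix rather than the paper's array-code or Euler-square constructions, and all sizes are illustrative:

```python
# Basis pursuit with a binary measurement matrix: min ||x||_1 s.t. Ax = y,
# cast as a linear program via the split x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                          # measurements, ambient dim, sparsity
A = (rng.random((m, n)) < 0.1).astype(float)  # ad hoc sparse binary matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

c = np.ones(2 * n)                            # objective: sum(u) + sum(v) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

    A structured girth-six matrix would drop into the same LP unchanged; the speed and storage advantages over Gaussian matrices reported in the abstract presumably stem from A being sparse and binary.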

    Expanded mixed multiscale finite element methods and their applications for flows in porous media

    We develop a family of expanded mixed Multiscale Finite Element Methods (MsFEMs) and their hybridizations for second-order elliptic equations. This formulation expands the standard mixed Multiscale Finite Element formulation in the sense that four unknowns are solved simultaneously (in the hybrid formulation): pressure, gradient of pressure, velocity, and Lagrange multipliers. We use multiscale basis functions for both the velocity and the gradient of pressure. In the expanded mixed MsFEM framework, we consider both separable and non-separable spatial scales. We specifically analyze the methods in three categories: periodic separable scales, G-convergence separable scales, and continuum scales. When there is no scale separation, using some global information can improve the accuracy of the expanded mixed MsFEMs. We present rigorous convergence analysis for expanded mixed MsFEMs; the analysis covers both conforming and nonconforming expanded mixed MsFEMs. Numerical results are presented for various multiscale models and flows in porous media with shales to illustrate the efficiency of the expanded mixed MsFEMs.
    Comment: 33 pages
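    For orientation, here is a minimal sketch of what "expanded mixed" means for the model Darcy problem -\nabla\cdot(k\nabla p) = f. This is the generic expanded mixed formulation; the multiscale basis construction and the precise discrete spaces are the paper's contribution:

```latex
% Expanded mixed form: the pressure gradient and the velocity are kept
% as separate unknowns, so the coefficient k need not be inverted:
\tilde{\mathbf{u}} = -\nabla p, \qquad
\mathbf{u} = k \, \tilde{\mathbf{u}}, \qquad
\nabla \cdot \mathbf{u} = f .
% Hybridization adds Lagrange multipliers on element interfaces to
% enforce normal-flux continuity, giving the four unknowns
% (p, \tilde{u}, u, \lambda) listed in the abstract.
```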

    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed (either explicitly or implicitly) to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
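    The two-stage scheme described above (a randomized range finder, then a deterministic factorization of the compressed matrix) fits in a few lines. A minimal sketch; the function name and the oversampling parameter p are illustrative:

```python
# Prototype randomized SVD: sample the range of A with a Gaussian test
# matrix, orthonormalize the sample, then do a small deterministic SVD.
import numpy as np

def randomized_svd(A, k, p=10, seed=0):
    """Approximate rank-k SVD of A with oversampling parameter p."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))
    Y = A @ Omega                      # stage 1: sample the range of A
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the sample
    B = Q.T @ A                        # stage 2: compress A to the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

U, s, Vt = randomized_svd(np.random.default_rng(1).standard_normal((500, 300)), k=20)
```

    When the singular spectrum decays slowly, a few power iterations (replacing Y with A @ (A.T @ Y) before the QR step) sharpen the captured subspace, one of the variants the survey analyzes.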
