2,283 research outputs found

    A Multi-level Correction Scheme for Eigenvalue Problems

    In this paper, a new type of multi-level correction scheme is proposed for solving eigenvalue problems by the finite element method. With this scheme, the accuracy of the eigenpair approximations improves after each correction step, which requires only the solution of a source problem on a finer finite element space and of an eigenvalue problem on the coarsest finite element space. The correction scheme thus improves the efficiency of solving eigenvalue problems by the finite element method. (Comment: 16 pages, 5 figures)
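    A minimal sketch of one such correction step, assuming the two-space structure the abstract describes: a coarse eigenpair is improved by one linear solve on the fine space followed by a small eigenproblem on the coarse space enriched with the new vector. The matrix names (A_h, M_h), the dense prolongation array P, and the SciPy-based solves are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.sparse.linalg import spsolve
        from scipy.linalg import eigh

        def correction_step(A_h, M_h, P, lam, u):
            # Step 1: source problem on the fine space,  A_h u_tilde = lam * M_h u
            u_tilde = spsolve(A_h, lam * (M_h @ u))
            # Step 2: small eigenproblem on V_H + span{u_tilde}, assembled on the fine space
            # (P maps coarse-space basis vectors to the fine space, given here as a dense array)
            B = np.column_stack([P, u_tilde])
            A_s = B.T @ (A_h @ B)
            M_s = B.T @ (M_h @ B)
            w, V = eigh(A_s, M_s)          # small dense generalized eigenproblem
            lam_new, u_new = w[0], B @ V[:, 0]
            return lam_new, u_new / np.sqrt(u_new @ (M_h @ u_new))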

    On the Role of Hadamard Gates in Quantum Circuits

    We study a reduced quantum circuit computation paradigm in which the only allowable gates either permute the computational basis states or apply a "global Hadamard operation", i.e. a Hadamard operation on every qubit simultaneously. In this model, we discuss complexity bounds (lower-bounding the number of global Hadamard operations) for common quantum algorithms: we illustrate upper bounds for Shor's Algorithm and prove lower bounds for Grover's Algorithm. We also use our formalism to exhibit a gate that is neither quantum-universal nor classically simulable, on the assumption that Integer Factoring is not in BPP. (Comment: 16 pages, last section clarified, typos corrected, references added, minor rewording)
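    A small statevector sketch of the model's two primitive operations, a global Hadamard and a permutation of the computational basis states; the NumPy representation is an illustrative assumption, not the paper's formalism.

        import numpy as np

        def global_hadamard(state, n):
            # Apply H to every qubit at once by contracting each tensor axis with H
            H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
            psi = state.reshape([2] * n)
            for q in range(n):
                psi = np.moveaxis(np.tensordot(H, psi, axes=([1], [q])), 0, q)
            return psi.reshape(-1)

        def permute_basis(state, perm):
            # Send the amplitude of basis state i to basis state perm[i] (a reversible classical gate)
            out = np.zeros_like(state)
            out[perm] = state
            return out

        # Example: a global Hadamard on |0...0> yields the uniform superposition
        n = 3
        psi0 = np.zeros(2 ** n); psi0[0] = 1.0
        print(global_hadamard(psi0, n))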

    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
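    The proto-algorithm the abstract describes (random sampling to capture the range of the matrix, then a deterministic factorization of the compressed matrix) fits in a few lines; the oversampling parameter p and the optional power iterations q below are conventional choices, not values prescribed by the paper.

        import numpy as np

        def randomized_svd(A, k, p=10, q=0):
            m, n = A.shape
            # Stage A: sample the range of A with a Gaussian test matrix and orthonormalize
            Y = A @ np.random.standard_normal((n, k + p))
            for _ in range(q):              # optional power iterations for slowly decaying spectra
                Y = A @ (A.T @ Y)
            Q, _ = np.linalg.qr(Y)
            # Stage B: compress A to the subspace and factor the small matrix deterministically
            B = Q.T @ A                     # (k + p) x n
            Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
            return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]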

    Hybridization and Postprocessing Techniques for Mixed Eigenfunctions

    We introduce hybridization and postprocessing techniques for the Raviart-Thomas approximation of second-order elliptic eigenvalue problems. Hybridization reduces the Raviart-Thomas approximation to a condensed eigenproblem. The condensed eigenproblem is nonlinear, but smaller than the original mixed approximation. We derive several iterative algorithms for solving the condensed eigenproblem and examine their interrelationships and convergence rates. An element-by-element postprocessing technique to improve the accuracy of computed eigenfunctions is also presented. We prove that a projection of the error in the eigenspace approximation by the mixed method (of any order) superconverges and that the postprocessed eigenfunction approximations converge faster for smooth eigenfunctions. Numerical experiments using a square and an L-shaped domain illustrate the theoretical results.
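    One generic way to iterate on a nonlinear (eigenvalue-dependent) condensed problem is a fixed-point loop that repeatedly solves a linearized eigenproblem. The sketch below assumes a callable S(lam) returning the symmetric condensed matrix and uses the simplest additive update; it is only a stand-in for the specific algorithms derived in the paper.

        import numpy as np

        def condensed_fixed_point(S, lam0, tol=1e-10, maxit=50):
            # S(lam): symmetric condensed matrix; at a true eigenvalue lam its
            # smallest-magnitude eigenvalue should vanish.
            lam, x = lam0, None
            for _ in range(maxit):
                w, V = np.linalg.eigh(S(lam))
                i = int(np.argmin(np.abs(w)))
                mu, x = w[i], V[:, i]
                if abs(mu) < tol:
                    break
                lam = lam + mu             # simplest correction; real schemes differ
            return lam, x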

    An uncued brain-computer interface using reservoir computing

    Brain-Computer Interfaces are an important and promising avenue for possible next-generation assistive devices. In this article, we show how Reservoir Computing, a computationally efficient way of training recurrent neural networks, combined with a novel feature selection algorithm based on Common Spatial Patterns, can be used to drastically improve performance in an uncued motor-imagery-based Brain-Computer Interface (BCI). The objective of this BCI is to label each sample of EEG data as either motor imagery class 1 (e.g. left hand), motor imagery class 2 (e.g. right hand), or a rest state (i.e., no motor imagery). When comparing the results of the proposed method with the results from the BCI Competition IV (where this dataset was introduced), it turns out that the proposed method outperforms the winner of the competition.
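    A minimal echo state network of the kind "Reservoir Computing" refers to: a fixed random recurrent layer driven by the input, with only a ridge-regression readout trained. The reservoir size, spectral radius, leak rate, and regularization below are illustrative guesses, and the CSP-based feature selection step is not reproduced here.

        import numpy as np

        def train_esn_readout(X, Y, n_res=200, rho=0.9, leak=0.3, ridge=1e-4, seed=0):
            # X: (T, n_in) feature sequence; Y: (T, n_out) one-hot labels per EEG sample
            rng = np.random.default_rng(seed)
            W_in = rng.uniform(-0.5, 0.5, (n_res, X.shape[1]))
            W = rng.standard_normal((n_res, n_res))
            W *= rho / np.max(np.abs(np.linalg.eigvals(W)))    # rescale to the target spectral radius
            H, h = np.zeros((len(X), n_res)), np.zeros(n_res)
            for t, x in enumerate(X):                          # leaky-integrator reservoir update
                h = (1 - leak) * h + leak * np.tanh(W_in @ x + W @ h)
                H[t] = h
            # Ridge-regression readout: predict the class of every sample from the reservoir state
            W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ Y).T
            return W_in, W, W_out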

    A Zienkiewicz-type finite element applied to fourth-order problems

    This paper deals with the convergence analysis and applications of a Zienkiewicz-type (Z-type) triangular element applied to fourth-order partial differential equations. For the biharmonic problem we prove the order of convergence by comparison to a suitable modified Hermite triangular finite element. This method is more natural and can be applied to the corresponding fourth-order eigenvalue problem. We also propose a simple postprocessing method which improves the order of convergence of finite element eigenpairs. Thus, an a posteriori analysis is presented by means of different triangular elements. Some computational aspects are discussed and numerical examples are given.
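    A common form of such eigenpair postprocessing, shown here only as a generic illustration and not as the specific procedure of the paper, is to recompute the eigenvalue as the Rayleigh quotient of an improved (for example, locally higher-order interpolated) eigenfunction.

        import numpy as np

        def rayleigh_quotient_update(K, M, u_post):
            # K, M: stiffness and mass matrices assembled on the richer (postprocessing) space;
            # u_post: coefficient vector of the postprocessed eigenfunction
            return (u_post @ (K @ u_post)) / (u_post @ (M @ u_post))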