    Closed orbit correction at synchrotrons for symmetric and near-symmetric lattices

    This contribution compiles the benefits of lattice symmetry in the context of closed orbit correction. A symmetric arrangement of BPMs and correctors results in structured orbit response matrices of circulant or block-circulant type. These forms of matrices provide favorable properties in terms of computational complexity, information compression, and interpretation of the mathematical vector spaces of BPMs and correctors. For broken symmetries, a nearest-circulant approximation is introduced, and the practical advantages of symmetry exploitation are demonstrated with the help of simulations and experiments in the context of the FAIR synchrotrons.
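    The computational advantage of a circulant orbit response matrix comes from the fact that every circulant matrix is diagonalized by the discrete Fourier transform, so a correction can be computed in O(n log n) instead of O(n³). A minimal sketch (the matrix, distortion vector, and sizes here are illustrative stand-ins, not FAIR lattice data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    c = rng.normal(size=n)  # first column of an (illustrative) circulant orbit response matrix

    # Build the full circulant matrix C[i, j] = c[(i - j) mod n]
    i, j = np.indices((n, n))
    C = c[(i - j) % n]

    d = rng.normal(size=n)  # measured orbit distortion at the BPMs (synthetic)

    # Dense solve: O(n^3)
    theta_dense = np.linalg.solve(C, d)

    # Circulant solve via FFT diagonalization: O(n log n).
    # The eigenvalues of C are fft(c), so C^{-1} d = ifft(fft(d) / fft(c)).
    theta_fft = np.fft.ifft(np.fft.fft(d) / np.fft.fft(c)).real

    assert np.allclose(theta_dense, theta_fft)
    ```

    For block-circulant matrices the same idea applies blockwise: the FFT decouples the system into small independent blocks, one per spatial harmonic.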

    A System for Compressive Sensing Signal Reconstruction

    An architecture for the hardware realization of a system for sparse signal reconstruction is presented. The threshold-based reconstruction method is considered, which is further modified in this paper to reduce system complexity and allow an easier hardware realization. Instead of using the partial random Fourier transform matrix, the minimization problem is reformulated using only the triangular R matrix from the QR decomposition. The triangular R matrix can be efficiently implemented in hardware without calculating the orthogonal Q matrix. A flexible and scalable realization of the matrix R is proposed, such that the size of R changes with the number of available samples and the sparsity level.
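    The reason Q can be dropped rests on a standard identity: for A = QR, the normal equations AᵀA x = Aᵀb become RᵀR x = Aᵀb, which is solvable with two triangular solves. A minimal sketch of that identity (the matrix A and sizes are illustrative, not the paper's partial Fourier measurement matrix):

    ```python
    import numpy as np
    from scipy.linalg import qr, solve_triangular

    rng = np.random.default_rng(1)
    m, k = 32, 6
    A = rng.normal(size=(m, k))   # stand-in measurement matrix (illustrative)
    x_true = rng.normal(size=k)
    b = A @ x_true                # consistent overdetermined system

    # Standard least squares via QR needs Q explicitly:
    Q, R = qr(A, mode='economic')
    x_qr = solve_triangular(R, Q.T @ b)

    # Q-free variant: A^T A = R^T R, so solve R^T R x = A^T b
    # with two triangular solves -- no Q required.
    y = solve_triangular(R, A.T @ b, trans='T')  # R^T y = A^T b
    x_nq = solve_triangular(R, y)                # R x = y

    assert np.allclose(x_qr, x_nq)
    ```

    In hardware, only R and the matrix-vector product Aᵀb need to be stored and updated, which is what makes the scalable realization of R the central component.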

    Convolutional Dictionary Learning through Tensor Factorization

    Tensor methods have emerged as a powerful paradigm for the consistent learning of many latent variable models, such as topic models, independent component analysis, and dictionary learning. Model parameters are estimated via CP decomposition of the observed higher-order input moments. However, in many domains additional invariances exist, such as shift invariance, enforced via models such as convolutional dictionary learning. In this paper, we develop novel tensor decomposition algorithms for parameter estimation of convolutional models. Our algorithm is based on the popular alternating least squares method, but with efficient projections onto the space of stacked circulant matrices. Our method is embarrassingly parallel and consists of simple operations such as fast Fourier transforms and matrix multiplications. Our algorithm converges to the dictionary much faster and more accurately than alternating minimization over filters and activation maps.
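    The key primitive in such an ALS scheme is the projection of an unconstrained least-squares iterate onto the circulant matrices. In the Frobenius norm, this projection is simply an average along the wrapped diagonals (equivalently, a diagonal averaging in the Fourier domain). A minimal sketch of that projection, with `nearest_circulant` as a hypothetical helper name:

    ```python
    import numpy as np

    def nearest_circulant(M):
        """Frobenius-nearest circulant matrix to M: average M along its
        wrapped diagonals, then rebuild C[i, j] = c[(i - j) mod n]."""
        n = M.shape[0]
        i, j = np.indices((n, n))
        c = np.array([M[(i - j) % n == k].mean() for k in range(n)])
        return c[(i - j) % n]

    rng = np.random.default_rng(2)
    n = 5
    i, j = np.indices((n, n))
    C = rng.normal(size=n)[(i - j) % n]  # an exactly circulant matrix

    # A circulant matrix is a fixed point of the projection ...
    assert np.allclose(nearest_circulant(C), C)

    # ... and the projection is idempotent on arbitrary matrices.
    M = rng.normal(size=(n, n))
    P = nearest_circulant(M)
    assert np.allclose(nearest_circulant(P), P)
    ```

    Because each wrapped diagonal (each Fourier coefficient) is handled independently, the projection parallelizes trivially, which is what the abstract's "embarrassingly parallel" claim refers to.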