
    Convolutional Dictionary Learning through Tensor Factorization

    Tensor methods have emerged as a powerful paradigm for consistent learning of many latent variable models, such as topic models, independent component analysis, and dictionary learning. Model parameters are estimated via CP decomposition of the observed higher-order input moments. However, many domains exhibit additional invariances, such as shift invariance, enforced via models such as convolutional dictionary learning. In this paper, we develop novel tensor decomposition algorithms for parameter estimation of convolutional models. Our algorithm is based on the popular alternating least squares method, but with efficient projections onto the space of stacked circulant matrices. Our method is embarrassingly parallel and consists of simple operations such as fast Fourier transforms and matrix multiplications. Our algorithm converges to the dictionary much faster and more accurately than alternating minimization over filters and activation maps.
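    The key primitive in the abstract above is the projection onto circulant matrices. As a hedged illustration (a single-block simplification, not the paper's stacked-circulant ALS), the nearest circulant matrix in Frobenius norm is obtained by averaging each wrapped diagonal; the function name `project_circulant` and the dense construction are assumptions for clarity, whereas an efficient implementation would work in the FFT domain:

    ```python
    import numpy as np

    def project_circulant(M):
        """Project a square matrix onto the nearest circulant matrix
        (Frobenius norm). Entry k of the circulant generator is the
        mean of the k-th wrapped diagonal of M."""
        n = M.shape[0]
        c = np.array([np.mean([M[(i + k) % n, i] for i in range(n)])
                      for k in range(n)])
        # A circulant matrix satisfies C[i, j] = c[(i - j) mod n].
        return np.array([[c[(i - j) % n] for j in range(n)]
                         for i in range(n)])
    ```

    Because circulant matrices are diagonalized by the DFT, the same projection can be carried out with FFTs, which is what makes the method cheap and parallel.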

    Subsampled Blind Deconvolution via Nuclear Norm Minimization

    Many phenomena can be modeled as systems that perform convolution, including negative effects on data such as translation/motion blur. Blind deconvolution (BD) is a process used to reverse the effects of such a system by effectively undoing the convolution. Not only can the signal be recovered, but the impulse response can be as well. "Blind" signifies that there is incomplete knowledge of the impulse response of the LTI system. Solutions exist for performing BD, but they assume the data is fully sampled. In this project we start from an existing method [1] for BD and extend it to the subsampled case. We show that this new formulation works under similar assumptions. Current results are empirical, but current and future work focuses on providing theoretical guarantees for this algorithm.
    No embargo. Academic Major: Electrical and Computer Engineering
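    The abstract does not spell out its formulation, but nuclear-norm-based blind deconvolution typically "lifts" the unknown signal and impulse response into a rank-one matrix, so the subsampled case resembles low-rank recovery from partial observations. The sketch below is a toy stand-in under that assumption, not the method of [1]: proximal-gradient singular value thresholding on a partially observed rank-one matrix, with `svt_complete`, `tau`, and `step` chosen for illustration.

    ```python
    import numpy as np

    def svt_complete(Y, mask, tau=0.2, step=1.0, iters=300):
        """Recover a low-rank matrix from the entries selected by mask
        via proximal gradient on the nuclear norm (soft-impute style)."""
        X = np.zeros_like(Y)
        for _ in range(iters):
            # Gradient step on the observed entries only.
            G = X + step * mask * (Y - X)
            # Proximal step: soft-threshold the singular values.
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            X = U @ np.diag(np.maximum(s - step * tau, 0.0)) @ Vt
        return X
    ```

    The nuclear norm plays the role that the l1 norm plays for sparse vectors: it is the convex surrogate for rank, which is what makes the lifted problem tractable.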

    Convexity in source separation: Models, geometry, and algorithms

    Source separation, or demixing, is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve them efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work.
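    One concrete demixing instance named above, background subtraction, is commonly posed as separating a low-rank component from a sparse one. A minimal heuristic sketch under that assumption (alternating singular-value and entrywise soft thresholding; the function names and the parameter `lam` are illustrative, not from the article):

    ```python
    import numpy as np

    def soft(x, t):
        """Entrywise soft thresholding, the proximal operator of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def demix_sparse_lowrank(Y, lam=0.5, iters=100):
        """Split Y into a low-rank part L plus a sparse part S by
        alternately shrinking singular values and matrix entries."""
        L = np.zeros_like(Y)
        S = np.zeros_like(Y)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
            L = U @ np.diag(soft(s, lam)) @ Vt   # nuclear-norm shrinkage
            S = soft(Y - L, lam)                  # l1 shrinkage
        return L, S
    ```

    The two penalties encode the geometric idea in the article: each norm is chosen so that the components' structures (low-rank vs. sparse) are "incoherent" enough for a convex program to pull them apart.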