
    Asymptotic Analysis of Inpainting via Universal Shearlet Systems

    Recently introduced inpainting algorithms that combine applied harmonic analysis and compressed sensing have turned out to be very successful. One key ingredient is a carefully chosen representation system which provides (optimally) sparse approximations of the original image. Due to the common assumption that images are typically governed by anisotropic features, directional representation systems have often been utilized. One prominent example of this class are shearlets, which have the additional benefit of allowing faithful implementations. Numerical results show that shearlets significantly outperform wavelets in inpainting tasks. One such software package, www.shearlab.org, even offers the flexibility of using a different parameter for each scale, which is not yet covered by shearlet theory. In this paper, we first introduce universal shearlet systems, which are associated with an arbitrary scaling sequence, thereby modeling the previously mentioned flexibility. In addition, this novel construction allows for a smooth transition between wavelets and shearlets and therefore enables us to analyze them in a uniform fashion. For a large class of such scaling sequences, we first prove that the associated universal shearlet systems form band-limited Parseval frames for L²(ℝ²) consisting of Schwartz functions. Secondly, we analyze the inpainting performance of this class of universal shearlet systems within a distributional model situation, using an ℓ¹-analysis minimization algorithm for reconstruction. Our main result in this part states that, provided the scaling sequence is comparable to the size of the (scale-dependent) gap, nearly perfect inpainting is achieved at sufficiently fine scales.
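
    To make the reconstruction step concrete, here is a minimal sketch of ℓ¹-analysis iterative thresholding for inpainting. It substitutes an orthonormal 2-D DCT (a Parseval frame) for an actual shearlet system such as ShearLab, and the function name, threshold schedule, and iteration count are illustrative assumptions, not the paper's algorithm.

        import numpy as np
        from scipy.fft import dctn, idctn  # orthonormal 2-D DCT as a stand-in Parseval frame

        def inpaint_l1_analysis(y, mask, n_iter=200, tau0=0.5):
            """Iterative thresholding for inpainting: re-impose the observed
            pixels, then shrink the analysis coefficients of the estimate.
            `mask` is True on observed pixels; `y` holds the masked image."""
            x = np.where(mask, y, 0.0)
            for k in range(n_iter):
                tau = tau0 * (1.0 - k / n_iter)     # decreasing threshold schedule (assumed)
                x = np.where(mask, y, x)            # data-consistency step
                c = dctn(x, norm='ortho')           # analysis step
                c = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)  # soft threshold
                x = idctn(c, norm='ortho')          # synthesis step
            return np.where(mask, y, x)

    In the paper's setting, the DCT would be replaced by the analysis operator of a (universal) shearlet system.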

    Zero-bias autoencoders and the benefits of co-adapting features

    Regularized training of an autoencoder typically results in hidden-unit biases that take on large negative values. We show that negative biases are a natural result of using a hidden layer whose responsibility is both to represent the input data and to act as a selection mechanism that ensures sparsity of the representation. We then show that negative biases impede the learning of data distributions whose intrinsic dimensionality is high. We also propose a new activation function that decouples the two roles of the hidden layer and that allows us to learn representations on data with very high intrinsic dimensionality, where standard autoencoders typically fail. Since the decoupled activation function acts like an implicit regularizer, the model can be trained by minimizing the reconstruction error on training data, without requiring any additional regularization.
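
    As a rough illustration of such a decoupling, below is a minimal numpy sketch of a tied-weight, zero-bias autoencoder with a thresholded-linear activation: a threshold decides which units are selected, but the value passed through is the raw linear response rather than a bias-shifted one. The threshold value, learning rate, and the treatment of the indicator's gradient (zero almost everywhere) are assumptions for illustration, not the paper's exact formulation.

        import numpy as np

        def tlin(z, theta=1.0):
            """Thresholded-linear activation: selection (|z| > theta) is
            decoupled from the value that is passed through (z itself)."""
            return z * (np.abs(z) > theta)

        class ZeroBiasAE:
            """Tied-weight autoencoder with no hidden biases, trained by
            plain reconstruction error; the activation supplies the
            implicit regularization."""
            def __init__(self, n_in, n_hid, seed=0):
                rng = np.random.default_rng(seed)
                self.W = rng.normal(0.0, 0.01, size=(n_in, n_hid))

            def train_step(self, X, lr=0.01, theta=1.0):
                Z = X @ self.W                     # pre-activations (no bias term)
                mask = np.abs(Z) > theta           # selection of active units
                H = Z * mask                       # thresholded-linear code
                E = H @ self.W.T - X               # reconstruction error (tied weights)
                # gradient of 0.5*||H W^T - X||^2 w.r.t. W; the indicator's
                # gradient is treated as zero almost everywhere
                G = X.T @ ((E @ self.W) * mask) + E.T @ H
                self.W -= lr * G / len(X)
                return 0.5 * np.mean(np.sum(E**2, axis=1))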

    Tight-frame-like Sparse Recovery Using Non-tight Sensing Matrices

    The choice of the sensing matrix is crucial in compressed sensing (CS). Gaussian sensing matrices possess the desirable restricted isometry property (RIP), which is key to providing performance guarantees on sparse recovery. Further, sensing matrices that constitute a Parseval tight frame result in minimum mean-squared-error (MSE) reconstruction given oracle knowledge of the support of the sparse vector. However, if the sensing matrix is not tight, could one achieve the reconstruction performance assured by a tight frame by suitably designing the reconstruction strategy? This is the key question that we address in this paper. We develop a novel formulation that relies on a generalized ℓ2-norm-based data-fidelity loss that tightens the sensing matrix, along with the standard ℓ1 penalty for enforcing sparsity. The optimization is performed using the proximal gradient method, resulting in the tight-frame iterative shrinkage-thresholding algorithm (TF-ISTA). We show that the objective convergence of TF-ISTA is linear, akin to that of ISTA. Incorporating Nesterov's momentum into TF-ISTA results in a faster variant, namely TF-FISTA, whose objective convergence is quadratic, akin to that of FISTA. We provide performance guarantees on the ℓ2-error for the proposed formulation. Experimental results show that the proposed algorithms offer superior sparse recovery performance and faster convergence. Proceeding further, we develop network variants of TF-ISTA and TF-FISTA, wherein a convolutional neural network is used as the sparsifying operator. On the application front, we consider compressed sensing image recovery (CSIR). Experimental results on the Set11, BSD68, Urban100, and DIV2K datasets show that the proposed models outperform state-of-the-art sparse recovery methods, with performance measured in terms of peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM).
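
    To sketch the idea, one plausible way to "tighten" a non-tight sensing matrix A inside the data-fidelity term is to weight the quadratic loss by W = (AAᵀ)⁻¹, since W^(1/2)A is then a Parseval tight frame. The sketch below runs plain ISTA on this assumed weighted objective; it is one reading of the abstract, not the paper's exact TF-ISTA, and all names and parameters are illustrative.

        import numpy as np

        def tf_ista(A, y, lam=0.1, n_iter=500):
            """ISTA on 0.5*(Ax - y)^T W (Ax - y) + lam*||x||_1 with
            W = (A A^T)^{-1}, so W^{1/2} A is a Parseval tight frame.
            Sketch under that assumed formulation."""
            W = np.linalg.inv(A @ A.T)       # assumes A has full row rank
            B = A.T @ W                      # fidelity gradient factor: B @ (Ax - y)
            L = np.linalg.norm(B @ A, 2)     # Lipschitz constant of that gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = B @ (A @ x - y)          # gradient of the weighted fidelity
                z = x - g / L                # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x

    A TF-FISTA-style variant would add the usual Nesterov extrapolation step between iterations, extrapolating from the previous two iterates before the gradient step.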