
    Analysis and Synthesis Prior Greedy Algorithms for Non-linear Sparse Recovery

    In this work we address the problem of recovering sparse solutions to non-linear inverse problems. We look at two variants of the basic problem: the synthesis prior problem, where the solution is sparse, and the analysis prior problem, where the solution is cosparse in some linear basis. For the first problem, we propose non-linear variants of the Orthogonal Matching Pursuit (OMP) and CoSaMP algorithms; for the second problem, we propose a non-linear variant of the Greedy Analysis Pursuit (GAP) algorithm. We empirically test the success rates of our algorithms on exponential and logarithmic functions. We model speckle denoising as a non-linear sparse recovery problem and apply our technique to solve it. Results show that our method outperforms state-of-the-art methods in ultrasound speckle denoising.
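    For reference, the sketch below shows the standard (linear) OMP baseline that the abstract's non-linear variants build on: greedily grow a support by correlating the columns of A with the residual, then re-fit by least squares. The non-linear extensions replace the linear measurement model; the matrix sizes and sparsity level here are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def omp(A, y, k):
    """Minimal (linear) Orthogonal Matching Pursuit: find a k-sparse x with y ~ A @ x."""
    m, n = A.shape
    support, x = [], np.zeros(n)
    residual = y.copy()
    for _ in range(k):
        # Select the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-estimate the coefficients on the current support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# Usage on a synthetic 3-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
```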

    A combined first and second order variational approach for image reconstruction

    In this paper we study a variational problem in the space of functions of bounded Hessian. Our model constitutes a straightforward higher-order extension of the well-known ROF functional (total variation minimisation), to which we add a non-smooth second-order regulariser. It combines convex functions of the total variation and the total variation of the first derivatives. We prove existence and uniqueness of minimisers of the combined model and present the numerical solution of the corresponding discretised problem using the split Bregman method. We include applications of our model to image denoising, deblurring and image inpainting. The numerical results are compared with those of total generalised variation (TGV), infimal convolution and Euler's elastica, three other state-of-the-art higher-order models. The numerical discussion confirms that the proposed higher-order model competes with models of its kind in avoiding the creation of undesirable artifacts and blocky structures in the reconstructed images -- a known disadvantage of the ROF model -- while being simple and efficiently solvable numerically. Comment: 34 pages, 89 figures.
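    As a point of reference, the sketch below applies the split Bregman method to the plain first-order ROF problem in 1-D, minimising 0.5*||u - f||^2 + lam*||Du||_1. The paper's model adds a non-smooth second-order (bounded-Hessian) term on top of this; the regularisation and penalty parameters below are illustrative assumptions.

```python
import numpy as np

def shrink(v, t):
    # Soft-thresholding (proximal map of the l1 norm).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_split_bregman(f, lam=0.5, mu=5.0, n_iter=100):
    """Split Bregman for 1-D ROF denoising: min_u 0.5*||u-f||^2 + lam*||Du||_1.
    First-order sketch only; the paper's combined model also penalises the Hessian."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator, (n-1) x n
    d = np.zeros(n - 1)                     # auxiliary variable for Du
    b = np.zeros(n - 1)                     # Bregman variable
    u = f.copy()
    A = np.eye(n) + mu * D.T @ D            # system matrix of the u-subproblem
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))
        Du = D @ u
        d = shrink(Du + b, lam / mu)
        b = b + Du - d
    return u

# Usage: denoise a noisy piecewise-constant signal.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
u_hat = tv_denoise_split_bregman(clean + 0.2 * rng.standard_normal(100))
```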

    Dynamic sampling schemes for optimal noise learning under multiple nonsmooth constraints

    We consider the bilevel optimisation approach proposed by De Los Reyes and Schönlieb (2013) for learning the optimal parameters in a Total Variation (TV) denoising model for multiple noise distributions. In applications, the use of databases (dictionaries) allows an accurate estimation of the parameters, but results in high computational costs due to the size of the databases and the nonsmooth nature of the PDE constraints. To overcome this computational barrier we propose an optimisation algorithm that, by sampling dynamically from the set of constraints and using a quasi-Newton method, solves the problem accurately and efficiently.
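    The toy sketch below illustrates only the dynamic-sampling idea: an objective given by an average over a large database is approximated at each outer step by a growing random subsample and minimised with a quasi-Newton method (L-BFGS-B via SciPy). The per-sample cost here is a simple quadratic stand-in; in the paper each evaluation requires solving a nonsmooth TV denoising problem, so all names and the sample-size schedule are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
optimal_thetas = rng.normal(1.0, 0.2, size=1000)   # stand-in "database" of 1000 samples

def per_sample_cost(theta, i):
    # Hypothetical per-sample cost; in the bilevel setting this would involve
    # solving a lower-level denoising problem for sample i.
    return 0.5 * (theta - optimal_thetas[i]) ** 2

def sampled_objective(theta, idx):
    # Average cost over the current dynamic sample only.
    return np.mean([per_sample_cost(theta[0], i) for i in idx])

theta = np.array([0.0])
for sample_size in [10, 50, 200, 1000]:             # grow the sample as iterations proceed
    idx = rng.choice(len(optimal_thetas), size=sample_size, replace=False)
    res = minimize(sampled_objective, theta, args=(idx,), method="L-BFGS-B")
    theta = res.x
print(theta)                                         # approaches the database mean
```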

    Weighted Schatten p-Norm Minimization for Image Denoising and Background Subtraction

    Low rank matrix approximation (LRMA), which aims to recover the underlying low rank matrix from its degraded observation, has a wide range of applications in computer vision. The latest LRMA methods resort to using the nuclear norm minimization (NNM) as a convex relaxation of the nonconvex rank minimization. However, NNM tends to over-shrink the rank components and treats the different rank components equally, limiting its flexibility in practical applications. We propose a more flexible model, namely the Weighted Schatten p-Norm Minimization (WSNM), which generalizes NNM to Schatten p-norm minimization with weights assigned to different singular values. The proposed WSNM not only gives a better approximation to the original low-rank assumption, but also considers the importance of different rank components. We analyze the solution of WSNM and prove that, under certain weight permutations, WSNM can be equivalently transformed into independent non-convex l_p-norm subproblems, whose global optima can be efficiently found by the generalized iterated shrinkage algorithm. We apply WSNM to typical low-level vision problems, e.g., image denoising and background subtraction. Extensive experimental results show, both qualitatively and quantitatively, that the proposed WSNM removes noise and models complex, dynamic scenes more effectively than state-of-the-art methods. Comment: 13 pages, 11 figures.
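    A minimal sketch of the weighted singular-value shrinkage step the abstract describes: take an SVD, shrink each singular value with its own weight by solving a scalar non-convex l_p problem via the fixed-point iteration of the generalized iterated shrinkage algorithm, and rebuild the matrix. The weight rule and the choice p = 0.7 below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def gst(y, w, p, n_iter=10):
    """Generalized soft-thresholding for min_x 0.5*(x-y)^2 + w*|x|^p (0 < p <= 1),
    via the fixed-point iteration of the generalized iterated shrinkage algorithm."""
    tau = (2.0 * w * (1.0 - p)) ** (1.0 / (2.0 - p)) \
          + w * p * (2.0 * w * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if abs(y) <= tau:
        return 0.0                         # below the threshold the minimiser is zero
    x = abs(y)
    for _ in range(n_iter):
        x = abs(y) - w * p * x ** (p - 1.0)
    return float(np.sign(y)) * x

def wsnm_proximal(Y, weights, p):
    """Weighted Schatten p-norm shrinkage: threshold each singular value of Y
    with its own weight, then rebuild the low-rank estimate."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.array([gst(si, wi, p) for si, wi in zip(s, weights)])
    return U @ np.diag(s_shrunk) @ Vt

# Usage: denoise a noisy rank-2 matrix. The weights here are a simple assumption:
# larger singular values get smaller weights, i.e. are penalised less.
rng = np.random.default_rng(0)
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
Y = L + 0.3 * rng.standard_normal((30, 30))
w = 1.0 / (np.linalg.svd(Y, compute_uv=False) + 1e-3)
X_hat = wsnm_proximal(Y, w, p=0.7)
```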

    Multiscale Adaptive Representation of Signals: I. The Basic Framework

    We introduce a framework for designing multi-scale, adaptive, shift-invariant frames and bi-frames for representing signals. The new framework, called AdaFrame, improves over dictionary-learning-based techniques in terms of computational efficiency at inference time, and improves on classical multi-scale bases such as wavelet frames in terms of coding efficiency. It provides an attractive alternative to dictionary-learning-based techniques for low-level signal processing tasks, such as compression and denoising, as well as high-level tasks, such as feature extraction for object recognition. Connections with deep convolutional networks are also discussed. In particular, the proposed framework reveals a drawback in the commonly used approach for visualizing the activations of intermediate layers in convolutional networks, and suggests a natural alternative.
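    The sketch below shows the generic analyse-threshold-synthesise pipeline that multiscale frames of this kind plug into, using an off-the-shelf wavelet frame from PyWavelets rather than the adaptive AdaFrame construction itself; the wavelet, decomposition level and threshold rule are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_frame_denoise(img, sigma, wavelet="db2", level=2):
    """Multiscale denoising by soft-thresholding detail coefficients of a
    fixed wavelet decomposition (not the adaptive AdaFrame frames)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    thr = 3.0 * sigma                      # simple noise-proportional threshold (assumption)
    new_coeffs = [coeffs[0]]               # keep the coarse approximation untouched
    for detail in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
    return pywt.waverec2(new_coeffs, wavelet)

# Usage on a synthetic noisy image.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
denoised = wavelet_frame_denoise(clean + 0.1 * rng.standard_normal((64, 64)), sigma=0.1)
```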