99 research outputs found

    Solving a variational image restoration model which involves L∞ constraints

    In this paper, we seek a solution to linear inverse problems arising in image restoration via a recently proposed optimization problem which combines total-variation minimization and wavelet-thresholding ideas. The resulting nonlinear programming task is solved by a dual Uzawa method in its general form, leading to an efficient and general algorithm which allows for very good structure-preserving reconstructions. Along with a theoretical study of the algorithm, the paper details some aspects of the implementation, discusses the numerical convergence and finally displays a few images obtained for some difficult restoration tasks.
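    The paper's dual Uzawa scheme is not reproduced here, but the flavor of total-variation restoration can be illustrated with a minimal sketch: gradient descent on a smoothed 1-D TV-denoising objective. The smoothing parameter eps, the regularization weight lam and the step size lr are illustrative choices, not values from the paper.

    ```python
    import math

    def tv_objective(u, d, lam, eps=1e-2):
        # 0.5 * ||u - d||^2 + lam * sum of smoothed |u[i+1] - u[i]|
        fid = 0.5 * sum((ui - di) ** 2 for ui, di in zip(u, d))
        tv = lam * sum(math.sqrt((u[i + 1] - u[i]) ** 2 + eps)
                       for i in range(len(u) - 1))
        return fid + tv

    def tv_gradient(u, d, lam, eps=1e-2):
        g = [ui - di for ui, di in zip(u, d)]  # gradient of the data term
        for i in range(len(u) - 1):
            diff = u[i + 1] - u[i]
            w = lam * diff / math.sqrt(diff ** 2 + eps)  # smoothed-TV term
            g[i] -= w
            g[i + 1] += w
        return g

    def tv_denoise(d, lam=0.3, steps=300, lr=0.05):
        # plain gradient descent; lr is small enough for monotone descent here
        u = list(d)
        for _ in range(steps):
            g = tv_gradient(u, d, lam)
            u = [ui - lr * gi for ui, gi in zip(u, g)]
        return u
    ```

    On a noisy step signal this smooths the flat regions while keeping the jump, which is the structure-preserving behavior the abstract refers to.
    
    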

    Estimating the probability law of the codelength as a function of the approximation error in image compression

    After a review of compression through a projection onto a polyhedral set (which generalizes compression by coordinate quantization), we express, in this framework, the probability that an image is coded with K coefficients as an explicit function of the approximation error.

    Non-heuristic reduction of the graph in graph-cut optimization

    During the last ten years, graph cuts have had a growing impact on shape optimization. In particular, they are commonly used in applications of shape optimization such as image processing, computer vision and computer graphics. Their success is due to their ability to efficiently solve (apparently) difficult shape optimization problems which typically involve the perimeter of the shape. Nevertheless, solving problems with a large number of variables remains computationally expensive and requires a high memory usage, since the underlying graphs sometimes involve billions of nodes and even more edges. Several strategies have been proposed in the literature to improve graph cuts in this regard. In this paper, we give a formal statement showing that a simple and local test performed on every node before its construction makes it possible to avoid the construction of useless nodes for the graphs typically encountered in image processing and vision. A useless node is one whose removal from the graph does not change the value of the maximum flow. Such a test therefore permits limiting the construction of the graph to a band of useful nodes surrounding the final cut.

    On the identifiability and stable recovery of deep/multi-layer structured matrix factorization

    We study a deep/multi-layer structured matrix factorization problem. It approximates a given matrix by the product of K matrices (called factors). Each factor is obtained by applying a fixed linear operator to a short vector of parameters (thus the name "structured"). We call the model deep or multi-layer because the number of factors is not limited. In the practical situations we have in mind, we typically have K = 10 or 20. We provide necessary and sufficient conditions for the identifiability of the factors (up to a scale rearrangement). We also provide a sufficient condition that guarantees that the recovery of the factors is stable. A practical example where the deep structured factorization is a convolutional tree is provided in an accompanying paper.
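    The model can be sketched concretely: each factor is a fixed linear map applied to a short parameter vector (here, a linear combination of fixed basis matrices), and the approximant is the product of the K factors. The basis matrices below are illustrative assumptions, not the operators from the paper.

    ```python
    def matmul(A, B):
        # plain dense matrix product on lists of lists
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def structured_factor(theta, basis):
        # factor = sum_j theta[j] * basis[j]: a fixed linear operator
        # applied to the short parameter vector theta
        n, m = len(basis[0]), len(basis[0][0])
        return [[sum(t * Bj[i][j] for t, Bj in zip(theta, basis))
                 for j in range(m)] for i in range(n)]

    def deep_product(thetas, bases):
        # product of K structured factors, one (theta, basis) pair per layer
        factors = [structured_factor(t, b) for t, b in zip(thetas, bases)]
        X = factors[0]
        for F in factors[1:]:
            X = matmul(X, F)
        return X
    ```

    Note the scale ambiguity the abstract mentions: multiplying one factor by c and dividing an adjacent one by c leaves the product unchanged, so identifiability can only hold up to such rescalings.
    
    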

    A Non-Heuristic Reduction Method For Graph Cut Optimization

    Graph cut optimization is now well established for its efficiency but remains limited to the minimization of some Markov Random Fields (MRF) over a small number of variables, due to the large memory required for storing the graphs. An existing strategy to reduce the graph size consists in testing every node and creating only the nodes that satisfy a given local condition. The remaining nodes are typically located in a thin band around the object to segment. However, there is no theoretical guarantee that this strategy constructs a global minimizer of the MRF. In this paper, we propose a local test similar to already existing tests for reducing these graphs. A large part of this paper consists in proving that any node satisfying this new test can be safely removed from the non-reduced graph without modifying its max-flow value. The constructed solution is therefore guaranteed to be a global minimizer of the MRF. Afterwards, we present numerical experiments for segmenting grayscale and color images which confirm this property while globally achieving memory gains similar to the ones obtained with the previously existing local test.
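    The paper's local test itself is not reproduced here, but the property it certifies is easy to state in code: a node is useless when deleting it leaves the max-flow value unchanged. A minimal sketch with a toy Edmonds–Karp max-flow (the graph and capacities below are invented for illustration):

    ```python
    from collections import deque

    def max_flow(cap, s, t):
        # Edmonds-Karp: repeatedly augment along shortest residual paths.
        # cap is a dict-of-dicts of edge capacities; it is copied, not mutated.
        cap = {u: dict(nbrs) for u, nbrs in cap.items()}
        flow = 0
        while True:
            parent = {s: None}
            q = deque([s])
            while q and t not in parent:          # BFS for an augmenting path
                u = q.popleft()
                for v, c in cap.get(u, {}).items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        q.append(v)
            if t not in parent:
                return flow                        # no augmenting path left
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            b = min(cap[u][v] for u, v in path)    # bottleneck capacity
            for u, v in path:
                cap[u][v] -= b                     # push flow forward
                cap.setdefault(v, {}).setdefault(u, 0)
                cap[v][u] += b                     # add residual edge
            flow += b
    ```

    In the toy graph used in the test below, node c feeds an already-saturated edge, so removing it leaves the max flow at 2 (c is useless), whereas removing node b drops it to 1 (b is useful). The paper's contribution is a cheap local test that certifies uselessness without ever building the node.
    
    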

    Average performance of the sparsest approximation in a dictionary

    Given data d ∈ R^N, we consider its representation u* involving the least number of non-zero elements (denoted by ℓ0(u*)) using a dictionary A (represented by a matrix) under the constraint ‖Au − d‖ ≤ τ, for τ > 0 and a norm ‖·‖. This (nonconvex) optimization problem leads to the sparsest approximation of d. We assume that the data d are uniformly distributed in Ξ·B_fd(1), where Ξ > 0 and B_fd(1) is the unit ball for a norm fd. Our main result is to estimate the probability that the data d give rise to a K-sparse solution u*: we prove that P(ℓ0(u*) ≤ K) = C_K (τ/Ξ)^(N−K) + o((τ/Ξ)^(N−K)), where u* is the sparsest approximation of the data d and C_K > 0. The constants C_K are an explicit function of ‖·‖, A, fd and K, which allows us to analyze the role of these parameters for obtaining a sparsest K-sparse approximation. Consequently, given fd and Ξ, we have a tool to build A and ‖·‖ in such a way that C_K (and hence P(ℓ0(u*) ≤ K)) is as large as possible for small K. In order to obtain the above estimate, we give a precise characterization of the set Σ_K^τ of all data leading to a K-sparse result. The main difficulty is to accurately estimate the Lebesgue measure of the sets Σ_K^τ ∩ B_fd(Ξ). We sketch a comparative analysis between our Average Performance in Approximation (APA) methodology and the well-known Nonlinear Approximation (NA), which also assesses the performance in approximation.
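    For tiny dictionaries, the sparsest-approximation problem min ℓ0(u) s.t. ‖Au − d‖₂ ≤ τ can be solved by brute force: enumerate supports of growing size and least-squares fit on each. This is only a sketch of the problem the abstract analyzes (the paper studies its average behavior, not an algorithm), and the Euclidean norm is an illustrative choice.

    ```python
    from itertools import combinations
    import math

    def solve_lin(M, b):
        # Gaussian elimination with partial pivoting; raises on singular systems
        n = len(M)
        M = [row[:] + [bi] for row, bi in zip(M, b)]
        for col in range(n):
            p = max(range(col, n), key=lambda r: abs(M[r][col]))
            if abs(M[p][col]) < 1e-12:
                raise ValueError("singular")
            M[col], M[p] = M[p], M[col]
            for r in range(n):
                if r != col:
                    f = M[r][col] / M[col][col]
                    M[r] = [a - f * b_ for a, b_ in zip(M[r], M[col])]
        return [M[i][n] / M[i][i] for i in range(n)]

    def sparsest_approx(A, d, tau):
        # smallest support S such that the least-squares fit on S has
        # residual norm <= tau; returns {column index: coefficient}
        m, n = len(A), len(A[0])
        if math.sqrt(sum(di * di for di in d)) <= tau:
            return {}
        for k in range(1, n + 1):
            for S in combinations(range(n), k):
                AS = [[A[i][j] for j in S] for i in range(m)]
                G = [[sum(AS[i][a] * AS[i][b] for i in range(m))
                      for b in range(k)] for a in range(k)]
                rhs = [sum(AS[i][a] * d[i] for i in range(m)) for a in range(k)]
                try:
                    x = solve_lin(G, rhs)      # normal equations on support S
                except ValueError:
                    continue                   # dependent columns: skip
                res = [sum(AS[i][a] * x[a] for a in range(k)) - d[i]
                       for i in range(m)]
                if math.sqrt(sum(r * r for r in res)) <= tau:
                    return dict(zip(S, x))
        return None
    ```

    The exhaustive search is exponential in the number of columns, which is precisely why the nonconvex problem is hard and why its average-case analysis is of interest.
    
    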

    Matching Pursuit Shrinkage in Hilbert Spaces

    In this paper, we study a variant of the Matching Pursuit named Matching Pursuit Shrinkage. Like the Matching Pursuit, it seeks an approximation of a datum living in a Hilbert space by a sparse linear expansion over a countable set of atoms. The difference with the usual Matching Pursuit is that, once an atom has been selected, we do not erase all the information along the direction of this atom. Doing so, we can evolve slowly along that direction. The goal is to attenuate the negative impact of bad atom selections. We analyse the link between the shrinkage function used by the algorithm and the fact that the result belongs to an lp space.
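    The idea above can be sketched in a few lines: at each step, select the most correlated atom, but update with a shrunk coefficient rather than the full correlation, so only part of the information along that direction is removed. Soft thresholding is used here as an illustrative shrinkage function; the paper studies general shrinkage functions.

    ```python
    import math

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def soft_shrink(c, t):
        # soft thresholding: keeps only the part of c beyond the threshold t
        return math.copysign(max(abs(c) - t, 0.0), c)

    def mp_shrinkage(d, atoms, t=0.1, steps=50):
        # atoms are assumed unit-norm; returns coefficients and final residual
        r = list(d)
        coeffs = [0.0] * len(atoms)
        for _ in range(steps):
            k = max(range(len(atoms)), key=lambda j: abs(dot(r, atoms[j])))
            c = soft_shrink(dot(r, atoms[k]), t)
            if c == 0.0:
                break                      # all correlations below threshold
            coeffs[k] += c                 # accumulate the shrunk update
            r = [ri - c * gi for ri, gi in zip(r, atoms[k])]
        return coeffs, r
    ```

    With t = 0 this reduces to the usual Matching Pursuit; a positive t makes each step move only partway along the selected atom, attenuating the impact of a bad selection.
    
    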

    A Predual Proximal Point Algorithm solving a Non Negative Basis Pursuit Denoising model

    This paper develops an implementation of a Predual Proximal Point Algorithm (PPPA) solving a Non Negative Basis Pursuit Denoising model. The model imposes a constraint on the l2 norm of the residual, instead of penalizing it. The PPPA solves the predual of the problem with a Proximal Point Algorithm (PPA), and the minimization that needs to be performed at each iteration of the PPA is solved with a dual method. We prove that these dual variables converge to a solution of the initial problem. Our analysis shows that we turn a constrained non-differentiable convex problem into a short sequence of nice concave maximization problems. By nice, we mean that the functions which are maximized are differentiable and their gradient is Lipschitz. The algorithm is easy to implement, easy to tune and more general than the algorithms found in the literature. In particular, it can be applied to the Basis Pursuit Denoising (BPDN) and the Non Negative Basis Pursuit Denoising (NNBPDN) models, and it does not make any assumption on the dictionary. We prove its convergence to the set of solutions of the model and provide some convergence rates. Experiments on image approximation show that the performance of the PPPA is at the current state of the art for the BPDN.
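    The predual construction is specific to the paper, but the Proximal Point Algorithm at its core is generic: iterate x_{k+1} = argmin_x f(x) + (1/2λ)‖x − x_k‖². A minimal sketch on f(x) = |x|, whose proximal operator has the closed form of soft thresholding (the choice of f and of λ are illustrative, not taken from the paper):

    ```python
    def prox_abs(x, lam):
        # proximal operator of lam*|.| : soft thresholding
        if x > lam:
            return x - lam
        if x < -lam:
            return x + lam
        return 0.0

    def proximal_point(x0, lam=0.3, steps=20):
        # PPA iterates: each step solves a well-conditioned subproblem
        xs = [x0]
        for _ in range(steps):
            xs.append(prox_abs(xs[-1], lam))
        return xs
    ```

    The iterates shrink monotonically toward 0, the minimizer of |x|, illustrating how the PPA trades one non-differentiable problem for a sequence of well-behaved subproblems.
    
    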

    Reduced graphs for min-cut/max-flow approaches in image segmentation

    In a few years, the min-cut/max-flow approach has become a leading method for solving a wide range of problems in computer vision. However, min-cut/max-flow approaches involve the construction of huge graphs which sometimes do not fit in memory. Currently, most max-flow algorithms are impractical for such large-scale problems. In this paper, we introduce a new strategy for exactly reducing graphs in the image segmentation context. During the creation of the graph, we test whether each node is really useful to the max-flow computation. Numerical experiments validate the relevance of this technique for segmenting large-scale images.