
    Solving a variational image restoration model which involves L∞ constraints

    In this paper, we seek a solution to linear inverse problems arising in image restoration in terms of a recently posed optimization problem which combines total variation minimization and wavelet-thresholding ideas. The resulting nonlinear programming task is solved via a dual Uzawa method in its general form, leading to an efficient and general algorithm which allows for very good structure-preserving reconstructions. Along with a theoretical study of the algorithm, the paper details some aspects of the implementation, discusses numerical convergence, and finally displays a few images obtained for some difficult restoration tasks.
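    The dual Uzawa mechanism referred to above alternates a primal minimization of the Lagrangian with projected gradient ascent on the dual multipliers. A minimal one-dimensional sketch of that mechanism (an illustrative toy problem, not the paper's TV/wavelet model; the quadratic objective, step size `rho`, and iteration count are all assumptions) enforcing an L∞-type box constraint |u| ≤ tau:

```python
# Dual Uzawa sketch (illustrative): minimize 0.5 * (u - d)^2 subject to
# |u| <= tau.  The constraints u <= tau and -u <= tau receive nonnegative
# multipliers; each step minimizes the Lagrangian in u (closed form here),
# then ascends on the constraint violations, projecting multipliers onto >= 0.

def uzawa_box(d, tau, rho=0.5, iters=200):
    lam_plus = lam_minus = 0.0
    u = d
    for _ in range(iters):
        # primal step: argmin_u 0.5*(u - d)^2 + lam_plus*(u - tau) + lam_minus*(-u - tau)
        u = d - lam_plus + lam_minus
        # dual step: projected gradient ascent on the multipliers
        lam_plus = max(0.0, lam_plus + rho * (u - tau))
        lam_minus = max(0.0, lam_minus + rho * (-u - tau))
    return u

u = uzawa_box(d=3.0, tau=1.0)   # the constrained optimum is u = 1
```

    For this quadratic toy problem the multiplier iteration contracts geometrically whenever 0 < rho < 2, so a few hundred iterations suffice; the paper's interest is precisely that the same dual scheme extends to the nonsmooth TV/wavelet functional.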

    A Discrete Radiosity Method

    We present a completely new principle for computing radiosity values in a 3D scene. The method is based on a voxel approximation of the objects, and all occlusion calculations involve only integer arithmetic operations. The method is proved to converge. Some experimental results are presented.

    Estimating the probability law of the codelength as a function of the approximation error in image compression

    After a review of compression through projection onto a polyhedral set (which generalizes compression by coordinate quantization), we express, in this framework, the probability that an image is coded with K coefficients as an explicit function of the approximation error.

    Non-heuristic reduction of the graph in graph-cut optimization

    During the last ten years, graph cuts have had a growing impact on shape optimization. In particular, they are commonly used in shape-optimization applications such as image processing, computer vision and computer graphics. Their success is due to their ability to efficiently solve (apparently) difficult shape optimization problems which typically involve the perimeter of the shape. Nevertheless, solving problems with a large number of variables remains computationally expensive and requires high memory usage, since the underlying graphs sometimes involve billions of nodes and even more edges. Several strategies have been proposed in the literature to improve graph cuts in this regard. In this paper, we give a formal statement showing that a simple, local test performed on every node before its construction avoids building useless nodes for the graphs typically encountered in image processing and vision. A node is useless if the value of the maximum flow in the graph does not change when the node is removed. Such a test therefore limits the construction of the graph to a band of useful nodes surrounding the final cut.
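    The definition of a useless node can be made concrete on a toy max-flow instance: removing the node must leave the max-flow value unchanged. A self-contained sketch (a hypothetical four-node graph and a plain Edmonds-Karp solver, not the paper's test or graphs):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow; capacity maps directed edges (u, v) to capacities."""
    res = {}                      # residual graph, including reverse edges
    for (u, v), c in capacity.items():
        res.setdefault(u, {})
        res.setdefault(v, {})
        res[u][v] = res[u].get(v, 0) + c
        res[v].setdefault(u, 0)
    total = 0
    while True:
        parent = {s: None}        # BFS for a shortest augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:         # augment along the path
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        total += bottleneck

# Node 'b' is useless here: any flow routed through it must still cross the
# saturated edge a -> t, so deleting b leaves the max-flow value at 4.
cap = {('s', 'a'): 4, ('a', 't'): 4, ('s', 'b'): 2, ('b', 'a'): 5}
cap_without_b = {e: c for e, c in cap.items() if 'b' not in e}
assert max_flow(cap, 's', 't') == max_flow(cap_without_b, 's', 't') == 4
```

    The point of the paper is that such nodes can be detected by a local test before the graph is built, rather than (as here) verified after the fact by solving two max-flow problems.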

    Convolutions on digital surfaces: on the way iterated convolutions behave and preliminary results about curvature estimation

    In [FoureyMalgouyres09] the authors present a generalized convolution operator for functions defined on digital surfaces. We provide here some additional material related to this notion: some results concern the relative isotropy of the way a convolution kernel (or mask) grows when the convolution operator is iterated. We also provide preliminary results on a way to estimate curvatures on a digital surface using the same convolution operator.

    On the identifiability and stable recovery of deep/multi-layer structured matrix factorization

    We study a deep/multi-layer structured matrix factorization problem. It approximates a given matrix by the product of K matrices (called factors). Each factor is obtained by applying a fixed linear operator to a short vector of parameters (hence the name "structured"). We call the model deep or multi-layer because the number of factors is not limited; in the practical situations we have in mind, we typically have K = 10 or 20. We provide necessary and sufficient conditions for the identifiability of the factors (up to a scale rearrangement). We also provide a sufficient condition guaranteeing that the recovery of the factors is stable. A practical example, where the deep structured factorization is a convolutional tree, is provided in an accompanying paper.
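    A small sketch of the "structured factor" idea, using convolution (Toeplitz) matrices as the fixed linear operators; the sizes, the 2-tap parameter vectors, and the choice of convolution operators are assumptions made for illustration (the abstract only mentions a convolutional example). It also exhibits the scale rearrangement under which the factors cannot be identified.

```python
def conv_matrix(p, n):
    """Fixed linear operator: maps a short filter p to an n x n
    lower-triangular convolution (Toeplitz) matrix."""
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j, coef in enumerate(p):
            if i + j < n:
                M[i + j][i] = coef
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# K = 3 factors, each parameterized by a short (2-tap) vector.
params = [[1.0, 0.5], [1.0, -0.25], [2.0, 0.0]]
X = conv_matrix(params[0], 4)
for p in params[1:]:
    X = matmul(X, conv_matrix(p, 4))

# Scale rearrangement: multiplying one parameter vector by c and the next
# by 1/c leaves the product X unchanged, which is why identifiability can
# only hold up to such rescalings.
rescaled = [[3.0, 1.5], [1.0 / 3.0, -0.25 / 3.0], [2.0, 0.0]]
Y = conv_matrix(rescaled[0], 4)
for p in rescaled[1:]:
    Y = matmul(Y, conv_matrix(p, 4))
assert all(abs(X[i][j] - Y[i][j]) < 1e-12 for i in range(4) for j in range(4))
```

    Here the product of convolution matrices is itself the convolution matrix of the composed filter, so the first column of X reads off the composition of the three short filters.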

    Normals estimation for digital surfaces based on convolutions

    In this paper, we present a method that we call on-surface convolution, which extends the classical notion of a 2D digital filter to digital surfaces (following the cuberille model). We also define an averaging mask with local support which, when applied with the iterated convolution operator, behaves like an averaging with large support. The interesting property of the latter averaging is the way the resulting weights are distributed: given a digital surface obtained by discretization of a differentiable surface of R^3, the mask's isocurves are close to the Riemannian isodistance curves from the center of the mask. We finally use the iterated averaging followed by convolutions with differentiation masks to estimate partial derivatives and then normal vectors over the surface. The number of iterations required to achieve a good estimate is determined experimentally on digitized spheres and tori. The precision of the normal estimation is also investigated as a function of the digitization step.
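    The smoothing-then-differentiation pipeline has a simple one-dimensional analogue (illustrative only; the paper's operator acts on surfels of a digital surface, and the sample function, grid step, and iteration count below are assumptions): iterate a local averaging mask over samples of a function, then apply a central-difference differentiation mask to estimate the derivative.

```python
def correlate_same(signal, mask):
    """Same-size correlation with zero padding; mask has odd length."""
    r = len(mask) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(mask):
            k = i + j - r
            if 0 <= k < len(signal):
                acc += w * signal[k]
        out.append(acc)
    return out

h = 0.1
xs = [i * h for i in range(-20, 21)]
samples = [x * x for x in xs]          # sample f(x) = x^2 on a grid of step h

smooth = samples
for _ in range(2):                     # iterated averaging with a local mask
    smooth = correlate_same(smooth, [0.25, 0.5, 0.25])

diff_mask = [-0.5 / h, 0.0, 0.5 / h]   # central-difference differentiation mask
deriv = correlate_same(smooth, diff_mask)

# Away from the boundary, deriv[i] estimates f'(xs[i]) = 2 * xs[i]: here the
# smoothing only shifts x^2 by a constant, which the difference mask cancels.
```

    On a digital surface, two such derivative estimates along the surface give the partial derivatives from which the normal vector is assembled.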

    Topology Preservation Within Digital Surfaces

    Given two connected subsets Y ⊆ X of the set of surfels of a connected digital surface, we propose three equivalent ways to express that Y is homotopic to X. The first characterization is based on the sequential deletion of simple surfels; it enables us to define thinning algorithms within a digital Jordan surface. The second characterization is based on the Euler characteristics of sets of surfels; it enables us, given two connected sets Y ⊆ X of surfels, to decide whether Y is homotopic to X. The third characterization is based on the (digital) fundamental group.

    Average performance of the sparsest approximation in a dictionary

    Given data d ∈ R^N, we consider its representation u* involving the least number of non-zero elements (denoted by ℓ0(u*)) using a dictionary A (represented by a matrix) under the constraint ‖Au − d‖ ≤ τ, for τ > 0 and a norm ‖·‖. This (nonconvex) optimization problem leads to the sparsest approximation of d. We assume that the data d are uniformly distributed in ΞB_fd(1), where Ξ > 0 and B_fd(1) is the unit ball for a norm f_d. Our main result estimates the probability that the data d give rise to a K-sparse solution u*: we prove that P(ℓ0(u*) ≤ K) = C_K (τ/Ξ)^(N−K) + o((τ/Ξ)^(N−K)), where u* is the sparsest approximation of the data d and C_K > 0. The constants C_K are an explicit function of ‖·‖, A, f_d and K, which allows us to analyze the role of these parameters in obtaining a sparsest K-sparse approximation. Consequently, given f_d and Ξ, we have a tool to build A and ‖·‖ in such a way that C_K (and hence P(ℓ0(u*) ≤ K)) is as large as possible for small K. To obtain the above estimate, we give a precise characterization of the set Σ^τ_K of all data leading to a K-sparse result. The main difficulty is to estimate accurately the Lebesgue measure of the sets Σ^τ_K ∩ B_fd(Ξ). We sketch a comparative analysis between our Average Performance in Approximation (APA) methodology and the well-known Nonlinear Approximation (NA), which also assesses performance in approximation.
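    In the special case of an orthonormal dictionary with the Euclidean norm (an assumption made here only so the sketch has a closed form; the paper treats general dictionaries and general norms), the sparsest approximation under ‖Au − d‖ ≤ τ is obtained by keeping the largest-magnitude coefficients of the data until the dropped energy fits under τ:

```python
def sparsest_approx_orthonormal(coeffs, tau):
    """For an orthonormal dictionary, ||A u - d|| equals the l2 norm of the
    dropped coefficients, so the sparsest feasible u keeps coefficients in
    decreasing magnitude until the residual drops below tau."""
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)
    u = [0.0] * len(coeffs)
    residual_sq = sum(c * c for c in coeffs)
    k = 0                              # this is l0(u*), the sparsity of the result
    for i in order:
        if residual_sq <= tau * tau:
            break
        u[i] = coeffs[i]
        residual_sq -= coeffs[i] ** 2
        k += 1
    return u, k

u, k = sparsest_approx_orthonormal([3.0, 0.1, -2.0, 0.05], tau=0.5)
# keeps 3.0 and -2.0; the dropped residual sqrt(0.1^2 + 0.05^2) is below tau
```

    The paper's question is then the probabilistic one: for data drawn uniformly from a scaled norm ball, how likely is it that the k returned above is at most K.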