2,888 research outputs found

    Inpainting of Cyclic Data using First and Second Order Differences

    Full text link
    Cyclic data arise in various image and signal processing applications such as interferometric synthetic aperture radar, electroencephalogram data analysis, and color image restoration in HSV or LCh spaces. In this paper we introduce a variational inpainting model for cyclic data which utilizes our definition of absolute cyclic second order differences. Based on analytical expressions for the proximal mappings of these differences we propose a cyclic proximal point algorithm (CPPA) for minimizing the corresponding functional. We choose appropriate cycles to implement this algorithm in an efficient way. We further introduce a simple strategy to initialize the unknown inpainting region. Numerical results for both synthetic and real-world data demonstrate the performance of our algorithm. Comment: accepted Conference Paper at EMMCVPR'1
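
    The paper's model is built on absolute cyclic second order differences; as a rough illustration of the cyclic proximal point idea on circle-valued signals, the sketch below runs a simplified first-order (TV-like) variant on a 1D phase signal. The function names, initialization, and step-size schedule are our own assumptions, not the authors' implementation.

        import numpy as np

        def wrap(a):
            # Map angle differences to the interval (-pi, pi].
            return (a + np.pi) % (2.0 * np.pi) - np.pi

        def prox_data(x, f, lam):
            # Closed-form proximal map of 0.5 * d(x, f)^2 on the circle.
            return wrap(x + lam / (1.0 + lam) * wrap(f - x))

        def prox_pair(x, y, lam):
            # Proximal map of the absolute cyclic first-order difference |d(x, y)|:
            # move both endpoints toward each other along the shorter arc.
            s = wrap(y - x)
            step = np.sign(s) * np.minimum(lam, np.abs(s) / 2.0)
            return wrap(x + step), wrap(y - step)

        def cppa_inpaint(f, known, alpha=1.0, lam0=1.0, iters=200):
            # Cyclic proximal point iteration: data term on the known pixels,
            # then the difference terms split into even/odd index pairs.
            x = np.where(known, f, 0.0)
            for k in range(1, iters + 1):
                lam = lam0 / k                      # square-summable step sizes
                x[known] = prox_data(x[known], f[known], lam)
                for offset in (0, 1):
                    i = np.arange(offset, len(x) - 1, 2)
                    x[i], x[i + 1] = prox_pair(x[i], x[i + 1], alpha * lam)
            return x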

    Learning generative texture models with extended Fields-of-Experts

    Get PDF
    We evaluate the ability of the popular Fields-of-Experts (FoE) model to capture structure in images. As a test case we focus on modeling synthetic and natural textures. We find that even for modeling single textures, the FoE provides insufficient flexibility to learn good generative models: it performs no better than the much simpler Gaussian FoE. We propose an extended version of the FoE (allowing for bimodal potentials) and demonstrate that this novel formulation, when trained with a better approximation of the likelihood gradient, gives rise to a more powerful generative model of specific visual structure that produces significantly better results for the texture task.
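
    For context, a standard FoE with Student-t experts assigns an image the energy E(x) = sum_i alpha_i sum_c log(1 + 0.5 (J_i^T x_c)^2); the extension discussed above replaces these unimodal potentials with bimodal ones. A minimal sketch of the baseline energy, assuming the filter bank and weights are given, follows.

        import numpy as np
        from scipy.signal import convolve2d

        def foe_energy(image, filters, alphas):
            # Field-of-Experts energy with Student-t potentials:
            # E(x) = sum_i alpha_i * sum_cliques log(1 + 0.5 * (J_i * x)^2).
            energy = 0.0
            for J, alpha in zip(filters, alphas):
                resp = convolve2d(image, J, mode='valid')   # filter response at every clique
                energy += alpha * np.sum(np.log1p(0.5 * resp ** 2))
            return energy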

    Multiple Texture Boltzmann Machines

    Get PDF
    We assess the generative power of the mPoT model of [10] with tiled-convolutional weight sharing as a model for visual textures by specifically training on this task, evaluating model performance on texture synthesis and inpainting tasks using quantitative metrics. We also analyze the relative importance of the mean and covariance parts of the mPoT model by comparing its performance to those of its subcomponents, tiled-convolutional versions of the PoT/FoE and the Gaussian-Bernoulli restricted Boltzmann machine (GB-RBM). Our results suggest that while state-of-the-art or better performance can be achieved using the mPoT, similar performance can be achieved with the mean-only model. We then develop a model for multiple textures based on the GB-RBM, using a shared set of weights but texture-specific hidden unit biases. We show that this multiple-texture model performs comparably to individually trained texture models.
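
    The multiple-texture construction can be sketched with the free energy of a Gaussian-Bernoulli RBM in which the weight matrix is shared and only the hidden biases are switched per texture; the exact parametrization (e.g. the visible noise scale) is our assumption, not taken from the paper.

        import numpy as np

        def gbrbm_free_energy(v, W, b_vis, c_hid, sigma=1.0):
            # Free energy of a Gaussian-Bernoulli RBM (lower = more probable patch v).
            quad = np.sum((v - b_vis) ** 2) / (2.0 * sigma ** 2)
            act = c_hid + (W.T @ v) / sigma
            return quad - np.sum(np.logaddexp(0.0, act))    # softplus over hidden units

        def multi_texture_free_energy(v, W, b_vis, hidden_biases, texture_id):
            # Shared weights W; only the hidden-unit biases depend on the texture class.
            return gbrbm_free_energy(v, W, b_vis, hidden_biases[texture_id])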

    Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity

    Full text link
    A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are comparable to, often significantly better than, and at worst marginally worse than the best published ones, at a lower computational cost. Comment: 30 pages
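
    The piecewise linear estimator can be sketched as follows: for a degraded patch y = A x + n, compute the linear (Wiener) MAP estimate under each Gaussian component and keep the one whose component best explains y. Variable names and the model-selection criterion below are our simplification of the MAP-EM scheme, not the authors' code.

        import numpy as np

        def ple_estimate(y, A, noise_var, means, covs):
            # Piecewise linear estimate of a patch x from y = A x + noise.
            best_x, best_ll = None, -np.inf
            for mu, C in zip(means, covs):
                S = A @ C @ A.T + noise_var * np.eye(len(y))   # covariance of y under this Gaussian
                S_inv = np.linalg.inv(S)
                r = y - A @ mu
                x_hat = mu + C @ A.T @ (S_inv @ r)             # linear (Wiener) MAP estimate
                ll = -0.5 * (r @ S_inv @ r) - 0.5 * np.linalg.slogdet(S)[1]
                if ll > best_ll:                               # keep the best-fitting component
                    best_x, best_ll = x_hat, ll
            return best_x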

    Propagating Confidences through CNNs for Sparse Data Regression

    Full text link
    In most computer vision applications, convolutional neural networks (CNNs) operate on dense image data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open problem with numerous applications in autonomous driving, robotics, and surveillance. To tackle this challenging problem, we introduce an algebraically-constrained convolution layer for CNNs with sparse input and demonstrate its capabilities for the scene depth completion task. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. Furthermore, we propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. Comprehensive experiments are performed on the KITTI depth benchmark and the results clearly demonstrate that the proposed approach achieves superior performance while requiring three times fewer parameters than the state-of-the-art methods. Moreover, our approach produces a continuous pixel-wise confidence map enabling information fusion, state inference, and decision support. Comment: To appear in the British Machine Vision Conference (BMVC 2018)
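
    The confidence-propagation idea can be illustrated with a single normalized-convolution step: values are weighted by their confidences, the response is renormalized by the convolved confidence, and a propagated confidence is passed to the next layer. The learned, non-negativity-constrained filters of the paper are replaced here by a fixed kernel; this is a sketch of the idea, not the authors' layer.

        import numpy as np
        from scipy.signal import convolve2d

        def normalized_conv(x, conf, w, eps=1e-8):
            # One confidence-aware (normalized) convolution step on sparse data x,
            # with a non-negative kernel w and per-pixel confidences conf in [0, 1].
            num = convolve2d(conf * x, w, mode='same')
            den = convolve2d(conf, w, mode='same')
            out = num / (den + eps)            # values filled in where confidence allows
            conf_out = den / np.sum(w)         # propagated confidence for the next layer
            return out, conf_out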

    Sobolev gradients and image interpolation

    Full text link
    We present here a new image inpainting algorithm based on the Sobolev gradient method in conjunction with the Navier-Stokes model. The original model of Bertalmio et al. is reformulated as a variational principle based on the minimization of a well chosen functional by a steepest descent method. This provides an alternative to directly solving a high-order partial differential equation and, consequently, avoids complicated numerical schemes (min-mod limiters or anisotropic diffusion). We theoretically analyze our algorithm in an infinite-dimensional setting using an evolution equation and obtain global existence and uniqueness results as well as the existence of an ω-limit. Using a finite difference implementation, we demonstrate on various examples that the Sobolev gradient flow, due to its smoothing and preconditioning properties, is an effective tool for the image inpainting problem.
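
    The key step can be sketched as preconditioning the plain L2 gradient with (I - λΔ)^(-1), i.e. taking the descent direction in a Sobolev (H^1) metric. The FFT-based solve and periodic boundary conditions below are our own simplification of that idea.

        import numpy as np

        def sobolev_gradient(g, lam=1.0):
            # Precondition an L2 gradient g by (I - lam * Laplacian)^{-1},
            # computed spectrally under periodic boundary conditions.
            ny, nx = g.shape
            ky = 2.0 * np.pi * np.fft.fftfreq(ny)
            kx = 2.0 * np.pi * np.fft.fftfreq(nx)
            KY, KX = np.meshgrid(ky, kx, indexing='ij')
            symbol = 1.0 + lam * (KX ** 2 + KY ** 2)      # Fourier symbol of I - lam * Laplacian
            return np.real(np.fft.ifft2(np.fft.fft2(g) / symbol))

        def sobolev_descent_step(u, l2_grad, step=0.1, lam=1.0):
            # Steepest descent in the Sobolev metric: the smoothed gradient damps
            # high frequencies, which is what stabilizes the inpainting flow.
            return u - step * sobolev_gradient(l2_grad, lam)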