26,525 research outputs found

    Fast Global Minimization of the Active Contour/Snake Model

    The active contour/snake model is one of the most successful variational models in image segmentation. It consists of evolving a contour in images toward the boundaries of objects. Its success is based on strong mathematical properties and efficient numerical schemes based on the level set method. The only drawback of this model is the existence of local minima in the active contour energy, which makes the initial guess critical for obtaining satisfactory results. In this paper, we propose to solve this problem by determining a global minimum of the active contour model. Our approach is based on the unification of image segmentation and image denoising tasks into a single global minimization framework. More precisely, we propose to unify three well-known variational models: the snake model, the Rudin-Osher-Fatemi denoising model, and the Mumford-Shah segmentation model. We establish theorems, with proofs, on the existence of a global minimum of the active contour model. From a numerical point of view, we propose a new practical way to solve the active contour propagation problem toward object boundaries through a dual formulation of the minimization problem. The dual formulation is easy to implement and allows a fast global minimization of the snake energy. It avoids the usual drawback of the level set approach, namely initializing the active contour as a distance function and periodically re-initializing it during the evolution, which is time-consuming. We apply our segmentation algorithms to synthetic and real-world images, such as texture images and medical images, to demonstrate the performance of our model compared with other segmentation models.
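
    As a rough illustration of the kind of dual formulation referred to above, the NumPy sketch below runs Chambolle-style projection iterations for the ROF/total-variation subproblem that underlies such dual schemes. The function names, the step size `tau`, and the weight `lam` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann (zero) boundary on the last row/column."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_dual(f, lam=0.1, tau=0.125, n_iter=200):
    """Chambolle-type dual projection for min_u TV(u) + (1/(2*lam)) * ||u - f||^2.

    tau <= 1/8 guarantees convergence of the fixed-point iteration.
    """
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)   # the smoothed/denoised image u
```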

    When Image Denoising Meets High-Level Vision Tasks: A Deep Learning Approach

    Conventionally, image denoising and high-level vision tasks are handled separately in computer vision. In this paper, we handle the two jointly and explore the mutual influence between them. First, we propose a convolutional neural network for image denoising which achieves state-of-the-art performance. Second, we propose a deep neural network solution that cascades two modules for image denoising and various high-level tasks, respectively, and uses the joint loss for updating only the denoising network via back-propagation. We demonstrate that, on the one hand, the proposed denoiser has the generality to overcome the performance degradation of different high-level vision tasks. On the other hand, with the guidance of high-level vision information, the denoising network can generate more visually appealing results. To the best of our knowledge, this is the first work investigating the benefit of exploiting image semantics simultaneously for image denoising and high-level vision tasks via deep learning. The code is available online at https://github.com/Ding-Liu/DeepDenoising. Comment: the 27th International Joint Conference on Artificial Intelligence (2018).
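
    A minimal PyTorch sketch of the cascading idea described above: a toy denoising CNN feeds a frozen, stand-in high-level module, and a joint reconstruction-plus-task loss updates only the denoiser. The module architectures, the loss weight `alpha`, and the random tensors are placeholders, not the paper's actual networks.

```python
import torch
import torch.nn as nn

# Placeholder stand-ins for the two cascaded modules in the abstract:
# a trainable denoising CNN and a fixed (pre-trained) high-level vision network.
denoiser = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
classifier = nn.Sequential(          # toy high-level module (e.g. classification)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
for p in classifier.parameters():    # keep the high-level module fixed;
    p.requires_grad = False          # only the denoiser receives gradient updates

opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
alpha = 0.1                          # assumed weight balancing the two loss terms

noisy, clean = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
labels = torch.randint(0, 10, (4,))

opt.zero_grad()
denoised = denoiser(noisy)
loss = nn.functional.mse_loss(denoised, clean) \
     + alpha * nn.functional.cross_entropy(classifier(denoised), labels)
loss.backward()                      # the joint loss back-propagates through both modules,
opt.step()                           # but only the denoiser's weights are updated
```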

    Deep Burst Denoising

    Noise is an inherent issue of low-light image capture, one which is exacerbated on mobile devices due to their narrow apertures and small sensors. One strategy for mitigating noise in a low-light situation is to increase the shutter time of the camera, thus allowing each photosite to integrate more light and decrease noise variance. However, there are two downsides of long exposures: (a) bright regions can exceed the sensor range, and (b) camera and scene motion will result in blurred images. Another way of gathering more light is to capture multiple short (thus noisy) frames in a "burst" and intelligently integrate the content, thus avoiding the above downsides. In this paper, we use the burst-capture strategy and implement the intelligent integration via a recurrent, fully convolutional deep neural network (CNN). We build our novel multiframe architecture to be a simple addition to any single-frame denoising model and design it to handle an arbitrary number of noisy input frames. We show that it achieves state-of-the-art denoising results on our burst dataset, improving on the best published multi-frame techniques, such as VBM4D and FlexISP. Finally, we explore other applications of image enhancement by integrating content from multiple frames and demonstrate that our DNN architecture generalizes well to image super-resolution.
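
    A minimal sketch (in PyTorch, with placeholder layer sizes) of the general pattern described above: a per-frame convolutional branch with a recurrent feature state carried across burst frames, so an arbitrary number of frames can be processed. This is only an illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class RecurrentBurstDenoiser(nn.Module):
    """Toy recurrent, fully convolutional burst denoiser: a single-frame branch
    plus a hidden feature map carried across frames (illustrative sizes only)."""
    def __init__(self, feat=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU())
        self.recurrent = nn.Conv2d(2 * feat, feat, 3, padding=1)  # fuse current features + previous state
        self.decode = nn.Conv2d(feat, 3, 3, padding=1)

    def forward(self, burst):                 # burst: (batch, n_frames, 3, H, W)
        b, n, _, h, w = burst.shape
        state = torch.zeros(b, self.recurrent.out_channels, h, w, device=burst.device)
        outputs = []
        for t in range(n):                    # handles an arbitrary number of frames
            feats = self.encode(burst[:, t])
            state = torch.relu(self.recurrent(torch.cat([feats, state], dim=1)))
            outputs.append(self.decode(state))
        return torch.stack(outputs, dim=1)    # one denoised estimate per frame

# Usage: an 8-frame noisy burst in, per-frame denoised estimates out.
model = RecurrentBurstDenoiser()
denoised = model(torch.randn(2, 8, 3, 64, 64))
```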

    Image denoising with multi-layer perceptrons, part 1: comparison with existing algorithms and with bounds

    Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with plain multi-layer perceptrons (MLPs) applied to image patches. We will show that, by training on large image databases, we are able to outperform the current state-of-the-art image denoising methods. In addition, our method achieves results that are superior to one type of theoretical bound and goes a long way toward closing the gap with a second type of theoretical bound. Our approach is easily adapted to less extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG artifacts, salt-and-pepper noise and noise resembling stripes, for which we achieve excellent results as well. We will show that combining a block-matching procedure with MLPs can further improve the results on certain images. In a second paper, we detail the training trade-offs and the inner mechanisms of our MLPs.
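
    A toy PyTorch sketch of the basic idea: a plain MLP trained to map flattened noisy patches to their clean counterparts. The patch size, hidden widths, noise level, and optimizer settings are made-up values for illustration, not the paper's architecture or training setup.

```python
import torch
import torch.nn as nn

PATCH = 16          # illustrative patch size (assumed, not the paper's)

# Plain MLP mapping a flattened noisy patch to a clean patch of the same size.
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(PATCH * PATCH, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, PATCH * PATCH),
)
opt = torch.optim.SGD(mlp.parameters(), lr=0.1)

# One training step on synthetic data: clean patches plus additive Gaussian noise.
clean = torch.rand(128, 1, PATCH, PATCH)
noisy = clean + 0.1 * torch.randn_like(clean)

opt.zero_grad()
pred = mlp(noisy).view_as(clean)
loss = nn.functional.mse_loss(pred, clean)   # learn the noisy -> clean patch mapping
loss.backward()
opt.step()
```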