A Cosmic Watershed: the WVF Void Detection Technique
On megaparsec scales the Universe is permeated by an intricate filigree of
clusters, filaments, sheets and voids, the Cosmic Web. For the understanding of
its dynamical and hierarchical history it is crucial to identify objectively
its complex morphological components. One of the most characteristic aspects is
that of the dominant underdense Voids, the product of a hierarchical process
driven by the collapse of minor voids in addition to the merging of large ones.
In this study we present an objective void finder technique which involves a
minimum of assumptions about the scale, structure and shape of voids. Our void
finding method, the Watershed Void Finder (WVF), is based upon the Watershed
Transform, a well-known technique for the segmentation of images. Importantly,
the technique has the potential to trace the existing manifestations of a void
hierarchy. The basic watershed transform is augmented by a variety of
correction procedures to remove spurious structure resulting from sampling
noise. This study contains a detailed description of the WVF. We demonstrate
how it is able to trace and identify, in a relatively parameter-free manner,
voids and their surrounding (filamentary and planar) boundaries. We test the technique on
a set of Kinematic Voronoi models, heuristic spatial models for a cellular
distribution of matter. Comparison of the WVF segmentations of low noise and
high noise Voronoi models with the quantitatively known spatial characteristics
of the intrinsic Voronoi tessellation shows that the size and shape of the
voids are successfully retrieved. WVF even manages to reproduce the full void
size distribution function.
Comment: 24 pages, 15 figures, MNRAS accepted; for full resolution, see
http://www.astro.rug.nl/~weygaert/tim1publication/watershed.pd
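The watershed idea underlying the WVF can be sketched in a few lines (a minimal illustration, not the authors' code): each cell of a density field is assigned to the basin of the local minimum reached by steepest descent, so basins correspond to voids and basin boundaries to walls and filaments. The noise-correction procedures described in the abstract are omitted.

```python
import numpy as np

def watershed_basins(density):
    """Assign each cell of a 2D density field to the basin (void) of the
    local minimum reached by steepest descent. A simplified sketch of the
    watershed transform; the actual WVF adds noise-correction steps."""
    ny, nx = density.shape
    labels = -np.ones((ny, nx), dtype=int)
    minima = {}  # local minimum position -> basin label

    def lowest_neighbor(i, j):
        best = (i, j)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx and density[ni, nj] < density[best]:
                    best = (ni, nj)
        return best

    for i in range(ny):
        for j in range(nx):
            path = [(i, j)]
            while True:
                nxt = lowest_neighbor(*path[-1])
                if nxt == path[-1]:  # reached a local minimum
                    break
                path.append(nxt)
            lab = minima.setdefault(path[-1], len(minima))
            for p in path:
                labels[p] = lab
    return labels
```

On a density field with several local minima separated by ridges, each minimum seeds one basin and the ridges become the basin boundaries.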
Guided patch-wise nonlocal SAR despeckling
We propose a new method for SAR image despeckling which leverages information
drawn from co-registered optical imagery. Filtering is performed by plain
patch-wise nonlocal means, operating exclusively on SAR data. However, the
filtering weights are computed by taking into account also the optical guide,
which is much cleaner than the SAR data, and hence more discriminative. To
avoid injecting optical-domain information into the filtered image, a
SAR-domain statistical test is performed beforehand to reject outright any
risky predictor. Experiments on two SAR-optical datasets show that the proposed
method suppresses speckle very effectively, preserves structural details,
and introduces no visible filtering artifacts. Overall, the proposed
method compares favourably with all state-of-the-art despeckling filters, and
also with our own previous optical-guided filter.
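The weighting scheme can be sketched as a toy single-pixel estimator (not the paper's implementation; all parameter names and threshold values are illustrative): patch similarity is measured on the optical guide, only SAR values are averaged, and a crude SAR-domain check rejects risky predictors before they can contribute.

```python
import numpy as np

def guided_nlm_pixel(sar, guide, i, j, half=1, search=2, h=0.1, tau=2.0):
    """Nonlocal-means estimate of sar[i, j]: weights come from patch
    distances in the clean optical guide, but only SAR values are
    averaged. Candidates whose SAR patch deviates too strongly are
    rejected (a crude stand-in for the paper's SAR-domain test).
    Parameters are illustrative; (i, j) must lie search+half from edges."""
    def patch(img, y, x):
        return img[y - half:y + half + 1, x - half:x + half + 1]

    ref_g = patch(guide, i, j)
    ref_s = patch(sar, i, j)
    num = den = 0.0
    for y in range(i - search, i + search + 1):
        for x in range(j - search, j + search + 1):
            cand_s = patch(sar, y, x)
            # SAR-domain test: reject risky predictors outright
            if np.mean((cand_s - ref_s) ** 2) > tau:
                continue
            d = np.mean((patch(guide, y, x) - ref_g) ** 2)  # optical distance
            w = np.exp(-d / h ** 2)
            num += w * sar[y, x]
            den += w
    return num / den
```

Because the weights never read the SAR data and the averaged values never read the guide, optical structure steers the smoothing without leaking into the output, which is the point of the paper's design.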
Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise
The central goal of this dissertation is to design and model a smoothing filter based on the random single and mixed noise distribution that would attenuate the effect of noise while preserving edge details. Only then could robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise.
In the first step, evaluation of methods is performed based on an exhaustive review on the different types of denoising methods which focus on impulse noise, Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and a combination of them), transform domain filters, neural network-based filters, numerical-based filters, fuzzy based filters, morphological filters, statistical filters, and supervised learning-based filters.
In the second step, a switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced in order to detect and remove impulse noise. Then, a robust edge detection method is applied which relies on an integrated process including non-maximum suppression, maximum sequence, thresholding and morphological operations. The results are obtained on MRI and natural images.
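The switching idea behind this step can be sketched as follows (a simplified stand-in, not the dissertation's filter: the adaptive window sizing and the fixed weighted mean stage are omitted, and the impulse test is deliberately crude):

```python
import numpy as np

def switching_median(img, half=1):
    """Switching median filter sketch: a pixel is replaced by its window
    median only when flagged as a likely impulse (here: equal to the
    window extremes); uncorrupted pixels are left untouched, which is
    what preserves edge detail compared to a plain median filter."""
    out = img.astype(float).copy()
    ny, nx = img.shape
    for i in range(half, ny - half):
        for j in range(half, nx - half):
            win = img[i - half:i + half + 1, j - half:j + half + 1]
            if img[i, j] == win.min() or img[i, j] == win.max():
                out[i, j] = np.median(win)
    return out
```

A salt-and-pepper pixel equals its window maximum or minimum, so it is detected and replaced, while interior pixels of a clean region are replaced by a median that equals their own value.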
In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) and total variation is introduced in order to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. Then, a robust edge detection is applied in order to track the true edges. The results are obtained on medical ultrasound and natural images.
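The total-variation half of this combination can be sketched as gradient descent on a smoothed ROF objective (the DT-CWT stage is omitted; the step size, weight and smoothing constant are illustrative):

```python
import numpy as np

def tv_denoise(noisy, lam=0.1, step=0.1, iters=100, eps=1e-6):
    """Gradient descent on the smoothed ROF objective
    0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps).
    A minimal sketch of total-variation denoising, not the
    dissertation's DT-CWT + TV pipeline."""
    u = noisy.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        dx, dy = gx / mag, gy / mag
        # divergence of the normalized gradient field (backward differences)
        div = (dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0))
        u -= step * ((u - noisy) - lam * div)
    return u
```

The divergence term shrinks gradients regardless of their magnitude, which is why TV smooths noise while leaving sharp edges largely intact.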
In the fourth step, a smoothing filter based on a feed-forward convolutional neural network (CNN) with a deep architecture is introduced, supported by a specific learning algorithm, l2 loss-function minimization, a regularization method, and batch normalization, all integrated in order to detect and remove impulse noise as well as mixed impulse and Gaussian noise. Then, a robust edge detection is applied in order to track the true edges. The results are obtained on natural images for both specific and non-specific noise levels.
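The two building blocks named here, the convolution operation and the l2 loss, can be sketched in plain NumPy (illustrative only; the actual denoiser is a trained multi-layer network with regularization and batch normalization):

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2D convolution (cross-correlation convention), the basic
    operation of a feed-forward CNN layer; no padding or stride."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def l2_loss(pred, target):
    """The l2 loss minimized during training: mean squared error
    between the network output and the clean image."""
    return np.mean((pred - target) ** 2)
```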
Perceptually-Driven Video Coding with the Daala Video Codec
The Daala project is developing a royalty-free video codec that attempts to
compete with the best patent-encumbered codecs. Part of our strategy is to replace core
tools of traditional video codecs with alternative approaches, many of them
designed to take perceptual aspects into account, rather than optimizing for
simple metrics like PSNR. This paper documents some of our experiences with
these tools, which ones worked and which did not. We evaluate which tools are
easy to integrate into a more traditional codec design, and show results in the
context of the codec being developed by the Alliance for Open Media.
Comment: 19 pages, Proceedings of SPIE Workshop on Applications of Digital Image Processing (ADIP), 201
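For reference, the "simple metric" the abstract contrasts with perceptual optimization, PSNR, is just a log-scaled mean squared error:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Higher is better; it weights all pixel errors equally, which is
    exactly why perceptually driven codecs avoid optimizing for it."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Note that PSNR is undefined for identical images (zero MSE), so implementations typically clamp or special-case that path.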
Adaptive Edge-guided Block-matching and 3D filtering (BM3D) Image Denoising Algorithm
Image denoising is a well-studied field, yet reducing noise in images remains a valid challenge. The recently proposed Block-matching and 3D filtering (BM3D) is the current state-of-the-art algorithm for denoising images corrupted by additive white Gaussian noise (AWGN). Though BM3D outperforms all existing methods for AWGN denoising, its performance decreases as the noise level in the image increases, since it becomes harder to find proper matches for reference blocks in the presence of highly corrupted pixel values. It also blurs sharp edges and textures. To overcome these problems we propose an edge-guided BM3D with selective pixel restoration. At higher noise levels it is possible to detect noisy pixels from their neighborhood's gray-level statistics. We exploit this property to reduce noise as much as possible by applying a pre-filter. We also introduce an edge-guided pixel restoration process in the hard-thresholding step of BM3D to restore the sharpness of edges and textures. Experimental results confirm that our proposed method is competitive and outperforms the state-of-the-art BM3D in all considered subjective and objective quality measurements, particularly in preserving edges, textures and image contrast.
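The block-matching (grouping) step at the heart of BM3D can be sketched as follows (the collaborative 3D filtering and the proposed edge-guided restoration are omitted; block and window sizes are illustrative):

```python
import numpy as np

def match_blocks(img, ref_yx, size=2, search=3, max_matches=4):
    """Grouping step of BM3D-style denoising: collect the blocks most
    similar (in L2 distance) to the reference block within a local
    search window. The stacked group would then be filtered jointly;
    under heavy noise these distances become unreliable, which is the
    failure mode the abstract's pre-filter is meant to mitigate."""
    ry, rx = ref_yx
    ref = img[ry:ry + size, rx:rx + size]
    candidates = []
    for y in range(max(0, ry - search), min(img.shape[0] - size, ry + search) + 1):
        for x in range(max(0, rx - search), min(img.shape[1] - size, rx + search) + 1):
            d = np.sum((img[y:y + size, x:x + size] - ref) ** 2)
            candidates.append((d, (y, x)))
    candidates.sort(key=lambda c: c[0])
    return [pos for _, pos in candidates[:max_matches]]
```

The reference block always matches itself with distance zero, so it heads the returned group.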
DISC: Deep Image Saliency Computing via Progressive Representation Learning
Salient object detection receives increasing attention as an important
component or step in several pattern recognition and image processing tasks.
Although a variety of powerful saliency models have been proposed,
they usually involve heavy feature (or model) engineering based on priors (or
assumptions) about the properties of objects and backgrounds. Inspired by the
effectiveness of recently developed feature learning, we provide a novel Deep
Image Saliency Computing (DISC) framework for fine-grained image saliency
computing. In particular, we model the image saliency from both the coarse- and
fine-level observations, and utilize the deep convolutional neural network
(CNN) to learn the saliency representation in a progressive manner.
Specifically, our saliency model is built upon two stacked CNNs. The first CNN
generates a coarse-level saliency map by taking the overall image as the input,
roughly identifying saliency regions in the global context. Furthermore, we
integrate superpixel-based local context information in the first CNN to refine
the coarse-level saliency map. Guided by the coarse saliency map, the second
CNN focuses on the local context to produce a fine-grained and accurate
saliency map while preserving object details. For a test image, the two CNNs
collaboratively conduct the saliency computation in one shot. Our DISC
framework is capable of uniformly highlighting objects of interest against
complex backgrounds while preserving object details well. Extensive experiments
on several standard benchmarks suggest that DISC outperforms other
state-of-the-art methods and also generalizes well across datasets without
additional training. The executable version of DISC is available online:
http://vision.sysu.edu.cn/projects/DISC.
Comment: This manuscript is the accepted version for IEEE Transactions on Neural Networks and Learning Systems (T-NNLS), 201