
    Artifact reduction for separable non-local means

    It was recently demonstrated [J. Electron. Imaging, 25(2), 2016] that one can perform fast non-local means (NLM) denoising of one-dimensional signals using a method called lifting. The cost of lifting is independent of the patch length, which dramatically reduces the run-time for large patches. Unfortunately, it is difficult to directly extend lifting to non-local means denoising of images. To bypass this, the authors proposed a separable approximation in which the image rows and columns are filtered using lifting. The overall algorithm is significantly faster than NLM, and the results are comparable in terms of PSNR. However, the separable processing often produces vertical and horizontal stripes in the image. This problem was previously addressed by using a bilateral filter-based post-smoothing, which was effective in removing some of the stripes. In this letter, we demonstrate that stripes can be mitigated in the first place simply by involving the neighboring rows (or columns) in the filtering. In other words, we use a two-dimensional search (similar to NLM), while still using one-dimensional patches (as in the previous proposal). The novelty is in the observation that one can use lifting to perform two-dimensional searches. The proposed approach produces artifact-free images whose quality and PSNR are comparable to NLM, while being significantly faster. Comment: To appear in the Journal of Electronic Imaging.
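
    To make the filtering structure concrete, the sketch below is a direct (slow) reference implementation of one row-filtering pass with a two-dimensional search window and one-dimensional row patches, as described above. It does not reproduce the fast lifting-based computation, and the function and parameter names (patch_half, search_half, h) are illustrative assumptions.

```python
import numpy as np

def nlm_row_pass_2d_search(img, patch_half=3, search_half=5, h=10.0):
    """Row-filtering pass: weights are computed from one-dimensional
    (horizontal) patches, but the search for similar patches extends over a
    two-dimensional window, so neighbouring rows also contribute to each
    output pixel. Direct reference implementation for clarity only; the
    paper's contribution is computing the same quantities fast via lifting."""
    H, W = img.shape
    pad = np.pad(img, ((search_half, search_half),
                       (search_half + patch_half, search_half + patch_half)),
                 mode="reflect")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(H):
        for j in range(W):
            # 1D reference patch centred at (i, j) in the padded image
            pi, pj = i + search_half, j + search_half + patch_half
            ref = pad[pi, pj - patch_half:pj + patch_half + 1]
            num, den = 0.0, 0.0
            # 2D search window, 1D (row) candidate patches
            for di in range(-search_half, search_half + 1):
                for dj in range(-search_half, search_half + 1):
                    cand = pad[pi + di,
                               pj + dj - patch_half:pj + dj + patch_half + 1]
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    num += w * pad[pi + di, pj + dj]
                    den += w
            out[i, j] = num / den
    return out
```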

    Fast Separable Non-Local Means

    We propose a simple and fast algorithm called PatchLift for computing distances between patches (contiguous blocks of samples) extracted from a given one-dimensional signal. PatchLift is based on the observation that the patch distances can be efficiently computed from a matrix that is derived from the one-dimensional signal using lifting; importantly, the number of operations required to compute the patch distances using this approach does not scale with the patch length. We next demonstrate how PatchLift can be used for patch-based denoising of images corrupted with Gaussian noise. In particular, we propose a separable formulation of the classical Non-Local Means (NLM) algorithm that can be implemented using PatchLift. We demonstrate that the PatchLift-based implementation of separable NLM is a few orders of magnitude faster than standard NLM, and is competitive with existing fast implementations of NLM. Moreover, its denoising performance is shown to be consistently superior to that of NLM and some of its variants, both in terms of PSNR/SSIM and visual quality.
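
    The following is a schematic reconstruction of the lifting idea behind PatchLift for a single patch offset: once the pointwise squared differences are accumulated with a cumulative sum, each patch distance costs one subtraction, independent of the patch length. The names and exact formulation are assumptions, not the paper's code.

```python
import numpy as np

def patch_distances_fixed_offset(x, t, patch_len):
    """Squared patch distances d(i, i+t) = sum_k (x[i+k] - x[i+t+k])^2 for all
    valid i, computed with one cumulative sum. Illustrates the key property
    attributed to PatchLift: after the pointwise squared differences for a
    given offset t are accumulated, the per-position cost no longer depends
    on patch_len. (Schematic sketch, not the paper's exact algorithm.)"""
    x = np.asarray(x, dtype=np.float64)
    n = len(x) - t                        # number of aligned sample pairs
    e = (x[:n] - x[t:t + n]) ** 2         # pointwise squared differences
    c = np.concatenate(([0.0], np.cumsum(e)))
    # d[i] = sum of e[i .. i+patch_len-1], one subtraction per position
    return c[patch_len:] - c[:-patch_len]
```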

    Removing the texture feature response to object boundaries

    Texture is a spatial property, and thus any features used to describe it must be calculated within a neighbourhood. This process of integrating information over a neighbourhood leads to what we will refer to as the texture boundary response problem, where an unwanted response is observed at object boundaries. This response is due to features being extracted from a mixture of textures and/or an intensity edge between objects. If segmentation is performed using these raw features, this will lead to the generation of unwanted classes along object boundaries. To overcome this, post-processing of the feature images must be performed to remove this response before a classification algorithm can be applied. To date, this problem has received little attention, with no evaluation of the alternative solutions available in the literature of which we are aware. In this work we evaluate known solutions to the boundary response problem and find separable median filtering to be the current best choice. An in-depth evaluation of the separable median filtering approach shows that it fails to remove certain parts or types of object boundary response. To overcome this failing, we propose two alternative techniques which involve either post-processing of the separable median filtered result or an alternative filtering technique.
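
    For reference, here is a minimal sketch of the separable median filtering baseline discussed above (a 1-D median along rows followed by a 1-D median along columns), using SciPy; the window size k is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def separable_median(feature_img, k=5):
    """Separable median filtering of a texture-feature image: a 1-D median
    along the rows followed by a 1-D median along the columns, as a
    post-processing step to suppress the boundary response. The window
    size k is illustrative, not taken from the study."""
    rows = median_filter(feature_img, size=(1, k), mode="reflect")
    return median_filter(rows, size=(k, 1), mode="reflect")
```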

    A superior edge preserving filter with a systematic analysis

    A new, adaptive, edge preserving filter for use in image processing is presented. It showed superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of the existing cluster. The process is iterated until K pixels have been accumulated. Rather than simply comparing the visual results of processing with this operator to those of other filters, approaches were developed which allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and the stability of imagery under iterative processing. Demonstrations illustrate the ability of several filters to discriminate against noise and retain edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.
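
    A minimal sketch of the contiguous K-average filter as described above: for each pixel, a cluster is grown by repeatedly adding the contiguous pixel closest in value to the current cluster mean. The output value is assumed here to be the mean of the accumulated cluster, and the 4-connectivity and choice of K are illustrative assumptions.

```python
import numpy as np

def contiguous_k_average(img, K=9):
    """Grow, for every pixel, a cluster of K contiguous pixels by repeatedly
    adding the 4-connected neighbour whose value is closest to the current
    cluster mean; the output is taken as the mean of the final cluster.
    Sketch under stated assumptions, not the authors' implementation."""
    H, W = img.shape
    out = np.empty_like(img, dtype=np.float64)
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for i in range(H):
        for j in range(W):
            cluster = {(i, j)}
            total = float(img[i, j])
            while len(cluster) < K:
                mean = total / len(cluster)
                best, best_diff = None, None
                # candidates: pixels contiguous to the current cluster
                for (ci, cj) in cluster:
                    for di, dj in nbrs:
                        ni, nj = ci + di, cj + dj
                        if 0 <= ni < H and 0 <= nj < W and (ni, nj) not in cluster:
                            diff = abs(float(img[ni, nj]) - mean)
                            if best_diff is None or diff < best_diff:
                                best, best_diff = (ni, nj), diff
                if best is None:          # no contiguous pixel left (tiny image)
                    break
                cluster.add(best)
                total += float(img[best])
            out[i, j] = total / len(cluster)
    return out
```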

    Temporal optimisation of image acquisition for land cover classification with random forest and MODIS time-series

    The analysis and classification of land cover is one of the principal applications in terrestrial remote sensing. Due to the seasonal variability of different vegetation types and land surface characteristics, the ability to discriminate land cover types changes over time. Multi-temporal classification can help to improve classification accuracies, but different constraints, such as financial restrictions or atmospheric conditions, may impede its application. The optimisation of image acquisition timing and frequency can help to increase the effectiveness of the classification process. For this purpose, the Feature Importance (FI) measure of the state-of-the-art machine learning method Random Forest was used to determine the optimal image acquisition periods for a general (Grassland, Forest, Water, Settlement, Peatland) and a Grassland-specific (Improved Grassland, Semi-Improved Grassland) land cover classification in central Ireland, based on a 9-year time-series of MODIS Terra 16-day composite data (MOD13Q1). Feature Importances for each acquisition period of the Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI) were calculated for both classification scenarios. In the general land cover classification, the months December and January showed the highest, and July and August the lowest, separability for both VIs over the entire nine-year period. This temporal separability was reflected in the classification accuracies, where the optimal choice of image dates outperformed the worst image date by 13% using NDVI and 5% using EVI in a mono-temporal analysis. With the addition of the next best image periods to the data input, the classification accuracies converged quickly to their limit at around 8–10 images. The binary classification schemes, using two classes only, showed a stronger seasonal dependency, with higher intra-annual but lower inter-annual variation. Nonetheless, anomalous weather conditions, such as the cold winter of 2009/2010, can alter the temporal separability pattern significantly. Due to the extensive use of the NDVI for land cover discrimination, the findings of this study should be transferable to data from other optical sensors with a higher spatial resolution. However, the high impact of outliers from the general climatic pattern highlights the limitation of spatial transferability to locations with different climatic and land cover conditions. The use of high-temporal, moderate-resolution data such as MODIS in conjunction with machine-learning techniques proved to be a good basis for predicting image acquisition timing for optimal land cover classification results.
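
    A schematic of the Feature Importance workflow described above, using scikit-learn's RandomForestClassifier on a placeholder feature matrix in which each column is one 16-day VI composite. The data here are synthetic stand-ins, not the MOD13Q1 time-series used in the study, and all names and sizes are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature matrix: each row is a sample pixel, each column one
# 16-day VI composite (e.g. NDVI) within a year; labels give the land cover
# class. Synthetic placeholders stand in for the real MOD13Q1 data.
n_samples, n_periods = 1000, 23           # 23 sixteen-day composites per year
rng = np.random.default_rng(0)
X = rng.random((n_samples, n_periods))    # placeholder VI time-series
y = rng.integers(0, 5, n_samples)         # placeholder classes (5 general classes)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

# Rank acquisition periods by Feature Importance: the best-ranked dates are
# the candidates for mono- or multi-temporal image acquisition.
ranking = np.argsort(rf.feature_importances_)[::-1]
print("acquisition periods ordered by importance:", ranking)
```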

    On generalized adaptive neural filter

    Linear filters have historically been the most useful tools for suppressing noise in signal processing. It has been shown that the optimal filter which minimizes the mean square error (MSE) between the filter output and the desired output is a linear filter, provided that the noise is additive white Gaussian noise (AWGN). However, in most signal processing applications, the noise in the channel through which a signal is transmitted is not AWGN; it is not stationary, and it may have unknown characteristics. To overcome the shortcomings of linear filters, nonlinear filters ranging from median filters to stack filters have been developed. They have been successfully used in a number of applications, such as enhancing the signal-to-noise ratio of telecommunication receivers, modeling the human vocal tract to synthesize speech in speech processing, and separating the maternal and fetal electrocardiogram signals to diagnose prenatal ailments. In particular, stack filters have been shown to provide robust noise suppression and are easily implementable in hardware, but configuring an optimal stack filter remains a challenge. This dissertation takes on this challenge by extending stack filters to a new class of nonlinear adaptive filters called generalized adaptive neural filters (GANFs). The objective of this work is to investigate their performance in terms of the mean absolute error criterion, to evaluate and predict the generalization of various discriminant functions employed for GANFs, and to address issues regarding their applications and implementation. It is shown that GANFs not only extend the class of stack filters, but also perform better at suppressing non-additive white Gaussian noise. Several results are drawn from the theoretical and experimental work: stack filters can be adaptively configured by neural networks; GANFs encompass a large class of nonlinear sliding-window filters which includes stack filters; the mean absolute error (MAE) of the optimal GANF is upper-bounded by that of the optimal stack filter; a suitable class of discriminant functions can be determined before a training scheme is executed; VC dimension (VCdim) theory can be applied to determine the number of training samples; and the algorithm presented for configuring GANFs is effective and robust.
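
    As background for the stack-filter framework the dissertation builds on, the sketch below realises a stack filter by threshold decomposition, with the window median (a majority vote) as the positive Boolean function; a GANF replaces this Boolean function with a trained neural discriminant. The implementation details are illustrative, not taken from the dissertation.

```python
import numpy as np

def stack_filter_median(x, window=3, levels=256):
    """Stack filter via threshold decomposition: the M-valued signal is split
    into binary signals (x >= m), each binary signal is filtered with the same
    positive Boolean function (here the window median, i.e. a majority vote),
    and the binary outputs are summed. For this choice the result equals the
    ordinary median filter. Assumes integer samples in [0, levels) and an odd
    window length."""
    x = np.asarray(x, dtype=np.int64)
    half = window // 2
    xp = np.pad(x, half, mode="edge")
    out = np.zeros_like(x)
    for m in range(1, levels):
        b = (xp >= m).astype(np.int64)                  # threshold decomposition
        # majority vote over the sliding window (positive Boolean function)
        win = np.lib.stride_tricks.sliding_window_view(b, window)
        out += (win.sum(axis=1) > half).astype(np.int64)
    return out
```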