Weighted Mean Curvature
In image processing tasks, spatial priors are essential for robust
computations, regularization, algorithmic design and Bayesian inference. In
this paper, we introduce weighted mean curvature (WMC) as a novel image prior
and present an efficient computation scheme for its discretization in practical
image processing applications. We first demonstrate the favorable properties of
WMC, such as sampling invariance, scale invariance, and contrast invariance
under a Gaussian noise model; and we show the relation of WMC to area
regularization. We further propose an efficient computation scheme for
discretized WMC, which we demonstrate to process over 33.2
gigapixels/second on a GPU. This scheme lends itself to a convolutional neural
network representation. Finally, WMC is evaluated on synthetic and real images,
showing its superiority quantitatively to total variation and mean curvature.
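The gradient-weighted curvature quantity the abstract describes can be sketched with standard central finite differences. This is only an illustrative discretization; the paper's fast kernel-based scheme and its GPU implementation are not reproduced here, and the function name and `eps` regularizer are choices of this sketch:

```python
import numpy as np

def weighted_mean_curvature(img, eps=1e-8):
    """Illustrative WMC: |grad I| times the mean curvature of the level sets.

    Uses periodic central differences via np.roll; the paper's efficient
    scheme uses a different, convolution-kernel-based discretization.
    """
    Ix  = (np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)) / 2.0
    Iy  = (np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)) / 2.0
    Ixx = np.roll(img, -1, axis=1) - 2.0 * img + np.roll(img, 1, axis=1)
    Iyy = np.roll(img, -1, axis=0) - 2.0 * img + np.roll(img, 1, axis=0)
    Ixy = (np.roll(np.roll(img, -1, axis=1), -1, axis=0)
           - np.roll(np.roll(img, -1, axis=1), 1, axis=0)
           - np.roll(np.roll(img, 1, axis=1), -1, axis=0)
           + np.roll(np.roll(img, 1, axis=1), 1, axis=0)) / 4.0
    # Mean curvature: kappa = (Ixx*Iy^2 - 2*Ix*Iy*Ixy + Iyy*Ix^2) / |grad I|^3.
    # Multiplying by |grad I| cancels one power, leaving |grad I|^2 below,
    # which is what gives the quantity its contrast behavior.
    return (Ixx * Iy**2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix**2) / (Ix**2 + Iy**2 + eps)
```

On a constant image every derivative vanishes, so the sketch returns zero everywhere, as expected of a curvature measure.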
Accurate Light Field Depth Estimation with Superpixel Regularization over Partially Occluded Regions
Depth estimation is a fundamental problem for light field photography
applications. Numerous methods have been proposed in recent years, which either
focus on crafting cost terms for more robust matching, or on analyzing the
geometry of scene structures embedded in the epipolar-plane images. Significant
improvements have been made in terms of overall depth estimation error;
however, current state-of-the-art methods still show limitations in handling
intricate occluding structures and complex scenes with multiple occlusions. To
address these challenging issues, we propose a very effective depth estimation
framework which focuses on regularizing the initial label confidence map and
edge strength weights. Specifically, we first detect partially occluded
boundary regions (POBR) via superpixel-based regularization. A series of
shrinkage/reinforcement operations is then applied to the label confidence map
and edge strength weights over the POBR. We show that after these weight
manipulations, even a low-complexity weighted least squares model can produce
much better depth estimation than state-of-the-art methods in terms of average
disparity error rate, occlusion boundary precision-recall rate, and the
preservation of intricate visual features.
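The weighted least squares model mentioned above can be sketched as a sparse linear solve: a data term weighted by the confidence map plus a smoothness term weighted by edge strengths. This is an illustrative formulation under assumed conventions (4-neighbour graph, averaged edge weights, a `lam` trade-off parameter); the paper's actual shrinkage/reinforcement operations over the POBR are not reproduced:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def wls_refine(init_depth, confidence, edge_weight, lam=1.0):
    """Solve (C + lam * L_w) u = C d for a refined depth map u.

    C = diag(confidence) is the per-pixel data weight, d the initial depth,
    and L_w a graph Laplacian built from edge strength weights (assumed
    here to be averaged over each 4-neighbour pixel pair).
    """
    h, w = init_depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []

    def add_edge(a, b, wgt):
        # Laplacian contribution of one undirected edge (a, b) with weight wgt.
        rows.extend([a, b, a, b]); cols.extend([a, b, b, a])
        vals.extend([wgt, wgt, -wgt, -wgt])

    for i in range(h):
        for j in range(w):
            if j + 1 < w:  # horizontal neighbour
                add_edge(idx[i, j], idx[i, j + 1],
                         0.5 * (edge_weight[i, j] + edge_weight[i, j + 1]))
            if i + 1 < h:  # vertical neighbour
                add_edge(idx[i, j], idx[i + 1, j],
                         0.5 * (edge_weight[i, j] + edge_weight[i + 1, j]))

    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    C = sp.diags(confidence.ravel())
    u = spla.spsolve((C + lam * L).tocsr(), C @ init_depth.ravel())
    return u.reshape(h, w)
```

With positive confidences the system matrix is symmetric positive definite, so the solve is well posed; a constant initial depth is a fixed point because the Laplacian annihilates constants.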
Image Denoising by using Modified SGHP Algorithm
Image denoising is a predominant task in real-time applications, as it prepares images for subsequent processing. There are many denoising algorithms, and each has its own distinctive behavior depending on the natural image it is applied to. In this paper, we propose a modified parameter choice for the S-Gradient Histogram Preservation (SGHP) denoising method. SGHP computes the structure gradient histogram from the noisy observation, using different noise standard deviations for different images. The performance of the method is evaluated in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) of a given image. This paper focuses mainly on PSNR, SSIM, noise estimation, and a measure of the structure gradient histogram of a given image.
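The histogram-preservation step at the core of such methods can be sketched as a monotone histogram specification that remaps gradient magnitudes so their empirical distribution matches a reference. This is a generic sketch, not the paper's modified SGHP algorithm; the function name and the sorting-based matching strategy are assumptions of this illustration:

```python
import numpy as np

def match_gradient_histogram(grad_mag, ref_grad_mag):
    """Remap gradient magnitudes so their empirical distribution
    matches that of a reference (simple rank-based specification)."""
    src_sorted = np.sort(grad_mag.ravel())
    ref_sorted = np.sort(ref_grad_mag.ravel())
    # Rank of each source value, then look up the reference value
    # of the same rank (a monotone, distribution-matching map).
    ranks = np.searchsorted(src_sorted, grad_mag.ravel(), side='left')
    ranks = np.clip(ranks, 0, ref_sorted.size - 1)
    return ref_sorted[ranks].reshape(grad_mag.shape)
```

Matching an array of distinct values against itself is the identity, and every output value is drawn from the reference distribution, which is the preservation property the method relies on.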