A new Edge Detector Based on Parametric Surface Model: Regression Surface Descriptor
In this paper we present a new methodology for edge detection in digital
images. The first novelty of the proposed method is to treat image content
as a parametric surface. An original parametric local model of this surface
is then proposed. The few parameters involved in the model are shown to be
highly sensitive to discontinuities in the surface, which correspond to
edges in the image. This naturally leads to the design of an efficient edge
detector. Moreover, a thorough analysis of the proposed model also allows
us to explain how these parameters can be used to obtain edge descriptors
such as orientations and curvatures.
In practice, the proposed methodology offers two main advantages. First, it
is highly customizable and can be adjusted to a wide range of problems, from
coarse- to fine-scale edge detection. Second, it is very robust to blurring
and additive noise. Numerical results are presented to emphasize these
properties and to confirm the efficiency of the proposed method through a
comparative study with other edge detectors.
Comment: 21 pages, 13 figures and 2 tables
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. Optical blurring is a common problem in
many imaging applications that suffer from optical imperfections. Numerous
deconvolution methods blindly estimate the blur in either inclusive or
exclusive forms, but they remain practically challenging due to high
computational cost and low image reconstruction quality. Both high accuracy
and high speed are prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition before being stored, previewed,
or processed for high-level interpretation. Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem for image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for
two models, Gaussian and Laplacian, that are common in many imaging
pipelines. Thorough experiments are designed to test and validate the
efficiency of the proposed method using 2054 naturally blurred images
across six imaging applications and seven state-of-the-art deconvolution
methods.
Comment: 15 pages, for publication in IEEE Transactions on Image Processing
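The paper's kernel synthesis is not detailed in the abstract, but the underlying idea can be illustrated in 1D for a Gaussian PSF: the inverse frequency response exp(sigma^2 w^2 / 2) expands into a series of even powers of w, and each w^2 factor can be realized by the FIR second-difference filter. This is a minimal sketch under those assumptions, not the paper's actual kernel.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian FIR kernel (stand-in for the optical PSF)."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def deblur_kernel(sigma, order=2):
    """Truncated Taylor series of the inverse Gaussian frequency response,
    exp(s^2 w^2 / 2) = sum_k (s^2 w^2 / 2)^k / k!, with each w^2 factor
    realized by the even-derivative FIR filter [-1, 2, -1]."""
    neg_d2 = np.array([-1.0, 2.0, -1.0])   # FIR approximation of w^2
    kern = np.array([1.0])                 # k = 0 term: the identity (delta)
    term = np.array([1.0])
    for k in range(1, order + 1):
        term = np.convolve(term, neg_d2) * (sigma**2 / 2.0) / k
        kern = np.pad(kern, (len(term) - len(kern)) // 2) + term
    return kern
```

Convolving a blurred signal once with this kernel boosts the frequency fall-off introduced by the PSF, and because every correction term sums to zero, the kernel preserves the signal's mean.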
Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric
Compression artifacts removal is a challenging problem because videos can be compressed at different qualities. In this paper, a least squares approach that is self-adaptive to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients, trained beforehand, is selected for optimally removing coding artifacts and reconstructing object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities in comparison to several other deblocking techniques. The proposed method outperforms the others significantly, both objectively and subjectively.
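The pipeline described above — a bank of least-squares-trained filters indexed by a no-reference blockiness score — can be sketched generically as follows. The training setup, bank layout, and thresholds are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def train_ls_filter(degraded_patches, clean_pixels):
    """Least-squares filter: weights w minimizing ||A w - b||^2, where each
    row of A is a flattened degraded patch and b is the clean target pixel."""
    A = np.asarray(degraded_patches, float)
    b = np.asarray(clean_pixels, float)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def apply_banked_filter(patch, bank, blockiness, thresholds):
    """Pick the coefficient set trained for the measured quality level
    (blockiness score binned by thresholds) and filter the patch with it."""
    idx = int(np.searchsorted(thresholds, blockiness))
    return float(np.asarray(patch, float).ravel() @ bank[idx])
```

At run time only the metric evaluation and one inner product per pixel are needed, which is what makes the quality-adaptive selection cheap.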
Performance Analysis of Cone Detection Algorithms
Many algorithms have been proposed to help clinicians evaluate cone density
and spacing, as these may be related to the onset of retinal diseases. However,
there has been no rigorous comparison of the performance of these algorithms.
In addition, the performance of such algorithms is typically determined by
comparison with human observers. Here we propose a technique to simulate
realistic images of the cone mosaic. We use the simulated images to test the
performance of two popular cone detection algorithms and we introduce an
algorithm which is used by astronomers to detect stars in astronomical images.
We use Free Response Operating Characteristic (FROC) curves to evaluate and
compare the performance of the three algorithms. This allows us to optimize the
performance of each algorithm. We observe that performance is significantly
enhanced by up-sampling the images. We investigate the effect of noise and
image quality on cone mosaic parameters estimated using the different
algorithms, finding that the estimated regularity is the most sensitive
parameter.
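FROC analysis, used above to compare the detectors, sweeps a confidence threshold and plots sensitivity against the mean number of false positives per image. A minimal sketch (function and argument names are illustrative):

```python
import numpy as np

def froc_points(scores, hits, n_truth, n_images):
    """FROC curve: for each score threshold, the fraction of ground-truth
    cones detected versus false positives per image.
    scores: detection confidences; hits: True where a detection matches a
    ground-truth cone; n_truth: total ground-truth cones; n_images: images."""
    order = np.argsort(scores)[::-1]           # sweep threshold high -> low
    hits = np.asarray(hits, bool)[order]
    sensitivity = np.cumsum(hits) / n_truth
    fp_per_image = np.cumsum(~hits) / n_images
    return fp_per_image, sensitivity
```

Unlike an ROC curve, the x-axis is not bounded by a fixed negative count, which suits detection tasks where any image location could be a false positive.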
This paper was published in JOSA A and is made available as an electronic
reprint with the permission of OSA. The paper can be found at the following URL
on the OSA website: http://www.opticsinfobase.org/abstract.cfm?msid=224577.
Systematic or multiple reproduction or distribution to multiple locations via
electronic or other means is prohibited and is subject to penalties under law.
Comment: 13 pages, 7 figures, 2 tables