An optimal factor analysis approach to improve the wavelet-based image resolution enhancement techniques
The existing wavelet-based image resolution enhancement techniques rest on many assumptions, such as limitations on the way low-resolution images are generated and on the selection of wavelet functions, which limit their applications in different fields. This paper first identifies the factors that effectively affect the performance of these techniques and quantitatively evaluates the impact of the existing assumptions. An approach called Optimal Factor Analysis, employing a genetic algorithm, is then introduced to increase the applicability and fidelity of the existing methods. Moreover, a new Figure of Merit is proposed to assist in the selection of parameters and to better measure overall performance. The experimental results show that the proposed approach improves the performance of the selected image resolution enhancement methods and has the potential to be extended to other methods.
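The factor-search idea can be illustrated with a bare-bones evolutionary loop (selection and mutation only, no crossover). Everything below is hypothetical: the two "factors", the toy figure of merit peaked at (0.3, 0.7), and the population settings merely stand in for the paper's Optimal Factor Analysis, which is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fitness surface: quality of an enhancement as a function of two factor
# settings, peaked at (0.3, 0.7). A hypothetical stand-in for the paper's
# Figure of Merit.
def figure_of_merit(pop):
    return -((pop[:, 0] - 0.3) ** 2 + (pop[:, 1] - 0.7) ** 2)

pop = rng.uniform(0.0, 1.0, size=(40, 2))  # initial population of factor settings
best, best_fit = pop[0], -np.inf
for _ in range(60):
    fit = figure_of_merit(pop)
    i = int(np.argmax(fit))
    if fit[i] > best_fit:                  # keep the best setting seen so far
        best, best_fit = pop[i].copy(), fit[i]
    parents = pop[np.argsort(fit)[-20:]]   # select the fitter half
    children = parents[rng.integers(0, 20, 40)]       # clone parents ...
    children += rng.normal(0.0, 0.05, children.shape)  # ... and mutate
    pop = np.clip(children, 0.0, 1.0)
print(best)  # close to the optimum (0.3, 0.7)
```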
Interpolating point spread function anisotropy
Planned wide-field weak lensing surveys are expected to reduce the
statistical errors on the shear field to unprecedented levels. In contrast,
systematic errors like those induced by the convolution with the point spread
function (PSF) will not benefit from that scaling effect and will require very
accurate modeling and correction. While numerous methods have been devised to
carry out the PSF correction itself, modeling of the PSF shape and its spatial
variations across the instrument field of view has, so far, attracted much less
attention. This step is nevertheless crucial because the PSF is only known at
star positions while the correction has to be performed at any position on the
sky. A reliable interpolation scheme is therefore mandatory and a popular
approach has been to use low-order bivariate polynomials. In the present paper,
we evaluate four other classical spatial interpolation methods based on splines
(B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and
ordinary Kriging (OK). These methods are tested on the Star-challenge part of
the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and
are compared with the classical polynomial fitting (Polyfit). We also test all
our interpolation methods independently of the way the PSF is modeled, by
interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are
known exactly at star positions). We find in that case RBF to be the clear
winner, closely followed by the other local methods, IDW and OK. The global
methods, Polyfit and B-splines, are largely behind, especially in fields with
(ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all
interpolators reach a variance on PSF systematics better than
the upper bound expected by future space-based surveys, with
the local interpolators performing better than the global ones.
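Of the interpolators compared above, the RBF scheme can be sketched with SciPy's `RBFInterpolator`: fit the PSF parameter measured at star positions, then evaluate it anywhere in the field. The smooth synthetic ellipticity field and the star/galaxy counts below are assumptions for illustration; the GREAT10 data and PSF model are not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic PSF ellipticity component varying smoothly across a 1x1 field.
def true_e1(xy):
    return 0.02 * np.sin(2 * np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1])

stars = rng.uniform(0.0, 1.0, size=(200, 2))     # where the PSF is measured
galaxies = rng.uniform(0.0, 1.0, size=(500, 2))  # where it is actually needed

# Fit a thin-plate-spline RBF to the star measurements ...
rbf = RBFInterpolator(stars, true_e1(stars), kernel='thin_plate_spline')

# ... and evaluate it at the galaxy positions.
e1_pred = rbf(galaxies)
rmse = np.sqrt(np.mean((e1_pred - true_e1(galaxies)) ** 2))
print(rmse)  # small residual for a smooth field
```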
Image interpolation using Shearlet based iterative refinement
This paper proposes an image interpolation algorithm exploiting sparse
representation for natural images. It involves three main steps: (a) obtaining
an initial estimate of the high resolution image using linear methods like FIR
filtering, (b) promoting sparsity in a selected dictionary through iterative
thresholding, and (c) extracting high frequency information from the
approximation to refine the initial estimate. For the sparse modeling, a
shearlet dictionary is chosen to yield a multiscale directional representation.
The proposed algorithm is compared to several state-of-the-art methods to
assess its objective as well as subjective performance. Compared to the cubic
spline interpolation method, an average PSNR gain of around 0.8 dB is observed
over a dataset of 200 images.
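The three-step loop can be sketched as follows. As an assumption, a DCT transform stands in for the shearlet dictionary, average pooling stands in for the low-resolution observation model, and the threshold and iteration count are arbitrary; this is a minimal illustration of iterative thresholding with back-projection, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.fft import dctn, idctn

def downsample(img, s=2):
    # Average-pooling observation model (an assumption; the paper's differs).
    return img.reshape(img.shape[0] // s, s, img.shape[1] // s, s).mean(axis=(1, 3))

def interpolate_refine(lr, s=2, n_iter=20, tau=0.05):
    hr = zoom(lr, s, order=3)                  # (a) initial linear estimate
    for _ in range(n_iter):
        c = dctn(hr, norm='ortho')             # (b) sparsify in a DCT "dictionary"
        c = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)  # soft threshold
        hr = idctn(c, norm='ortho')
        # (c) back-project the low-resolution residual so the estimate stays
        # consistent with the observed low-resolution image.
        hr += zoom(lr - downsample(hr, s), s, order=3)
    return hr

x = np.linspace(0.0, 1.0, 32)
lr = np.outer(np.sin(4 * np.pi * x), np.cos(4 * np.pi * x))
hr = interpolate_refine(lr)
print(hr.shape)  # (64, 64)
```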
Elimination of Glass Artifacts and Object Segmentation
Many images nowadays are captured through glass and may exhibit stains or other discrepancies caused by the glass; such images must be processed to differentiate between the glass and the objects behind it. This research paper proposes an algorithm to remove the damaged or corrupted part of the image, make it consistent with the rest of the image, and segment the objects behind the glass. The damaged part is removed using the total variation inpainting method, and segmentation is done using k-means clustering, anisotropic diffusion and the watershed transformation. The final output is obtained by interpolation. This algorithm can be useful in applications in which parts of an image are corrupted during data transmission or objects need to be segmented from an image for further processing.
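Of the stages listed, the k-means step is the simplest to sketch; the TV inpainting, anisotropic diffusion and watershed stages are omitted here, and the synthetic two-intensity image is an assumption.

```python
import numpy as np

def kmeans_segment(img, k=2, n_iter=20):
    """Cluster pixel intensities with plain 1-D k-means (a minimal stand-in
    for the paper's k-means stage)."""
    x = img.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), k)  # deterministic initialization
    for _ in range(n_iter):
        # Assign each pixel to its nearest center, then recompute the centers.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(img.shape), centers

# Bright square object on a dark background, with mild noise.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
img += 0.05 * np.random.default_rng(1).standard_normal(img.shape)

seg, centers = kmeans_segment(img, k=2)
print(centers)  # roughly the background and object intensities
```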
Distributed Deblurring of Large Images of Wide Field-Of-View
Image deblurring is an economical way to reduce certain degradations (blur and
noise) in acquired images. It has thus become an essential tool in
high-resolution imaging in many applications, e.g., astronomy, microscopy or
computational photography. In applications such as astronomy and satellite
imaging, acquired images can be extremely large (up to gigapixels), covering a
wide field of view and suffering from shift-variant blur. Most existing image
deblurring techniques are designed and implemented to work efficiently on a
centralized computing system with multiple processors and a shared memory; the
largest image that can be handled is therefore limited by the physical memory
available on the system. In this paper, we propose a distributed nonblind image
deblurring algorithm in which several connected processing nodes (with
reasonable computational resources) simultaneously process different portions
of a large image while maintaining a certain coherency among them to finally
obtain a single crisp image. Unlike the existing centralized techniques, image
deblurring in a distributed fashion raises several issues. To tackle them, we
consider certain approximations that trade off the quality of the deblurred
image against the computational resources required to achieve it. The
experimental results show that our algorithm produces images of similar quality
to the existing centralized techniques while allowing distribution, and thus
being cost-effective for extremely large images.
Comment: 16 pages, 10 figures, submitted to IEEE Trans. on Image Processing
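The core decomposition behind such a distributed scheme can be sketched as follows: split the image into tiles with an overlapping "halo", let each node filter its padded tile, then discard the halo when stitching. A generic local filter stands in for the deblurring operator; the paper's approximations and inter-node coherency are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def process_tiled(img, filt, tile=64, halo=16):
    """Apply a local filter tile by tile, padding each tile with a halo wide
    enough to cover the filter support. Each padded tile could be shipped to
    a different processing node."""
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            i0, j0 = max(i - halo, 0), max(j - halo, 0)
            i1, j1 = min(i + tile + halo, h), min(j + tile + halo, w)
            block = filt(img[i0:i1, j0:j1])   # work done on one node
            # Keep only the tile interior; the halo is discarded on stitching.
            ti, tj = i - i0, j - j0
            th, tw = min(tile, h - i), min(tile, w - j)
            out[i:i + th, j:j + tw] = block[ti:ti + th, tj:tj + tw]
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
filt = lambda x: gaussian_filter(x, sigma=2, mode='nearest')

tiled = process_tiled(img, filt)
full = filt(img)  # with halo >= filter radius, the two results agree
```

Because the 16-pixel halo exceeds the Gaussian kernel radius (8 pixels at sigma 2 with the default truncation), the tiled result matches filtering the whole image, which is exactly the property a distributed deblurrer exploits.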
Shift Estimation Algorithm for Dynamic Sensors With Frame-to-Frame Variation in Their Spectral Response
This study is motivated by the emergence of a new class of tunable infrared spectral-imaging sensors that offer the ability to dynamically vary the sensor's intrinsic spectral response from frame to frame in an electronically controlled fashion. A manifestation of this is when a sequence of dissimilar spectral responses is periodically realized, whereby in every period of acquired imagery, each frame is associated with a distinct spectral band. Traditional scene-based global shift estimation algorithms are not applicable to such spectrally heterogeneous video sequences, as a pixel value may change from frame to frame as a result of both global motion and varying spectral response. In this paper, a novel algorithm is proposed and examined to fuse a series of coarse global shift estimates between periodically sampled pairs of nonadjacent frames to estimate motion between consecutive frames; each pair corresponds to two nonadjacent frames of the same spectral band. The proposed algorithm outperforms three alternative methods, with the average error being one half of that obtained by using an equal-weights version of the proposed algorithm, one-fourth of that obtained by using a simple linear interpolation method, and one-twentieth of that obtained by using a naïve correlation-based direct method.
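The "equal weights" baseline mentioned above can be sketched directly: each pair estimate spanning `period` frames contributes an equal share to every consecutive-frame shift it covers, and overlapping contributions are averaged. The setup below is synthetic and the function name hypothetical; the paper's optimally weighted fusion is not reproduced.

```python
import numpy as np

def fuse_equal_weights(pair_shifts, period):
    """Estimate consecutive-frame shifts d_0..d_{n-1} from coarse estimates
    s_k between same-band frames k and k+period (so s_k = d_k + ... +
    d_{k+period-1}). Each s_k contributes s_k/period to every transition it
    spans; overlapping contributions are averaged (the equal-weights
    baseline, not the paper's optimal weighting)."""
    m = len(pair_shifts)
    n = m + period - 1             # number of consecutive-frame transitions
    acc = np.zeros(n)
    cnt = np.zeros(n)
    for k in range(m):
        acc[k:k + period] += pair_shifts[k] / period
        cnt[k:k + period] += 1.0
    return acc / cnt

# Synthetic check: constant true consecutive shifts, noiseless pair sums.
true_d = np.full(8, 0.3)
P = 3
s = np.array([true_d[k:k + P].sum() for k in range(len(true_d) - P + 1)])
est = fuse_equal_weights(s, P)
print(est)  # recovers the constant 0.3 shift per frame
```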