DIGITAL IMAGE RESTORATION USING LOCAL STATISTICS
This paper reports a new algorithm for the restoration of defocused and noisy images. To overcome signal-to-noise ratio problems, a nonstationary image model is introduced. The restoration filter consists of two processing stages: the first performs adaptive noise reduction in the image domain, and the second performs image restoration in the frequency domain. A merit of the proposed algorithm is that edges in the degraded image are restored sharply without losing the noise-suppression effect. The restoration results of our method were visualized by computer simulations and compared with those of the popular Wiener filter. The use of the fast Fourier transform shortened the total processing time.
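The abstract does not give the filter's exact form; as an illustration of the image-domain step, a classic local-statistics (Lee-type) adaptive filter can be sketched, with the window size and `noise_var` as assumed parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=5, noise_var=0.01):
    """Adaptive noise reduction from local statistics (Lee-type filter).

    Flat regions (local variance ~ noise variance) are smoothed strongly,
    while edges (local variance >> noise variance) are left nearly intact.
    """
    mean = uniform_filter(img, win)                  # local mean
    sq_mean = uniform_filter(img * img, win)         # local mean of squares
    var = np.maximum(sq_mean - mean * mean, 0.0)     # local variance
    gain = var / (var + noise_var)                   # ~0 flat, ~1 at edges
    return mean + gain * (img - mean)
```

This is a sketch in the spirit of the paper's image-domain stage, not its exact algorithm; the frequency-domain restoration stage would follow it.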
2D Iterative MAP Detection: Principles and Applications in Image Restoration
This paper provides a theoretical framework for two-dimensional iterative maximum a posteriori (MAP) detection. The generalization is based on the BCJR and SOVA detection algorithms, i.e., the classical one-dimensional iterative detectors used in telecommunications. We generalize the one-dimensional detection problem by treating the spatial ISI kernel as a two-dimensional finite state machine (2D FSM) representing a network of spatially concatenated elements. The cellular topology defines the design of the 2D iterative decoding network, in which each cell is a general combination-marginalization statistical element (SISO module) exchanging discrete probability density functions (information metrics) with neighboring cells. We statistically analyse the performance of various topologies with respect to their application to image restoration. The iterative detection algorithm was applied to the binarization of images taken from a CCD camera. The reconstruction includes suppression of the defocus caused by the lens, suppression of CCD sensor noise, and interpolation (demosaicing). The simulations show that the algorithm provides satisfactory results even when the input image is under-sampled due to the Bayer mask.
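The full 2D SISO decoding network is beyond a short sketch; a minimal illustration of iterative MAP-style binarization is a parallel ICM (iterated conditional modes) update with an Ising smoothness prior. All parameters here are assumptions, and `np.roll` implies periodic boundaries:

```python
import numpy as np

def icm_binarize(obs, beta=2.0, noise_var=0.05, iters=10):
    """Toy MAP binarization: parallel ICM with an Ising prior.

    Each pixel picks the label in {0, 1} minimizing a Gaussian data
    term plus a penalty on disagreement with its 4 neighbors.
    """
    labels = (obs > 0.5).astype(float)
    for _ in range(iters):
        # number of neighbors currently labelled 1 (periodic boundaries)
        n1 = (np.roll(labels, 1, 0) + np.roll(labels, -1, 0)
              + np.roll(labels, 1, 1) + np.roll(labels, -1, 1))
        # energies of assigning 0 or 1 at every pixel simultaneously
        e0 = (obs - 0.0) ** 2 / (2 * noise_var) + beta * n1
        e1 = (obs - 1.0) ** 2 / (2 * noise_var) + beta * (4 - n1)
        labels = (e1 < e0).astype(float)
    return labels.astype(np.int8)
```

This illustrates the iterative exchange of local evidence between neighboring cells, not the paper's 2D FSM/SISO message metrics.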
Defocus restoration for a full-field heterodyne ranger via multiple return separation
Full-field heterodyne time-of-flight range imagers allow a large number of range measurements to be taken simultaneously across an entire scene; these measurements may, however, be corrupted by the camera's limited depth of field. We propose a new method for deblurring heterodyne range images by identifying multiple signal returns within each pixel via deconvolution, thus reducing the spatially variant deblurring problem to a sequence of spatially invariant deconvolutions. We have applied this method to simulated data, showing significant improvement in the restored images.
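The heterodyne-specific return separation is not reproduced here; each spatially invariant deconvolution in such a sequence can be sketched as a standard frequency-domain Wiener deconvolution, where the PSF and the noise-to-signal ratio `nsr` are assumed inputs:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Spatially invariant Wiener deconvolution in the frequency domain."""
    # pad the PSF to the image size and roll its center to the origin
    kernel = np.zeros_like(blurred)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(kernel)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)  # Wiener gain
    return np.real(np.fft.ifft2(F))
```

In the paper's setting, one such invariant deconvolution would be run per separated return rather than once per image.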
Image Reconstruction with Analytical Point Spread Functions
The image degradation produced by atmospheric turbulence and optical aberrations is usually alleviated using post-facto image reconstruction techniques, even when observing with adaptive optics systems. These techniques rely on expanding the wavefront in Zernike functions and on the non-linear optimization of a certain metric; the resulting optimization procedure is computationally heavy. Our aim is to alleviate this computational burden. To this end, we generalize the recently developed extended Zernike-Nijboer theory to carry out the analytical integration of the Fresnel integral and present a natural basis set for the expansion of the point spread function when the wavefront is described using Zernike functions. We present a linear expansion of the point spread function in terms of analytic functions which, additionally, takes defocus into account in a natural way. This expansion is used to develop a very fast phase-diversity reconstruction technique, which is demonstrated through some applications. This suggests that the linear expansion of the point spread function can be applied to accelerate other reconstruction techniques presently in use that are based on blind deconvolution. (Accepted for publication in Astronomy & Astrophysics.)
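As context for what the analytical expansion replaces, the standard numerical route evaluates the PSF as the squared modulus of the Fourier transform of the pupil function. A minimal sketch with a single Zernike defocus term, Z_2^0 = sqrt(3)(2 rho^2 - 1), follows; the grid size, aperture, and defocus coefficient (in waves) are illustrative assumptions:

```python
import numpy as np

def zernike_defocus_psf(n=128, aperture=0.45, a_defocus=0.5):
    """Numerical PSF for a circular pupil with Zernike defocus.

    This FFT-based evaluation is the step that an analytical
    expansion of the PSF would bypass.
    """
    y, x = np.mgrid[-0.5:0.5:n * 1j, -0.5:0.5:n * 1j]
    rho = np.hypot(x, y) / aperture          # normalized pupil radius
    pupil = (rho <= 1.0).astype(float)       # circular aperture mask
    phase = a_defocus * np.sqrt(3.0) * (2.0 * rho ** 2 - 1.0)
    field = pupil * np.exp(2j * np.pi * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()                   # normalize to unit energy
```

Increasing `a_defocus` broadens the PSF and lowers its peak, which is the effect the paper's expansion captures analytically.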
Simultaneous estimation of super-resolved scene and depth map from low resolution defocused observations
This paper presents a novel technique to simultaneously estimate the depth map and the focused image of a scene, both at super-resolution, from its defocused observations. Super-resolution refers to the generation of high-spatial-resolution images from a sequence of low-resolution images. Hitherto, super-resolution techniques have been restricted mostly to the intensity domain; in this paper, we extend the scope of super-resolution imaging to acquire depth estimates at high spatial resolution as well. Given a sequence of low-resolution, blurred, and noisy observations of a static scene, the problem is to generate a dense depth map at a resolution higher than that obtainable from the observations, and to estimate the true high-resolution focused image. Both the depth and the image are modeled as separate Markov random fields (MRFs), and a maximum a posteriori (MAP) estimation method is used to recover the high-resolution fields. Since there is no relative motion between the scene and the camera, unlike in most super-resolution and structure-recovery techniques, we do away with the correspondence problem.
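The paper's coupled 2D MRF formulation is not reproduced here; a minimal 1D analogue uses a Gaussian MRF (quadratic smoothness) prior, for which the MAP estimate has a closed form. The decimation model, the regularization weight `lam`, and the difference operator are all illustrative assumptions:

```python
import numpy as np

def map_superresolve(obs_list, factor=2, lam=0.1):
    """MAP super-resolution of a 1D signal under a Gaussian MRF prior.

    Minimizes  sum_k ||D x - y_k||^2 + lam ||L x||^2  in closed form,
    where D block-averages (decimates) and L takes first differences.
    """
    n = len(obs_list[0]) * factor
    # decimation operator: average each block of `factor` samples
    D = np.kron(np.eye(n // factor), np.full((1, factor), 1.0 / factor))
    # first-difference operator encoding the smoothness prior
    L = np.eye(n) - np.eye(n, k=1)
    A = len(obs_list) * D.T @ D + lam * L.T @ L
    b = D.T @ np.sum(obs_list, axis=0)
    return np.linalg.solve(A, b)
```

The paper instead estimates two coupled fields (depth and intensity) on a 2D lattice, but the structure of the MAP objective is analogous.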