2,125 research outputs found

    Project Tech Top: study of lunar, planetary, and solar topography (final report)

    Data acquisition techniques for information on lunar, planetary, and solar topography.

    Image processing in the human visual system

    This work extends the multiplicative visual model to include image texture, as suggested by experiments [Campbell, Wiesel] linking a low-resolution Fourier analysis with neurons in certain parts of the visual cortex. The new model takes image texture into account in the sense that weak texture is accentuated and strong, high-contrast texture is attenuated. This model is then used as the basis for an improved image-enhancement scheme and an unusually successful method for restoring blurred images. In addition, it is suggested how the model may provide new insights into the problem of finding a quantitatively correct image fidelity criterion. The structure of the model is described in relation to visual neurophysiology, and examples are presented of images processed by the new techniques. The research described here also shows how the retinex [Land] can be implemented in a new way that allows the required computations to be carried out on a rectangular grid.
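
    A minimal sketch of the multiplicative (homomorphic) idea this model builds on: the image is treated as a product of illumination and detail, the two are separated in the log domain, and each is reweighted before exponentiating. This sketch boosts all detail uniformly rather than applying the paper's texture-dependent weighting; the test image, blur scale, and gains are illustrative assumptions.

    ```python
    # Homomorphic (multiplicative-model) enhancement sketch: separate
    # illumination and detail in the log domain and reweight them.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import data, img_as_float

    img = img_as_float(data.camera()) + 1e-6   # avoid log(0)
    log_img = np.log(img)

    illum = gaussian_filter(log_img, sigma=15)  # slowly varying illumination estimate
    detail = log_img - illum                    # texture / reflectance component

    alpha, beta = 0.7, 1.5                      # compress illumination, boost detail
    enhanced = np.exp(alpha * illum + beta * detail)
    enhanced = np.clip(enhanced / enhanced.max(), 0.0, 1.0)
    ```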

    Effect of kernel size on Wiener and Gaussian image filtering

    In this paper, the effect of the kernel size of Wiener and Gaussian filters on their image-restoration quality is studied and analyzed. Four kernel sizes, namely 3x3, 5x5, 7x7, and 9x9, were simulated. Two types of zero-mean noise with several variances were used: Gaussian noise and speckle noise. Several image-quality indices were applied in the computer simulations, in particular mean absolute error (MAE), mean square error (MSE), and the structural similarity (SSIM) index. Many images were tested in the simulations; the results for three of them are shown in this paper. The results show that the Gaussian filter outperforms the Wiener filter for all values of Gaussian and speckle noise variance, notably when the smallest kernel size is used. To obtain similar performance with Wiener filtering, a larger kernel size is required, which produces much more blur in the output image. The Wiener filter performs poorly with the smallest kernel size (3x3), where the Gaussian filter gives its best results. With the Gaussian filter, results similar to those obtained under low noise can be achieved under high noise variance by using a larger kernel size.
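
    A minimal sketch (not the paper's code) of the comparison described above: denoise a standard test image with Wiener and Gaussian filters at the four kernel sizes and score each result with MAE, MSE, and SSIM. The test image and the kernel-size-to-sigma mapping for the Gaussian filter are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.signal import wiener
    from skimage import data, img_as_float
    from skimage.metrics import structural_similarity as ssim
    from skimage.util import random_noise

    clean = img_as_float(data.camera())
    # Zero-mean Gaussian noise; use mode='speckle' for the speckle case.
    noisy = random_noise(clean, mode='gaussian', mean=0.0, var=0.01)

    for k in (3, 5, 7, 9):                  # kernel sizes from the paper
        den_w = wiener(noisy, mysize=k)     # k x k Wiener window
        # Choose sigma so the truncated Gaussian kernel has radius (k-1)/2;
        # this mapping is an illustrative assumption, not taken from the paper.
        sigma = (k - 1) / 6.0
        den_g = gaussian_filter(noisy, sigma=sigma, truncate=((k - 1) / 2) / sigma)
        for name, den in (("Wiener", den_w), ("Gaussian", den_g)):
            mae = np.mean(np.abs(den - clean))
            mse = np.mean((den - clean) ** 2)
            s = ssim(clean, den, data_range=1.0)
            print(f"{name} {k}x{k}: MAE={mae:.4f}  MSE={mse:.4f}  SSIM={s:.4f}")
    ```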

    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, the success of image deblurring, especially in the blind case, is limited by complex application conditions that make the blur kernel spatially variant and hard to obtain. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented. (Comment: 53 pages, 17 figures)
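
    A minimal non-blind deblurring sketch illustrating the ill-posedness the review discusses: direct inversion of the blur amplifies noise, so a regularization weight trades data fidelity against smoothness, as in the variational and Bayesian families of methods. The uniform 5x5 PSF, the noise level, and the `balance` value passed to skimage's Wiener deconvolution are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import convolve2d
    from skimage import data, img_as_float, restoration

    sharp = img_as_float(data.camera())
    psf = np.ones((5, 5)) / 25.0                       # assumed uniform blur kernel
    blurry = convolve2d(sharp, psf, mode='same', boundary='symm')
    blurry += 0.01 * np.random.default_rng(0).standard_normal(blurry.shape)

    # Larger `balance` -> stronger regularization -> smoother, less noisy estimate.
    latent = restoration.wiener(blurry, psf, balance=0.1)
    ```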

    Comparison of Computational Methods Developed to Address Depth-variant Imaging in Fluorescence Microscopy

    In three-dimensional fluorescence microscopy, the image-formation process is inherently depth variant (DV) due to the refractive-index mismatch between imaging layers, which causes depth-induced spherical aberration (SA). In this study, we present a quantitative comparison among different image-restoration techniques developed from a DV imaging model for microscopy, in order to assess their ability to correct SA and their impact on restoration. The imaging models approximate DV imaging by stratifying either the object space or the image space. For reconstruction, we used regularized DV algorithms with object-space stratification: Expectation Maximization (EM), Conjugate Gradient (CG), Principal Component Analysis-based Expectation Maximization (PCA-EM), and Inverse Filtering (IF). Reconstructions from simulated and measured data show that the DV PCA-EM method achieves better restoration results than the other DV algorithms in terms of execution time and restoration quality.
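
    For context, a minimal sketch of the Richardson-Lucy iteration, which is the EM update under Poisson noise commonly used in fluorescence microscopy, written here for a single shift-invariant PSF. The depth-variant, stratified models compared in the paper extend this with per-layer PSFs; this hand-rolled `richardson_lucy` and its usage are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=30):
        """EM / Richardson-Lucy deconvolution for a shift-invariant PSF."""
        est = np.full_like(observed, observed.mean())  # flat initial estimate
        psf_flip = psf[::-1, ::-1]                     # adjoint of convolution
        for _ in range(n_iter):
            blurred = fftconvolve(est, psf, mode='same')
            ratio = observed / np.maximum(blurred, 1e-12)
            est *= fftconvolve(ratio, psf_flip, mode='same')  # multiplicative EM update
        return est
    ```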

    Signal Processing and Restoration


    Techniques in image restoration and enhancement

    Includes bibliographical references. Image processing in its broad sense pervades many areas, but it is convenient to group it into three main sections, viz.: image coding, usually for image transmission over telecommunication links; pattern recognition, for detecting the presence of a particular distribution in an image which is generally corrupted to some extent by noise; and image restoration and enhancement, where restoration aims to recover a faithful reproduction of a perfect image which has been degraded, and enhancement attempts to present an image in the form that most readily conveys the desired information to the human observer, taking account of the characteristics of vision. The object of this thesis is to investigate some of the techniques of image restoration and enhancement. There are many different media for implementing the various processes, but digital computation and coherent optics are prevalent.

    Image Restoration

    This book presents a sample of recent contributions from researchers around the world in the field of image restoration. It consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). The topics cover different aspects of the theory of image restoration, and the book is also an occasion to highlight new research topics arising from the emergence of original imaging devices. From these arise some genuinely challenging image reconstruction/restoration problems that open the way to new fundamental scientific questions closely connected with the world we interact with.

    Distributed Deblurring of Large Images of Wide Field-Of-View

    Image deblurring is an economical way to reduce certain degradations (blur and noise) in acquired images. It has thus become an essential tool for high-resolution imaging in many applications, e.g., astronomy, microscopy, or computational photography. In applications such as astronomy and satellite imaging, acquired images can be extremely large (up to gigapixels), cover a wide field of view, and suffer from shift-variant blur. Most existing image-deblurring techniques are designed and implemented to work efficiently on a centralized computing system with multiple processors and a shared memory; the largest image that can be handled is therefore limited by the physical memory available on the system. In this paper, we propose a distributed non-blind image-deblurring algorithm in which several connected processing nodes (with reasonable computational resources) simultaneously process different portions of a large image, while maintaining a certain coherency among them, to finally obtain a single crisp image. Unlike existing centralized techniques, image deblurring in a distributed fashion raises several issues. To tackle them, we adopt approximations that trade off the quality of the deblurred image against the computational resources required to achieve it. The experimental results show that our algorithm produces images of quality similar to existing centralized techniques while allowing distribution, and is thus cost-effective for extremely large images. (Comment: 16 pages, 10 figures, submitted to IEEE Trans. on Image Processing)
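
    A simple sketch of the tile-with-overlap approximation that underlies distributed deblurring: each node deblurs its own padded tile, and the padding (halo) is discarded on reassembly. This illustrates the general idea only, not the coherency scheme proposed in the paper; the `deblur_tiled` helper, the use of Wiener deconvolution per tile, and the tile/pad sizes are all assumptions.

    ```python
    import numpy as np
    from skimage import restoration

    def deblur_tiled(blurry, psf, tile=256, pad=32, balance=0.1):
        """Deblur an image tile by tile; each tile could run on its own node."""
        h, w = blurry.shape
        out = np.zeros_like(blurry)
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                # Padded tile: the halo absorbs boundary effects of deconvolution.
                y0, y1 = max(y - pad, 0), min(y + tile + pad, h)
                x0, x1 = max(x - pad, 0), min(x + tile + pad, w)
                est = restoration.wiener(blurry[y0:y1, x0:x1], psf, balance=balance)
                # Keep only the tile interior; the halo is discarded.
                out[y:y + tile, x:x + tile] = est[y - y0:y - y0 + min(tile, h - y),
                                                  x - x0:x - x0 + min(tile, w - x)]
        return out

    # e.g. restored = deblur_tiled(blurry_image, np.ones((5, 5)) / 25.0)
    ```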