12 research outputs found

    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted. Advantages of undecimated, or stationary, wavelet transforms over decimated ones are also discussed. Bayesian estimators and probability density function (pdf) models in both spatial and multiresolution domains are reviewed. Scale-space varying pdf models, as opposed to scale varying models, are promoted. Promising methods following non-Bayesian approaches, like nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on the one hand, the cost-performance tradeoff of the different methods and, on the other hand, the effectiveness of solutions purposely designed for SAR heterogeneity and not fully developed speckle. Finally, upcoming methods based on new concepts of signal processing, like compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
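
    As an illustrative aside (not part of the paper), the sketch below simulates the multiplicative, fully developed speckle model mentioned above and applies a classical spatial-domain despeckler, a simplified Lee filter. The function names simulate_speckle and lee_filter, the window size and the number of looks are arbitrary choices for this example.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def simulate_speckle(intensity, looks=1, seed=0):
            """Multiply a clean intensity image by unit-mean, gamma-distributed
            speckle (fully developed speckle with the given number of looks)."""
            rng = np.random.default_rng(seed)
            speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=intensity.shape)
            return intensity * speckle

        def lee_filter(img, win=7, looks=1):
            """Simplified Lee filter: local linear MMSE-style estimate under the
            multiplicative noise model (weights clipped to [0, 1])."""
            mean = uniform_filter(img, win)
            mean_sq = uniform_filter(img**2, win)
            var = np.maximum(mean_sq - mean**2, 0.0)
            noise_var = mean**2 / looks                  # local speckle variance estimate
            w = np.where(var > 0, np.clip((var - noise_var) / var, 0.0, 1.0), 0.0)
            return mean + w * (img - mean)

        clean = np.ones((128, 128)); clean[32:96, 32:96] = 4.0   # toy piecewise-constant scene
        noisy = simulate_speckle(clean, looks=1)
        despeckled = lee_filter(noisy, win=7, looks=1)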

    Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos

    High quality digital images have become pervasive in modern scientific and everyday life, in areas ranging from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing these blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID); its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a “doubly ill-posed” problem: the extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge. The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas that will be used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their applications to solving the problem with both Bayesian and non-Bayesian techniques. The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive noncausal autoregressive (AR) texture estimation and local mean components. These models allow for recovery of image details, including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic. Due to the complexity of the models used and of the problem itself, there are many challenges which must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: firstly, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then the stochastic methods of variational Bayesian (VB) distribution approximation and simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blurs. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
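
    As a hedged illustration of the linear spatially-invariant (LSI) observation model and of why its inversion is ill-posed (this is not the thesis's blind Bayesian method: the hypothetical functions gaussian_psf, blur_and_noise and tikhonov_deconv assume the blur is known and use a simple Tikhonov-regularised inverse filter):

        import numpy as np

        def gaussian_psf(size=15, sigma=2.0):
            """Toy blur kernel standing in for an unknown out-of-focus PSF."""
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
            return psf / psf.sum()

        def blur_and_noise(image, psf, noise_std=0.01, seed=0):
            """LSI observation model y = h * x + n, implemented as a circular
            convolution in the frequency domain plus white Gaussian noise."""
            H = np.fft.fft2(psf, s=image.shape)
            y = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
            return y + np.random.default_rng(seed).normal(0.0, noise_std, image.shape)

        def tikhonov_deconv(y, psf, lam=1e-2):
            """Non-blind Tikhonov-regularised inverse filter; with lam = 0 the
            division amplifies noise wherever |H| is small, which is the kind of
            ill-posedness that prior models are meant to tame."""
            H = np.fft.fft2(psf, s=y.shape)
            X = np.conj(H) * np.fft.fft2(y) / (np.abs(H)**2 + lam)
            return np.real(np.fft.ifft2(X))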

    Surface reflectance recognition and real-world illumination statistics

    Ph.D. thesis by Ron O. Dror, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2003. Includes bibliographical references (p. 141-150). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
    Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting, it could mimic the appearance of a matte ping-pong ball. Yet, humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination. For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
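
    As a rough, hypothetical illustration of classifying reflectance from image statistics (not the thesis's implementation: the function wavelet_statistics, the chosen wavelet, the particular statistics and the SVM classifier are all assumptions made for this sketch):

        import numpy as np
        import pywt                                   # PyWavelets
        from scipy.stats import kurtosis, skew
        from sklearn.svm import SVC

        def wavelet_statistics(image, wavelet="db2", levels=3):
            """Feature vector of log-variance, skewness and kurtosis for each
            wavelet detail subband of a grayscale surface image."""
            coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
            feats = []
            for detail_level in coeffs[1:]:           # skip the approximation band
                for band in detail_level:             # horizontal, vertical, diagonal
                    c = band.ravel()
                    feats += [np.log(np.var(c) + 1e-12), skew(c), kurtosis(c)]
            return np.array(feats)

        # Hypothetical usage, assuming `images` of surfaces with known reflectance
        # labels (e.g. "metal", "plastic", "paper") photographed under varied,
        # unknown real-world illumination:
        #   X = np.stack([wavelet_statistics(im) for im in images])
        #   clf = SVC(kernel="rbf").fit(X, labels)
        #   prediction = clf.predict([wavelet_statistics(new_image)])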

    Pixelwise-Adaptive Blind Optical Flow Assuming Nonstationary Statistics

    In this paper, we address some of the major issues in optical flow within a new framework assuming nonstationary statistics for the motion field and for the errors. Problems addressed include the preservation of discontinuities, model/data errors, outliers, confidence measures, and performance evaluation. In solving these problems, we assume that the statistics of the motion field and the errors are not only spatially varying, but also unknown. We thus derive a blind adaptive technique based on generalized cross validation for estimating an independent regularization parameter for each pixel. Our formulation is pixelwise and combines existing first- and second-order constraints with a new second-order temporal constraint. We derive a new confidence measure for an adaptive rejection of erroneous and outlying motion vectors, and compare our results to other techniques in the literature. A new performance measure is also derived for estimating the signal-to-noise ratio for real sequences when the ground truth is unknown. © 2005 IEEE
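
    As a minimal sketch of pixelwise-adaptive regularization (not the paper's algorithm: the combined first- and second-order constraints, the temporal constraint and the generalized-cross-validation parameter selection are not reproduced; the hypothetical horn_schunck_adaptive simply accepts a user-supplied per-pixel weight lam_map):

        import numpy as np

        def horn_schunck_adaptive(I1, I2, lam_map, n_iter=200):
            """Horn-Schunck-style optical flow in which the smoothness weight
            varies per pixel (larger lam_map -> smoother flow at that pixel)."""
            Ix = np.gradient(I1, axis=1)
            Iy = np.gradient(I1, axis=0)
            It = I2 - I1
            u = np.zeros_like(I1, dtype=float)
            v = np.zeros_like(I1, dtype=float)
            neighbor_avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                                      np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
            for _ in range(n_iter):
                u_bar, v_bar = neighbor_avg(u), neighbor_avg(v)
                residual = Ix * u_bar + Iy * v_bar + It
                denom = lam_map + Ix**2 + Iy**2
                u = u_bar - Ix * residual / denom
                v = v_bar - Iy * residual / denom
            return u, v

    In the paper the per-pixel regularization parameter is estimated blindly via generalized cross validation; in this sketch lam_map would have to be supplied by the caller.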

    Surface Reflectance Recognition and Real-World Illumination Statistics

    Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting, it could mimic the appearance of a matte ping-pong ball. Yet, humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination. For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.

    Pixelwise-adaptive blind optical flow assuming nonstationary statistics

    In this paper, we address some of the major issues in optical flow within a new framework assuming nonstationary statistics for the motion field and for the errors. Problems addressed include the preservation of discontinuities, model/data errors, outliers, confidence measures, and performance evaluation. In solving these problems, we assume that the statistics of the motion field and the errors are not only spatially varying, but also unknown. We thus derive a blind adaptive technique based on generalized cross validation for estimating an independent regularization parameter for each pixel. Our formulation is pixelwise and combines existing first- and second-order constraints with a new second-order temporal constraint. We derive a new confidence measure for an adaptive rejection of erroneous and outlying motion vectors, and compare our results to other techniques in the literature. A new performance measure is also derived for estimating the signal-to-noise ratio for real sequences when the ground truth is unknown.

    Respiratory Motion Correction on 3D Positron Emission Tomography Images

    PET/CT scanners allow simultaneous morphological and anatomical imaging of the body. The acquisition modalities differ: during positron emission tomography (PET) the patient keeps breathing, whereas computed tomography (CT), which takes only a few seconds, is acquired in breath-hold. This discrepancy between the two acquisitions causes artifacts when the PET data are weighted (attenuation-corrected) with the CT data, a weighting that is necessary for quantitative PET. Furthermore, small tumors can be lost in the noise because the data are smeared by respiratory motion. This work proposes a two-step solution to the problem. First, the PET data are divided into different respiratory phases. In the second step, the data of the different phases are brought into alignment with a target phase using an optical flow method. Results on phantom and patient data show that the problem has been solved successfully.
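
    As a rough sketch of the two-step idea (respiratory gating followed by alignment of the gated volumes to a reference phase), not the implementation described in the thesis; bin_by_phase and warp_to_reference are hypothetical names, and the displacement field is assumed to come from a separate optical flow estimation step:

        import numpy as np
        from scipy.ndimage import map_coordinates

        def bin_by_phase(frames, resp_signal, n_phases=8):
            """Step 1: assign each short PET time frame to a respiratory phase bin
            according to an external respiratory signal, and average within bins."""
            edges = np.linspace(resp_signal.min(), resp_signal.max(), n_phases + 1)[1:-1]
            phase = np.digitize(resp_signal, edges)
            gated = []
            for b in range(n_phases):
                members = [f for f, p in zip(frames, phase) if p == b]
                gated.append(np.mean(members, axis=0) if members else None)
            return gated

        def warp_to_reference(volume, flow):
            """Step 2: warp a gated 3D volume onto the reference phase using a
            precomputed displacement field `flow` of shape (3, Z, Y, X)."""
            coords = np.indices(volume.shape).astype(float) + flow
            return map_coordinates(volume, coords, order=1, mode="nearest")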

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computers’ analysis of images as effective as the human visual system. For this purpose, many algorithms and systems have previously been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment, but quite often, they significantly increase our safety. In fact, the practical implementation of image processing algorithms is particularly wide. Moreover, the rapid growth of computational power and efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues still remain, resulting in the need for the development of novel approaches.

    Study of the speckle noise effects over the eigen decomposition of polarimetric SAR data: a review

    This paper is focused on the effects of speckle noise on the eigendecomposition of the coherency matrix. Based on a perturbation analysis of the matrix, it is possible to obtain an analytical expression for the mean value of the eigenvalues and the eigenvectors, as well as for the Entropy, the Anisotropy and the different α angles. The analytical expressions are compared against simulated polarimetric SAR data, demonstrating the correctness of the different expressions. Peer reviewed. Postprint (published version).
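
    As an illustrative aside (not the paper's perturbation analysis), the hypothetical function below computes the eigendecomposition-derived parameters mentioned above, Entropy, Anisotropy and the mean α angle, for a single 3x3 coherency matrix:

        import numpy as np

        def h_a_alpha(T):
            """Entropy, Anisotropy and mean alpha angle from a 3x3 Hermitian
            polarimetric coherency matrix T."""
            eigval, eigvec = np.linalg.eigh(T)                 # ascending eigenvalues
            eigval = np.clip(eigval[::-1], 1e-12, None)        # descending, avoid log(0)
            eigvec = eigvec[:, ::-1]
            p = eigval / eigval.sum()                          # pseudo-probabilities
            entropy = -np.sum(p * np.log(p) / np.log(3))
            anisotropy = (eigval[1] - eigval[2]) / (eigval[1] + eigval[2])
            alpha_i = np.arccos(np.abs(eigvec[0, :]))          # alpha angle per eigenvector
            mean_alpha_deg = np.degrees(np.sum(p * alpha_i))
            return entropy, anisotropy, mean_alpha_deg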