24 research outputs found

    Image Restoration

    This book presents a sample of recent contributions by researchers from around the world in the field of image restoration. It consists of 15 chapters organized into three main sections (Theory, Applications, Interdisciplinarity). The topics cover different aspects of the theory of image restoration, and the book is also an occasion to highlight new research directions prompted by the emergence of original imaging devices. These devices raise genuinely challenging image reconstruction/restoration problems that open the way to new fundamental scientific questions, closely tied to the world we interact with.

    Development of GPR data analysis algorithms for predicting thin asphalt concrete overlay thickness and density

    Thin asphalt concrete (AC) overlay is a commonly used asphalt pavement maintenance strategy. The thickness and density of a thin AC overlay are important to achieving proper pavement performance, and both can be evaluated using ground-penetrating radar (GPR). Traditional methods for predicting pavement thickness and density rely on accurate determination of the electromagnetic (EM) signal reflection amplitude and time delay. Due to the limited bandwidth of GPR antennas, however, the range resolution of the GPR signal is insufficient for evaluating thin pavement layers. The objective of this study is therefore to develop signal processing techniques that increase the resolution of GPR signals so that they can be applied to thin AC overlay evaluation. First, the generic GPR forward 2-D imaging scheme is discussed. Then two linear inversion techniques are proposed: migration and sparse reconstruction. Both algorithms were validated on GPR signals reflected from buried pipes using finite-difference time-domain (FDTD) simulation. Second, as a special case of 2-D GPR imaging and linear inversion reconstruction, regularized deconvolution was applied to GPR signals reflected from thin AC overlays. Four regularization methods, including Tikhonov regularization and total variation regularization, were compared in terms of accuracy in estimating thin pavement layer thickness, with the L-curve method used to identify an appropriate regularization parameter. A subspace method, the multiple signal classification (MUSIC) algorithm, was then utilized to increase the resolution of 3-D GPR signals. An extended common midpoint (XCMP) method was used to find the dielectric constant and thickness of the thin AC overlay at a full-scale test section. The results show that the MUSIC algorithm is an effective approach for increasing 3-D GPR range resolution when the XCMP method is applied to thin AC overlay.
Furthermore, a non-linear inversion technique based on gradient descent is proposed. The non-linear optimization algorithm was applied to real GPR data reflected from thin AC overlays, and its thickness and density predictions are accurate. Finally, a “modified reference scan” approach was developed to eliminate the effect of AC pavement surface moisture on GPR signals, so that the density of a thin AC overlay can be monitored in real time during compaction.
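As a rough illustration of the regularized-deconvolution idea used for thin layers, the sketch below applies frequency-domain Tikhonov deconvolution to a synthetic 1-D trace in Python/NumPy. The wavelet, trace length, and regularization weight are invented for illustration; in the study the parameter would be chosen by the L-curve method.

```python
import numpy as np

def tikhonov_deconv(y, h, lam):
    """Frequency-domain Tikhonov-regularized deconvolution of trace y
    by wavelet h:  X = conj(H) * Y / (|H|^2 + lam)."""
    H = np.fft.fft(h, n=len(y))
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

# Two closely spaced reflectors that the raw trace cannot resolve
n = 128
t = np.arange(n)
x_true = np.zeros(n)
x_true[40], x_true[46] = 1.0, -0.6            # top/bottom of a thin layer
h = np.exp(-0.5 * ((t - 8) / 2.0) ** 2)       # toy source wavelet
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(h)))  # circular conv.
x_hat = tikhonov_deconv(y, h, lam=1e-3)       # reflectors reappear as peaks
```

Sweeping `lam` and plotting residual norm against solution norm traces out the L-curve from which a corner value is picked.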

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of algorithms for each component, and lag introduced by excessive processing time all affect the accuracy with which target coordinates are recovered from orbital sensor data. Because using physical targets and sensors would be cost-prohibitive in the exploratory setting posed, a simulated target path is generated using Bezier curves that approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and to increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
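The Bezier-curve path generation can be sketched in a few lines of Python/NumPy; the control points below are hypothetical, not taken from the dissertation.

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a Bezier curve at parameter values t (de Casteljau scheme)."""
    pts = [np.asarray(c, dtype=float) for c in ctrl]
    while len(pts) > 1:
        # Repeatedly interpolate adjacent control points until one remains
        pts = [(1 - t)[:, None] * a + t[:, None] * b
               for a, b in zip(pts, pts[1:])]
    return pts[0]           # shape (len(t), 2)

# Hypothetical ground-track control points (x, y in km)
ctrl = [(0, 0), (10, 25), (30, 25), (40, 0)]
t = np.linspace(0.0, 1.0, 101)
path = bezier(ctrl, t)      # smooth target path sampled at 101 instants
```

A Bezier curve starts at its first control point, ends at its last, and stays inside the convex hull of the control points, so the simulated target enters and exits the scene at known coordinates.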

    Fundamental and Harmonic Ultrasound Image Joint Restoration

    Ultrasound imaging retains its place among the leading imaging modalities thanks to its ability to reveal anatomy and to inspect organ motion and blood flow in real time, in a non-invasive and non-ionizing way, at low cost, with ease of use and fast image reconstruction. Nevertheless, ultrasound imaging has intrinsic limits in terms of spatial resolution. Improving the spatial resolution of ultrasound images is an ongoing challenge, and much work has long focused on optimizing the acquisition device. High-resolution ultrasound imaging achieves this goal through specialized probes, but it now runs up against physical and technological limits. Harmonic imaging is the specialists' intuitive solution for increasing resolution at acquisition time; however, it suffers from attenuation with depth. An alternative way to improve resolution is to develop post-processing techniques such as ultrasound image restoration. The objective of this thesis is to study the non-linearity of ultrasound echoes within the restoration process and to demonstrate the benefit of incorporating harmonic ultrasound (US) images into that process. We therefore present a new US image restoration method that uses both the fundamental and harmonic components of the observed image. Most existing methods are based on a linear image formation model: under the first-order Born approximation, the RF image is assumed to be a 2-D convolution between the tissue reflectivity function and the system impulse response (point spread function, PSF). The resulting inverse problem is formulated and solved using an ADMM-type algorithm.
More precisely, we propose to recover the unknown reflectivity function by minimizing a cost function composed of two data-fidelity terms, corresponding to the linear (fundamental) and non-linear (first-harmonic) components of the observed image, plus a sparsity-based regularization term that stabilizes the solution. To account for the depth attenuation of harmonic images, an attenuation term is introduced into the harmonic forward model, based on a spectral analysis of the observed RF signals. The proposed method was first applied in two steps, estimating the PSF and then the reflectivity function. A solution for estimating the PSF and the reflectivity function jointly is then proposed, along with another that accounts for the spatial variability of the PSF. The benefit of the proposed method is demonstrated on synthetic and in vivo data and compared with conventional restoration methods.
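The joint criterion described above, two data-fidelity terms (fundamental and harmonic) plus an l1 sparsity prior, can be sketched with a 1-D proximal-gradient (ISTA) loop in Python/NumPy. The thesis solves this kind of problem with an ADMM-type algorithm in 2-D; the wavelets, the harmonic attenuation factor, and the regularization weight below are invented for illustration.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def joint_restore(yf, yh, hf, hh, lam=0.005, iters=500):
    """ISTA on  ||hf*x - yf||^2 + ||hh*x - yh||^2 + lam*||x||_1
    with circular convolutions computed in the Fourier domain."""
    n = len(yf)
    Hf, Hh = np.fft.fft(hf, n), np.fft.fft(hh, n)
    Yf, Yh = np.fft.fft(yf), np.fft.fft(yh)
    L = np.abs(Hf).max() ** 2 + np.abs(Hh).max() ** 2   # Lipschitz bound
    x = np.zeros(n)
    for _ in range(iters):
        X = np.fft.fft(x)
        grad = np.real(np.fft.ifft(np.conj(Hf) * (Hf * X - Yf)
                                   + np.conj(Hh) * (Hh * X - Yh)))
        x = soft(x - grad / L, lam / L)
    return x

# Toy reflectivity with two scatterers; the fundamental PSF is broad,
# the harmonic one narrower but attenuated (mimicking depth attenuation).
n = 64
t = np.arange(n)
x_true = np.zeros(n)
x_true[20], x_true[30] = 1.0, 0.7
hf = np.exp(-0.5 * ((t - 4) / 3.0) ** 2)
hh = 0.5 * np.exp(-0.5 * ((t - 4) / 1.5) ** 2)
yf = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(hf)))
yh = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(hh)))
x_hat = joint_restore(yf, yh, hf, hh)   # scatterers sharpen back up
```

The point of the second fidelity term is visible here: the harmonic channel contributes high frequencies the fundamental channel has lost, even though its amplitude is lower.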

    Variable Splitting as a Key to Efficient Image Reconstruction

    The problem of reconstruction of digital images from their degraded measurements has always been a problem of central importance in numerous applications of imaging sciences. In real life, acquired imaging data is typically contaminated by various types of degradation phenomena which are usually related to the imperfections of image acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover its close approximation, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied sciences creates the need for methods of image restoration which would be both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction is generally achieved at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework which allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting, as a tool for simplifying complex reconstruction problems through their replacement by a sequence of simpler and therefore easily solvable ones. Moreover, we consider two different types of variable splitting and demonstrate their connection to a number of existing approaches which are currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to the solution of complex reconstruction problems with composite, cross-domain constraints. 
As specific applications of practical importance, we consider the reconstruction of diffusion MRI signals from sub-critically sampled, incomplete data, as well as the blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of a neighbourhood operation and to derive a unifying approach to denoising imaging data under a variety of noise scenarios.
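As a minimal sketch of the splitting idea (not the BTS/FCS machinery itself), consider l1 denoising: introducing an auxiliary variable z constrained to equal x replaces one coupled problem with two subproblems, each with a closed-form update.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def denoise_split(b, lam, rho=1.0, iters=300):
    """Solve  min_x 0.5||x - b||^2 + lam*||x||_1  by variable splitting:
    min_{x,z} 0.5||x - b||^2 + lam*||z||_1  subject to  x = z,
    alternating the two easy updates (scaled-dual ADMM iterations)."""
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)                       # scaled dual variable
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # quadratic subproblem
        z = soft(x + u, lam / rho)             # l1 proximal subproblem
        u = u + x - z                          # dual ascent on x = z
    return z

b = np.array([1.5, -0.2, 0.8, -2.0])           # noisy "pixels"
x_den = denoise_split(b, lam=0.5)              # converges to soft(b, 0.5)
```

For this separable toy problem the answer is just the soft threshold of b; the value of splitting shows up when the quadratic term involves a blur or sampling operator, since each subproblem then remains individually easy to solve.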

    Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos

    High quality digital images have become pervasive in modern scientific and everyday life — in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blur from out-of-focus and otherwise degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID): restoring a sharp image from a blurred observation when the blur itself is completely unknown. This is a “doubly ill-posed” problem — an extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework for imparting the required prior knowledge. The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these reduce to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas used under the Bayesian framework. This is followed by an in-depth review of the various prior image and blur models appearing in the literature, and of their application to solving the problem with both Bayesian and non-Bayesian techniques. The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented.
The first models local variance in the image; the second extends this with locally adaptive noncausal autoregressive (AR) texture estimation and local mean components. These models allow recovery of image details, including edges and texture, while preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic. Owing to the complexity of the models and of the problem itself, many challenges must be overcome for tractable inference. Using the new models, three inference strategies are investigated: the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; variational Bayesian (VB) distribution approximation; and simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective at handling a variety of unknown blurs. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
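As a toy illustration of the Gibbs-sampling strategy, the sketch below alternately draws the mean and variance of Gaussian data from their full conditionals (flat prior on the mean, Jeffreys prior on the variance). The data and priors are invented; the thesis samples far richer nonstationary image and blur models, but the alternation pattern is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=500)   # synthetic observations
n = y.size

mu, v = 0.0, 1.0                     # initial states
mu_draws = []
for it in range(2000):
    # mu | v, y  ~  N(mean(y), v/n)   (flat prior on mu)
    mu = rng.normal(y.mean(), np.sqrt(v / n))
    # v | mu, y  ~  Inv-Gamma(n/2, sum((y - mu)^2)/2)   (Jeffreys prior on v)
    v = 1.0 / rng.gamma(n / 2.0, 2.0 / np.sum((y - mu) ** 2))
    if it >= 500:                    # discard burn-in
        mu_draws.append(mu)

post_mean = np.mean(mu_draws)        # posterior mean, close to y.mean()
```

Each update conditions on the current value of the other unknown, which is what makes the scheme tractable even when the joint posterior has no closed form.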

    A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel

    Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons’ spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or by parallel acquisition. Compared with scanning-based imagers, parallel acquisition, also dubbed snapshot imaging, has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally discuss their state-of-the-art implementations and applications.

    Robust inversion and detection techniques for improved imaging performance

    Thesis (Ph.D.), Boston University. In this thesis we aim to improve the performance of information extraction from imaging systems through three thrusts. First, we develop improved image formation methods for physics-based, complex-valued sensing problems. We propose a regularized inversion method that incorporates prior information about the underlying field into the inversion framework for ultrasound imaging. We use experimental ultrasound data to compute inversion results with the proposed formulation and compare them with conventional inversion techniques to show the robustness of the proposed technique to loss of data. Second, we propose methods that combine inversion and detection in a unified framework to improve imaging performance. This framework applies when the underlying field is label-based, i.e., each pixel can only assume values from a discrete, limited set. We cast this unified framework as combinatorial optimization and propose graph-cut-based methods that directly produce label-based images, thereby eliminating the need for a separate detection step. Finally, we propose a robust method of object detection from microscopic nanoparticle images. In particular, we focus on a portable, low-cost interferometric imaging platform and propose robust detection algorithms using tools from computer vision. We model the electromagnetic image formation process and use this model to create an enhanced detection technique. The effectiveness of the proposed technique is demonstrated using manually labeled ground-truth data. In addition, we extend these tools to develop a detection-based autofocusing algorithm tailored for the high-numerical-aperture interferometric microscope.
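The label-based formulation can be illustrated in 1-D, where the same energy used in the graph-cut setting (a per-pixel data cost plus a penalty for neighbouring label changes) is minimized exactly by dynamic programming. The observations, levels, and smoothness weight below are invented for illustration.

```python
import numpy as np

def map_labels(obs, levels=(0.0, 1.0), beta=0.5):
    """Exact MAP labeling of a 1-D chain: unary cost (obs - level)^2 plus
    beta for each label change; a 1-D stand-in for the graph-cut step."""
    n, L = len(obs), len(levels)
    unary = np.array([[(o - l) ** 2 for l in levels] for o in obs])
    back = np.zeros((n, L), dtype=int)
    acc = unary[0].copy()                              # accumulated costs
    for i in range(1, n):
        new = np.empty(L)
        for l in range(L):
            trans = acc + beta * (np.arange(L) != l)   # switching penalty
            back[i, l] = int(np.argmin(trans))
            new[l] = unary[i, l] + trans[back[i, l]]
        acc = new
    labels = np.empty(n, dtype=int)
    labels[-1] = int(np.argmin(acc))
    for i in range(n - 1, 0, -1):                      # backtrack
        labels[i - 1] = back[i, labels[i]]
    return np.array([levels[k] for k in labels])

obs = np.array([0.1, 0.0, 0.9, 0.2, 0.1, 1.0, 0.9, 0.8])
est = map_labels(obs)   # the isolated 0.9 is absorbed into the 0-level run
```

In 2-D the chain becomes a grid, exact dynamic programming is no longer available, and graph-cut machinery takes over for the same kind of energy.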

    Multiresolution image models and estimation techniques
