16 research outputs found

    A hybrid algorithm for spatial and wavelet domain image restoration

    The recent ForWaRD algorithm, based on two steps, (i) Fourier-domain deblurring and (ii) wavelet-domain denoising, shows better restoration results than traditional image restoration methods. In this paper, we study other deblurring schemes within ForWaRD and demonstrate that such a two-step approach is effective for image restoration. SPIE Conference on Visual Communications and Image Processing 2005, Beijing, China, 12-15 July 2005. In Proceedings of SPIE - The International Society for Optical Engineering, 2005, v. 5960 n. 4, p. 59605V-1 - 59605V-
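    Although the full text is not reproduced here, the two-step idea is easy to sketch. Below is a minimal Python illustration (not the authors' implementation): a Tikhonov-regularized inverse filter in the Fourier domain followed by wavelet-domain soft thresholding, using numpy and PyWavelets. The kernel h, regularization weight reg, and noise level sigma are placeholder assumptions.

        import numpy as np
        import pywt

        def two_step_restore(y, h, reg=1e-2, sigma=0.01, wavelet="db4", levels=3):
            # Step 1: regularized Fourier-domain deblurring (Tikhonov-stabilized inverse).
            H = np.fft.fft2(h, s=y.shape)
            X = np.fft.fft2(y) * np.conj(H) / (np.abs(H) ** 2 + reg)
            x = np.real(np.fft.ifft2(X))
            # Step 2: wavelet-domain denoising by soft thresholding of detail coefficients.
            coeffs = pywt.wavedec2(x, wavelet, level=levels)
            thr = 3 * sigma
            shrunk = [coeffs[0]] + [
                tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(shrunk, wavelet)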

    Defocused Image Restoration with Local Polynomial Regression and IWF


    A SURE Approach for Digital Signal/Image Deconvolution Problems

    In this paper, we are interested in the classical problem of restoring data degraded by a convolution and the addition of white Gaussian noise. The originality of the proposed approach is two-fold. First, we formulate the restoration problem as a nonlinear estimation problem, leading to the minimization of a criterion derived from Stein's unbiased quadratic risk estimate. Second, the deconvolution procedure is performed using any analysis and synthesis frames, overcomplete or not. New theoretical results concerning the calculation of the variance of Stein's risk estimate are also provided. Simulations carried out on natural images show the good performance of our method with respect to conventional wavelet-based restoration methods.
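    As a concrete instance of the risk estimate driving the method, here is a hedged numpy sketch of Stein's unbiased risk estimate for the classical special case of soft thresholding under white Gaussian noise; the paper itself works with general, possibly overcomplete, analysis and synthesis frames, so this is only the textbook version of the criterion.

        import numpy as np

        def sure_soft(y, t, sigma):
            # SURE for soft thresholding of y = x + N(0, sigma^2 I):
            # n*sigma^2 - 2*sigma^2 * #{|y_i| <= t} + sum(min(|y_i|, t)^2)
            n = y.size
            return (n * sigma**2
                    - 2 * sigma**2 * np.sum(np.abs(y) <= t)
                    + np.sum(np.minimum(np.abs(y), t) ** 2))

        def sure_threshold(y, sigma):
            # Choose the threshold minimizing the estimated risk over the data values.
            candidates = np.sort(np.abs(y))
            risks = [sure_soft(y, t, sigma) for t in candidates]
            return candidates[int(np.argmin(risks))]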

    Inverse Problems in Image Processing: Blind Image Restoration

    Blind image restoration pertains to estimating the degradation in an image without any prior knowledge of the degradation system, and using this estimate to help restore the original image. The original image, in this case, is the version of the image before it was degraded. In this thesis, after estimating the degradation system in the form of Gaussian blur and noise, we employ deconvolution to restore the original image. We use a redundant-wavelet-based technique to estimate blur in the image from the image's own high-frequency information. The Lipschitz exponent, a measure of the local regularity of signals, is computed from the evolution of the wavelet coefficients of singularities across scales. It has been shown before that this exponent is related to the blur in the image, and we use it here to estimate the standard deviation of the Gaussian blur. The properties of wavelets also enable us to compute the noise variance in the image. We employ two deconvolution schemes, a strictly Fourier-domain regularized iterative Wiener filtering approach and a Fourier-wavelet cascaded approach with regularized iterative Wiener filtering, to compute an estimate of the image to be restored using the previously computed blur and noise-variance information. The estimated standard deviation of the blur helped obtain robust deconvolution estimates. The results show that Fourier-domain regularized iterative Wiener filtering provides a more stable output estimate than iterative filtering with additive correction methods, especially when a large number of iterations is employed. The performance of the Fourier-wavelet cascaded deconvolution appears to be image dependent, although it outperforms the strictly Fourier-domain approach in some cases, as can be gauged from the visual quality and the mean squared error.
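    The instability of additive-correction iterations noted above can be seen in a small frequency-domain sketch. The Landweber-type update below is an assumed stand-in for the "iterative filtering with additive correction" family, not the thesis code; left unregularized, it amplifies noise as the iteration count grows, consistent with the stability comparison reported.

        import numpy as np

        def landweber_deconv(y, h, beta=1.0, iters=50):
            # Additive-correction iteration: x_{k+1} = x_k + beta * H^T (y - H x_k),
            # computed in the Fourier domain. Large iteration counts amplify noise.
            H = np.fft.fft2(h, s=y.shape)
            Y = np.fft.fft2(y)
            X = np.zeros_like(Y)
            for _ in range(iters):
                X = X + beta * np.conj(H) * (Y - H * X)
            return np.real(np.fft.ifft2(X))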

    A Scilab application for digital signal processing with the fractional Fourier transform: the fractional Wiener filter

    The purpose of this research is to develop an application in Scilab that implements the fractional Fourier transform and the fractional Wiener filter for digital signal processing. Fourier's theorem is not only one of the most beautiful results of modern analysis; it has also become an indispensable instrument of modern physics. In 1980 a new operation for signal processing was introduced by V. Namias, and from then on further operations appeared, such as fractional convolution and correlation, the sampling theorem in fractional Fourier domains, and new filters in the domain of this transform; its properties and its relations with other functions were also established. One of the most important of these is that the FrFT is a rotation of the Wigner distribution associated with a signal, which has made it an excellent time-frequency representation of signals and has led to many signal processing applications, among them for non-stationary signals. This document presents the implementation of an algorithm for the fractional Fourier transform based on the sampling theorem in fractional domains, demonstrates the rotation of the Wigner distribution associated with a signal by means of the FrFT, and implements the fractional Wiener filter using fractional convolution. Chapter 2 proposes signal processing with the FrFT implemented in the operations needed to realize the fractional Wiener filter; Chapter 3 covers the background of this research; Chapter 4 gives a theoretical framework for the FrFT; Chapter 5 the justification; Chapter 6 the objectives; Chapter 7 formulates some equations for computational implementation; Chapter 8 describes the XP methodology as the software development life-cycle methodology; Chapters 9 and 10 present the development of the functions and their demonstration through simulation on the Scilab platform; Chapter 11 discusses some difficulties and limitations encountered during the research; and finally Chapter 12 presents the conclusions.
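    A toy numpy/scipy sketch of the core objects follows. Note the thesis implements the FrFT in Scilab via the fractional-domain sampling theorem, whereas this illustration takes a fractional matrix power of the unitary DFT matrix, an approach whose result depends on the chosen eigenbasis; it is meant only to convey the idea of a fractional-domain Wiener-style filter, and the power-spectrum arguments are placeholder assumptions.

        import numpy as np
        from scipy.linalg import fractional_matrix_power

        def dfrft_matrix(n, a):
            # Toy discrete fractional Fourier transform: a fractional power of the
            # unitary DFT matrix (a = 1 recovers the ordinary DFT, a = 0 the identity).
            F = np.fft.fft(np.eye(n)) / np.sqrt(n)
            return fractional_matrix_power(F, a)

        def fractional_wiener(y, a, sig_power, noise_power):
            # Wiener-style shrinkage in a fractional Fourier domain: transform,
            # attenuate by an SNR-based gain, and transform back.
            Fa = dfrft_matrix(y.size, a)
            gain = sig_power / (sig_power + noise_power)
            return np.real(np.linalg.inv(Fa) @ (gain * (Fa @ y)))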

    Sparse and Redundant Representations for Inverse Problems and Recognition

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations to inverse problems and object recognition, and we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). The dissertation consists of four major parts. In the first part, we study a new type of deconvolution algorithm that estimates the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis, together with a method for automatically determining the noise-shrinkage threshold for each scale and direction, without explicit knowledge of the noise variance, using generalized cross validation. In the second part, we study a reconstruction method that recovers highly undersampled images, assumed to have a sparse representation in a gradient domain, from partial measurement samples collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique handles both random and restricted sampling scenarios more flexibly than its competitors. In the third part, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high-resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also offers many new applications and advantages, including strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximation and feature extraction. A dictionary is learned for each object class from the given training examples by minimizing the representation error under a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary, and the residual vectors, along with the coefficients, are used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
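    The recognition scheme in the last part reduces, in its simplest least-squares form, to comparing reconstruction residuals across class dictionaries. A minimal numpy sketch of that decision rule follows (assumed variable names; the sparsity constraint used during dictionary learning is omitted here).

        import numpy as np

        def classify_by_residual(y, dictionaries):
            # Project the test sample onto the span of each class dictionary's atoms
            # and return the class whose reconstruction residual is smallest.
            residuals = []
            for D in dictionaries:
                coef, *_ = np.linalg.lstsq(D, y, rcond=None)
                residuals.append(np.linalg.norm(y - D @ coef))
            return int(np.argmin(residuals))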

    Signal processing for guided wave structural health monitoring

    The importance of Structural Health Monitoring (SHM) in several industrial fields has been growing continuously in recent years, with an increasing need for systems able to monitor the integrity of complex structures continuously. In order to be competitive with conventional non-destructive evaluation techniques, SHM must be able to detect the occurrence of damage in the structure effectively, giving information about the damage location. Ultrasonic guided waves offer the possibility of inspecting large areas of a structure from a small number of sensor positions. However, inspection of complex structures is difficult as the reflections from different features overlap; damage detection therefore becomes extremely challenging, and robust signal processing is required to resolve strongly overlapping echoes. In our work we first considered a deconvolution approach for enhancing the resolution of ultrasonic time traces, and we have shown the potential and the limitations of this approach for reliable SHM applications. The effect of noise on the bandwidth of typical SHM signals and the effect of frequency-dependent phase shifts are the main detrimental issues that strongly reduce the performance of deconvolution in SHM applications. The second part of this thesis is concerned with the evaluation of a subtraction approach for SHM when changes in environmental conditions are taken into account. Temperature changes result in imperfect subtraction even for an undamaged structure, since they modify the mechanical properties of the material and therefore the propagation velocity of ultrasonic guided waves. Compensation techniques have previously been used effectively to overcome temperature effects and reduce the residual after subtraction. In this work the performance of temperature compensation techniques has also been evaluated in the presence of other detrimental effects, such as liquid loading and the different temperature responses of the materials in adhesive joints. Numerical simulations and experiments have been conducted, and it has been shown that temperature compensation techniques can in principle cope with non-temperature effects. It is concluded that the subtraction approach represents a promising method for reliable Structural Health Monitoring, although its feasibility depends on the environmental conditions.
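    One widely used temperature-compensation scheme of the kind evaluated here is baseline signal stretch: the baseline trace is time-stretched over a small range of factors and the best-matching version is subtracted. A hedged numpy sketch follows; the stretch range, search grid, and variable names are placeholder choices, not the thesis implementation.

        import numpy as np

        def compensated_subtraction(current, baseline, stretches):
            # Resample the baseline at each candidate stretch factor and keep the
            # residual with minimum energy, mimicking the velocity change with temperature.
            t = np.arange(len(baseline))
            best_energy, best_residual = np.inf, None
            for s in stretches:
                stretched = np.interp(t, t * s, baseline)
                residual = current - stretched
                energy = float(np.sum(residual ** 2))
                if energy < best_energy:
                    best_energy, best_residual = energy, residual
            return best_residual

        # e.g. residual = compensated_subtraction(u, u0, np.linspace(0.99, 1.01, 41))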

    Bayesian Variational Regularisation for Dark Matter Reconstruction with Uncertainty Quantification

    Despite the great wealth of cosmological knowledge accumulated since the early 20th century, the nature of dark matter, which accounts for ~85% of the matter content of the universe, remains elusive. Unfortunately, though dark matter is scientifically interesting, with implications for our fundamental understanding of the Universe, it cannot be directly observed. Instead, dark matter may be inferred from, e.g., the optical distortion (lensing) of distant galaxies, which, at linear order, manifests as a perturbation to the apparent magnitude (convergence) and ellipticity (shearing). Ensemble observations of the shear are collected and leveraged to construct estimates of the convergence, which can be related directly to the universal dark-matter distribution. Imminent stage IV surveys are forecast to accrue an unprecedented quantity of cosmological information; a discriminative partition of it is accessible through the convergence and is disproportionately concentrated at high angular resolutions, where the echoes of cosmological evolution under gravity are most apparent. Capitalising on advances in probability concentration theory, this thesis merges the paradigms of Bayesian inference and optimisation to develop hybrid convergence inference techniques which are scalable, statistically principled, and operate over the Euclidean plane, the celestial sphere, and the 3-dimensional ball. Such techniques can quantify the plausibility of inferences at one-millionth of the computational overhead of competing sampling methods. These Bayesian techniques are applied to the hotly debated Abell-520 merging cluster, concluding that the observational catalogues contain insufficient information to determine the existence of dark-matter self-interactions. Further, these techniques were applied to all public lensing catalogues, recovering the then-largest global dark-matter mass map. The primary methodological contributions of this thesis depend only on posterior log-concavity, paving the way towards a potentially revolutionary complete hybridisation with artificial intelligence techniques. These next-generation techniques are the first to operate over the full 3-dimensional ball, laying the foundations for statistically principled universal dark-matter cartography and the cosmological insights such advances may provide.
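    For orientation, the planar linear relation between shear and convergence admits a classical direct inversion (Kaiser-Squires), sketched below in numpy. The thesis replaces such direct estimators with Bayesian variational techniques on the plane, sphere, and ball, so this is background to the problem rather than the thesis method.

        import numpy as np

        def kaiser_squires(gamma):
            # Recover the convergence kappa from a complex shear field gamma on a
            # periodic grid: kappa_hat = conj(D) * gamma_hat, where
            # D = (k1 + i k2)^2 / |k|^2 has unit modulus away from k = 0.
            ny, nx = gamma.shape
            k1 = np.fft.fftfreq(nx)[None, :]
            k2 = np.fft.fftfreq(ny)[:, None]
            ksq = k1**2 + k2**2
            num = k1**2 - k2**2 + 2j * k1 * k2
            D = np.zeros((ny, nx), dtype=complex)
            mask = ksq > 0
            D[mask] = num[mask] / ksq[mask]  # k = 0 mode is unconstrained (mass-sheet)
            kappa_hat = np.conj(D) * np.fft.fft2(gamma)
            return np.real(np.fft.ifft2(kappa_hat))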

    Image restoration with desensitization of estimates

    The framework of this thesis is digital image restoration, that is, the process of recovering an original image which has been degraded by the imperfections of the acquisition system: blurring and noise. Restoring this degradation is an ill-posed problem, since direct least-squares inversion amplifies the noise at high frequencies. For that reason, mathematical regularization is used to include prior knowledge about the image that stabilizes the solution. The first part of the thesis reviews state-of-the-art methods, which are later used for comparison in the experiments. To deal with the regularization problem, image restoration imposes two main requirements. First, it is necessary to make assumptions about how the image behaves outside the field of view, as a consequence of the non-local property of the convolution that models the degradation. The absence of boundary conditions in the restoration problem produces the so-called boundary ringing artifact. Second, restoration methods depend on a wide set of parameters, which can be broadly grouped into three categories: parameters of the degradation process, of the noise, and of the original image. All of them require a sufficiently accurate prior estimate, because small errors in their values lead to important deviations in the restoration results. The boundary problem and the sensitivity to estimates are the issues this thesis addresses by means of two iterative algorithms. The first algorithm copes with the boundary problem by taking an image truncated to the field of view as the real observation. To resolve this nonlinearity, a neural network minimizes a cost function defined mainly by total-variation regularization, with neither prior assumptions about the boundaries nor previous training of the network. It yields a restored image without ringing artifacts in the field of view, and the truncated boundaries are moreover reconstructed to the original image size. The algorithm is based on backpropagation, which turns the network into an iterative cycle of two steps, forward and backward, simulating a restoration and a degradation at each iteration. Following the same iterative restoration-degradation concept, a second algorithm is presented in the frequency domain to reduce the dependency on parameter estimates. To this end, a novel desensitized restoration filter is designed by applying an iterative algorithm to an original filter. By analyzing the sensitivity properties of this filter and setting a criterion for the number of iterations, an expression for the desensitization algorithm is obtained, particularized to the Wiener and Tikhonov filters. Experimental results demonstrate the good behavior of the filter with respect to the noise-dependent error, so the estimate that becomes most robust is that of the noise parameters, although the desensitization also extends to the rest of the estimates.
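    The exact expression for the desensitized filter is derived in the thesis; what can be sketched safely is the sensitivity problem it targets, namely how much a Wiener restoration drifts when the assumed noise parameter deviates from the true one. A hedged numpy sketch with assumed variable names:

        import numpy as np

        def wiener_filter(H, nsr):
            # Classical Wiener filter with noise-to-signal power ratio nsr.
            return np.conj(H) / (np.abs(H) ** 2 + nsr)

        def restoration_deviation(H, Y, true_nsr, assumed_nsr):
            # Relative deviation of the restoration when the noise parameter is
            # misestimated -- the sensitivity the desensitization scheme reduces.
            x_true = np.fft.ifft2(wiener_filter(H, true_nsr) * Y).real
            x_assumed = np.fft.ifft2(wiener_filter(H, assumed_nsr) * Y).real
            return np.linalg.norm(x_assumed - x_true) / np.linalg.norm(x_true)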