
    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance, in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remote sensing imaging sensors, both active and passive, is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has driven restoration techniques to develop along different paths according to sensor type. This review paper brings together advances in image restoration techniques, with particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. It therefore provides a comprehensive, discipline-specific starting point, with sufficient detail and references, for students, researchers, and senior researchers wishing to investigate data restoration. Additionally, the review is accompanied by a toolbox that encourages interested students and researchers to further explore the restoration techniques; the toolboxes are provided at https://github.com/ImageRestorationToolbox. (Comment: this paper is under review in GRS.)
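
    The abstract frames restoration as recovering an unknown image x from a degraded observation y. As a hedged illustration of that generic forward model (not code from the authors' toolbox), the sketch below degrades a synthetic scene with additive Gaussian noise and applies a naive smoothing baseline; sensor-specific methods such as SAR despeckling or hyperspectral denoising replace the last step with dedicated models.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # toy forward model: observed image y = true scene x + additive noise n
    rng = np.random.default_rng(0)
    x = np.zeros((64, 64))
    x[16:48, 16:48] = 1.0                            # "true" scene
    y = x + 0.2 * rng.standard_normal(x.shape)       # degraded observation

    # naive restoration baseline: Gaussian smoothing (stand-in for real methods)
    x_hat = gaussian_filter(y, sigma=1.5)
    print("RMSE before:", np.sqrt(np.mean((y - x) ** 2)),
          "after:", np.sqrt(np.mean((x_hat - x) ** 2)))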

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on random single and mixed noise distributions, that attenuates the effect of noise while preserving edge details. Only then can robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise. In the first step, methods are evaluated based on an exhaustive review of the different types of denoising methods that address impulse and Gaussian noise and of their related filters. These include spatial filters (linear, non-linear and combinations of them), transform-domain filters, neural network-based filters, numerical-based filters, fuzzy-based filters, morphological filters, statistical filters, and supervised learning-based filters. In the second step, a switching adaptive median and fixed weighted mean filter (SAMFWMF), which combines linear and non-linear filters, is introduced in order to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process of non-maximum suppression, maximum sequence, thresholding and morphological operations. Results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) with total variation is introduced in order to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. A robust edge detection is then applied in order to track the true edges. Results are obtained on medical ultrasound and natural images. In the fourth step, a smoothing filter implemented as a deep feed-forward convolutional neural network (CNN) is introduced, trained with L2 loss minimization, a regularization method, and batch normalization, in order to detect and remove impulse noise as well as mixed impulse and Gaussian noise. A robust edge detection is then applied in order to track the true edges. Results are obtained on natural images for both specific and non-specific noise levels.
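
    As a rough sketch of the switching principle behind the second step (detect likely impulse-corrupted pixels and replace only those, leaving clean pixels untouched), the code below uses a plain local median as both detector and replacement value. It illustrates the general idea only; the dissertation's SAMFWMF combines an adaptive median with a fixed weighted mean, and the window size and threshold here are arbitrary.

    import numpy as np
    from scipy.ndimage import median_filter

    def switching_median(img, window=3, threshold=40):
        # flag pixels that deviate strongly from their local median (likely impulses)
        med = median_filter(img, size=window)
        corrupted = np.abs(img.astype(float) - med.astype(float)) > threshold
        # replace only the flagged pixels; uncorrupted pixels keep their values
        out = img.copy()
        out[corrupted] = med[corrupted]
        return out

    # demo on a flat image hit by 10% salt-and-pepper noise
    rng = np.random.default_rng(1)
    noisy = np.full((64, 64), 128, dtype=np.uint8)
    impulses = rng.random(noisy.shape) < 0.1
    noisy[impulses] = rng.choice([0, 255], size=int(impulses.sum()))
    denoised = switching_median(noisy)
    print("pixels replaced by the filter:", int((denoised != noisy).sum()))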

    Nonlinear Adaptive Diffusion Models for Image Denoising

    Most digital image applications demand high image quality. Unfortunately, images are often degraded by noise during the formation, transmission, and recording processes. Hence, image denoising is an essential processing step preceding visual and automated analyses. Existing image denoising methods can reduce image contrast or create block or ring artifacts in the process of denoising. In this dissertation, we develop high-performance nonlinear diffusion-based image denoising methods capable of preserving edges and maintaining high visual quality. This is attained through several approaches. First, a nonlinear diffusion is presented with robust M-estimators as diffusivity functions. Second, the knowledge of textons derived from Local Binary Patterns (LBP), which unify divergent statistical and structural models of region analysis, is utilized to adjust the time step of the diffusion process. Next, the role of nonlinear diffusion that is adaptive to the local context in the wavelet domain is investigated, and the stationary wavelet context-based diffusion (SWCD) is developed for performing iterative shrinkage. Finally, we develop a locally- and feature-adaptive diffusion (LFAD) method, in which each image patch/region is diffused individually and the diffusivity function is modified to incorporate the Inverse Difference Moment as a local estimate of the gradient. Experiments have been conducted to evaluate the performance of each of the developed methods and compare it to the reference group and to state-of-the-art methods.
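
    For readers unfamiliar with the baseline that these methods extend, the following is a minimal Perona-Malik-style diffusion sketch; the edge-stopping diffusivity g is the ingredient that the dissertation replaces with robust M-estimators and adapts via LBP textons and wavelet context. The exponential diffusivity and parameter values below are generic choices, not those used in the dissertation.

    import numpy as np

    def diffuse(img, n_iter=20, kappa=20.0, step=0.2):
        u = img.astype(float)
        for _ in range(n_iter):
            # differences to the four nearest neighbours
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # edge-stopping diffusivity; a robust M-estimator (e.g. Tukey's
            # biweight) could be substituted here, as in the dissertation
            g = lambda d: np.exp(-(d / kappa) ** 2)
            u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    # usage on a noisy step image: edges are preserved, flat areas are smoothed
    rng = np.random.default_rng(0)
    noisy = np.pad(np.ones((32, 32)), 16) * 100 + 10 * rng.standard_normal((64, 64))
    smoothed = diffuse(noisy)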

    The General Flow-Adaptive Filter: With Applications to Ultrasound Image Sequences

    While image filtering is limited to two dimensions, the filtering of image sequences can utilize three: two spatial and one temporal. Unfortunately, simple extensions of common two-dimensional filters into three dimensions yield undesirable motion blurring of the images. This thesis addresses this problem and introduces a novel filtering approach termed the general flow-adaptive filter. Most often a three-dimensional filter can be visualized as a cubic lattice shifted over the data; at each point, the element corresponding to the central coordinate is replaced with a new value based entirely on the values inside the lattice. The general principle of the flow-adaptive approach is to spatially adapt the entire filter lattice to possibly complex spatial movements in the temporal domain by incorporating local flow-field estimates. Results using the flow-adaptive technique on five filters (the temporal discontinuity filter, a tensor-based adaptive filter, the average filter, the median filter, and a Gaussian-shaped convolution filter) are presented. Both ultrasound image sequences and synthetic data sets were filtered. An edge-adaptive normalized mean-squared error is used as the performance metric on the filtered synthetic sets, and the error is shown to be substantially reduced (as much as halved in many instances) using the flow-adaptive technique. There are even indications that simple Gaussian-shaped convolution filters can outperform larger and more complex adaptive filters when the flow-adaptive procedure is implemented. For the ultrasound image sequences, the filters adopting the flow-adaptive principle produced outputs with less motion blur and sharper contrast than the non-flow-adaptive filters. At the cost of flow estimation, the flow-adaptive approach substantially improves the performance of all the filters included in this study.
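
    A minimal sketch of the flow-adaptive principle described above is given below: before combining frames in time, the neighbouring frame is warped along a per-pixel flow field so that the filter lattice follows the motion instead of blurring across it. The flow field is assumed to have been estimated elsewhere, and the simple two-frame average stands in for the five filters studied in the thesis.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def flow_compensated_average(frame_t, frame_prev, flow):
        """flow[..., 0] and flow[..., 1] hold the per-pixel row/column
        displacement from frame t back to frame t-1 (assumed pre-estimated)."""
        rows, cols = np.indices(frame_t.shape)
        coords = np.stack([rows + flow[..., 0], cols + flow[..., 1]])
        # sample the previous frame along the motion trajectory
        warped_prev = map_coordinates(frame_prev.astype(float), coords,
                                      order=1, mode="nearest")
        # with an all-zero flow this reduces to a plain temporal average
        return 0.5 * (frame_t.astype(float) + warped_prev)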

    System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging

    In the past decade, many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, each faces one or more specific problems that prevent it from being employed effectively or efficiently. In this dissertation, four novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). For these imaging modalities, system characteristics are analyzed or optimized reconstruction methods are proposed. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be more reliably reconstructed. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency contents of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being over-smoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements yields a stabilized numerical solution of the decomposition problem, thus overcoming the main disadvantage of the conventional approach, which is extremely sensitive to noise corruption. In the final part, we describe modified filtered backprojection and iterative image reconstruction algorithms developed specifically for TBCT. Special parallelization strategies are designed to facilitate GPU computing, demonstrating the capability of producing high-quality reconstructed volumetric images at very fast computational speeds. For all the investigations mentioned above, both simulation and experimental studies have been conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
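
    The third part's stabilization idea can be illustrated with a toy per-ray computation: the two dual-energy log-attenuation equations are augmented with a total-projection-length equation t1 + t2 = L, and the overdetermined system is solved by least squares. The attenuation coefficients and noise level below are invented for illustration and are not taken from the dissertation.

    import numpy as np

    mu = np.array([[0.25, 0.80],     # attenuation of materials 1, 2 at low energy
                   [0.18, 0.30]])    # attenuation of materials 1, 2 at high energy
    t_true = np.array([2.0, 1.0])    # true path lengths (cm) through each material
    L = t_true.sum()                 # total projection length, assumed known

    rng = np.random.default_rng(2)
    p = mu @ t_true + 0.02 * rng.standard_normal(2)   # noisy log-attenuation data

    # augment the dual-energy system with the constraint t1 + t2 = L
    A = np.vstack([mu, [1.0, 1.0]])
    b = np.concatenate([p, [L]])
    t_constrained, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("unconstrained:", np.linalg.solve(mu, p), "constrained:", t_constrained)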

    Advanced Image Acquisition, Processing Techniques and Applications

    "Advanced Image Acquisition, Processing Techniques and Applications" is the first book of a series that provides image processing principles and practical software implementation on a broad range of applications. The book integrates material from leading researchers on Applied Digital Image Acquisition and Processing. An important feature of the book is its emphasis on software tools and scientific computing in order to enhance results and arrive at problem solution

    Advances in Motion Estimators for Applications in Computer Vision

    Motion estimation is a core task in computer vision, and many applications utilize optical flow methods as fundamental tools to analyze motion in images and videos. Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. Today, optical flow fields are utilized to solve problems in various areas such as object detection and tracking, interpolation, and visual odometry. In this dissertation, three problems from different areas of computer vision, and the solutions that make use of modified optical flow methods, are explained. The contributions of this dissertation are approaches and frameworks that introduce i) a new optical flow-based interpolation method to achieve minimally divergent velocimetry data, ii) a framework that improves the accuracy of change detection algorithms in synthetic aperture radar (SAR) images, and iii) a set of new methods to integrate proton magnetic resonance spectroscopy (1H-MRSI) data into three-dimensional (3D) neuronavigation systems for tumor biopsies. In the first application, an optical flow-based approach for the interpolation of minimally divergent velocimetry data is proposed. The velocimetry data of incompressible fluids contain signals that describe the flow velocity, and the approach uses this additional flow velocity information to guide the interpolation process towards reduced divergence in the interpolated data. In the second application, a framework consisting mainly of optical flow methods and other image processing and computer vision techniques is proposed to improve object extraction from synthetic aperture radar images. The framework distinguishes between actual motion and detected motion due to misregistration in SAR image sets, which can lead to more accurate and meaningful change detection and improve object extraction from SAR datasets. In the third application, a set of new methods is proposed that aims to improve upon the current state of the art in neuronavigation through the use of detailed three-dimensional (3D) 1H-MRSI data. The result is a progressive form of online MRSI-guided neuronavigation that is demonstrated through phantom validation and clinical application. (Doctoral dissertation, Electrical Engineering.)
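
    The first contribution targets velocimetry fields with minimal divergence; the sketch below shows only the quantity being minimized, namely the finite-difference divergence of a 2D velocity field. It is a generic illustration, not the interpolation scheme developed in the dissertation.

    import numpy as np

    def divergence(u, v, dx=1.0, dy=1.0):
        """u, v: x- and y-components of a velocity field on a regular grid."""
        du_dx = np.gradient(u, dx, axis=1)
        dv_dy = np.gradient(v, dy, axis=0)
        return du_dx + dv_dy

    # divergence-free example: rigid rotation about the grid centre
    y, x = np.mgrid[-16:16, -16:16].astype(float)
    u, v = -y, x
    print("max |div|:", np.abs(divergence(u, v)).max())   # 0.0 for this field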

    Chemical analysis of polymer blends via synchrotron X-ray tomography

    The material properties of industrial polymer blends are of great importance. X-ray tomography has been used to obtain spatial chemical information about various polymer blends. The spatial images are acquired with synchrotron X-ray tomography because of its rapidity, good spatial resolution, large field of view, and elemental sensitivity. The spatial absorption data acquired from the X-ray tomography experiments are converted to spatial chemical information via a linear least squares fit of multi-spectral X-ray absorption data. A fiberglass-reinforced polymer blend with a new-generation flame retardant is studied with multi-energy synchrotron X-ray tomography to assess the blend homogeneity. Relative to other composite materials, this sample is difficult to image due to the low X-ray contrast between the fiberglass reinforcement and the polymer blend. To investigate the chemical composition surrounding the glass fibers, new procedures were developed to find and mark the fiberglass and then assess the flame retardant distribution near the fibers. Another polymer blending experiment used three-dimensional chemical analysis techniques to examine a polymer additive problem called blooming. To investigate the chemical process of blooming, new procedures are developed to assess the flame retardant distribution as a function of annealing time in the sample. Using the spatial chemical distribution, the concentrations at each time step of the annealing process are fit to a diffusion equation. Finally, the diffusion properties of a polymer blend composed of hexabromobenzene and o-terphenyl were studied and compared with computer simulations of the blend.
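
    The conversion from multi-energy absorption to chemical composition described above is, at its core, a per-voxel linear least squares fit. The toy sketch below fits absorption measured at three energies to two reference component spectra; all numbers are placeholders, not data from the experiments.

    import numpy as np

    # rows: X-ray energies; columns: components (e.g. polymer matrix, flame retardant)
    ref_spectra = np.array([[0.30, 1.10],
                            [0.22, 0.65],
                            [0.17, 0.40]])
    true_conc = np.array([0.8, 0.2])

    rng = np.random.default_rng(3)
    measured = ref_spectra @ true_conc + 0.01 * rng.standard_normal(3)

    # per-voxel decomposition: least squares fit of the measured absorption
    conc, *_ = np.linalg.lstsq(ref_spectra, measured, rcond=None)
    print("estimated concentrations:", conc)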