
    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on the distributions of random single and mixed noise, that attenuates the effect of noise while preserving edge details. Only then can robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those emanating from impulse noise, Gaussian noise and speckle noise. In the first step, methods are evaluated through an exhaustive review of the different types of denoising methods targeting impulse noise and Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and combinations of the two), transform domain filters, neural network-based filters, numerical-based filters, fuzzy-based filters, morphological filters, statistical filters, and supervised learning-based filters. In the second step, the switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process that includes non-maximum suppression, maximum sequence, thresholding and morphological operations. Results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) with total variation is introduced to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. A robust edge detection step is then applied to track the true edges. Results are obtained on medical ultrasound and natural images. In the fourth step, a smoothing filter based on a deep feed-forward convolutional neural network (CNN) is introduced, trained with L2 loss minimization, regularization, and batch normalization, to detect and remove impulse noise as well as mixed impulse and Gaussian noise. A robust edge detection step is then applied to track the true edges. Results are obtained on natural images for both specific and non-specific noise levels.
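    As a minimal illustration of the switching idea behind filters such as SAMFWMF, the sketch below flags likely impulse pixels and replaces only those with a local median, leaving uncorrupted pixels (and hence edges) untouched. The thresholds, window size and detection rule are illustrative assumptions, not the dissertation's exact design.

```python
# Minimal sketch (assumed thresholds and window size, not the dissertation's
# exact SAMFWMF rule): flag likely impulse pixels, then replace only those
# with a local median so uncorrupted pixels and edges are left untouched.
import numpy as np
from scipy.ndimage import median_filter

def switching_median(img, low=10, high=245, size=3):
    """img: 2D uint8 image; low/high: assumed salt-and-pepper thresholds."""
    med = median_filter(img, size=size)
    impulses = (img <= low) | (img >= high)  # crude impulse detector
    out = img.copy()
    out[impulses] = med[impulses]            # filter only flagged pixels
    return out
```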

    Real-Time Quantum Noise Suppression In Very Low-Dose Fluoroscopy

    Fluoroscopy provides real-time X-ray screening of a patient's organs and of various radiopaque objects, which makes it an invaluable tool for many interventional procedures. For this reason, the number of fluoroscopy screenings has grown consistently over recent decades. However, this trend has raised many concerns about the increase in X-ray exposure, as even low-dose procedures have turned out to be less safe than once assumed, demanding rigorous monitoring of the X-ray dose delivered to patients and to the exposed medical staff. In this context, the use of very low-dose protocols would be extremely beneficial. Nonetheless, this would result in very noisy images, which need to be suitably denoised in real time to support interventional procedures. Simple smoothing filters tend to produce blurring effects that undermine the visibility of object boundaries, which is essential for the human eye to understand the imaged scene. Therefore, some denoising strategies embed noise statistics-based criteria to improve their denoising performance. This dissertation focuses on the Noise Variance Conditioned Average (NVCA) algorithm, which exploits a priori knowledge of quantum noise statistics to reduce noise while preserving edges; it has already outperformed many state-of-the-art methods in denoising images corrupted by quantum noise, while also being suitable for real-time hardware implementation. Several issues that currently limit the use of very low-dose protocols in clinical practice are addressed, e.g. the evaluation of the actual performance of denoising algorithms in very low-dose conditions, the optimization of tuning parameters to obtain the best denoising performance, the design of an index to properly measure the quality of X-ray images, and the assessment of an a priori noise characterization approach that accounts for time-varying noise statistics due to changes in X-ray tube settings. An improved NVCA algorithm is also presented, along with its real-time hardware implementation on a Field Programmable Gate Array (FPGA). The novel algorithm provides more effective noise reduction for low-contrast moving objects, relaxing the trade-off between noise reduction and edge preservation, while further reducing hardware complexity, which keeps logic resource usage low even on small FPGA platforms. The results presented in this dissertation provide the means for future studies aimed at embedding the NVCA algorithm in commercial fluoroscopic devices to accomplish real-time denoising of very low-dose X-ray images, which would foster their actual use in clinical practice.
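    A hedged sketch of the conditioned-average idea follows: a neighbor contributes to the mean only if it differs from the center pixel by less than a multiple of the local quantum-noise standard deviation, which for Poisson-dominated X-ray noise grows with the square root of the signal. The window size, gain and inclusion rule are assumptions; the published NVCA algorithm differs in detail.

```python
# Hedged sketch of a noise-variance-conditioned average (window size, gain
# and inclusion rule are assumptions; the published NVCA differs in detail).
# A neighbor enters the mean only if it lies within k standard deviations
# of the center pixel, with a Poisson-like noise model for sigma.
import numpy as np

def nvca_like(img, k=2.0, gain=1.0, size=3):
    img = img.astype(np.float64)
    pad = size // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + size, j:j + size]
            sigma = np.sqrt(gain * max(img[i, j], 1.0))  # Poisson-like model
            mask = np.abs(win - img[i, j]) <= k * sigma  # variance-conditioned
            out[i, j] = win[mask].mean()
    return out
```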

    Investigating Potential Combinations of Visual Features towards Improvement of Full-Reference and No-Reference Image Quality Assessment

    Objective assessment of image quality is the process of automatically assigning a scalar score to an image such that the score corresponds to the rating provided by the Human Visual System (HVS). Despite extensive study over the last two decades, it remains a challenging problem in image processing due to the presence of different types of distortions and limited knowledge of the HVS. Existing approaches for assessing the perceptual quality of images have relied on a number of methodologies: some directly apply known properties of the HVS, some construct hypotheses that treat the HVS as a black box, and hybrid approaches combine both. All of these methodologies rely on different types of visual features for Image Quality Assessment (IQA). In this dissertation, we study the problem of IQA from the feature extraction point of view and show that effective combinations of simple visual features can be used to develop IQA approaches with performance competitive with the state of the art. Our work is divided into four parts, each with the final goal of improving performance in Full-Reference (FR) and No-Reference (NR) IQA, moving gradually from FR to NR-IQA. First, we propose improvements to two existing FR-IQA techniques by changing the features used. Next, we propose a new FR-IQA technique that extracts image saliency as a global feature and combines it with the local features of gradient and variance to improve performance. For NR-IQA, we propose a novel technique for sharpness detection in natural images using simple features, which improves on existing methods. After this specific-purpose NR-IQA work, we propose a general-purpose technique using suitable features such that no training with pristine or distorted images or subjective quality scores is required. Despite having no reliance on training, this technique performs competitively with state-of-the-art techniques. The main contribution of the dissertation lies in the identification and analysis of effective features and their combinations for improving three different sub-areas of IQA.
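    To illustrate the flavor of combining simple local features into an FR-IQA score, the sketch below pools pointwise similarities of gradient magnitude and local variance between reference and distorted images. The constant c, window size and mean pooling are illustrative assumptions, not the dissertation's proposed method.

```python
# Illustrative sketch of combining simple local features (gradient magnitude
# and local variance) into a full-reference score; the constant c, window
# size and mean pooling are assumptions, not the dissertation's method.
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def grad_var_score(ref, dist, c=0.005, size=7):
    ref, dist = ref.astype(np.float64), dist.astype(np.float64)
    def gmag(x):  # gradient magnitude via Sobel derivatives
        return np.hypot(sobel(x, axis=0), sobel(x, axis=1))
    def lvar(x):  # local variance over a size x size window
        mu = uniform_filter(x, size)
        return np.maximum(uniform_filter(x * x, size) - mu * mu, 0.0)
    g1, g2 = gmag(ref), gmag(dist)
    v1, v2 = lvar(ref), lvar(dist)
    gs = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)  # gradient similarity
    vs = (2 * v1 * v2 + c) / (v1 ** 2 + v2 ** 2 + c)  # variance similarity
    return float(np.mean(gs * vs))                    # pooled score in (0, 1]
```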

    Motion robust acquisition and reconstruction of quantitative T2* maps in the developing brain

    The goal of the research presented in this thesis was to develop methods for quantitative T2* mapping of the developing brain. Brain maturation in the early period of life involves complex structural and physiological changes caused by synaptogenesis, myelination and cell growth. Molecular structures and biological processes give rise to varying levels of T2* relaxation time, which is an inherent contrast mechanism in magnetic resonance imaging. Knowledge of T2* relaxation times in the brain can thus help with the evaluation of pathology by establishing normative values in key areas of the brain. T2* relaxation values are a valuable biomarker for myelin microstructure and iron concentration, as well as an important guide towards optimal fMRI contrast. However, fetal MR imaging is a significant step up from neonatal or adult MR imaging due to the complexity of the acquisition and reconstruction techniques required to provide high-quality artifact-free images in the presence of maternal respiration and unpredictable fetal motion. The first contribution of this thesis, described in Chapter 4, presents a novel acquisition method for measuring fetal brain T2* values. At the time of publication, this was the first study of fetal brain T2* values. Single-shot multi-echo gradient echo EPI was proposed as a rapid method for measuring fetal T2* values by effectively freezing intra-slice motion. The study concluded that fetal T2* values are higher than those previously reported for pre-term neonates and decline with a consistent trend across gestational age. The data also suggested that longer-than-usual echo times or direct T2* measurement should be considered when performing fetal fMRI in order to reach optimal BOLD sensitivity. For the second contribution, described in Chapter 5, measurements were extended to the higher field strength of 3T and reported, for the first time at this field strength, for both fetal and neonatal subjects. The technical contribution of this work is a fully automatic segmentation framework that propagates brain tissue labels onto the acquired T2* maps without the need for manual intervention. The third contribution, described in Chapter 6, proposes a new method for 3D fetal brain reconstruction where the available data are sparse, limiting the use of current state-of-the-art techniques for 3D brain reconstruction in the presence of motion. To enable a high-resolution reconstruction, a generative adversarial network was trained to perform image-to-image translation between T2-weighted and T2*-weighted data. Translated images could then serve as a prior for slice alignment and super-resolution reconstruction of the 3D brain image.
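    The quantity being mapped follows the standard mono-exponential decay S(TE) = S0 · exp(−TE/T2*); a minimal sketch of voxel-wise estimation via a log-linear least-squares fit is shown below. The echo times and signal values are illustrative, not the acquisition protocol used in the thesis.

```python
# Minimal sketch of voxel-wise T2* estimation from multi-echo magnitudes,
# assuming the standard mono-exponential model S(TE) = S0 * exp(-TE / T2*).
# Echo times and signals below are illustrative, not the thesis protocol.
import numpy as np

def fit_t2star(signals, echo_times):
    """signals: (n_echoes, ...) array of magnitudes; echo_times in ms."""
    te = np.asarray(echo_times, dtype=np.float64)
    logs = np.log(np.clip(signals, 1e-6, None)).reshape(len(te), -1)
    A = np.stack([np.ones_like(te), -te], axis=1)  # log S = log S0 - TE/T2*
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
    t2star = 1.0 / np.clip(coef[1], 1e-6, None)    # coef[1] is R2* = 1/T2*
    return t2star.reshape(np.shape(signals)[1:])

te = [5.0, 20.0, 35.0, 50.0, 65.0]                 # hypothetical echo times (ms)
sig = 1000.0 * np.exp(-np.array(te) / 100.0)       # voxel with T2* = 100 ms
print(fit_t2star(sig.reshape(5, 1), te))           # -> approximately [100.]
```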

    Multisensor Concealed Weapon Detection Using the Image Fusion Approach

    Detection of concealed weapons is an increasingly important problem for both the military and police, since global terrorism and crime have grown as threats over the years. This work presents two image fusion algorithms, one at the pixel level and another at the feature level, for efficient concealed weapon detection. Both algorithms are based on the double-density dual-tree complex wavelet transform (DDDTCWT). In the pixel-level fusion scheme, the fusion of low-frequency band coefficients is determined by local contrast, while the high-frequency band fusion rule is developed by considering both the texture sensitivity of the human visual system (HVS) and local energy. In the feature-level fusion algorithm, features are extracted using a Gaussian Mixture Model (GMM)-based multiscale segmentation approach, and the fusion rules are developed based on region activity measurement. Experimental results demonstrate the robustness and efficiency of the proposed algorithms.
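    The paper's rules operate on the DDDTCWT with contrast- and energy-based criteria; as a stand-in, the sketch below uses a plain single-level DWT from PyWavelets (an assumed dependency), averaging the low band and keeping the larger-magnitude coefficient in each high band, to illustrate the structure of pixel-level wavelet fusion rather than the published rules.

```python
# Stand-in sketch of pixel-level wavelet fusion: a plain single-level DWT
# (PyWavelets, an assumed dependency) replaces the paper's DDDTCWT; the low
# band is averaged and each high band keeps the larger-magnitude coefficient.
import numpy as np
import pywt

def fuse_dwt(img_a, img_b, wavelet='db2'):
    cA1, det1 = pywt.dwt2(img_a.astype(np.float64), wavelet)
    cA2, det2 = pywt.dwt2(img_b.astype(np.float64), wavelet)
    cA = 0.5 * (cA1 + cA2)                            # low band: average
    det = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                for d1, d2 in zip(det1, det2))        # high bands: max magnitude
    return pywt.idwt2((cA, det), wavelet)
```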

    Utility of High Resolution Human Settlement Data for Assessment of Electricity Usage Patterns

    Electricity is vital for modern human civilization, and demand for it is expected to rise significantly due to urban growth, transportation modernization, and increasing industrialization and energy accessibility. Meeting present and future demand while minimizing the environmental degradation caused by electricity generation pathways presents a significant sustainability challenge. Urban areas consume around 75% of the global energy supply, yet urban energy statistics are scarce worldwide, severely hindering much-needed energy sustainability studies. This work explores the scope of geospatial data-driven analysis and modeling to address this challenge. The identification and measurement of human habitats, a key input, is widely misrepresented in existing datasets. A multi-scale analysis of high-, medium-, and coarse-resolution datasets in Egypt and Taiwan illustrates the growing discrepancies from global to local scales. Analysis of urban morphology revealed that high-resolution datasets perform much better at all scales in diverse geographies, while the power of other datasets rapidly diminishes from the urban core to the peripheries. A functional inventory of urban settlements was developed for three cities in the developing world using very high-resolution images and texture analysis. An analysis of the correspondence between nighttime light emission, a proxy for electricity consumption, and the settlement inventory was then conducted. The results highlight the statistically significant relationship between functional settlement types and corresponding light emission, and underline the potential of remote sensing data-driven methods in urban energy usage assessment. Lastly, the lack of urban electricity data was addressed by a geospatial modeling approach in the United States. The estimated urban electricity consumption was externally validated and subsequently used to quantify the effects of urbanization on electricity consumption. The results indicate a 23% reduction in per-capita electricity consumption for a 100% increase in urban population, highlighting the potential of urbanization to lower per-capita energy usage. The opportunities and limits of such energy efficiency were identified with regard to urban population density. The findings from this work validate the applicability of geospatial data in urban energy studies and provide unique insights into the relationship between urbanization and electricity demand. These insights could be useful for other sustainability studies.
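    The reported urbanization effect has the form of a log-log elasticity: if log per-capita consumption is regressed on log urban population with slope b, a population doubling changes consumption by a factor of 2^b. The sketch below uses synthetic, hypothetical data and an assumed functional form to show how a slope of about −0.38 reproduces the quoted ~23% reduction per doubling.

```python
# Synthetic sketch (hypothetical data, assumed log-log functional form) of
# the elasticity reading of the reported effect: a regression slope b on
# log-log axes implies a (1 - 2**b) fractional change per population
# doubling, and b of about -0.38 yields the quoted ~23% reduction.
import numpy as np

pop = np.array([1e5, 5e5, 1e6, 5e6, 1e7])        # hypothetical urban populations
e_pc = 12.0 * pop ** -0.379                      # synthetic per-capita demand
b, a = np.polyfit(np.log(pop), np.log(e_pc), 1)  # slope b, intercept a
print(f"elasticity b = {b:.3f}, change per doubling = {1 - 2 ** b:.1%}")
```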

    Design and Optimization of Graph Transform for Image and Video Compression

    The main contribution of this thesis is the introduction of new methods for designing adaptive transforms for image and video compression. Exploiting graph signal processing techniques, we develop new graph construction methods targeted at image and video compression applications. In this way, we obtain a graph that is, at the same time, a good representation of the image and easy to transmit to the decoder. To do so, we investigate several research directions. First, we propose a new method for graph construction that employs innovative edge metrics, quantization and edge prediction techniques. Then, we adopt a graph learning approach and introduce a new graph learning algorithm targeted at image compression that defines the connectivities between pixels by taking into consideration the coding of the image signal and the graph topology in rate-distortion terms. We also present a new superpixel-driven graph transform that uses superpixel clusters as coding blocks and then computes the graph transform inside each region. In the second part of this work, we exploit graphs to design directional transforms, since an efficient representation of directional image information is extremely important for high-performance image and video coding. We present a new directional transform, called the Steerable Discrete Cosine Transform (SDCT), obtained by steering the 2D-DCT basis in any chosen direction; steering patterns more complex than a single pure rotation can also be used. To show the advantages of the SDCT, we present several image and video compression methods based on this new directional transform. The results show that the SDCT can be efficiently applied to image and video compression and that it outperforms the classical DCT and other directional transforms. Along the same lines, we also present a new generalization of the DFT, called the Steerable DFT (SDFT). Unlike the SDCT, the SDFT can be defined in one or two dimensions: the 1D-SDFT represents a rotation in the complex plane, while the 2D-SDFT performs a rotation in the 2D Euclidean space.
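    As a small illustration of the graph-transform idea, the eigenvectors of a graph Laplacian form the transform basis, and for an unweighted path graph this basis coincides (up to sign) with the 1D DCT-II, which is why adaptive graph construction can be seen as generalizing the DCT. The 8-node path graph below is illustrative, not a construction from the thesis.

```python
# Small illustration of the graph-transform idea: the Laplacian eigenbasis
# of a graph is the transform, and for an unweighted path graph it
# coincides (up to sign) with the 1D DCT-II, so adaptive graph construction
# generalizes the DCT. The 8-node path below is illustrative.
import numpy as np

def graph_transform_basis(W):
    """W: symmetric adjacency matrix; returns (eigenvalues, eigenvectors)."""
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    return np.linalg.eigh(L)                # ascending "graph frequency" order

N = 8
W = np.zeros((N, N))
i = np.arange(N - 1)
W[i, i + 1] = W[i + 1, i] = 1.0             # path graph: a line of pixels
_, U = graph_transform_basis(W)
coeffs = U.T @ np.arange(N, dtype=np.float64)  # forward transform of a ramp
```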