
    Modelling and assessment of signal-dependent noise for image de-noising

    Publication in the conference proceedings of EUSIPCO, Toulouse, France, 200

    Pattern identification of biomedical images with time series: contrasting THz pulse imaging with DCE-MRIs

    Objective: We provide a survey of recent advances in biomedical image analysis and classification from emergent imaging modalities such as terahertz (THz) pulse imaging (TPI) and dynamic contrast-enhanced magnetic resonance images (DCE-MRIs), and identify their underlying commonalities. Methods: Both time- and frequency-domain signal pre-processing techniques are considered: noise removal, spectral analysis, principal component analysis (PCA) and wavelet transforms. Feature extraction and classification methods based on feature vectors built with the above processing techniques are reviewed. A tensorial signal processing de-noising framework suitable for capturing spatiotemporal associations between features in MRI is also discussed. Validation: Examples where the proposed methodologies have been successful in classifying TPIs and DCE-MRIs are discussed. Results: Identifying commonalities in the structure of such heterogeneous datasets potentially leads to a unified multi-channel signal processing framework for biomedical image analysis. Conclusion: The proposed complex-valued classification methodology enables fusion of entire datasets from a sequence of spatial images taken at different time stamps; this is of interest from the viewpoint of inferring disease proliferation. The approach is also of interest for other emergent multi-channel biomedical imaging modalities and is of relevance across the biomedical signal processing community.
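    As a hedged illustration of the kind of pipeline surveyed above (not the authors' code), the following Python sketch standardises per-pixel time series, projects them onto principal components and classifies the resulting feature vectors; the array shapes, the synthetic data and the SVM classifier are assumptions made purely for illustration.

```python
# Illustrative sketch only: PCA-based feature extraction from per-pixel time
# series followed by a simple classifier. The hypothetical array `cube`
# stands in for TPI or DCE-MRI time series; real data would replace it.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
cube = rng.normal(size=(500, 64))       # stand-in: 500 pixels x 64 time points
labels = rng.integers(0, 2, size=500)   # stand-in tissue labels

# Standardise each time series, project onto the leading principal
# components, and classify the resulting feature vectors.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(cube, labels)
print(model.score(cube, labels))
```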

    Arterial spin labelling magnetic resonance imaging of the brain: techniques and development

    This thesis centres on the development of arterial spin labelling (ASL) MRI, a non-invasive technique for imaging cerebral perfusion. In the first chapter I explain the principles of cerebral blood flow (CBF) quantification using ASL, beginning with the original implementation through to the most recent advances. I then describe the established theory behind the key additional MRI contrast mechanisms and techniques that underpin the novel experiments described in this thesis (T2 and T1 relaxation, diffusion imaging, and half-Fourier acquisition and reconstruction). In Chapter 2 I describe work undertaken to sample the transverse relaxation of the ASL perfusion-weighted and control images, acquired with and without vascular crusher gradients at a range of post-labelling delay times and tagging durations, to estimate the intra-vascular, intra-cellular and extra-cellular distribution of labelled water in the rat cortex. The results provide evidence for rapid exchange of labelled water into the intra-cellular space relative to the transit time through the vascular bed, and provide a more solid foundation for CBF quantification using ASL techniques. In Chapter 3 the performance of image de-noising techniques for reducing errors in ASL CBF and arterial transit time estimates is investigated. I show that noise reduction methods can suppress random and systematic errors, improving both the precision and accuracy of CBF measurements and the precision of transit time maps. In Chapter 4 I present the first in-vivo demonstration of Hadamard-encoded continuous ASL (H-CASL), an efficient method of imaging small volumes of labelled blood water in the brain at multiple post-labelling delay times. I present evidence that H-CASL is viable for in-vivo application and can improve the precision of δa estimation in two thirds of the imaging time required for standard multi-post-labelling-delay continuous ASL.
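    For context, the sketch below implements the widely used single-compartment (P)CASL quantification formula from the ASL consensus recommendations, not the thesis's specific relaxation or exchange models; the default parameter values (labelling efficiency, blood T1, partition coefficient) are typical literature values used only for illustration.

```python
# Minimal sketch of the standard single-compartment (P)CASL quantification
# formula. Parameter defaults are typical literature values, for illustration.
import numpy as np

def casl_cbf(delta_m, m0, pld, tau, t1_blood=1.65, alpha=0.85, lam=0.9):
    """Return CBF in ml/100 g/min from a (P)CASL difference signal.

    delta_m : control - label difference signal
    m0      : equilibrium magnetisation (proton-density image)
    pld     : post-labelling delay in seconds
    tau     : labelling duration in seconds
    """
    return (6000.0 * lam * delta_m * np.exp(pld / t1_blood)
            / (2.0 * alpha * t1_blood * m0 * (1.0 - np.exp(-tau / t1_blood))))

print(casl_cbf(delta_m=0.01, m0=1.0, pld=1.8, tau=1.8))
```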

    Prediction of NOx Emissions from a Biomass Fired Combustion Process Based on Flame Radical Imaging and Deep Learning Techniques

    This article presents a methodology for predicting NOx emissions from a biomass combustion process through flame radical imaging and deep learning (DL). The dataset was established experimentally from flame radical images captured on a biomass-gas-fired test rig. Morphological component analysis is undertaken to improve the quality of the dataset, and region-of-interest extraction is introduced to extract the flame radical part and rescale the image size. The developed DL-based prediction model contains three successive stages implementing feature extraction, feature fusion, and emission prediction. Fine-tuning based on the prediction is introduced to adjust the feature fusion process. The effects of the feature fusion and fine-tuning are discussed in detail. A comparison between various image- and machine-learning-based prediction models shows that the proposed DL prediction model outperforms the other models in terms of the root mean square error criterion. The predicted NOx emissions are in good agreement with the measurement results.
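    The sketch below is an illustrative stand-in for the described three-stage pipeline (feature extraction, feature fusion, emission prediction); the use of PyTorch, the layer sizes and the image dimensions are assumptions, not the authors' architecture.

```python
# Illustrative sketch only: a small CNN regressor in the spirit of the
# described pipeline (feature extraction -> fusion -> emission prediction).
import torch
import torch.nn as nn

class RadicalNOxNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature extraction from a single-channel flame-radical image
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        # Feature fusion and regression to a single NOx value
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = RadicalNOxNet()
images = torch.randn(8, 1, 64, 64)   # stand-in radical images
print(model(images).shape)           # torch.Size([8, 1])
```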

    Edge Detection Techniques for Quantifying Spatial Imaging System Performance and Image Quality

    Measuring camera system performance and associating it directly with image quality is highly relevant, whether images are intended for viewing or as input to machine learning and automated recognition algorithms. The Modulation Transfer Function (MTF) is a well-established measure for evaluating this performance. This study proposes a novel methodology for measuring system MTFs directly from natural scenes by adapting the standardized Slanted Edge Method (ISO 12233). The method involves edge detection techniques to select and extract suitable step edges from pictorial images. The scene MTF aims to account for the camera's non-linear, scene-dependent processes. This measure is more relevant to image quality modelling than traditionally measured MTFs. Preliminary research results indicate that the proposed method can provide reliable MTFs, following the trends of the ISO 12233 measurement. Further development and validation are required before it can be proposed as a universal camera measurement technique.
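    The following simplified sketch shows the core of a slanted-edge MTF computation (edge spread function, derivative to the line spread function, Fourier magnitude, normalisation); a full ISO 12233 implementation also estimates the edge angle and projects pixels into super-sampled bins, which is omitted here, and the synthetic edge is an assumption for illustration.

```python
# Simplified sketch of the core slanted-edge MTF computation:
# edge spread function (ESF) -> line spread function (LSF) -> |FFT| -> MTF.
import numpy as np

def mtf_from_esf(esf):
    lsf = np.gradient(esf)                  # differentiate the ESF
    lsf = lsf * np.hanning(lsf.size)        # window to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                     # normalise to 1 at DC

# Synthetic blurred step edge as a stand-in for an extracted scene edge
x = np.linspace(-5, 5, 256)
esf = 0.5 * (1 + np.tanh(x / 0.8))
freqs = np.fft.rfftfreq(esf.size, d=1.0)    # spatial frequency in cycles/pixel
print(freqs[np.argmax(mtf_from_esf(esf) < 0.5)])  # approximate MTF50
```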

    Statistical Properties and Applications of Empirical Mode Decomposition

    Signal analysis is key to extracting information buried in noise. Signal decomposition is a data analysis tool for determining the underlying physical components of a processed data set. However, conventional signal decomposition approaches such as wavelet analysis, the Wigner-Ville distribution, and various short-time Fourier spectrograms are inadequate for processing real-world signals. Moreover, most of these techniques require a priori knowledge of the processed signal to select a proper decomposition basis, which makes them unsuitable for a wide range of practical applications. Empirical Mode Decomposition (EMD) is a non-parametric, adaptive, data-driven method capable of breaking down non-linear, non-stationary signals into a finite set of intrinsic components called Intrinsic Mode Functions (IMFs). In addition, EMD approximates a dyadic filter bank that isolates high-frequency components, e.g. noise, in the lower-index IMFs. Despite being widely used in different applications, EMD is an ad hoc solution: its adaptive performance comes at the expense of a formal theoretical basis, so numerical analysis is usually adopted in the literature to interpret its behavior. This dissertation investigates statistical properties of EMD and uses the outcome to enhance the performance of signal de-noising and spectrum sensing systems. The novel contributions can be broadly summarized in three categories: a statistical analysis of the probability distributions of the IMFs, suggesting the Generalized Gaussian distribution (GGD) as the best-fit distribution; a de-noising scheme based on null-hypothesis testing of IMFs that utilizes the unique filter behavior of EMD; and a novel noise estimation approach, based on the first IMF, that turns semi-blind spectrum sensing techniques into fully blind ones. These contributions are justified statistically and analytically and include comparisons with other state-of-the-art techniques.
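    As a minimal illustration of the filter-bank idea (not the dissertation's null-hypothesis scheme), the sketch below decomposes a noisy signal with the third-party PyEMD package and performs partial reconstruction by discarding the noise-dominated first IMF; the test signal and noise level are assumptions.

```python
# Illustrative sketch: EMD-based de-noising by partial reconstruction,
# dropping the noise-dominated first IMF and summing the remaining modes.
# Assumes the third-party PyEMD package ("EMD-signal" on PyPI).
import numpy as np
from PyEMD import EMD

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.2 * rng.normal(size=t.size)

imfs = EMD().emd(noisy)           # rows: IMF_1 ... IMF_K (plus trend/residue)
denoised = imfs[1:].sum(axis=0)   # partial reconstruction without IMF_1

print(np.std(noisy - clean), np.std(denoised - clean))
```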

    Experimental assessment of presumed filtered density function models

    Measured filtered density functions (FDFs), as well as the assumed beta-distribution model of mixture fraction and “subgrid”-scale (SGS) scalar variance typically used in large eddy simulations, were studied by analysing experimental data obtained from two-dimensional planar laser-induced fluorescence measurements in isothermal swirling turbulent flows at a constant Reynolds number of 29,000 for different swirl numbers (0.3, 0.58, and 1.07).
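    For reference, the presumed beta-distribution FDF is fully determined by the filtered mixture fraction mean and the SGS scalar variance; the sketch below computes the corresponding shape parameters, with the numerical values chosen purely for illustration.

```python
# Minimal sketch of the presumed beta-distribution FDF used as the modelled
# counterpart to the measured FDFs: its two shape parameters follow from the
# filtered mixture fraction mean and the SGS scalar variance.
from scipy.stats import beta

def presumed_beta_fdf(z_mean, z_var):
    """Beta FDF shape parameters from the filtered mean and SGS variance."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0   # requires z_var < z_mean*(1-z_mean)
    return z_mean * gamma, (1.0 - z_mean) * gamma

a, b = presumed_beta_fdf(z_mean=0.3, z_var=0.02)    # illustrative values only
print(a, b, beta.mean(a, b))   # the beta mean recovers the prescribed 0.3
```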