
    A COMPUTATIONAL FRAMEWORK FOR EDGE-PRESERVING REGULARIZATION IN DYNAMIC INVERSE PROBLEMS

    We devise efficient methods for dynamic inverse problems, where both the quantities of interest and the forward operator (measurement process) may change in time. Our goal is to solve for all the quantities of interest simultaneously. We consider large-scale ill-posed problems made more challenging by their dynamic nature and, possibly, by the limited amount of available data per measurement step. To alleviate these difficulties, we apply a unified class of regularization methods that enforce simultaneous regularization in space and time (such as edge enhancement at each time instant and proximity at consecutive time instants) and achieve this with low computational cost and enhanced accuracy. More precisely, we develop iterative methods based on a majorization-minimization (MM) strategy with quadratic tangent majorant, which allows the resulting least-squares problem with a total variation regularization term to be solved with a generalized Krylov subspace (GKS) method; the regularization parameter can be determined automatically and efficiently at each iteration. Numerical examples from a wide range of applications, such as limited-angle computerized tomography (CT), space-time image deblurring, and photoacoustic tomography (PAT), illustrate the effectiveness of the described approaches.
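The MM strategy described above can be sketched in a few lines: each quadratic tangent majorant of the total-variation term turns the problem into a reweighted least-squares solve. A minimal 1-D illustration, not the paper's GKS implementation (the inner system is solved directly here, and the operator, signal, and parameter values are invented for illustration):

```python
import numpy as np

def tv_mm_irls(A, b, lam=0.01, iters=50, eps=1e-6):
    """Minimise ||Ax - b||^2 + lam * TV(x) by majorisation-minimisation.

    Each quadratic tangent majorant of the TV term yields a reweighted
    least-squares problem, solved here directly; a GKS method would
    instead solve it in a low-dimensional generalized Krylov subspace.
    """
    n = A.shape[1]
    D = (np.eye(n, k=1) - np.eye(n))[:-1]        # forward differences
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # unregularised start
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)    # majorant weights
        H = A.T @ A + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(H, A.T @ b)
    return x

# Piecewise-constant signal recovered from noisy smoothed data
rng = np.random.default_rng(0)
n = 60
x_true = np.concatenate([np.zeros(20), np.ones(20), np.zeros(20)])
A = np.tril(np.ones((n, n))) / n                 # crude smoothing operator
b = A @ x_true + 0.01 * rng.standard_normal(n)
x_rec = tv_mm_irls(A, b)
```

The reweighting is what makes the majorant quadratic: the non-smooth TV term is replaced at each iterate by a weighted 2-norm that touches it at the current solution.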

    Detecting and Evaluating Therapy Induced Changes in Radiomics Features Measured from Non-Small Cell Lung Cancer to Predict Patient Outcomes

    The purpose of this study was to investigate whether radiomics features measured from weekly 4-dimensional computed tomography (4DCT) images of non-small cell lung cancers (NSCLC) change during treatment and whether those changes are prognostic for patient outcomes or dependent on treatment modality. Radiomics features are quantitative metrics designed to evaluate tumor heterogeneity from routine medical imaging. Features that are prognostic for patient outcome could be used to monitor tumor response and identify high-risk patients for adaptive treatment. This would be especially valuable for NSCLC due to the high prevalence and mortality of this disease. A novel process was designed to select feature-specific image preprocessing and remove features that were not robust to differences in CT model or tumor volume. These features were then measured from the weekly 4DCT images and evaluated to determine at which point in treatment they first began changing and whether those changes differed for patients treated with protons versus photons. A subset of features demonstrated significant changes by the second or third week of treatment; however, the changes were never significantly different between the patient groups. Delta-radiomics features were defined as relative net changes, linear regression slopes, and end-of-treatment feature values. Features were then evaluated in univariate and multivariate models for overall survival, distant metastases, and local-regional recurrence. In general, the delta-radiomics features were not more prognostic than models built using clinical factors or pre-treatment features. However, one shape descriptor measured at pre-treatment significantly improved model fit and performance for overall survival and distant metastases. Additionally, for local-regional recurrence, the only significant covariate was texture strength measured at the end of treatment. 
A separate study characterized the variability of radiomics features in cone-beam CT images due to increased scatter, increased motion, and different scanners. Features were affected by all three parameters, and specifically by motion amplitudes greater than 1 cm. This study provided strong evidence that a set of robust radiomics features changes significantly during treatment. While these changes were not prognostic or dependent on treatment modality, future studies may benefit from the methodologies described here to explore delta-radiomics in alternative tumor sites or imaging modalities.
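The three delta-radiomics definitions named above are simple to state concretely. A minimal sketch, with invented feature values for illustration:

```python
import numpy as np

def delta_features(weekly_values):
    """Delta-radiomics summaries of one feature tracked over treatment.

    weekly_values: the feature measured at week 0 (pre-treatment)
    through week N. Returns the three definitions used in the study:
    relative net change, linear-regression slope over the weeks, and
    the end-of-treatment value.
    """
    v = np.asarray(weekly_values, dtype=float)
    weeks = np.arange(len(v))
    return {
        "relative_net_change": (v[-1] - v[0]) / v[0],
        "slope": np.polyfit(weeks, v, 1)[0],   # per-week rate of change
        "end_of_treatment": v[-1],
    }

# e.g. a texture feature that declines over five weekly 4DCT scans
print(delta_features([10.0, 9.5, 8.8, 8.0, 7.4]))
```

Each summary would then enter the univariate or multivariate outcome models as a single covariate per feature.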

    Textural Difference Enhancement based on Image Component Analysis

    In this thesis, we propose a novel image enhancement method to magnify the textural differences in images with respect to human visual characteristics. The method is intended as a preprocessing step to improve the performance of texture-based image segmentation algorithms. We propose to calculate the six Tamura texture features (coarseness, contrast, directionality, line-likeness, regularity and roughness) with novel measurements. Each feature follows its original definition of the corresponding texture characteristic, but is measured from local low-level features, e.g., the direction of local edges, the dynamic range of local pixel intensities, and the kurtosis and skewness of the local image histogram. A discriminant texture feature selection method based on principal component analysis (PCA) is then proposed to find the characteristics most representative of the textural differences in the image. We decompose the image into pairwise components representing each texture characteristic strongly and weakly, respectively. A set of wavelet-based soft thresholding methods is proposed as the dictionaries of morphological component analysis (MCA) to sparsely extract the strong and weak expressions of each characteristic from the image. The wavelet-based thresholding methods are proposed in pairs, so that each of the resulting pairwise components exhibits one characteristic either strongly or weakly. We propose various wavelet-based manipulation methods to enhance the components separately. For each component representing a certain texture characteristic, a non-linear function is proposed to manipulate the wavelet coefficients of the component so that the corresponding characteristic is accentuated independently while having little effect on other characteristics. Furthermore, the above three methods are combined into a unified framework of image enhancement. 
Firstly, the texture characteristics differentiating the textures in the image are found. Secondly, the image is decomposed into components exhibiting these texture characteristics respectively. Thirdly, each component is manipulated to accentuate the texture characteristic it exhibits. After re-combining these manipulated components, the image is enhanced with the textural differences magnified with respect to the selected texture characteristics. The proposed textural difference enhancement method is applied prior to both grayscale and colour image segmentation algorithms. The convincing improvements in the performance of different segmentation algorithms demonstrate the potential of the proposed textural difference enhancement method.
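Two of the building blocks above can be sketched briefly: a Tamura-style contrast measured from local histogram moments, and a PCA-based ranking of discriminant features. A minimal sketch under stated assumptions (the thesis's exact measurements differ; the loading-based ranking and all names here are illustrative):

```python
import numpy as np

def tamura_contrast(patch):
    """Tamura contrast of an image patch, sigma / kurtosis**(1/4),
    computed from moments of the local histogram (the kind of local
    low-level measurement the thesis builds its features from)."""
    p = np.asarray(patch, dtype=float).ravel()
    sigma = p.std()
    if sigma == 0:
        return 0.0                       # flat patch: no contrast
    kurtosis = np.mean((p - p.mean()) ** 4) / sigma ** 4
    return sigma / kurtosis ** 0.25

def select_discriminant_features(feature_matrix, k=2):
    """PCA-style selection: rank features by their loading on the
    first principal component of the standardised per-region
    feature matrix, and return the indices of the top k."""
    X = np.asarray(feature_matrix, dtype=float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    loading = np.abs(vt[0])              # first principal axis
    return np.argsort(loading)[::-1][:k]
```

A noisy patch gets a positive contrast score while a uniform patch scores zero; correlated, strongly varying features dominate the first principal axis and are ranked highest.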

    Compressed Sensing for Open-ended Waveguide Non-Destructive Testing and Evaluation

    Ph.D. Thesis
    Non-destructive testing and evaluation (NDT&E) systems using open-ended waveguide (OEW) suffer from critical challenges. In the sensing stage, data acquisition by raster scan is time-consuming, which makes on-line detection difficult. The sensing stage also disregards the demands of the later feature extraction process, leading to an excessive amount of data and processing overhead for feature extraction. In the feature extraction stage, efficient and robust defect region segmentation in the obtained image is challenging against a complex image background. Compressed sensing (CS) demonstrates impressive data compression ability in various applications using sparse models. Developing CS models for OEW NDT&E that jointly consider sensing and processing, for fast data acquisition, data compression, and efficient and robust feature extraction, remains a challenge. This thesis develops integrated sensing-processing CS models to address the drawbacks in OEW NDT systems and carries out case studies in low-energy impact damage detection for carbon fibre reinforced plastic (CFRP) materials. The major contributions are: (1) For the challenge of fast data acquisition, an online CS model is developed to offer faster data acquisition and reduce the data amount without any hardware modification. The images obtained with OEW are usually smooth and can be sparsely represented in a discrete cosine transform (DCT) basis. Based on this property, a customised 0/1 Bernoulli matrix for CS measurement is designed for downsampling. The full data is reconstructed with the orthogonal matching pursuit algorithm using the downsampled data, the DCT basis, and the customised 0/1 Bernoulli matrix. It is hard to determine the number of sampled pixels needed for sparse reconstruction when training data are lacking; to address this issue, an accumulated sampling and recovery process is developed in this CS model. 
The defect region can be extracted with the proposed histogram threshold edge detection (HTED) algorithm after each recovery, which forms an online process. A case study in impact damage detection on CFRP materials is carried out for validation. The results show that the data acquisition time is reduced by one order of magnitude while maintaining image quality and defect regions equivalent to raster scan. (2) For the challenge of efficient data compression that considers the later feature extraction, a feature-supervised CS data acquisition method is proposed and evaluated. It preserves the features of interest while reducing the data amount. The frequencies that reveal the feature occupy only a small part of the frequency band; the method first identifies this sparse frequency range to supervise the subsequent sampling process. Subsequently, based on the joint sparsity of neighbouring frames and the extracted frequency band, an aligned spatial-spectrum sampling scheme is proposed. The scheme samples only the frequency range of interest for the required features, using a customised 0/1 Bernoulli measurement matrix. The spectral-spatial data of interest are reconstructed jointly, which is much faster than frame-by-frame methods. The proposed feature-supervised CS data acquisition is implemented and compared with raster scan and traditional CS reconstruction in impact damage detection on CFRP materials. The results show that the data amount is reduced greatly without compromising feature quality, and that the gain in reconstruction speed grows linearly with the number of measurements. (3) Based on the above CS-based data acquisition methods, CS models are developed to detect defects directly from CS data rather than from the reconstructed full spatial data. This approach is robust to textured backgrounds and more time-efficient than the HTED algorithm. 
Firstly, based on the fact that the histogram is invariant to down-sampling with the customised 0/1 Bernoulli measurement matrix, a qualitative method that gives only a binary judgement of defect presence is developed. A high probability of detection and high accuracy are achieved compared to other methods. Secondly, a new greedy sparse orthogonal matching pursuit (spOMP) algorithm for defect region segmentation is developed to quantitatively extract the defect region, because conventional sparse reconstruction algorithms cannot properly exploit the sparsity of the correlation between the measurement matrix and the CS data. The proposed algorithms are faster and more robust to interference than other algorithms.
China Scholarship Council
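The first CS model's pipeline, a customised 0/1 Bernoulli measurement matrix, a DCT sparsity basis, and orthogonal matching pursuit recovery, can be illustrated on a 1-D signal. A minimal sketch with invented dimensions and sparsity level; the thesis's accumulated sampling and HTED steps are omitted:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily build a k-atom support,
    re-fitting the coefficients by least squares at every step."""
    norms = np.linalg.norm(A, axis=0)
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(A.T @ residual) / norms    # normalised correlation
        corr[support] = 0.0                      # don't re-pick atoms
        support.append(int(np.argmax(corr)))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    coeffs = np.zeros(A.shape[1])
    coeffs[support] = sol
    return coeffs

n, m = 128, 64
rng = np.random.default_rng(0)
# Orthonormal DCT-II basis: smooth OEW images are sparse in it
i, j = np.arange(n)[:, None], np.arange(n)[None, :]
Psi = np.sqrt(2.0 / n) * np.cos(np.pi * (i + 0.5) * j / n)
Psi[:, 0] /= np.sqrt(2.0)
c_true = np.zeros(n)
c_true[[0, 3, 7, 15]] = [5.0, 3.0, 2.0, 1.0]     # 4-sparse coefficients
x_true = Psi @ c_true
# Customised 0/1 Bernoulli measurement matrix for downsampling
Phi = (rng.random((m, n)) < 0.5).astype(float)
x_rec = Psi @ omp(Phi @ Psi, Phi @ x_true, k=4)
```

The measurement matrix here is plain i.i.d. 0/1 Bernoulli; the thesis customises it further, and the accumulated sampling loop would repeat this recovery with a growing measurement set until the reconstruction stabilises.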

    Image processing and analysis methods in quantitative endothelial cell biology

    This thesis details the development of computerised image processing and analysis pipelines for quantitative evaluation of microscope image data acquired in endothelial vascular biology experimentation. The overarching objective of this work was to advance our understanding of the cell biology of cardiovascular processes, principally involving haemostasis, thrombosis, and inflammation. Bioinformatics techniques are increasingly necessary to extract and evaluate information from biological experimentation. In cell biology, advances in microscopy and the increased acquisition of large-scale digital image data sets have created a need for automated image processing and data analysis. The development, testing, and evaluation of three computerised workflows for analysis of microscopy images investigating cardiovascular cell biology are described here. The first image analysis pipeline extracts morphometric features from high-throughput experiments imaging endothelial cells and organelles. Segmentation of endothelial cells and their organelles, followed by extraction of morphometric features, provides a rich quantitative data set to investigate haemostatic mechanisms. A second image processing workflow was applied to platelet images obtained from super-resolution microscopy, and used in a proof-of-principle study of a new platelet dense-granule deficiency diagnostic method. The method was able to efficiently differentiate between healthy volunteers and three patients with Hermansky-Pudlak syndrome. This was achieved by segmenting and counting the number of CD63-positive structures per platelet, allowing for the differentiation of patients from control volunteers with 99% confidence. The final workflow described is a video analysis method that quantifies interactions of leukocytes with an endothelial monolayer. 
Phase contrast microscopy videos were analysed with Haar-like feature object detection and a custom tracking method to quantify the dynamic interaction of rolling leukocytes. This technique provides much more information than a manual evaluation and was found to give a tracking accuracy of 92%. These three methodologies provide a toolkit to further biological understanding of multiple facets of cardiovascular behaviour.
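The dense-granule diagnostic above reduces to counting segmented CD63-positive structures per platelet. A minimal sketch on a synthetic image (the threshold, blob sizes, and background level are illustrative; the study worked on super-resolution data):

```python
import numpy as np
from scipy import ndimage

def count_cd63_structures(image, threshold):
    """Count CD63-positive structures in one platelet image by
    thresholding the fluorescence channel and labelling the
    connected components of the resulting mask."""
    mask = image > threshold
    _, n_structures = ndimage.label(mask)
    return n_structures

# Synthetic platelet image: three bright puncta on a dim background
img = np.zeros((32, 32))
for (r, c) in [(5, 5), (15, 20), (25, 10)]:
    img[r:r+2, c:c+2] = 1.0
img += 0.05 * np.random.default_rng(0).random((32, 32))
print(count_cd63_structures(img, threshold=0.5))   # → 3
```

Per-platelet counts like this, aggregated over many platelets, are the statistic that separated the Hermansky-Pudlak patients from the control volunteers.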