
    Hierarchical clustering-based segmentation (HCS) aided interpretation of the DCE MR Images of the Prostate

    In Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) for prostate cancer, there is early intense enhancement and rapid washout of contrast material, due to the heterogeneous and leaky characteristics of the tumour angiogenesis. These characteristics can be demonstrated by the quantitative measurement of signal enhancement over time (the Time Intensity Curve, TIC). The TIC is plotted from the averaged intensity value of the pixels within a user-drawn Region of Interest (ROI). The ROI, normally chosen within the area of greatest enhancement, may enclose tissues with different enhancement patterns. Hence the averaged TIC from the ROI may not represent the actual characteristics of the enclosed tissue of interest. Hierarchical Clustering-based Segmentation (HCS) is an approach to Computer Aided Monitoring (CAM) that generates a hierarchy of segmentation results to highlight the varied dissimilarities in images. As a diagnostic aid for the analysis of DCE-MR image data, the process starts by applying HCS to all the DCE-MR temporal frames of a slice. The HCS output provides heat-map images based on the normalised average pixel value of the various dissimilar regions. TICs of the contrast wash-in/wash-out process are then plotted for suspicious regions confirmed by the user. In this paper we demonstrate how the HCS process, as a semi-quantitative analytical tool for analysing DCE-MR images of the prostate, complements the radiologist's interpretation of DCE-MR images.
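    As an illustration of the TIC computation described above, the following minimal sketch derives a curve from a temporal stack of frames and a region mask. The array names, shapes and the baseline-normalisation step are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of Time Intensity Curve (TIC) extraction from a DCE-MRI
# series. Assumes `frames` is a NumPy array of shape (T, H, W) holding the
# temporal frames of one slice, and `region_mask` is a boolean (H, W) mask
# for one segmented region (e.g. an HCS output region); both are
# illustrative placeholders, not the paper's data structures.
import numpy as np
import matplotlib.pyplot as plt

def time_intensity_curve(frames: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """Mean signal intensity inside the region at each time point."""
    return np.array([frame[region_mask].mean() for frame in frames])

def normalised_enhancement(tic: np.ndarray, n_baseline: int = 3) -> np.ndarray:
    """Signal enhancement relative to the pre-contrast baseline frames."""
    s0 = tic[:n_baseline].mean()
    return (tic - s0) / s0

# Example: plot the wash-in/wash-out curve for one region.
frames = np.random.rand(40, 256, 256)           # stand-in for real DCE-MR data
region_mask = np.zeros((256, 256), dtype=bool)
region_mask[100:120, 100:120] = True            # stand-in for an HCS region
plt.plot(normalised_enhancement(time_intensity_curve(frames, region_mask)))
plt.xlabel("frame"); plt.ylabel("relative enhancement"); plt.show()
```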

    Learning spatiotemporal features for esophageal abnormality detection from endoscopic videos

    Esophageal cancer is categorized as a type of disease with a high mortality rate. Early detection of esophageal abnormalities (i.e. precancerous and early cancerous) can improve the survival rate of the patients. Recent deep learning-based methods for selected types of esophageal abnormality detection from endoscopic images have been proposed. However, no methods have been introduced in the literature to cover detection from endoscopic videos, detection from challenging frames and detection of more than one esophageal abnormality type. In this paper, we present an efficient method to automatically detect different types of esophageal abnormalities from endoscopic videos. We propose a novel 3D Sequential DenseConvLstm network that extracts spatiotemporal features from the input video. Our network incorporates a 3D Convolutional Neural Network (3DCNN) and a Convolutional LSTM (ConvLstm) to efficiently learn short- and long-term spatiotemporal features. The generated feature map is utilized by a region proposal network and an ROI pooling layer to produce bounding boxes that detect abnormality regions in each frame throughout the video. Finally, we investigate a post-processing method named Frame Search Conditional Random Field (FS-CRF) that improves the overall performance of the model by recovering the missing regions in neighborhood frames within the same clip. We extensively validate our model on an endoscopic video dataset that includes a variety of esophageal abnormalities. Our model achieved high performance using different evaluation metrics, showing 93.7% recall, 92.7% precision, and 93.2% F-measure. Moreover, as no results have been reported in the literature for esophageal abnormality detection from endoscopic videos, to validate the robustness of our model we have tested it on a publicly available colonoscopy video dataset, achieving polyp detection performance of 81.18% recall, 96.45% precision and 88.16% F-measure, compared to the state-of-the-art results of 78.84% recall, 90.51% precision and 84.27% F-measure on the same dataset. This demonstrates that the proposed method can be adapted to different gastrointestinal endoscopic video applications with promising performance.
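    The following is a minimal PyTorch sketch of the core architectural idea, a 3D convolution front end feeding a ConvLSTM recurrence. It is not the paper's 3D Sequential DenseConvLstm: the dense connectivity, region proposal network, ROI pooling and FS-CRF stages are omitted, and all layer sizes are made-up assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: LSTM gates computed with 2D convolutions."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class SpatioTemporalBackbone(nn.Module):
    """3D conv for short-term motion, ConvLSTM for longer-range dynamics."""
    def __init__(self, in_ch=3, feat_ch=32, hid_ch=64):
        super().__init__()
        self.conv3d = nn.Conv3d(in_ch, feat_ch, kernel_size=3, padding=1)
        self.cell = ConvLSTMCell(feat_ch, hid_ch)

    def forward(self, clip):                      # clip: (B, C, T, H, W)
        feats = torch.relu(self.conv3d(clip))     # (B, F, T, H, W)
        B, _, T, H, W = feats.shape
        h = feats.new_zeros(B, self.cell.hid_ch, H, W)
        c = torch.zeros_like(h)
        for t in range(T):                        # recurrence over time steps
            h, c = self.cell(feats[:, :, t], (h, c))
        return h                                  # final spatiotemporal feature map

fmap = SpatioTemporalBackbone()(torch.randn(1, 3, 8, 64, 64))
print(fmap.shape)  # torch.Size([1, 64, 64, 64])
```

    In the full method, a feature map like this would feed a detection head (region proposal network plus ROI pooling) to produce the per-frame bounding boxes.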

    Extraction of arterial and venous trees from disconnected vessel segments in fundus images

    The accurate automated extraction of arterial and venous (AV) trees in fundus images supports investigation into the correlation of global features of the retinal vasculature with retinal abnormalities. The accurate extraction of AV trees also provides the opportunity to analyse the physiology and hemodynamics of blood flow in retinal vessel trees. A number of common diseases, including Diabetic Retinopathy, Cardiovascular and Cerebrovascular diseases, directly affect the morphology of the retinal vasculature. Early detection of these pathologies may prevent vision loss and reduce the risk of other life-threatening diseases. Automated extraction of AV trees requires complete segmentation and accurate classification of retinal vessels. Unfortunately, the available segmentation techniques are susceptible to a number of complications including vessel contrast, fuzzy edges, variable image quality, media opacities, and vessel overlaps. Due to these sources of error, the available segmentation techniques produce partially segmented vascular networks. Thus, extracting AV trees by accurately connecting and classifying the disconnected segments is extremely complex. This thesis provides a novel graph-based technique for accurate extraction of AV trees from a network of disconnected and unclassified vessel segments in fundus images. The proposed technique performs three major tasks: junction identification, local configuration, and global configuration. A probabilistic approach is adopted that rigorously identifies junctions by examining the mutual associations of segment ends. These associations are determined by dynamically specifying regions at both ends of all segments. A supervised Naïve Bayes inference model is developed that estimates the probability of each possible configuration at a junction. The system enumerates all possible configurations and estimates the posterior probability of each configuration. The likelihood function estimates the conditional probability of a configuration using the statistical parameters of the distributions of colour and geometrical features of joints. The parameters of the feature distributions and the priors of configurations are obtained through supervised learning phases. A second Naïve Bayes classifier estimates class probabilities of each vessel segment utilizing colour and spatial properties of segments. The global configuration works by translating the segment network into an ST-graph (a specialized form of dependency graph) representing the segments and their possible connective associations. The unary and pairwise potentials for the ST-graph are estimated using the class and configuration probabilities obtained earlier. This translates the classification and configuration problems into a general binary labelling graph problem. The ST-graph is interpreted as a flow network for energy minimization; a minimum ST-graph cut is obtained using the Ford-Fulkerson algorithm, from which the estimated AV trees are extracted. The performance is evaluated by implementing the system on test images of the DRIVE dataset and comparing the obtained results with the ground truth data. The ground truth data is obtained by establishing a new dataset for DRIVE images with manually classified vessels. The system outperformed benchmark methods and produced excellent results.
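    A minimal sketch of the final labelling step described above, casting binary segment classification as an s-t minimum cut. The unary and pairwise capacities below are invented illustrations rather than the thesis's learned potentials, and networkx's edmonds_karp solver (an augmenting-path, Ford-Fulkerson style algorithm) stands in for the thesis's Ford-Fulkerson implementation.

```python
# Binary labelling (artery vs. vein) of vessel segments via s-t min-cut.
# Unary potentials become terminal-edge capacities; pairwise potentials
# become capacities between segment nodes. All numbers are hypothetical.
import networkx as nx
from networkx.algorithms.flow import edmonds_karp

G = nx.DiGraph()
unary = {                 # hypothetical per-segment label costs:
    "seg1": (0.2, 1.6),   # (cost if labelled artery, cost if labelled vein)
    "seg2": (0.4, 1.1),
    "seg3": (1.5, 0.3),
}
pairwise = {("seg1", "seg2"): 0.8, ("seg2", "seg3"): 0.5}  # connection strength

for seg, (cost_a, cost_v) in unary.items():
    G.add_edge("source", seg, capacity=cost_v)  # cut if seg lands on vein side
    G.add_edge(seg, "sink", capacity=cost_a)    # cut if seg lands on artery side
for (u, v), w in pairwise.items():              # smoothness between segments
    G.add_edge(u, v, capacity=w)
    G.add_edge(v, u, capacity=w)

cut_value, (artery_side, vein_side) = nx.minimum_cut(
    G, "source", "sink", flow_func=edmonds_karp)
print("arteries:", artery_side - {"source"}, "veins:", vein_side - {"sink"})
```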

    Supervised learning-based multimodal MRI brain image analysis

    Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection, and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities; it is widely used in brain tumour analysis and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an active area of research. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from multimodal MRI images. In this thesis, firstly, the whole brain tumour is segmented from fluid attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features including intensity-based features, Gabor textons, fractal analysis and curvatures are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. Extremely randomised trees (ERT) classify each superpixel into tumour and non-tumour. Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI data set. This is then followed by a random forests (RF) classifier that classifies each supervoxel into tumour core, oedema or healthy brain tissue. The information from the advanced protocol of diffusion tensor imaging (DTI), i.e. the isotropic (p) and anisotropic (q) components, is also incorporated into the conventional MRI to improve segmentation accuracy. Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies into the feature representation. The score map with pixel-wise predictions is used as a feature map learned from the multimodal MRI training dataset using the FCN. The machine-learned features, along with the hand-designed texton features, are then applied to random forests to classify each MRI image voxel into normal brain tissues and different parts of tumour. The methods are evaluated on two datasets: 1) a clinical dataset, and 2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the single-modal (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth for the clinical data are 89.48%, 6% and 0.91, respectively; whilst, for the BRATS dataset, the corresponding evaluation results are 88.09%, 6% and 0.88, respectively. The corresponding results for the tumour (including tumour core and oedema) in the case of the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset.
The results of the FCN-based method show that the application of the RF classifier to multimodal MRI images, using machine-learned features based on the FCN and hand-designed features based on textons, provides promising segmentations. The Dice overlap measure for automatic brain tumour segmentation against ground truth for the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumour, core and enhancing tumour, respectively, which is competitive with state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively. The methods demonstrate promising results in the segmentation of brain tumours, providing a close match to expert delineation across all grades of glioma and leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, textons have demonstrated their advantage of providing significant information to distinguish various patterns in both 2D and 3D spaces. The segmentation accuracy has also been largely increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented which complementarily integrates hand-designed features with machine-learned features to produce more accurate segmentation. The hand-designed features from the shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from the deep network (with trainable filters) learn the intrinsic features. Both global and local information are combined using these two types of networks, which improves the segmentation accuracy. A minimal sketch of the first-stage region-wise classification idea follows.
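    The sketch below illustrates superpixel-based region-wise classification: SLIC superpixels, simple per-region intensity statistics, and an extremely randomised trees classifier. The real pipeline uses richer features (Gabor textons, fractal analysis, curvatures) and real FLAIR data; the arrays and the feature set here are placeholders.

```python
# Region-wise tumour classification over superpixels (illustrative only).
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import ExtraTreesClassifier

def region_features(image, labels):
    """Mean/std/quartile intensity per superpixel (a stand-in feature set)."""
    feats = []
    for lab in np.unique(labels):
        px = image[labels == lab]
        feats.append([px.mean(), px.std(),
                      np.percentile(px, 25), np.percentile(px, 75)])
    return np.array(feats)

flair = np.random.rand(128, 128)                 # stand-in FLAIR slice
tumour_mask = np.zeros((128, 128), dtype=bool)
tumour_mask[40:70, 50:80] = True                 # stand-in ground truth

labels = slic(flair, n_segments=200, channel_axis=None)  # SLIC superpixels
X = region_features(flair, labels)
# A superpixel is "tumour" if most of its pixels fall inside the mask.
y = np.array([tumour_mask[labels == lab].mean() > 0.5
              for lab in np.unique(labels)])

clf = ExtraTreesClassifier(n_estimators=100).fit(X, y)   # ERT classifier
pred = clf.predict(X)                            # region-wise tumour labels
```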

    Novel approaches for image analysis of in vitro epithelial cultures with application to silver nanoparticle toxicity

    A novel imaging approach was developed for the purpose of counting cells from phase contrast microscopy images of laboratory-grown (in vitro) cultures of epithelial cells. Validation through comparison with standard laboratory cell counting techniques showed this approach provided consistent and comparable results, whilst overcoming limitations of these existing techniques, such as operator variability and sample destruction. The imaging approach was subsequently applied to investigate the effects of silver nanoparticles (AgNP) on H400 oral keratinocytes. Concurrent investigations into the antimicrobial effects of AgNP were performed on Escherichia coli, Staphylococcus aureus and Streptococcus mutans to provide models for Gram-positive and Gram-negative infection, and to compare with the literature and with oral keratinocyte toxicity. It was found that AgNP elicit size-, dose- and time-dependent growth inhibition in both human cells and bacteria, although bacterial inhibition was not achieved without significant cytotoxicity at the same concentrations.
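    A generic sketch of automated cell counting by thresholding and connected-component labelling, to illustrate the kind of image analysis involved; it is not the thesis's phase-contrast pipeline, and the input image, threshold choice and size filter are assumptions.

```python
# Count cells in a microscopy image via Otsu thresholding and connected
# components (scikit-image). `img` is a placeholder array, not real data.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_objects

img = np.random.rand(256, 256)                  # stand-in microscopy image
binary = img > threshold_otsu(img)              # separate cells from background
binary = remove_small_objects(binary, min_size=30)  # drop debris and noise
regions = regionprops(label(binary))            # one region per candidate cell
print(f"cell count: {len(regions)}")
```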