10 research outputs found

    Classification of Sagittal Lumbar Spine MRI for Lumbar Spinal Stenosis Detection using Transfer Learning of a Deep Convolutional Neural Network

    Analysis of sagittal lumbar spine MRI images remains an important step in the automated detection and diagnosis of lumbar spinal stenosis. Numerous algorithms proposed in the literature can measure the condition of lumbar intervertebral discs through analysis of the lumbar spine in the sagittal view. However, these algorithms rely on suitable sagittal images as their inputs. Since an MRI data repository contains more than just these specific images, it is necessary to employ an algorithm that can automatically select such images from the entire repository. In this paper, we demonstrate the application of an image classification method using deep convolutional neural networks for this purpose. Specifically, we use a pre-trained Inception-ResNet-v2 model and retrain it using two sets of T1-weighted and T2-weighted images. Through our experiment, we conclude that this method can reach performance levels of 0.91 and 0.93 on the T1 and T2 datasets, respectively, when measured using the accuracy, precision, recall, and f1-score metrics. We also show that the difference in performance between the two modalities is statistically significant and that using T2-weighted images is preferred over using T1-weighted images.
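The four metrics reported above have standard definitions in terms of confusion-matrix counts. As a minimal reference sketch (not the authors' code, and with illustrative counts), they can be computed as:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a T1/T2 slice-selection classifier
acc, prec, rec, f1 = classification_metrics(tp=90, fp=5, fn=10, tn=95)
```

Note that when the four metrics are reported as a single figure (as in the abstract), they are often close in value only for reasonably balanced datasets.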

    Localisation in 3D Images Using Cross-features Correlation Learning

    Object detection and segmentation have evolved drastically over the past two decades thanks to continuous advances in the field of deep learning. Substantial research efforts have been dedicated to integrating object detection techniques into a wide range of real-world problems. Most existing methods take advantage of the successful application and representational ability of convolutional neural networks (CNNs). Generally, these methods target mainstream applications that are typically based on 2D imaging scenarios. Additionally, driven by the strong correlation between the quality of the feature embedding and the performance of CNNs, most works focus on design characteristics of CNNs, e.g., depth and width, to enhance their modelling capacity and discriminative ability. Limited research has been directed towards exploiting feature-level dependencies, which can feasibly be used to enhance the performance of CNNs. Moreover, directly adopting such approaches in more complex imaging domains that target data of higher dimensions (e.g., 3D multi-modal and volumetric images) is not straightforward due to the different nature and complexity of the problem. In this thesis, we explore the possibility of incorporating feature-level correspondence and correlations into object detection and segmentation contexts that target the localisation of 3D objects from 3D multi-modal and volumetric image data. Accordingly, we first explore the detection problem of 3D solar active regions in multi-spectral solar imagery, where different imaging bands correspond to different 2D layers (altitudes) in the 3D solar atmosphere. We propose a joint analysis approach in which information from different imaging bands is first individually analysed using band-specific network branches to extract inter-band features that are then dynamically cross-integrated and jointly analysed to investigate spatial correspondence and co-dependencies between the different bands.
The aggregated embeddings are further analysed using band-specific detection network branches to predict separate sets of results (one for each band). Throughout our study, we evaluate different types of feature fusion, using convolutional embeddings of different semantic levels, as well as the impact of using different numbers of image bands as inputs to the joint analysis. We test the proposed approach over different multi-modal datasets (multi-modal solar images and brain MRI) and applications. The proposed joint-analysis-based framework consistently improves the CNN's performance when detecting target regions in contrast to single-band baseline methods. We then generalise our cross-band joint analysis detection scheme to the 3D segmentation problem using multi-modal images. We adopt the joint analysis principles in a segmentation framework where cross-band information is dynamically analysed and cross-integrated at various semantic levels. The proposed segmentation network also takes advantage of band-specific skip connections to maximise the inter-band information and assist the network in capturing fine details using embeddings of different spatial scales. Furthermore, a recursive training strategy, based on weak labels (e.g., bounding boxes), is proposed to overcome the difficulty of producing dense labels to train the segmentation network. We evaluate the proposed segmentation approach using different feature fusion approaches, over different datasets (multi-modal solar images, brain MRI, and cloud satellite imagery), and using different levels of supervision.
Promising results were achieved and demonstrate improved performance in contrast to single-band analysis and state-of-the-art segmentation methods. Additionally, we investigate the possibility of explicitly modelling objective-driven feature-level correlations, in a localised manner, within 3D medical imaging scenarios (3D CT pulmonary imaging) to enhance the effectiveness of the feature extraction process in CNNs and subsequently the detection performance. In particular, we present a framework to perform the 3D detection of pulmonary nodules as an ensemble of two stages: candidate proposal and false positive reduction. We propose a 3D channel attention block in which cross-channel information is incorporated to infer channel-wise feature importance with respect to the target objective. Unlike common attention approaches that rely on heavy dimensionality reduction and computationally expensive multi-layer perceptron networks, the proposed approach utilises fully convolutional networks, allowing it to directly exploit rich 3D descriptors and perform the attention efficiently. We also propose a fully convolutional 3D spatial attention approach that elevates cross-sectional information to infer spatial attention. We demonstrate the effectiveness of the proposed attention approaches against a number of popular channel and spatial attention mechanisms. Furthermore, for the false positive reduction stage, in addition to attention, we adopt a joint-analysis-based approach that takes into account the variable nodule morphology by aggregating spatial information from different contextual levels. We also propose a zoom-in convolutional path that incorporates semantic information of different spatial scales to assist the network in capturing fine details.
The proposed detection approach demonstrates considerable gains in performance in contrast to state-of-the-art lung nodule detection methods. We further explore the possibility of incorporating long-range dependencies between arbitrary positions in the input features using Transformer networks to infer self-attention, in the context of 3D pulmonary nodule detection, in contrast to localised (convolution-based) attention. We present a hybrid 3D detection approach that takes advantage of both the Transformer's ability to model global context and correlations and the spatial representational characteristics of convolutional neural networks, providing complementary information and subsequently improving the discriminative ability of the detection model. We propose two hybrid Transformer-CNN variants, investigating the impact of a deeper Transformer design (with more Transformer layers and trainable parameters) used along with high-level convolutional feature inputs of a single spatial resolution, in contrast to a shallower Transformer design (with fewer Transformer layers and trainable parameters) that exploits convolutional embeddings of different semantic levels and relatively higher resolution. Extensive quantitative and qualitative analyses are presented for the proposed methods in this thesis and demonstrate the feasibility of exploiting feature-level relations, either implicitly or explicitly, in different detection and segmentation problems.
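The thesis's exact 3D channel attention block cannot be reconstructed from the abstract, but its stated idea - inferring channel-wise importance from cross-channel information without dimensionality-reducing MLPs - can be sketched with a 1-D convolution over per-channel global descriptors (an ECA-style simplification; the shapes, kernel size, and gating are illustrative assumptions, not the thesis design):

```python
import numpy as np

def channel_attention_3d(x, kernel, k=3):
    """Gate each channel of a volumetric feature map (C, D, H, W) by a
    weight inferred from neighbouring channels' global descriptors.

    x      : feature map, shape (C, D, H, W)
    kernel : 1-D convolution weights over the channel axis, shape (k,)
    """
    c = x.shape[0]
    # Global average pooling over the volumetric dimensions -> (C,)
    desc = x.mean(axis=(1, 2, 3))
    # 1-D convolution across channels: cross-channel information, no MLP
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")
    logits = np.array([padded[i:i + k] @ kernel for i in range(c)])
    # Sigmoid gate in (0, 1), broadcast back over the volume
    gate = 1.0 / (1.0 + np.exp(-logits))
    return x * gate[:, None, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4, 4))
out = channel_attention_3d(feat, kernel=rng.standard_normal(3))
```

Because the gate lies strictly in (0, 1), each output channel is a damped copy of its input, with the damping determined by neighbouring channels' statistics rather than by a per-channel MLP.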

    A Novel Fuzzy c-Means Clustering Algorithm Using Adaptive Norm

    The fuzzy c-means (FCM) clustering algorithm is an unsupervised learning method that has been widely applied to cluster unlabeled data automatically rather than manually, but it is sensitive to noisy observations due to its inappropriate treatment of noise in the data. In this paper, novel methods that handle noise intelligently, based on the existing FCM approach - adaptive-FCM and its extended version in combination with relative entropy (adaptive-REFCM) - are proposed. Adaptive-FCM, relying on an inventive integration of the adaptive norm, benefits from a robust overall structure. Adaptive-REFCM further integrates the properties of the relative entropy and normalized distance to preserve the global details of the dataset. Several experiments are carried out…
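For reference, the baseline FCM iteration that adaptive-FCM builds on alternates between a centre update and a membership update. The sketch below implements the standard algorithm only (with the usual Euclidean norm and fuzzifier m), not the paper's adaptive-norm or relative-entropy variants:

```python
import numpy as np

def fcm(data, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on a (n_samples, n_features) array.
    m > 1 is the fuzzifier controlling how soft the memberships are."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    u = rng.random((n_clusters, n))
    u /= u.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        # Centre update: fuzzy-weighted means of the samples
        centers = (um @ data) / um.sum(axis=1, keepdims=True)
        # Distances between every centre and every sample, shape (c, n)
        d = np.linalg.norm(data[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1))).sum(axis=1)
    return centers, u

pts = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, u = fcm(pts)
```

The paper's contribution can be read as replacing the fixed Euclidean distance `d` above with an adaptively weighted norm (plus a relative-entropy regulariser in adaptive-REFCM), while keeping this alternating update structure.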

    Automated segmentation and characterisation of white matter hyperintensities

    Neuroimaging has enabled the observation of damage to the white matter that occurs frequently in the elderly population and is depicted as hyperintensities in specific magnetic resonance images. Since the pathophysiology underlying these signal abnormalities, and their association with clinical risk factors and outcomes, is still under investigation, a robust and accurate quantification and characterisation of these observations is necessary. In this thesis, I developed a data-driven split-and-merge model selection framework that results in the joint modelling of normal-appearing and outlier observations in a hierarchical Gaussian mixture model. The resulting model can then be used to segment white matter hyperintensities (WMH) in a post-processing step. The validity of the method, in terms of robustness to data quality, acquisition protocol and preprocessing, and its comparison to the state of the art, is evaluated in both simulated and clinical settings. To further characterise the lesions, a subject-specific coordinate frame is introduced that divides the WM region according to the relative distance between the ventricular surface and the cortical sheet and to the lobar location. This coordinate frame is used for the comparison of lesion distributions in a population of twin pairs and for the prediction and standardisation of visual rating scales. Lastly, the cross-sectional method is extended into a longitudinal framework, in which a Gaussian mixture model built on an average image is used to constrain the representation of the individual time points. The method is validated through a purpose-built longitudinal lesion simulator and applied to the investigation of the relationship between APOE genetic status and lesion load progression.
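The full hierarchical split-and-merge model is well beyond an abstract, but the underlying idea - jointly fitting normal-appearing tissue and outlier (hyperintense) observations with a Gaussian mixture and then segmenting by posterior probability - can be illustrated with a two-component 1-D EM fit. Everything here (component count, initialisation, synthetic intensities) is an illustrative assumption, not the thesis implementation:

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture: one component for the
    normal-appearing bulk, one for outlier (hyperintense) observations."""
    # Deterministic initialisation: one component at the bulk, one at the extreme
    mu = np.array([np.median(x), x.max()])
    var = np.array([x.var(), 1.0])
    pi = np.array([0.9, 0.1])
    for _ in range(n_iter):
        # E-step: responsibilities (posterior component probabilities)
        pdf = np.exp(-0.5 * (x[None, :] - mu[:, None]) ** 2 / var[:, None])
        pdf /= np.sqrt(2 * np.pi * var[:, None])
        r = pi[:, None] * pdf
        r /= r.sum(axis=0)
        # M-step: update weights, means and variances
        nk = r.sum(axis=1)
        pi = nk / x.size
        mu = (r @ x) / nk
        var = (r * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / nk
    return mu, var, pi, r

rng = np.random.default_rng(0)
intensities = np.concatenate([rng.normal(0.0, 1.0, 500),   # normal-appearing tissue
                              rng.normal(8.0, 0.5, 20)])   # hyperintense outliers
mu, var, pi, resp = em_two_gaussians(intensities)
# Observations assigned to the outlier component form the candidate lesion set
outlier_mask = resp[1] > resp[0]
```

The thesis's framework extends this picture with a hierarchy of components, data-driven model selection over the number of components, and a dedicated WMH post-processing step.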

    Multi-Agent Systems

    A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Agent systems are open and extensible systems that allow for the deployment of autonomous and proactive software components. Multi-agent systems have been developed and used in several application domains.
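As a toy illustration of these ideas (the agent names, skills, and message protocol are invented for the example), two autonomous agents with complementary capabilities can cooperate through message passing to accomplish a task neither handles alone:

```python
class Agent:
    """A minimal autonomous agent that reacts to incoming messages."""

    def __init__(self, name, skill):
        self.name = name
        self.skill = skill          # the one operation this agent can perform
        self.inbox = []

    def receive(self, msg):
        self.inbox.append(msg)

    def step(self):
        """Process one message; return (recipient, reply) or None if idle."""
        if not self.inbox:
            return None
        sender, value = self.inbox.pop(0)
        return sender, self.skill(value)

# Two agents with complementary skills: neither can square-then-negate alone.
squarer = Agent("squarer", lambda v: v * v)
negator = Agent("negator", lambda v: -v)

# A trivial environment routes messages between agents.
squarer.receive(("negator", 7))       # ask squarer to square 7 for negator
recipient, value = squarer.step()     # squarer replies with 49
negator.receive(("user", value))
_, result = negator.step()            # negator replies with -49
```

Real MAS platforms add what this sketch omits: agent lifecycles, asynchronous message transport, directory services, and negotiation protocols.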

    Thickness estimation, automated classification and novelty detection in ultrasound images of the plantar fascia tissues

    The plantar fascia (PF) tissue plays an important role in the movement and stability of the foot during walking and running. Overuse and the associated medical problems can therefore cause injuries and some severe common diseases. Ultrasound (US) imaging offers significant potential in the diagnosis of PF injuries and the monitoring of treatments. Despite the advantages of US, the generated PF images are difficult to interpret during medical assessment. This is partly due to the size and position of the PF in relation to the adjacent tissues. This limits the use of US in clinical practice and therefore impacts on patient services for what is a common problem and a major cause of foot pain and discomfort. It is therefore necessary to devise an automated system that allows better and easier interpretation of PF US images during diagnosis. This study is concerned with developing a computer-based system using a combination of medical image processing techniques whereby different PF US images can be visually improved, segmented, analysed and classified as normal or abnormal, so as to provide more information to doctors and the clinical treatment department for early diagnosis and detection of PF-associated medical problems. More specifically, this study investigates a proposed model for localizing and estimating the PF thickness across three different sections (rearfoot, midfoot and forefoot) using a supervised ANN segmentation technique. The segmentation method uses a radial basis function (RBF) artificial neural network module to classify small overlapping patches into PF and non-PF tissue. Feature selection was performed after feature extraction to reduce the number of extracted features. The trained RBF-ANN is then used to segment the desired PF region.
The PF thickness was calculated using two different methods: distance transformation and a proposed area-length calculation algorithm. Additionally, different machine learning approaches were investigated and applied to the segmented PF region in order to distinguish between symptomatic and asymptomatic PF subjects using the best normalized and selected feature set. This aims to facilitate the characterization and classification of the PF area for the identification of patients with inferior heel pain at risk of plantar fasciitis. Finally, a novelty detection framework for detecting symptomatic PF samples (with plantar fasciitis disorder) using only asymptomatic samples is proposed. This framework comprises feature analysis; building a normality model by training a one-class SVDD classifier using only asymptomatic PF training data; and computing novelty scores, using the trained SVDD classifier, for the asymptomatic training and testing sets and the symptomatic testing set of the PF dataset. The performance evaluation results showed that the proposed approaches obtained favourable results compared to other methods reported in the literature.
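The thesis's area-length calculation algorithm is not fully specified in the abstract, but its basic premise - estimating the mean thickness of a segmented binary region as its area divided by its length along the fascia - can be sketched as follows. This is a simplification that assumes a roughly horizontal structure in the image; the function name and spacing parameter are illustrative:

```python
import numpy as np

def mean_thickness(mask, pixel_spacing=1.0):
    """Estimate the mean thickness of a binary segmentation mask
    (rows x cols) as area / length, assuming the structure runs
    roughly along the columns of the image."""
    area = mask.sum()                     # number of foreground pixels
    length = mask.any(axis=0).sum()       # number of columns the region spans
    return pixel_spacing * area / length

# A synthetic 5-pixel-thick, 20-pixel-long band standing in for a PF region
band = np.zeros((12, 30), dtype=bool)
band[3:8, 5:25] = True
```

The distance-transformation alternative mentioned in the abstract would instead take twice the maximum (or mean ridge) value of the Euclidean distance transform inside the mask, which is more robust for curved regions.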