Deep convolutional neural networks for multi-planar lung nodule detection: improvement in small nodule identification
Objective: In clinical practice, small lung nodules can be easily overlooked
by radiologists. The paper aims to provide an efficient and accurate detection
system for small lung nodules while keeping good performance for large nodules.
Methods: We propose a multi-planar detection system using convolutional neural
networks. The 2-D convolutional neural network model, U-net++, was trained on
axial, coronal, and sagittal slices for the candidate detection task. All
possible nodule candidates from the three different planes are combined. For
false positive reduction, we apply 3-D multi-scale dense convolutional neural
networks to efficiently remove false positive candidates. We use the public
LIDC-IDRI dataset which includes 888 CT scans with 1186 nodules annotated by
four radiologists. Results: After ten-fold cross-validation, our proposed
system achieves a sensitivity of 94.2% with 1.0 false positive/scan and a
sensitivity of 96.0% with 2.0 false positives/scan. Although small nodules
(i.e., < 6 mm) are difficult to detect, our CAD system reaches a sensitivity of
93.4% (95.0%) for these small nodules at an overall false positive rate of 1.0
(2.0) false positives/scan. At the nodule candidate detection stage, the
results show that the multi-planar method is capable of detecting more nodules
than a single plane. Conclusion: Our approach
achieves good performance not only for small nodules, but also for large
lesions on this dataset. This demonstrates the effectiveness and efficiency of
our developed CAD system for lung nodule detection. Significance: The proposed
system could support radiologists in the early detection of lung cancer.
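The multi-planar candidate combination step described above can be sketched as a simple merge of per-plane detections. This is an illustrative toy, not the authors' code: candidates are assumed to be (x, y, z, score) tuples, and the 5-voxel merge distance is a hypothetical parameter.

```python
# Toy sketch of multi-planar candidate merging (assumed representation:
# one candidate = (x, y, z, score); merge_dist is a made-up threshold).
from math import dist

def merge_candidates(planes, merge_dist=5.0):
    """planes: list of per-plane candidate lists. Candidates closer than
    merge_dist voxels are treated as one nodule; the highest score wins."""
    merged = []
    # Visit all candidates from every plane, strongest first, so the best
    # detection of each nodule is the one that survives.
    for cand in sorted((c for p in planes for c in p), key=lambda c: -c[3]):
        if all(dist(cand[:3], m[:3]) > merge_dist for m in merged):
            merged.append(cand)
    return merged

axial    = [(10, 10, 10, 0.9), (40, 40, 40, 0.6)]
coronal  = [(11, 10, 10, 0.8)]   # same nodule as the first axial hit
sagittal = [(70, 20, 30, 0.7)]   # found only on sagittal slices
print(len(merge_candidates([axial, coronal, sagittal])))  # 3 unique nodules
```

Sorting by score before merging is one simple way to keep the strongest detection per nodule; the false positive reduction stage would then operate on the merged list.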
Deep learning for lung cancer on computed tomography: early detection and prognostic prediction
Lung cancer is one of the most fatal cancers in the world and the leading cause of cancer death among both men and women. The five-year survival rate for lung cancer patients is only between 10 and 20%. However, the mortality rate can be reduced if lung cancer is diagnosed at an early stage and treated promptly. Screening trials have been established in many countries to improve early detection of lung cancer, but screening produces numerous scans that need to be evaluated, which is labor-intensive. On the other hand, when lung cancer is diagnosed at an early stage in screening, the clinical response after treatment can vary between patients. Therefore, strong needs exist for accurate early detection and prognostic prediction of lung cancer. Deep learning has recently achieved great success in medical image analysis, especially for lung cancer. The results described in this thesis show that, combined with clinical procedures, deep learning techniques can assist radiologists with pulmonary nodule detection and rule out most negative scans in lung cancer screening. Moreover, by integrating clinical factors and imaging features, deep learning can identify high-mortality-risk lung cancer patients who could benefit from adjuvant chemotherapy. With the implementation of lung cancer screening programs, more imaging and clinical data will become available, enabling deep learning to further boost the efficiency of screening procedures and lower lung cancer mortality in the future.
Transfer learning from T1-weighted to T2-weighted magnetic resonance sequences for brain image segmentation
Data availability: research data are not shared. Copyright © 2023 The Authors. Magnetic resonance (MR) imaging is a widely employed medical imaging technique that produces detailed anatomical images of the human body. The segmentation of MR images plays a crucial role in medical image analysis, as it enables accurate diagnosis, treatment planning, and monitoring of various diseases and conditions. Due to the lack of sufficient medical images, it is challenging to achieve an accurate segmentation, especially with the application of deep learning networks. The aim of this work is to study transfer learning from T1-weighted (T1-w) to T2-weighted (T2-w) MR sequences to enhance bone segmentation with minimal required computation resources. Using an excitation-based convolutional neural network, four transfer learning mechanisms are proposed: transfer learning without fine-tuning, open fine-tuning, conservative fine-tuning, and hybrid transfer learning. Moreover, a multi-parametric segmentation model is proposed using T2-w MR as an intensity-based augmentation technique. The novelty of this work emerges in the hybrid transfer learning approach, which overcomes the overfitting issue and preserves the features of both modalities with minimal computation time and resources. The segmentation results are evaluated using 14 clinical 3D brain MR and CT images. The results reveal that hybrid transfer learning is superior for bone segmentation in terms of performance and computation time, with DSCs of 0.5393 ± 0.0007. Although T2-w-based augmentation has no significant impact on the performance of T1-w MR segmentation, it helps in improving T2-w MR segmentation and developing a multi-sequence segmentation model. Swiss National Science Foundation, Grant Number: SNSF 320030_176052.
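The evaluation above reports Dice similarity coefficients (DSC). For reference, the standard DSC between two binary masks can be computed as follows; this is a generic sketch of the well-known metric, not the paper's evaluation code.

```python
def dice(pred, gt):
    """Dice similarity coefficient of two binary masks (flat 0/1 sequences):
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(p and g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2 * inter / total if total else 1.0  # two empty masks agree fully

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

A DSC of 1.0 means perfect overlap, 0.0 means no overlap; values near 0.54, as reported for bone, reflect how hard that class is to segment on MR.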
The Discrete Analysis of the Tissue Biopsy Images with Metamaterial Formalization: Identifying Tumor Locus
Herein, we develop an enhanced and automated methodology for detection of tumour cells in fixed biopsy samples. The metamaterial formalism (MMF) approach, which allows recognition of tumour areas in tissue samples, is enhanced by providing an advanced technique to digitize mouse biopsy images. A colour-based segmentation technique built on the K-means clustering method is used, allowing for a precise segmentation of the cells composing the biological tissue sample. Errors occurring at the tissue digitization steps are detected by applying MMF. In doing so, we end up with a robust, fully automated approach with no need for human intervention, ready for clinical application. The proposed methodology consists of three major steps: digitization of the biopsy image, analysis of the biopsy image, and modelling of the disordered metamaterial. It is worth mentioning that the technique under consideration allows for cancer stage detection. Moreover, early-stage cancer diagnosis is possible by applying MMF.
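The colour-based K-means segmentation step can be illustrated with a minimal, generic K-means over RGB pixel colours. This is an illustrative sketch, not the authors' implementation; the cluster count and the example colours (dark nuclei vs. light background) are made up.

```python
# Naive K-means over colour tuples (illustrative only; real pipelines would
# use an optimized library implementation on full-resolution images).
import random
from math import dist

def kmeans(pixels, k=2, iters=20, seed=0):
    """Cluster colour tuples into k groups; returns the k cluster centres."""
    random.seed(seed)
    centers = random.sample(pixels, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # Assign each pixel to its nearest centre in colour space.
            nearest = min(range(k), key=lambda i: dist(p, centers[i]))
            clusters[nearest].append(p)
        # Recompute each centre as the mean colour of its cluster.
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Two obvious colour groups: dark nuclei vs. light background (RGB tuples).
pix = [(10, 10, 40), (12, 9, 38), (240, 240, 235), (250, 245, 240)]
print(sorted(round(c[0]) for c in kmeans(pix, k=2)))  # [11, 245]
```

Each resulting cluster label then partitions the digitized tissue image into regions of similar staining, which the MMF step analyses further.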
Brain Tumor Characterization Using Radiogenomics in Artificial Intelligence Framework
Brain tumor characterization (BTC) is the process of determining the underlying cause of brain tumors and their characteristics through various approaches such as tumor segmentation, classification, detection, and risk analysis. Substantial brain tumor characterization includes the identification of the molecular signatures of various genomes whose alteration causes the brain tumor. The radiomics approach uses radiological images for disease characterization by extracting quantitative radiomics features in an artificial intelligence (AI) environment. However, when considering higher-level disease characteristics such as genetic information and mutation status, the combined study of radiomics and genomics is considered under the umbrella of "radiogenomics". Furthermore, AI in a radiogenomics environment offers advantages such as personalized treatment and individualized medicine. The proposed study summarizes brain tumor characterization in the prospect of the emerging fields of radiomics and radiogenomics in an AI environment, with the help of statistical observation and risk-of-bias (RoB) analysis. The PRISMA search approach was used to find 121 relevant studies for the proposed review using IEEE, Google Scholar, PubMed, MDPI, and Scopus. Our findings indicate that both radiomics and radiogenomics have been applied successfully to several oncology applications with numerous advantages. Under the AI paradigm, both conventional and deep radiomics features have contributed to favorable outcomes of the radiogenomics approach to BTC. Finally, the risk-of-bias analysis offers a better understanding of the architectures and the benefits of AI by quantifying the bias involved in them.
Deep learning assisted MRI guided attenuation correction in PET
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Positron emission tomography (PET) is a unique imaging modality that provides physiological
and functional details of the tissue at the molecular level. However, the acquired PET images
have some limitations, such as attenuation. PET attenuation correction is an essential step to
obtain the full potential of PET quantification. With the wide use of hybrid PET/MR scanners,
magnetic resonance (MR) images are used to address the problem of PET attenuation correction.
MR image segmentation is a simple and robust approach to create pseudo computed
tomography (CT) images, which are used to generate attenuation coefficient maps to correct the
PET attenuation. Recently, deep learning has been proposed as a promising technique
to perform segmentation of MR and other medical images efficiently.
In this research work, deep learning guided segmentation approaches have been proposed
to enhance the bone class segmentation of MR brain images in order to generate accurate
pseudo-CT images. The first approach introduced the combination of handcrafted features
with deep learning features to enrich the feature set. Multiresolution analysis techniques
that generate multiscale and multidirectional coefficients of an image, such as the contourlet
and shearlet transforms, are applied and combined with deep convolutional neural network
(CNN) features. Different experiments have been conducted to investigate the number of
selected coefficients and the insertion location of the handcrafted features.
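Combining handcrafted multiresolution coefficients with CNN features, as described above, often amounts to simple feature concatenation before the segmentation head. The following is a hedged toy illustration, not the thesis pipeline: the function names are hypothetical, and mean/range statistics stand in for real contourlet or shearlet coefficients.

```python
# Toy stand-in for handcrafted feature extraction and fusion with CNN
# features (illustrative only; real coefficients come from multiresolution
# transforms such as contourlet or shearlet).
def handcrafted_features(patch):
    """Two simple statistics of an image patch: mean and intensity range."""
    return [sum(patch) / len(patch), max(patch) - min(patch)]

def fuse_features(cnn_feats, patch):
    """Concatenate deep features with handcrafted ones (order preserved)."""
    return list(cnn_feats) + handcrafted_features(patch)

fused = fuse_features([0.2, 0.7], [0.0, 1.0, 2.0, 3.0])
print(fused)  # [0.2, 0.7, 1.5, 3.0]
```

The experiments on "insertion location" mentioned above correspond to where, along the network, this concatenation happens.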
The second approach aims at reducing the segmentation algorithm's complexity while
maintaining the segmentation performance. An attention-based convolutional encoder-decoder
network has been proposed to adaptively recalibrate the deep network features. This
attention-based network consists of two different squeeze-and-excitation blocks that excite
the features spatially and channel-wise. The two blocks are combined sequentially to decrease
the number of network parameters and reduce the model complexity.
The third approach focuses on the application of transfer learning between different MR
sequences, such as T1-weighted (T1-w) and T2-weighted (T2-w) images. A model pretrained
on T1-w MR sequences is fine-tuned to perform the segmentation of T2-w images. Multiple
fine-tuning approaches and experiments have been conducted to identify the fine-tuning
mechanism that builds an efficient segmentation model for both T1-w and T2-w segmentation.
Clinical datasets of fifty patients with different conditions and diagnoses have been used
to carry out an objective evaluation of the segmentation performance of the three proposed
methods. The first and second approaches have been validated against other studies in the
literature that applied deep-network-based segmentation techniques to perform MR-based
attenuation correction for PET images. The proposed methods have shown an enhancement in
bone segmentation, with the Dice similarity coefficient (DSC) increasing from 0.6179 to
0.6567 using an ensemble of CNNs, an improvement of 6.3%. The proposed excitation-based
CNN has reduced the model complexity by decreasing the number of trainable parameters by
more than 46%, so that fewer computing resources are required to train the model. The
proposed hybrid transfer learning method has shown its superiority in building a
multi-sequence (T1-w and T2-w) segmentation model compared to the other transfer learning
methods applied, especially for the bone class, where the DSC increased from 0.3841 to
0.5393. Moreover, the hybrid transfer learning approach requires less computing time than
transfer learning using open or conservative fine-tuning.
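The channel-wise squeeze-and-excitation mechanism used in the second approach can be sketched in simplified form. This is an illustrative toy, not the thesis code: the learned fully connected excitation layers are reduced to a single hypothetical scalar gate weight per channel.

```python
# Simplified channel-wise squeeze-and-excitation: each channel is "squeezed"
# to one scalar by global average pooling, passed through a sigmoid gate,
# and the resulting weight rescales the whole channel.
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def channel_se(feature_maps, gate_weights):
    """feature_maps: list of channels, each a flat list of activations.
    gate_weights: one scalar per channel (toy stand-in for the FC gate)."""
    out = []
    for channel, w in zip(feature_maps, gate_weights):
        squeeze = sum(channel) / len(channel)      # global average pool
        scale = sigmoid(w * squeeze)               # excitation gate in (0, 1)
        out.append([a * scale for a in channel])   # recalibrate the channel
    return out

maps = [[1.0, 3.0], [2.0, 2.0]]
scaled = channel_se(maps, gate_weights=[10.0, -10.0])
# First channel is preserved (gate ≈ 1); second is suppressed (gate ≈ 0).
```

A spatial excitation block works analogously, but squeezes across channels to produce one gate per spatial location; combining the two sequentially is what the thesis describes.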
Characterization of alar ligament on 3.0T MRI: a cross-sectional study in IIUM Medical Centre, Kuantan
INTRODUCTION: The main purpose of this study is to compare the normal anatomy of the alar
ligament on MRI between males and females. The specific objectives are to assess the prevalence
of the alar ligament visualized on MRI, to describe its characteristics in terms of its course,
shape, and signal homogeneity, and to find differences in alar ligament signal intensity between
males and females. This study also aims to determine the association between respondents'
height and alar ligament signal intensity and dimensions.
MATERIALS & METHODS: 50 healthy volunteers were studied on a 3.0T Siemens Magnetom
Spectra MR scanner using 2-mm proton density, T2, and fat-suppression sequences. The alar
ligament was depicted in three planes, and the visualization and variability of the ligament
courses, shapes, and signal intensity characteristics were determined. The alar ligament
dimensions were also measured.
RESULTS: The alar ligament was best depicted in the coronal plane, followed by the sagittal
and axial planes. The orientation was laterally ascending in most of the subjects (60%),
predominantly oval in shape (54%), and 67% showed an inhomogeneous signal. There was no
significant difference in alar ligament signal intensity between male and female respondents.
No significant association was found between respondents' height and alar ligament signal
intensity and dimensions.
CONCLUSION: Using a 3.0T MR scanner, the alar ligament is best portrayed in the coronal
plane, followed by the sagittal and axial planes. However, the tremendous variability of the
alar ligament depicted in our data shows that caution needs to be exercised when evaluating
the alar ligament, especially in circumstances of injury.
Case series of breast fillers and how things may go wrong: radiology point of view
INTRODUCTION: Breast augmentation is a procedure chosen by women to overcome sagging
breasts due to breastfeeding or aging, as well as small breast size. Recent years have seen the
emergence of a variety of injectable materials on the market as breast fillers. These injectable
breast fillers have swiftly gained popularity among women, given the minimal invasiveness of
the procedure and the absence of any need for surgery. Few patients know, however, that the
procedure may pose detrimental complications, while visualization of breast parenchyma
infiltrated by these fillers is also substandard, posing diagnostic challenges. We present a
case series of three patients with a prior history of hyaluronic acid and collagen breast
injections.
REPORT: The first patient is a 37-year-old lady who presented to casualty with worsening
shortness of breath, a non-productive cough, and central chest pain, associated with fever and
chills of 2 weeks' duration. The second patient is a 34-year-old lady who complained of cough,
fever, and haemoptysis, associated with shortness of breath of 1 week's duration. CT in these
cases revealed non-thrombotic, wedge-shaped, peripheral air-space densities.
The third patient is a 37-year-old female with right breast pain, swelling, and redness of
2 weeks' duration. A collagen breast injection performed 1 year earlier had impeded
sonographic visualization of the breast parenchyma. Breast MRI showed multiple non-enhancing
round and oval lesions exhibiting fat intensity.
CONCLUSION: Radiologists should be familiar with the potential risks and hazards, as well
as the imaging limitations, posed by breast fillers, for which MRI may be required as a
problem-solving tool.