11 research outputs found

    CT-LungNet: A Deep Learning Framework for Precise Lung Tissue Segmentation in 3D Thoracic CT Scans

    Segmentation of lung tissue in computed tomography (CT) images is a precursor to most pulmonary image analysis applications. Semantic segmentation methods based on deep learning have exhibited top-tier performance in recent years; however, designing accurate and robust segmentation models for lung tissue is challenging due to variations in shape, size, and orientation. Additionally, medical image artifacts and noise can degrade lung tissue segmentation and the accuracy of downstream analysis. The practicality of current deep learning methods for lung tissue segmentation is limited, as they require significant computational resources and may not be easily deployable in clinical settings. This paper presents a fully automatic method that identifies the lungs in three-dimensional (3D) pulmonary CT images using deep networks and transfer learning. We introduce (1) a novel 2.5-dimensional image representation built from consecutive CT slices that succinctly encodes volumetric information and (2) a U-Net architecture equipped with pre-trained InceptionV3 blocks that segments 3D CT scans while keeping the number of learnable parameters as low as possible. Our method was quantitatively assessed using one public dataset, LUNA16, for training and testing, and two public datasets, VESSEL12 and CRPF, for testing only. Owing to its low number of learnable parameters, our method generalized well to the unseen VESSEL12 and CRPF datasets while outperforming existing methods on LUNA16 (Dice coefficients of 99.7%, 99.1%, and 98.8% on LUNA16, VESSEL12, and CRPF, respectively). We made our method publicly accessible via a graphical user interface at medvispy.ee.kntu.ac.ir
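    The 2.5D representation described in this abstract can be sketched as stacking a few consecutive axial slices into the channel dimension of a single network input. The function name and the three-slice window below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def make_25d_stack(volume, index, num_slices=3):
    """Build a 2.5D input by stacking `num_slices` consecutive axial
    slices centred on `index` as channels; neighbour indices are
    clamped at the volume boundaries."""
    half = num_slices // 2
    depth = volume.shape[0]
    idx = [min(max(index + off, 0), depth - 1)
           for off in range(-half, half + 1)]
    return np.stack([volume[i] for i in idx], axis=-1)

# Example: a toy 10-slice volume of 64x64 pixels.
vol = np.random.rand(10, 64, 64).astype(np.float32)
x = make_25d_stack(vol, index=0)  # first slice: lower neighbour clamped
print(x.shape)  # (64, 64, 3)
```

    Each stacked sample can then be fed to an ordinary 2D U-Net, which is how a 2.5D scheme keeps the parameter count far below a true 3D network.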

    Extracting Lungs from CT Images using Fully Convolutional Networks

    Analysis of cancer and other pathological conditions, such as the interstitial lung diseases (ILDs), is usually performed on Computed Tomography (CT) scans. To aid this, a segmentation preprocessing step is performed to reduce the area to be analyzed, delineating the lungs and removing unimportant regions. Generally, complex methods are developed to extract the lung region, often relying on hand-crafted feature extractors to enhance segmentation. Leveraging the popularity of deep learning techniques and their automated feature learning, we propose a lung segmentation approach using fully convolutional networks (FCNs) combined with fully connected conditional random fields (CRFs), a combination employed in many state-of-the-art segmentation works. Aiming for a generalized approach, the publicly available datasets from the University Hospitals of Geneva (HUG) and the VESSEL12 challenge were studied, including many healthy and pathological CT scans for evaluation. Experiments were conducted on each dataset individually, across datasets (training on one and testing on the other), and on a combination of both. Dice scores of 98.67% ± 0.94% for the HUG-ILD dataset and 99.19% ± 0.37% for the VESSEL12 dataset were achieved, outperforming prior works on the former and matching state-of-the-art results on the latter, demonstrating the capability of deep learning approaches. Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 201
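    The Dice scores reported here measure the overlap between a predicted lung mask and the ground truth. A minimal NumPy computation of the metric (all names below are illustrative):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 4))  # 0.6667
```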

    Temporal - spatial recognizer for multi-label data

    Pattern recognition is an important artificial intelligence task with practical applications in many fields, such as medicine and species distribution. Such applications involve overlapping data points, as demonstrated in multi-label datasets. Hence, there is a need for a recognition algorithm that can separate overlapping data points in order to recognize the correct pattern. Existing recognition methods suffer from sensitivity to noise and overlapping points, as they cannot recognize a pattern when there is a shift in the position of the data points. Furthermore, these methods do not incorporate temporal information in the recognition process, which leads to low-quality data clustering. In this study, an improved pattern recognition method based on Hierarchical Temporal Memory (HTM) is proposed to resolve the overlap among data points in multi-label datasets. The imHTM (Improved HTM) method improves two of HTM's components: feature extraction and data clustering. The first improvement is realized as the TS-Layer Neocognitron algorithm, which solves the shift-in-position problem in the feature extraction phase. The data clustering step, in turn, has two improvements, TFCM and cFCM (TFCM with a limit-Chebyshev distance metric), which allow overlapped data points occurring in patterns to be separated correctly into the relevant clusters by temporal clustering. Experiments on five datasets were conducted to compare the proposed method (imHTM) against statistical, template, and structural pattern recognition methods. The results showed a recognition accuracy of 99% compared with template matching methods (Feature-Based Approach, Area-Based Approach), statistical methods (Principal Component Analysis, Linear Discriminant Analysis, Support Vector Machines, and Neural Network), and a structural method (original HTM). The findings indicate that the improved HTM can give optimal pattern recognition accuracy, especially on multi-label datasets
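    The abstract does not detail cFCM, but the building blocks can be illustrated: the Chebyshev (L-infinity) distance, and the standard fuzzy c-means membership formula evaluated under that metric. The membership function below is a hedged sketch of ordinary FCM, not the paper's exact cFCM variant:

```python
import numpy as np

def chebyshev(x, y):
    """Chebyshev (L-infinity) distance: largest coordinate difference."""
    return np.max(np.abs(np.asarray(x) - np.asarray(y)))

def memberships(point, centres, m=2.0, eps=1e-9):
    """Standard FCM membership degrees of `point` to each centre,
    using the Chebyshev metric (fuzzifier m=2)."""
    d = np.array([chebyshev(point, c) + eps for c in centres])
    inv = (1.0 / d) ** (2.0 / (m - 1.0))
    return inv / inv.sum()

print(chebyshev([1, 2], [4, 0]))  # 3
u = memberships([0.0, 0.0], [[0.0, 1.0], [0.0, 3.0]])  # favours centre 0
```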

    Classification of Chest CT Lung Nodules Using Collaborative Deep Learning Model

    Khalaf Alshamrani,1,2 Hassan A Alshamrani1; 1Radiological Sciences Department, Najran University, Najran, Saudi Arabia; 2Department of Oncology and Metabolism, University of Sheffield, Sheffield, UK. Correspondence: Khalaf Alshamrani, Department of Oncology and Metabolism, University of Sheffield, Sheffield, UK, Email [email protected]; [email protected]. Early detection of lung cancer through accurate diagnosis of malignant lung nodules using chest CT scans offers patients the highest chance of successful treatment and survival. Despite advancements in computer vision through deep learning algorithms, the detection of malignant nodules faces significant challenges due to insufficient training datasets. Methods: This study introduces a model based on collaborative deep learning (CDL) to differentiate between cancerous and non-cancerous nodules in chest CT scans with limited available data. The model dissects a nodule into its constituent parts using six characteristics, allowing it to learn detailed features of lung nodules. It utilizes CDL submodels that incorporate six types of feature patches to fine-tune a network previously trained with ResNet-50. An adaptive weighting method learned through error backpropagation combines these CDL submodels, enhancing the identification of lung nodules. Results: The CDL model demonstrated a high level of performance in classifying lung nodules, achieving an accuracy of 93.24%. This represents a significant improvement over current state-of-the-art methods, indicating the effectiveness of the proposed approach. Conclusion: The findings suggest that the CDL model, with its unique structure and adaptive weighting method, offers a promising solution to the challenge of accurately detecting malignant lung nodules with limited data. This approach not only improves diagnostic accuracy but also contributes to the early detection and treatment of lung cancer, potentially saving lives. Keywords: CT images, lung cancer, nodules, logistic regression, collaborative deep learning, standard deviation, radial length
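    The adaptive weighting idea can be sketched as a learnable weighted average of the per-submodel class probabilities, with the weights normalized by a softmax so they stay positive and sum to one. This is a generic illustration under assumed shapes, not the paper's exact architecture:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def ensemble_predict(probs, logits_w):
    """Combine per-submodel class probabilities with adaptive weights.

    probs:    (n_submodels, n_classes) probabilities, one row per
              feature-patch submodel.
    logits_w: (n_submodels,) raw weights; in training these would be
              updated by error backpropagation.
    """
    w = softmax(logits_w)   # normalized adaptive weights
    return w @ probs        # weighted average of the predictions

# Three hypothetical submodels voting on benign vs malignant.
p = np.array([[0.8, 0.2], [0.4, 0.6], [0.7, 0.3]])
out = ensemble_predict(p, np.array([1.0, 0.0, 1.0]))
```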

    Statistical deformation reconstruction using multi-organ shape features for pancreatic cancer localization

    Respiratory motion and the associated deformations of abdominal organs and tumors are essential information in clinical applications. However, inter- and intra-patient multi-organ deformations are complex and have not been statistically formulated, whereas single-organ deformations have been widely studied. In this paper, we introduce a multi-organ deformation library and its application to deformation reconstruction based on the shape features of multiple abdominal organs. Statistical multi-organ motion/deformation models of the stomach, liver, left and right kidneys, and duodenum were generated by shape matching their region labels defined on four-dimensional computed tomography images. A total of 250 volumes were measured from 25 pancreatic cancer patients. This paper also proposes per-region-based deformation learning using a non-linear kernel model to predict the displacement of pancreatic cancer for adaptive radiotherapy. The experimental results show that the proposed concept estimates deformations better than general per-patient-based learning models and achieves a clinically acceptable estimation error, with a mean distance of 1.2 ± 0.7 mm and a Hausdorff distance of 4.2 ± 2.3 mm throughout the respiratory motion
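    The Hausdorff distance quoted above is the worst-case surface disagreement between two point sets: the largest distance from any point of one set to its nearest neighbour in the other. A brute-force NumPy version (fine for small point sets; the variable names are illustrative):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N,3) and b (M,3)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Pairwise Euclidean distances between every point of a and of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = [[0, 0, 0], [1, 0, 0]]
b = [[0, 0, 0], [3, 0, 0]]
print(hausdorff(a, b))  # 2.0
```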

    Computer-aided detection of lung nodules: A review

    We present an in-depth review and analysis of salient methods for computer-aided detection of lung nodules. We evaluate the current methods for detecting lung nodules using literature searches with selection criteria based on validation dataset types, nodule sizes, numbers of cases, types of nodules, extracted features in traditional feature-based classifiers, sensitivity, and false positives (FP)/scans. Our review shows that current detection systems are often optimized for particular datasets and can detect only one or two types of nodules. We conclude that, in addition to achieving high sensitivity and reduced FP/scans, strategies for detecting lung nodules must detect a variety of nodules with high precision to improve the performances of the radiologists. To the best of our knowledge, ours is the first review of the effectiveness of feature extraction using traditional feature-based classifiers. Moreover, we discuss deep-learning methods in detail and conclude that features must be appropriately selected to improve the overall accuracy of the system. We present an analysis of current schemes and highlight constraints and future research areas
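    The two headline numbers this review uses to compare detection systems, sensitivity and false positives per scan, reduce to simple ratios over a test set. The counts below are hypothetical, for illustration only:

```python
def detection_metrics(true_pos, false_pos, total_nodules, num_scans):
    """Sensitivity (fraction of true nodules detected) and false
    positives per scan, the standard CAD evaluation pair."""
    sensitivity = true_pos / total_nodules
    fp_per_scan = false_pos / num_scans
    return sensitivity, fp_per_scan

# Hypothetical system: 180 of 200 nodules found, 250 FPs over 100 scans.
sens, fps = detection_metrics(true_pos=180, false_pos=250,
                              total_nodules=200, num_scans=100)
print(sens, fps)  # 0.9 2.5
```

    Sweeping a detector's confidence threshold and plotting these two quantities against each other yields the FROC curve commonly reported in this literature.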

    A novel MRA-based framework for the detection of changes in cerebrovascular blood pressure.

    Background: High blood pressure (HBP) affects 75 million adults and is the primary or contributing cause of mortality in 410,000 adults each year in the United States. Chronic HBP leads to cerebrovascular changes and is a significant contributor to strokes, dementia, and cognitive impairment. Non-invasive measurement of changes in cerebral vasculature and blood pressure (BP) may enable physicians to optimally treat HBP patients. This manuscript describes a method to non-invasively quantify changes in cerebral vasculature and BP using Magnetic Resonance Angiography (MRA) imaging. Methods: MRA images and BP measurements were obtained from patients (n=15, M=8, F=7, age 49.2 ± 7.3 years) over a span of 700 days. A novel segmentation algorithm was developed to identify brain vasculature from surrounding tissue. The data were processed to calculate the vascular probability distribution function (PDF), a measure of the vascular diameters in the brain. The initial (day 0) and final (day 700) PDFs were used to correlate the changes in cerebral vasculature and BP. Correlation was determined by a mixed-effects linear model analysis. Results: The segmentation algorithm had 99.9% specificity and 99.7% sensitivity in identifying and delineating cerebral vasculature. The PDFs had a statistically significant correlation with BP changes below the circle of Willis (p-value = 0.0007), but not above it (p-value = 0.53), owing to the smaller blood vessels there. Conclusion: Changes in cerebral vasculature and pressure can be non-invasively obtained through MRA image analysis, which may be a useful tool for clinicians to optimize medical management of HBP
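    The reported specificity and sensitivity follow directly from the voxel-level confusion counts of the segmentation. The counts below are hypothetical values chosen to reproduce figures of the same order as those reported:

```python
def seg_sensitivity_specificity(tp, fp, tn, fn):
    """Voxel-wise sensitivity TP/(TP+FN) and specificity TN/(TN+FP)
    of a binary vessel segmentation."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

sens, spec = seg_sensitivity_specificity(tp=997, fp=1, tn=999, fn=3)
print(round(sens, 3), round(spec, 3))  # 0.997 0.999
```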

    A 3D Reconstruction Method of Bone Shape from Un-calibrated Radiographs

    Doctoral dissertation, Seoul National University Graduate School, Department of Computer Science and Engineering, College of Engineering, February 2019. Advisor: ์ด์ œํฌ. Computed tomography (CT) provides benefits in the accurate diagnosis of bone deformity. However, the potential adverse effects of radiation exposure in CT have become a concern.
To reduce radiation dose while maintaining diagnostic accuracy, the EOS® system has been proposed to reconstruct 3D bony shapes from calibrated bi-planar (stereo) X-ray images. However, this system requires another apparatus in addition to the conventional radiographic system; the cost of installation is high and the space occupied is substantial. Purchasing an EOS® system only for 3D reconstruction may hence not be appropriate for some hospitals or countries. In this thesis, we propose a new method to reconstruct 3D bone shape from conventional radiographs alone, so that hospitals in any environment can perform 3D diagnosis. Its novelty lies in self-calibrating conventional radiographs without a metal calibration object. Technically, the calibration and reconstruction are alternately optimized by minimizing the difference between the projected contour of the bone shape and the contour in the radiographic image. To apply this technology in a medical setting, we present a mobile application incorporating the reconstruction method. The user can easily input a printed film or a digital image displayed on a monitor screen by taking a photograph with the device's camera. The application provides automatic contouring with a graph-cut algorithm, as well as an intuitive touch interface for modifying the contour of a radiograph.
We first developed a femur reconstruction application, and the measurement of femoral anteversion with the mobile application showed excellent reliability and high concurrent validity with CT.
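    The alternating calibration/reconstruction loop minimizes a projected-contour mismatch. A heavily simplified sketch of that objective, using a pinhole projection with only a focal length and a depth translation as the unknown camera parameters (the real method optimizes a fuller camera model against a statistical shape model):

```python
import numpy as np

def project(points, focal, translation):
    """Pinhole projection of 3D points (N,3); `focal` and `translation`
    stand in for the self-calibrated camera parameters."""
    p = points + np.array([0.0, 0.0, translation])
    return focal * p[:, :2] / p[:, 2:3]

def contour_cost(points, contour, focal, translation):
    """Mean distance from each projected model point to its nearest
    radiograph contour point -- the quantity driven toward zero by
    alternating calibration and shape-reconstruction steps."""
    proj = project(points, focal, translation)
    d = np.linalg.norm(proj[:, None, :] - contour[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# Toy check: a contour that exactly matches the projection has zero cost.
pts = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
contour = np.array([[1.0, 0.0], [0.0, 1.0]])
print(contour_cost(pts, contour, focal=2.0, translation=1.0))  # 0.0
```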

    Diffusion-weighted magnetic resonance imaging in diagnosing graft dysfunction : a non-invasive alternative to renal biopsy.

    The thesis is divided into three parts. The first part focuses on background information, including how the kidney functions, kidney diseases, and available treatment strategies; it also covers imaging instruments and how they can be used to diagnose renal graft dysfunction. The second part focuses on elucidating the parameters linked with highly accurate diagnosis of rejection. Four parameter categories were tested: clinical biomarkers alone, the individual mean apparent diffusion coefficient (ADC) at 11 different b-values, mean ADCs of certain groups of b-values, and a fusion of clinical biomarkers with all b-values. The most accurate model was found when the b-values b=100 s/mm2 and b=700 s/mm2 were fused. The third part focuses on a study that uses diffusion-weighted MRI to diagnose and differentiate two types of renal rejection; the system correctly differentiated the two types with 98% accuracy. The thesis concludes by summarizing the work and outlining possible trends and future avenues
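    The ADC values mentioned above come from the standard mono-exponential diffusion model, S_b = S_0 · exp(-b · ADC), so the coefficient can be recovered from a pair of signal measurements. A minimal computation (the signal values are illustrative):

```python
import math

def adc(s0, sb, b):
    """Apparent diffusion coefficient from the mono-exponential DWI
    model S_b = S_0 * exp(-b * ADC), hence ADC = ln(S_0 / S_b) / b.

    s0: signal at b=0; sb: signal at b-value `b` (s/mm^2).
    Returns ADC in mm^2/s.
    """
    return math.log(s0 / sb) / b

# Signal halving between b=0 and b=700 s/mm^2:
print(adc(s0=1000.0, sb=500.0, b=700))  # ~9.9e-4 mm^2/s
```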