32 research outputs found
Lung cancer medical images classification using hybrid CNN-SVM
Lung cancer is one of the leading causes of death worldwide. Early detection of this disease increases the chances of survival. Computer-Aided Detection (CAD) has been used to process CT images of the lung to determine whether an image shows traces of cancer. This paper presents an image classification method based on a hybrid of the Convolutional Neural Network (CNN) algorithm and the Support Vector Machine (SVM). The algorithm automatically classifies and analyzes each lung image to check for the presence of cancer cells. A CNN is easier to train and has fewer parameters than a fully connected network with the same number of hidden units. Moreover, the SVM is used to eliminate useless information that negatively affects accuracy. In recent years, CNNs have achieved excellent performance in many computer vision tasks. In this study, the performance of the algorithm is evaluated, and the results indicate that the proposed CNN-SVM algorithm succeeds in classifying lung images with 97.91% accuracy. This demonstrates the method's merit and its ability to classify lung cancer in CT images accurately.
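The hybrid pipeline the abstract describes can be sketched as two stages: convolutional feature extraction followed by an SVM decision. The toy sketch below is an illustrative assumption, not the paper's model: a fixed random filter bank stands in for the trained CNN, and the data are synthetic "CT patches".

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def conv_features(images, filters):
    """Naive valid 2D convolution + ReLU + global average pooling."""
    n, h, w = images.shape
    k = filters.shape[1]
    feats = np.empty((n, len(filters)))
    for i, img in enumerate(images):
        for j, f in enumerate(filters):
            out = np.zeros((h - k + 1, w - k + 1))
            for r in range(out.shape[0]):
                for c in range(out.shape[1]):
                    out[r, c] = np.sum(img[r:r + k, c:c + k] * f)
            feats[i, j] = np.maximum(out, 0).mean()  # ReLU then pooling
    return feats

# Synthetic "CT patches": class 1 has a brighter central region.
X_img = rng.normal(size=(80, 16, 16))
y = np.repeat([0, 1], 40)
X_img[y == 1, 6:10, 6:10] += 2.0

filters = rng.normal(size=(8, 3, 3))   # stand-in for learned CNN kernels
X_feat = conv_features(X_img, filters)

# Shuffle, then train the SVM stage on the extracted features.
perm = rng.permutation(80)
X_feat, y = X_feat[perm], y[perm]
clf = SVC(kernel="rbf").fit(X_feat[:60], y[:60])
acc = clf.score(X_feat[60:], y[60:])
```

In the paper the convolutional features are learned end to end; here the point is only the division of labour, with the SVM consuming pooled convolutional responses rather than raw pixels.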
Lung nodules identification in CT scans using multiple instance learning
Computer Aided Diagnosis (CAD) systems for lung nodule diagnosis aim to classify nodules as benign or malignant based on images obtained from imaging modalities such as Computed Tomography (CT). Automated CAD systems are important in medical applications because they assist radiologists in the time-consuming and labor-intensive diagnosis process. However, most available methods require a large collection of nodules that are segmented and annotated by radiologists. This process is labor-intensive and hard to scale to very large datasets. More recently, some CAD systems based on deep learning have emerged. These algorithms do not require the nodules to be segmented; radiologists need only provide the center of mass of each nodule. Training image patches are then extracted from fixed-size volumes centered at the provided nodule's center. However, since the size of nodules can vary significantly, one fixed-size volume may not represent all nodules effectively. This thesis proposes a Multiple Instance Learning (MIL) approach to address the above limitations. In MIL, each nodule is represented by a nested sequence of volumes centered at the identified center of the nodule. We extract one feature vector from each volume. The set of features for each nodule is combined and represented as a bag. Next, we investigate and adapt some existing algorithms and develop new ones for this application. We start by applying benchmark MIL algorithms to traditional Gray Level Co-occurrence Matrix (GLCM) engineered features. Then, we design and train simple Convolutional Neural Networks (CNNs) to learn and extract features that characterize lung nodules. These extracted features are then fed to a benchmark MIL algorithm to learn a classification model. Finally, we develop new algorithms (MIL-CNN) that combine feature learning and multiple instance classification in a single network.
These algorithms generalize the CNN architecture to multiple instance data. We design and report the results of three experiments applied to both engineered (GLCM) and learned (CNN) features using two datasets: the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) \cite{armato2011lung} and the National Lung Screening Trial (NLST) \cite{national2011reduced}. Two of these experiments perform five-fold cross-validation on the same dataset (NLST or LIDC). The third experiment trains the algorithms on one collection (the NLST dataset) and tests them on the other (the LIDC dataset). We designed our experiments to compare the different features, to compare MIL against Single Instance Learning (SIL), where a single feature vector represents a nodule, and to compare our proposed end-to-end MIL approaches to existing benchmark MIL methods. We demonstrate that our proposed MIL-CNN frameworks are more accurate for the lung nodule diagnosis task. We also show that the MIL representation achieves better results than SIL applied to the ground truth region of each nodule.
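The bag representation and the MIL-versus-SIL comparison above can be illustrated with a small sketch. Everything below is an illustrative assumption (bag sizes, feature dimensions, synthetic data, a logistic-regression scorer) rather than the thesis's MIL-CNN model; the SIL step is the common baseline in which every instance inherits its bag's label, and the MIL rule scores a bag by its maximum instance probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_bag(label, n_instances=4, dim=5):
    """One bag = one feature vector per nested volume around a nodule."""
    bag = rng.normal(size=(n_instances, dim))
    if label == 1:                      # malignant: at least one "hot" volume
        bag[rng.integers(n_instances)] += 3.0
    return bag

labels = np.array([0, 1] * 50)
bags = [make_bag(l) for l in labels]

# SIL baseline: every instance inherits its bag's label, then an
# instance-level scorer is trained on the pooled instances.
X = np.vstack(bags)
y_inst = np.repeat(labels, 4)
scorer = LogisticRegression(max_iter=1000).fit(X, y_inst)

# Standard MIL aggregation: a bag's score is its maximum instance probability.
def bag_score(bag):
    return scorer.predict_proba(bag)[:, 1].max()

pos_scores = [bag_score(b) for b, l in zip(bags, labels) if l == 1]
neg_scores = [bag_score(b) for b, l in zip(bags, labels) if l == 0]
```

On this toy data the positive bags receive systematically higher max-instance scores than the negative bags, which is the separation the MIL representation is designed to exploit.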
Lung Nodules Classification Using Convolutional Neural Network with Transfer Learning
The healthcare industry plays a vital role in improving daily life. Machine learning and deep neural networks have contributed substantially to many industries: agriculture, healthcare, machinery, aviation, management, and even education have all benefited from the development and implementation of machine learning. Deep neural networks provide insight and assistance in improving daily activities. The convolutional neural network (CNN), one of the deep neural network methods, has had a significant impact in the field of computer vision. CNNs have long been known for their ability to improve detection and classification in images. With the implementation of deep learning, deeper knowledge can be gathered to help healthcare workers learn more about a patient's disease. Deep neural networks and machine learning are increasingly being used in healthcare, and the improved detection and classification they provide have a positive impact on it. CNNs are widely used in detection and classification tasks on imaging such as CT and MRI scans. Although CNNs have advantages in this industry, the algorithm must be trained on a large dataset in order to achieve high accuracy and performance. Large medical datasets are often unavailable due to a variety of factors, such as ethical concerns, a scarcity of expert annotations and labelled data, and a general scarcity of disease images. In this paper, lung nodule classification using a CNN with transfer learning is proposed to help classify benign and malignant lung nodules in CT scan images. The objectives of this study are to pre-process the lung nodule data, develop a CNN with a transfer-learning algorithm, and analyse the effectiveness of the CNN with transfer learning compared with other standard methods. According to the findings of this study, the CNN with transfer learning outperformed a standard CNN without transfer learning.
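The core transfer-learning idea in this abstract, reusing a pretrained base when labelled medical data are scarce, can be sketched in miniature. In this toy illustration a fixed random projection stands in for the frozen pretrained convolutional base (an assumption; the paper would use real pretrained weights), and only a small classification head is fitted on the limited labelled set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# "Pretrained" frozen base: its weights are never updated here.
W_base = rng.normal(size=(256, 32))

def frozen_base(x):
    return np.maximum(x @ W_base, 0)        # fixed ReLU features, no training

# Small labelled set of flattened 16x16 "CT patches" (synthetic).
X = rng.normal(size=(60, 256))
y = np.repeat([0, 1], 30)
X[y == 1] += 0.5                            # benign vs malignant intensity shift

# Only the head is fit on the medical data -- this is the transfer step.
head = LogisticRegression(max_iter=1000).fit(frozen_base(X), y)
train_acc = head.score(frozen_base(X), y)
```

Freezing the base keeps the number of trainable parameters small, which is exactly what makes the approach viable when, as the abstract notes, large medical datasets are unavailable.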
Analysis of U-Net Neural Network Training Parameters for Tomographic Images Segmentation
Image segmentation is one of the main resources in computer vision. Nowadays, this procedure can be performed with high precision using Deep Learning, which is important for applications in several research areas, including medical image analysis. Image segmentation is currently applied to find tumors, bone defects, and other elements that are crucial for achieving accurate diagnoses. The objective of the present work is to verify the influence of parameter variation on U-Net, a deep convolutional neural network for biomedical image segmentation. The dataset was obtained from the Kaggle website (www.kaggle.com) and contains 267 volumes of lung computed tomography scans, composed of 2D images and their respective masks (ground truth). The dataset was subdivided into 80% of the volumes for training and 20% for testing. The results were evaluated using the Dice Similarity Coefficient as the metric; applying the best parameters considered, a mean of 84% was obtained for the testing set.
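The Dice Similarity Coefficient used to evaluate the segmentations can be computed directly from binary masks. A small helper is sketched below; the mask shapes and the smoothing term are illustrative choices, not details taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """DSC = 2|A n B| / (|A| + |B|) for binary masks A (pred) and B (truth)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps keeps the ratio defined when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Identical masks give DSC = 1; an empty prediction gives DSC near 0.
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
print(dice_coefficient(mask, mask))   # → 1.0
```

A mean DSC of 84%, as reported above, therefore means that on average the predicted and ground-truth lung masks overlapped in about 84% of their combined area.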
A proposed methodology for detecting the malignant potential of pulmonary nodules in sarcoma using computed tomographic imaging and artificial intelligence-based models
The presence of lung metastases in patients with primary malignancies is an important criterion for treatment management and prognostication. Computed tomography (CT) of the chest is the preferred method to detect lung metastasis. However, CT has limited efficacy in differentiating metastatic nodules from benign nodules (e.g., granulomas due to tuberculosis), especially at early stages (<5 mm). There is also significant subjectivity associated with making this distinction, leading to frequent CT follow-ups and additional radiation exposure along with financial and emotional burden to patients and their families. Even 18F-fluoro-deoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) is not always confirmatory for this clinical problem. While pathological biopsy is the gold standard to demonstrate malignancy, invasive sampling of small lung nodules is often not clinically feasible. Currently, there is no non-invasive imaging technique that can reliably characterize lung metastases. The lung is one of the favored sites of metastasis in sarcomas. Hence, patients with sarcomas, especially from tuberculosis-prevalent developing countries, can provide an ideal platform to develop a model to differentiate lung metastases from benign nodules. To overcome the lack of optimal specificity of CT scans in detecting pulmonary metastasis, a novel artificial intelligence (AI)-based protocol is proposed, utilizing a combination of radiological and clinical biomarkers to identify lung nodules and characterize them as benign or metastatic. This protocol includes a retrospective cohort of nearly 2,000–2,250 sample nodules (from at least 450 patients) for training and testing and an ambispective cohort of nearly 500 nodules (from 100 patients; 50 patients each from the retrospective and prospective cohorts) for validation. Ground-truth annotation of lung nodules will be performed using an in-house-built segmentation tool.
Ground-truth labeling of lung nodules (metastatic/benign) will be performed based on histopathological results or baseline and/or follow-up radiological findings along with the clinical outcome of the patient. Optimal methods for data handling and statistical analysis are included to develop a robust protocol for early detection and classification of pulmonary metastasis at baseline and at follow-up, and for identification of associated potential clinical and radiological markers.
Deep Functional Mapping For Predicting Cancer Outcome
The effective understanding of the biological behavior and prognosis of cancer subtypes is becoming very important in patient management. Cancer is a diverse disorder in which significant differences in medical progression and diagnosis can be observed and characterized for each subtype. Computer-aided diagnosis for the early detection and diagnosis of many kinds of diseases has evolved in the last decade. In this research, we address challenges associated with multi-organ disease diagnosis and recommend numerous models for enhanced analysis. We concentrate on evaluating Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Positron Emission Tomography (PET) scans of the brain, lung, and breast to detect, segment, and classify types of cancer from biomedical images. Moreover, histopathological and genomic classification of cancer prognosis has been considered for multi-organ disease diagnosis and biomarker recommendation. We consider multi-modal, multi-class classification in this study. We propose implementing deep learning techniques based on Convolutional Neural Networks and Generative Adversarial Networks.
In our proposed research we plan to demonstrate ways to increase the performance of disease diagnosis by focusing on a combined diagnosis of histology, image processing, and genomics. It has been observed that the combination of medical imaging and gene expression can handle cancer detection with a higher diagnostic rate than individual disease diagnosis alone. This research also puts forward a blockchain-based system that facilitates interpretations of and enhancements to automated biomedical systems. In this scheme, secure sharing of biomedical images and gene expression data is established. To maintain secure sharing of biomedical content in a distributed system or among hospitals, a blockchain-based algorithm is considered that generates a secure sequence to identify a hash key. This adaptive feature enables the algorithm to use multiple data types and to combine various biomedical images and text records. All patient data, including identity and pathological records, are encrypted using private-key cryptography based on a blockchain architecture to maintain data privacy and secure sharing of biomedical content.
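The hash-chaining idea behind the blockchain-based sharing scheme can be sketched briefly: each record stores the hash of the previous one, so tampering with any earlier biomedical record invalidates every later link. The field names and the use of SHA-256 below are illustrative assumptions, not details from the abstract.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64                 # genesis hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([{"patient": "anon-001", "scan": "CT-lung"},
                     {"patient": "anon-002", "scan": "MRI-brain"}])
assert verify_chain(chain)
chain[0]["record"]["scan"] = "tampered"        # any edit breaks the chain
assert not verify_chain(chain)
```

A production system would add encryption of the record contents and distributed consensus, as the abstract indicates; the sketch shows only the integrity property that chaining provides.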
Planning the Radiology Workforce for Cancer Diagnostics
The publication of the Delivery plan for tackling the COVID-19 backlog of elective care (NHSE/I, 2022:5)
contained a number of ambitions, including that, by March 2024, 75% of patients who have been
urgently referred by their GP for suspected cancer are diagnosed or have had cancer ruled out within
28 days. By March 2025, waits of longer than a year for elective care should be eliminated and 95% of
patients needing a diagnostic test should receive it within six weeks. The report acknowledged the
need to grow the workforce to achieve these ambitions and ensure a timely cancer diagnosis, while
also proposing the use of digital technology and data systems to free up capacity.
To assist West Yorkshire National Health Service (NHS) organisations to meet these ambitions, this
report presents the findings of a ‘deep dive’ that focuses on the role of radiology in meeting the
ambitions of providing timely cancer diagnosis.
Aims
1. To understand current and projected demand for radiology expertise in cancer diagnosis in
West Yorkshire.
2. To understand the current and projected radiology workforce in West Yorkshire
and determine the gap between the projected radiology workforce and the required radiology
workforce.
3. To identify possible solutions to assist in providing the radiology workforce required for West
Yorkshire and explore their acceptability and potential impact.
Methods
A range of sources of data and methods were utilised. We examined publicly available quantitative
data concerning cancer waiting times and diagnostic waiting times and activity and used this to
forecast future cancer waiting times and diagnostic waiting times and activity. We examined data from
Health Education England (HEE) regarding radiologists’ and radiographers’ workforce profile data for
West Yorkshire, the number of radiologists completing training, and the number of radiographers
graduating, and data submitted by West Yorkshire Trusts to HEE regarding their plans for growing their
radiology and radiographer workforce. Interviews (N=15) conducted with radiology service managers,
university academics and key strategic and operational stakeholders delivering radiology services
were used to understand the current and future issues around strategic workforce planning,
workforce changes and transformation, workforce roles and skills, training and education and service
changes. A rapid review of the literature examining the impacts of artificial intelligence (AI) on the
workload of radiology services was also undertaken. To put this work in context, we also reviewed
relevant policy documents and reports. Alongside this, we consulted with the Yorkshire Imaging
Collaborative (YIC) and the West Yorkshire Cancer Alliance (WYCA) and attended a series of workshops
run by the Yorkshire Imaging Collaborative.
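The forecasting of waiting times and activity described in the methods can be illustrated with a simple least-squares trend extrapolation. The figures below are synthetic stand-ins, not the report's actual NHS waiting-time data, and the linear model is only one of the approaches such a forecast might use.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic history: median diagnostic waits (weeks) over 12 quarters,
# drifting upward with some noise.
quarters = np.arange(12)
waits = 6.0 + 0.3 * quarters + rng.normal(0, 0.2, 12)

# Fit a linear trend and extrapolate four quarters ahead.
slope, intercept = np.polyfit(quarters, waits, 1)
forecast = intercept + slope * np.arange(12, 16)
```

Under a rising trend like this, the extrapolated waits exceed the historical average, which is the kind of projection the report uses to argue that demand will outstrip capacity without workforce growth.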
Results
Overall, the findings show that demand for radiology services is increasing and that both cancer
waiting times and the waiting times for diagnostic tests increased, with a concurrent downward trend
in activity that, if all else stays the same, is forecast to continue up to 2025. The cancer waiting times
data indicate that patients were waiting longer and that their needs were not being met. Moreover,
the proportion of people treated within accepted cancer waiting times decreased both nationally and
within the West Yorkshire region from 2013. This was exacerbated by COVID-19 which caused a
further decrease nationally and for the West Yorkshire region.
National data for waiting times for all diagnostic tests show a significant decline between 2006 and
2008, with a decrease in median waiting times from just under 6.0 weeks to approximately 2.0 weeks.
Overall, waiting times remained stable until late 2020 when they started to rise with the longest
median waiting times at just over 8.0 weeks in mid-2020. The total number of people waiting for
radiology tests nationally is decreasing and is predicted to continue to do so, while in West Yorkshire
the number of people waiting for radiology tests decreased until 2020 but has since been on an
upward trend which is predicted to continue. Nationally, the total number of radiology tests is on an
upward trend that is predicted to continue, while in West Yorkshire activity has been decreasing since
well before COVID-19 and is predicted to continue to do so.
Data examining the current and future workforce showed that the national figures for the total
radiology and radiography workforce are small relative to other health professional groups. In West
Yorkshire, 265 radiologists and 926 radiographers were employed, and staff turnover was generally
low. Trusts’ forecasts for the number of radiologists and radiographers they believe they need suggest
a 16% increase in the number of radiologists in post between March 2022 and March 2027 and a 25%
increase in the number of radiographers in post. The numbers of radiographers and radiologists being
trained in West Yorkshire suggest that this is feasible.
Interview data identified a number of main themes and associated issues: delivering diagnostic cancer
targets, strategic workforce planning, workforce roles and skills, service transformation, recruitment
and retention, universities, artificial intelligence, collaboration, and international recruitment. Across
all themes, some reoccurring issues were identified: a lack of staff, increased demands, a lack of
capacity in terms of space and staff, a lack of strategic workforce planning with a focus on operational
or financial plans. Respondents proposed potential solutions to some of the issues raised that
included: new ways of working, upskilling, developing current and emerging roles, Community
Diagnostic Centres (CDCs), greater collaboration between NHS Trusts, universities, CDCs, imaging
academies and networks and the private sector, and the international recruitment of radiologists and
radiographers to address workforce gaps.
The rapid review findings helped to identify a number of potential benefits of use of AI in radiology,
including contributing to improved workflow efficacy and efficiency of radiology services. However,
this is dependent on the nature of the work and the AI function. As a result of faster AI reading,
radiologists may be able to focus more on high-risk, complex reading tasks. AI can support automation
of image segmentation and classification and aid the diagnostic confidence of less experienced
radiologists. Respondents’ views on AI were mixed. There was acknowledgement that AI was already
used to support radiology service delivery and both the benefits and problems associated were
identified. The implications of AI for radiologists’ and radiographers’ roles were discussed in terms of
changing work, AI being used to support or in some cases substitute radiologists and radiographers,
and the need for the radiology workforce to adapt to the technological change whilst maintaining a
caring service.