954 research outputs found

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: the revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures, and adds missed papers from before Feb 1st, 2017.

    Using Radiomics to improve the 2-year survival of Non-Small Cell Lung Cancer Patients

    This thesis exploits and extends the use of radiomics (quantitative features extracted from radiological imaging data) to improve cancer survival prediction. Several machine learning methods were compared, including support vector machines, convolutional neural networks, and logistic regression. A technique is developed for analysing prognostic image characteristics of non-small cell lung cancer (NSCLC) based on the edge regions of visible tumours and the tissue immediately surrounding them. Regions external to and neighbouring a tumour were shown to have prognostic value: adding texture features from the tumour together with its surrounding rind of tissue improved two-year survival prediction accuracy by 3% over using the volume without the rind. This indicates that, while the centre of the tumour is currently the main clinical target for radiotherapy treatment, the tissue immediately around the tumour is also clinically important for survival analysis. Prediction continued to improve as the rind was expanded up to about 6 pixels, approximately 5 mm, beyond the original gross tumour volume (GTV), with a support vector machine achieving the highest accuracy of 71.18%. This research indicates that the periphery of the tumour is highly predictive of survival. To our knowledge, this is the first study to concentrically expand and analyse the NSCLC rind for radiomic analysis.
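    The concentric-rind idea above can be made concrete with a short sketch. The following Python fragment is a minimal illustration, not the thesis pipeline: the array names, the plain first-order statistics standing in for the full radiomic texture features, and the 6-voxel expansion are all assumptions.

```python
# Illustrative sketch (not the thesis code): expand the GTV mask by a fixed
# number of voxels to form a peripheral "rind", pool simple intensity
# statistics from tumour + rind, and train an SVM for two-year survival.
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def rind_features(ct_volume, gtv_mask, expansion_voxels=6):
    """First-order statistics from the tumour plus a concentric rind."""
    expanded = ndimage.binary_dilation(gtv_mask, iterations=expansion_voxels)
    rind_and_tumour = ct_volume[expanded]           # tumour + surrounding rind
    return np.array([
        rind_and_tumour.mean(),
        rind_and_tumour.std(),
        np.percentile(rind_and_tumour, 10),
        np.percentile(rind_and_tumour, 90),
        ndimage.sum(expanded),                      # expanded volume (voxels)
    ])

# patients: list of (ct_volume, gtv_mask) pairs; survived_2yr: binary labels
def evaluate(patients, survived_2yr):
    X = np.stack([rind_features(ct, gtv) for ct, gtv in patients])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    return cross_val_score(clf, X, survived_2yr, cv=5, scoring="accuracy")
```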

    Machine learning approaches for lung cancer diagnosis.

    Medical imaging technology has developed enormously: it not only provides techniques for constructing visual representations of the body's interior for medical analysis, revealing the internal structure of organs beneath the skin, but also offers a noninvasive way to diagnose various diseases and to guide their treatment. While data from all aspects of our lives are collected and stored for analysis by data scientists, medical images are a particularly rich source, containing large quantities of information that physicians and radiologists cannot easily read in full but that can be mined to discover new knowledge. The design of computer-aided diagnostic (CAD) systems that can be approved for use in clinical practice, and that aid radiologists in detecting and diagnosing potential abnormalities, is therefore of great importance.

    This dissertation concerns the development of a CAD system for the diagnosis of lung cancer, the second most common cancer in men after prostate cancer and in women after breast cancer, and the leading cause of cancer death among both genders in the USA. The number of lung cancer patients has increased dramatically worldwide, and early detection doubles a patient's chance of survival. Histological examination through biopsy is the gold standard for the final diagnosis of pulmonary nodules; although resection of pulmonary nodules is the most reliable route to diagnosis, many other methods are used to avoid the risks associated with the surgical procedure. Lung nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung, and a pulmonary nodule is the first indication that triggers the diagnosis of lung cancer. Nodules can be benign (normal subjects) or malignant (cancerous subjects). Large malignant nodules (generally defined as greater than 2 cm in diameter) can be detected easily with traditional CT scanning techniques, but the diagnostic options for small indeterminate nodules are limited by the difficulty of accessing small tumours. Additional diagnostic and imaging techniques that depend on the nodules' shape and appearance are therefore needed. The ultimate goal of this dissertation is to develop a fast, noninvasive diagnostic system that improves the accuracy of early lung cancer diagnosis, based on the well-known hypothesis that malignant nodules differ in shape and appearance from benign nodules because of their high growth rate. The proposed methodology introduces new shape and appearance features that can distinguish between benign and malignant nodules. To achieve this goal, a CAD system is implemented and validated using different datasets. The system integrates two types of features, appearance features and shape features, to give a full description of the pulmonary nodule.

    For the appearance features, several texture descriptors are developed: the 3D histogram of oriented gradients, the 3D spherical sector isosurface histogram of oriented gradients, the 3D adjusted local binary pattern, the 3D resolved-ambiguity local binary pattern, the multi-view analytical local binary pattern, and a Markov-Gibbs random field. Each descriptor characterises the nodule texture and the homogeneity of its signal, which distinguishes benign from malignant nodules. For the shape features, the multi-view peripheral sum curvature scale space, spherical harmonic expansions, and a group of fundamental geometric features are used to describe the complexity of the nodule shape. Finally, a two-stage fusion of different combinations of these features is introduced: the first stage generates a primary estimate for each descriptor, and the second stage feeds these estimates into a single-layer autoencoder augmented with a softmax classifier to produce the final classification of the nodule. The resulting frameworks are evaluated on two datasets: the Lung Image Database Consortium, a publicly available benchmark for lung nodule detection and diagnosis, and locally acquired CT data collected at the University of Louisville hospital under a research protocol approved by the University of Louisville Institutional Review Board (IRB number 10.0642). The frameworks achieved an accuracy of about 94%, demonstrating their promise as a valuable tool for the detection of lung cancer.
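    The two-stage fusion can be sketched in a few lines. This is a simplified stand-in rather than the dissertation's implementation: per-descriptor logistic regressions produce the first-stage estimates, and a small MLPClassifier takes the place of the single-layer autoencoder with softmax; data shapes and hyperparameters are assumptions.

```python
# Minimal sketch of the two-stage fusion idea (not the dissertation code):
# stage 1 produces a primary malignancy estimate per descriptor, stage 2 fuses
# those estimates with a small neural classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def fit_two_stage(descriptor_features, labels):
    """descriptor_features: dict name -> (n_nodules, n_features) array."""
    stage1 = {name: LogisticRegression(max_iter=1000).fit(X, labels)
              for name, X in descriptor_features.items()}
    # Stack each descriptor's malignancy probability as the fused input.
    fused = np.column_stack([stage1[name].predict_proba(X)[:, 1]
                             for name, X in descriptor_features.items()])
    stage2 = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(fused, labels)
    return stage1, stage2

def predict_two_stage(stage1, stage2, descriptor_features):
    # The descriptor names (dict keys) must match those used at fit time.
    fused = np.column_stack([stage1[name].predict_proba(X)[:, 1]
                             for name, X in descriptor_features.items()])
    return stage2.predict(fused)   # 0 = benign, 1 = malignant
```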

    Registration of pre-operative lung cancer PET/CT scans with post-operative histopathology images

    Non-invasive imaging modalities used in the diagnosis of lung cancer, such as Positron Emission Tomography (PET) and Computed Tomography (CT), currently provide insufficient information about the cellular make-up of the lesion microenvironment unless they are compared against the gold standard of histopathology. The aim of this retrospective study was to build a robust imaging framework for registering in vivo and post-operative scans from lung cancer patients, in order to obtain a global, pathology-validated multimodality map of the tumour and its surroundings.

    Initial experiments were performed on tissue-mimicking phantoms to test different shape reconstruction methods. The choice of interpolator and slice thickness were found to affect the algorithm's output in terms of overall volume and local feature recovery. In the second phase of the study, nine lung cancer patients referred for radical lobectomy were recruited. Resected specimens were inflated with agar, sliced at 5 mm intervals, and each cross-section was photographed. The tumour area was delineated on the block-face pathology images and on the preoperative PET/CT scans. Airway segments were also added to the reconstructed models to act as anatomical fiducials. Binary shapes were pre-registered by aligning their minimal bounding box axes and subsequently transformed using rigid registration. In addition, histopathology slides were matched to the block-face photographs using a moving least squares algorithm.

    A two-step validation process was used to evaluate the performance of the proposed method against manual registration carried out by experienced consultants. In two out of three cases, experts rated the results generated by the algorithm as the best output, suggesting that the developed framework outperforms the current standard practice.
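    As a rough illustration of the shape pre-alignment step, the sketch below estimates a rigid transform between two binary masks and resamples one into the other's space. It is an assumption-laden simplification: principal (inertia) axes stand in for the minimal bounding box axes used in the study, eigenvector sign ambiguities are ignored, and the subsequent rigid refinement is not shown.

```python
# Simplified pre-alignment of two binary shapes via their principal axes.
import numpy as np
from scipy import ndimage

def principal_frame(mask):
    pts = np.argwhere(mask)                       # voxel coordinates (N, 3)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    _, axes = np.linalg.eigh(cov)                 # columns = principal axes
    return centroid, axes

def prealign(moving_mask, fixed_mask):
    c_mov, a_mov = principal_frame(moving_mask)
    c_fix, a_fix = principal_frame(fixed_mask)
    rotation = a_fix @ a_mov.T                    # moving frame -> fixed frame
    # ndimage.affine_transform maps output coords to input coords, so pass
    # the inverse rotation and the matching offset.
    inv = rotation.T
    offset = c_mov - inv @ c_fix
    return ndimage.affine_transform(moving_mask.astype(float), inv,
                                    offset=offset, order=0,
                                    output_shape=fixed_mask.shape)
```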

    Quantitative imaging analysis: challenges and potentials


    Current Approaches for Image Fusion of Histological Data with Computed Tomography and Magnetic Resonance Imaging

    Classical analysis of biological samples requires destroying the tissue's integrity by cutting or grinding it down to thin slices for (immuno)histochemical staining and microscopic analysis. Despite the high specificity encoded in the stained 2D section, the structural information, especially 3D information, is limited. Computed tomography (CT) or magnetic resonance imaging (MRI) scans performed prior to sectioning, in combination with image registration algorithms, provide an opportunity to regain access to morphological characteristics and to relate histological findings to the 3D structure of the local tissue environment. This review summarizes the prevalent literature addressing the problem of multimodal coregistration of hard and soft tissue in microscopy and tomography. Grouped according to the dimensionality of the registration, namely image-to-volume (2D ⟶ 3D), image-to-image (2D ⟶ 2D), and volume-to-volume (3D ⟶ 3D), selected currently applied approaches are examined by comparing the method accuracy with the limiting resolution of the tomography. Correlation of multimodal imaging could become a useful tool that allows precise histological diagnostics and a priori planning of tissue extraction, such as biopsies.
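    For a sense of what an image-to-image (2D ⟶ 2D) coregistration of this kind looks like in practice, the snippet below sketches a mutual-information-driven rigid registration with SimpleITK. The file names, metric, and optimiser settings are assumptions for illustration, not recommendations drawn from the review.

```python
# Hedged example: rigidly register a 2D histology image to a 2D CT slice.
import SimpleITK as sitk

fixed = sitk.ReadImage("ct_slice.nii.gz", sitk.sitkFloat32)        # hypothetical files
moving = sitk.ReadImage("histology_slide.nii.gz", sitk.sitkFloat32)

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler2DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)   # multimodal metric
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```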

    Dynamic And Quantitative Radiomics Analysis In Interventional Radiology

    Interventional Radiology (IR) is a subspecialty of radiology that performs invasive procedures guided by diagnostic imaging for predictive and therapeutic purposes. The development of artificial intelligence (AI) has revolutionized the field of IR: researchers have created sophisticated models backed by machine learning algorithms and optimization methodologies for image registration, cellular structure detection, and computer-aided disease diagnosis and prognosis prediction. However, conventional experience-based visual evaluation in IR has drawbacks, owing to the inability of the human eye to detect tiny structural characteristics and to inter-radiologist heterogeneity. Radiomics, a technique built on machine learning, offers a practical and quantifiable solution: it provides an automated pipeline for extracting and analysing high-throughput computational imaging characteristics from radiological images and has been used to evaluate tumour heterogeneity that is difficult to detect by eye. Nevertheless, applying radiomics directly in IR is demanding because of the heterogeneity and complexity of medical imaging data. Furthermore, most recent radiomics studies are based on static images, while many clinical applications, such as detecting the occurrence and development of tumors and assessing patient response to chemotherapy and immunotherapy, are dynamic processes. Static features alone cannot comprehensively reflect the metabolic characteristics and dynamic processes of tumors or soft tissues. To address these issues, we propose a robust feature selection framework for high-dimensional, small-sample data, and we develop a descriptor, grounded in computer vision and physiology, that integrates static radiomics features with time-varying information on tumor dynamics.

    The major contributions of this study are as follows. First, we construct a result-driven feature selection framework that efficiently reduces the dimension of the original feature set. The framework integrates different feature selection techniques to ensure the distinctiveness, uniqueness, and generalization ability of the output feature set. In the task of classifying hepatocellular carcinoma (HCC) versus intrahepatic cholangiocarcinoma (ICC) in primary liver cancer, only three radiomics features, chosen by the proposed framework from more than 1,800 candidates, achieve an AUC of 0.83 on an independent dataset. We also analyse the selected features' patterns and contributions to the results, enhancing the clinical interpretability of radiomics biomarkers. Second, we build a pulmonary perfusion descriptor based on 18F-FDG whole-body dynamic PET images. Our major novelties are: 1) a physiologically and visually interpretable descriptor-construction framework that decomposes spatiotemporal information into three dimensions: grey levels, textures, and dynamics; 2) feasible spatio-temporal comparison of the pulmonary descriptor within and between patients, making it a potential auxiliary diagnostic tool for pulmonary function assessment; 3) in contrast to traditional PET metabolic biomarker analysis, incorporation of the image's temporal information, which enables a better understanding of time-varying mechanisms and the detection of visual perfusion abnormalities across patients; and 4) elimination of the impact of vascular branching structure and the gravity effect by means of time warping algorithms. Our experimental results show that the proposed framework and descriptor are promising tools for medical image analysis.
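    The first contribution, selecting a handful of discriminative radiomics features from a very large candidate set and checking them on an independent split, can be illustrated with a generic sketch. This is not the result-driven framework proposed in the thesis; the L1-penalised filter, the three-feature cut-off, and the data split are assumptions.

```python
# Illustrative radiomics feature selection and evaluation (HCC vs. ICC).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def select_and_evaluate(X, y, n_keep=3, random_state=0):
    """X: (n_patients, ~1800 radiomics features), y: 0 = HCC, 1 = ICC."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=random_state)
    scaler = StandardScaler().fit(X_train)
    # L1-penalised model as a simple surrogate for the selection framework.
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(scaler.transform(X_train), y_train)
    # Keep the n_keep features with the largest absolute coefficients.
    keep = np.argsort(np.abs(lasso.coef_[0]))[::-1][:n_keep]
    clf = LogisticRegression(max_iter=1000)
    clf.fit(scaler.transform(X_train)[:, keep], y_train)
    scores = clf.predict_proba(scaler.transform(X_test)[:, keep])[:, 1]
    return keep, roc_auc_score(y_test, scores)
```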