668 research outputs found

    Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database

    Full text link
    Radiologists in their daily work routinely find and annotate significant abnormalities on a large number of radiology images. Such abnormalities, or lesions, have been collected over the years and stored in hospitals' picture archiving and communication systems. However, they are basically unsorted and lack semantic annotations like type and location. In this paper, we aim to organize and explore them by learning a deep feature representation for each lesion. A large-scale and comprehensive dataset, DeepLesion, is introduced for this task. DeepLesion contains bounding boxes and size measurements of over 32K lesions. To model their similarity relationship, we leverage multiple sources of supervision, including types, self-supervised location coordinates, and sizes. These require little manual annotation effort but describe useful attributes of the lesions. Then, a triplet network is utilized to learn lesion embeddings, with a sequential sampling strategy to depict their hierarchical similarity structure. Experiments show promising qualitative and quantitative results on lesion retrieval, clustering, and classification. The learned embeddings can be further employed to build a lesion graph for various clinically useful applications. We propose algorithms for intra-patient lesion matching and missing annotation mining. Experimental results validate their effectiveness. Comment: Accepted by CVPR 2018. DeepLesion URL added.
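
    For readers unfamiliar with triplet embeddings, the sketch below shows the general idea in PyTorch; the backbone, margin, and the way (anchor, positive, negative) lesion patches are sampled are placeholder assumptions, not the authors' DeepLesion implementation.

```python
# Minimal triplet-embedding sketch (assumed PyTorch); not the authors' DeepLesion code.
import torch
import torch.nn as nn

class LesionEmbedder(nn.Module):
    """Toy CNN backbone mapping a lesion patch to an L2-normalized embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return nn.functional.normalize(z, dim=1)   # unit-length embeddings

model = LesionEmbedder()
triplet_loss = nn.TripletMarginLoss(margin=0.2)

# Hypothetical batch of (anchor, positive, negative) lesion patches, e.g. sampled so
# that positives share type/location attributes with the anchor.
anchor, positive, negative = (torch.randn(8, 1, 64, 64) for _ in range(3))
loss = triplet_loss(model(anchor), model(positive), model(negative))
loss.backward()
```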

    Lung Nodule Detection in CT Images using Neuro Fuzzy Classifier

    Get PDF
    Automated lung cancer detection using computer aided diagnosis (CAD) is an important area in clinical applications. Because manual nodule detection is very time-consuming and costly, computerized systems can be helpful for this purpose. In this paper, we propose a computerized system for lung nodule detection in CT scan images. The automated system consists of two stages, i.e., lung segmentation and enhancement, followed by feature extraction and classification. The segmentation process separates lung tissue from the rest of the image, and only the lung tissues under examination are considered as candidate regions for detecting malignant nodules. A feature vector is calculated for each possible abnormal region, and the regions are classified using a neuro fuzzy classifier. It is a fully automatic system that does not require any manual intervention, and experimental results show the validity of our system.
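
    A rough sketch of such a two-stage pipeline, assuming scikit-image and scikit-learn; the threshold, the region features, and the use of an MLP as a stand-in for the paper's neuro fuzzy classifier are all illustrative assumptions.

```python
# Illustrative two-stage pipeline: segment candidate regions, extract per-region
# features, then classify. An sklearn MLP stands in here for the neuro fuzzy
# classifier used in the paper (assumption).
import numpy as np
from skimage import measure, morphology
from sklearn.neural_network import MLPClassifier

def candidate_features(ct_slice, hu_threshold=-400):
    """Return one feature vector per candidate region in a CT slice (toy version)."""
    mask = ct_slice > hu_threshold                     # crude intensity threshold
    mask = morphology.remove_small_objects(mask, 20)   # drop speckle
    feats = []
    for region in measure.regionprops(measure.label(mask), intensity_image=ct_slice):
        feats.append([region.area, region.eccentricity,
                      region.solidity, region.mean_intensity])
    return np.array(feats)

# Hypothetical training data: feature vectors with nodule / non-nodule labels.
X_train = np.random.rand(200, 4)
y_train = np.random.randint(0, 2, 200)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_train)

X_test = candidate_features(np.random.randn(128, 128) * 500 - 500)
if len(X_test):
    print(clf.predict(X_test))   # 1 = candidate nodule, 0 = non-nodule
```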

    Analysis of Various Classification Techniques for Computer Aided Detection System of Pulmonary Nodules in CT

    Get PDF
    Lung cancer is the leading cause of cancer death in the United States. It usually exhibits its presence with the formation of pulmonary nodules, which are round or oval-shaped growths in the lung. Computed Tomography (CT) scans are used by radiologists to detect such nodules. Computer Aided Detection (CAD) of such nodules would provide a second opinion to radiologists and would be of valuable help in lung cancer screening. In this research, we study various feature selection methods for the CAD system framework proposed in FlyerScan. The algorithmic steps of FlyerScan include (i) local contrast enhancement, (ii) automated anatomical segmentation, (iii) detection of potential nodule candidates, (iv) feature computation and selection, and (v) candidate classification. In this paper, we study the performance of FlyerScan with various classification methods, namely the linear, quadratic, and Fisher linear discriminant classifiers. The algorithm is implemented on the publicly available Lung Image Database Consortium – Image Database Resource Initiative (LIDC-IDRI) dataset. 107 cases from LIDC-IDRI are handpicked for this paper, and the performance of the CAD system is also studied on 5 example cases from the Automatic Nodule Detection (ANODE09) database. This research will aid in improving the nodule detection rate in CT scans, thereby enhancing a patient’s chance of survival.
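
    A minimal sketch of comparing discriminant classifiers of the kind analyzed above, assuming scikit-learn (whose LinearDiscriminantAnalysis is closely related to the Fisher linear discriminant); the feature vectors and labels are synthetic placeholders, not FlyerScan's features.

```python
# Compare linear and quadratic discriminant classifiers on candidate features
# (synthetic placeholders for FlyerScan-style nodule features).
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                  # hypothetical candidate features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hypothetical nodule labels

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```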

    CAD system for lung nodule analysis.

    Get PDF
    Lung cancer is the deadliest type of known cancer in the United States, claiming hundreds of thousands of lives each year. However, despite the high mortality rate, the 5-year survival rate after resection of Stage 1A non–small cell lung cancer is currently in the range of 62%–82%, and in recent studies even 90%. Patient survival is highly correlated with early detection. Computed Tomography (CT) technology serves the early detection of lung cancer tremendously by offering a minimally invasive medical diagnostic tool. Some early types of lung cancer begin with a small mass of tissue within the lung, less than 3 cm in diameter, called a nodule. Most nodules found in a lung are benign, but a small fraction of them become malignant over time. Expert analysis of CT scans is the first step in determining whether a nodule presents a possibility for malignancy but, due to such low spatial support, many potentially harmful nodules go undetected until other symptoms motivate a more thorough search. Computer Vision and Pattern Recognition techniques can play a significant role in aiding the process of detecting and diagnosing lung nodules. This thesis outlines the development of a CAD system which, given an input CT scan, provides a functional and fast second-opinion diagnosis to physicians. The entire process of lung nodule screening has been cast as a system that can be enhanced by modern computing technology, with the hope of providing a feasible diagnostic tool for clinical use. It should be noted that the proposed CAD system is presented as a tool for experts, not a replacement for them. The primary motivation of this thesis is the design of a system that could act as a catalyst for reducing the mortality rate associated with lung cancer.

    Lung nodules detection by ensemble classification

    Full text link
    A method is presented that achieves lung nodule detection by classification of nodule and non-nodule patterns. It is based on random forests, which are ensemble learners that grow classification trees. Each tree produces a classification decision, and an integrated output is calculated. The performance of the developed method is compared against that of the support vector machine and decision tree methods. Three experiments are performed using lung scans of 32 patients, comprising thousands of images within which nodule locations are marked by expert radiologists. The classification errors and execution times are presented and discussed. The lowest classification error (2.4%) was produced by the developed method.
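
    A compact sketch of this kind of comparison, assuming scikit-learn; the feature vectors stand in for the nodule and non-nodule patterns used in the paper, and the classifier settings are library defaults rather than the configurations evaluated above.

```python
# Compare a random forest ensemble against an SVM and a single decision tree
# on synthetic nodule / non-nodule feature vectors (stand-ins for image patterns).
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for name, clf in [("random forest", RandomForestClassifier(n_estimators=100)),
                  ("SVM", SVC()),
                  ("decision tree", DecisionTreeClassifier())]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    err = 1.0 - clf.score(X_te, y_te)
    print(f"{name}: error {err:.3f}, time {time.perf_counter() - t0:.2f}s")
```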

    Fast volumetric registration method for tumor follow-up in pulmonary CT exams

    Get PDF
    An oncological patient may go through several tomographic acquisitions over a period of time, which need to be appropriately registered. We propose an automatic volumetric intra-patient registration method for tumor follow-up in pulmonary CT exams. The performance of our method is evaluated and compared with that of other registration methods based on optimization techniques. We also compare the behavior of the similarity metrics to determine which metric is more sensitive to changes due to the presence of lung tumors.
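
    One possible way to compare similarity metrics for intra-patient rigid registration, assuming SimpleITK; the file names, metric choices, and optimizer settings below are illustrative and not taken from the paper.

```python
# Illustrative rigid intra-patient registration comparing two similarity metrics
# with SimpleITK (file names are placeholders).
import SimpleITK as sitk

fixed = sitk.ReadImage("exam_baseline.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("exam_followup.nii.gz", sitk.sitkFloat32)

def register(metric):
    reg = sitk.ImageRegistrationMethod()
    if metric == "mean_squares":
        reg.SetMetricAsMeanSquares()
    else:
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(fixed, moving)
    return transform, reg.GetMetricValue()

for metric in ("mean_squares", "mutual_information"):
    _, final_value = register(metric)
    print(metric, "final metric value:", final_value)
```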

    Cancer diagnosis using deep learning: A bibliographic review

    Get PDF
    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, considering all types of audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how such machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial models (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of the state-of-the-art achievements.
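
    In the spirit of the Python snippets the review provides, here is a minimal CNN classifier sketch, assuming PyTorch; the layer sizes and the binary benign/malignant task are illustrative assumptions, not code from the review.

```python
# Minimal CNN for a binary (e.g., benign vs. malignant) image classification task,
# assuming PyTorch; sizes are illustrative, not taken from the review.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):          # x: (batch, 1, 64, 64) grayscale patches
        return self.net(x)

model = SmallCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 1, 64, 64)          # hypothetical image patches
labels = torch.randint(0, 2, (8,))          # hypothetical labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```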

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Get PDF
    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The imaging data available to radiologists continue to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists to increase throughput while reducing human error and bias, without compromising the outcome of the screening, diagnosis or disease assessment. More intelligent, but simple, consistent and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enhancing and enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training and, despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the already developed and existing tools and techniques, as well as a demonstration of the potential clinical impact of such tools. Recently, more and more efforts have been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that results only in incremental improvements over existing algorithms. In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools (localization, segmentation and registration) and illustrate their use across several medical imaging modalities (X-ray, computed tomography, ultrasound and magnetic resonance imaging) and several clinical applications: (1) lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images; (2) automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for the assessment of long limb mechanical axis and knee misalignment; and (3) left and right ventricle localization, segmentation, reconstruction, and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment. When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent clinical challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions, which not only have the potential to address the clinical needs but are sufficiently streamlined to be translated into eventual clinical tools given proper implementation. The guidelines are as follows:
    G1: Reduce the number of degrees of freedom (DOF) of the designed tool, a plausible example being the avoidance of inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and clearly aims at reducing complexity and the number of degrees of freedom.
    G2: Use shape-based features to represent the image content most efficiently, for instance by using edges instead of, or in addition to, intensities and motion where useful. Edges capture the most useful information in the image and can be used to identify the most important image features. As a result, this guideline ensures more robust performance when key image information is missing.
    G3: Implement the method efficiently. This guideline focuses on efficiency in terms of the minimum number of steps required and on avoiding the recalculation of terms that only need to be computed once in an iterative process. An efficient implementation leads to reduced computational effort and improved performance.
    G4: Commence the workflow by establishing an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in consistent ways and avoids convergence to local minima, while gradually ensuring convergence to the global minimum solution.
    These guidelines lead to the development of interactive, semi-automated or fully-automated approaches that still enable clinicians to perform final refinements, while reducing overall inter- and intra-observer variability, reducing ambiguity, increasing accuracy and precision, and having the potential to yield mechanisms that will aid in providing an overall more consistent diagnosis in a timely fashion.
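
    As a small illustration of guideline G3, the sketch below hoists terms that depend only on the fixed image out of an iterative search loop; the 1-D normalized cross-correlation setup is a toy assumption, not taken from the thesis.

```python
# Toy illustration of guideline G3: terms that depend only on the fixed signal
# (its mean and norm) are computed once, outside the search loop, instead of
# being recomputed for every candidate shift.
import numpy as np

def best_shift_ncc(fixed, moving, max_shift=10):
    f = fixed - fixed.mean()          # loop-invariant: centered fixed signal
    f_norm = np.linalg.norm(f)        # loop-invariant: its norm
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        m = np.roll(moving, s)
        m = m - m.mean()
        score = np.dot(f, m) / (f_norm * np.linalg.norm(m) + 1e-12)
        if score > best_score:
            best, best_score = s, score
    return best

fixed = np.sin(np.linspace(0, 6 * np.pi, 400))
moving = np.roll(fixed, 7)            # moving is the fixed signal shifted by 7 samples
print(best_shift_ncc(fixed, moving))  # prints -7, the shift that re-aligns moving
```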