
    A deep learning based dual encoder–decoder framework for anatomical structure segmentation in chest X-ray images

    Automated multi-organ segmentation plays an essential part in the computer-aided diagnosis (CAD) of chest X-ray fluoroscopy. However, developing a CAD system for anatomical structure segmentation remains challenging due to several indistinct structures, variations in anatomical structure shape among individuals, the presence of medical tools such as pacemakers and catheters, and various artifacts in chest radiographic images. In this paper, we propose a robust deep learning segmentation framework for anatomical structures in chest radiographs that utilizes a dual encoder–decoder convolutional neural network (CNN). The first network in the dual encoder–decoder structure uses a pre-trained VGG19 as the encoder for the segmentation task. The pre-trained encoder output is fed into a squeeze-and-excitation (SE) block to boost the network's representational power, enabling dynamic channel-wise feature recalibration. The calibrated features are passed into the first decoder to generate a mask. We integrate the generated mask with the input image and pass it through a second encoder–decoder network with recurrent residual blocks and an attention gate module to capture additional contextual features and improve the segmentation of smaller regions. Three public chest X-ray datasets are used to evaluate the proposed method for multi-organ segmentation (heart, lungs, and clavicles) and single-organ segmentation (lungs only). The experimental results show that our proposed technique outperforms existing multi-class and single-class segmentation methods.
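The channel-wise recalibration performed by the SE block described above can be sketched in a few lines. This is a minimal NumPy illustration of the general squeeze-and-excitation mechanism, not the paper's implementation; the weight shapes, reduction ratio, and function name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_block(feat, w1, w2):
    """Squeeze-and-excitation over a (C, H, W) feature map (illustrative sketch)."""
    # Squeeze: global average pooling gives one descriptor per channel
    z = feat.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck FC + ReLU, then expand + sigmoid gates
    s = np.maximum(w1 @ z, 0.0)                     # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))             # shape (C,), gates in (0, 1)
    # Scale: channel-wise recalibration of the input features
    return feat * s[:, None, None]

C, H, W, r = 8, 4, 4, 2                             # hypothetical sizes, reduction ratio r
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1         # squeeze (bottleneck) FC weights
w2 = rng.standard_normal((C, C // r)) * 0.1         # expand FC weights
out = se_block(feat, w1, w2)
```

Because the gates lie in (0, 1), each output channel is a damped copy of its input; the network learns which channels to emphasize.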

    Deep-learning framework to detect lung abnormality - A study with chest X-Ray and lung CT scan images

    Lung abnormalities are high-risk conditions in humans. The early diagnosis of lung abnormalities is essential to reduce the risk by enabling quick and efficient treatment. This research work proposes a Deep-Learning (DL) framework to examine lung pneumonia and cancer, using two different DL techniques to assess the considered problem: (i) The first DL method, named a modified AlexNet (MAN), is proposed to classify chest X-ray images into normal and pneumonia classes. In the MAN, classification is implemented using a Support Vector Machine (SVM), and its performance is compared against Softmax. Its performance is further validated against other pre-trained DL techniques, such as AlexNet, VGG16, VGG19, and ResNet50. (ii) The second DL work implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA)-based feature selection to enhance the feature vector. The performance of this DL framework is tested on benchmark lung cancer CT images from LIDC-IDRI, and a classification accuracy of 97.27% is attained.
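The serial fusion with PCA-based selection mentioned above can be sketched as follows. This is a minimal NumPy sketch assuming "serial fusion" means per-sample concatenation of the two feature sets, followed by projection onto the top principal components; the function name and dimensions are hypothetical, not the paper's code.

```python
import numpy as np

def serial_fuse_pca(deep_feats, hand_feats, n_components):
    """Concatenate learned and handcrafted features per sample (serial
    fusion), then compress the fused vectors with PCA (illustrative sketch)."""
    fused = np.concatenate([deep_feats, hand_feats], axis=1)  # (N, d1 + d2)
    centered = fused - fused.mean(axis=0)
    # PCA via SVD of the centered data matrix; rows of vt are principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T                     # (N, n_components)

rng = np.random.default_rng(1)
deep = rng.standard_normal((20, 16))    # hypothetical: 16 learned features, 20 samples
hand = rng.standard_normal((20, 6))     # hypothetical: 6 handcrafted features
fused = serial_fuse_pca(deep, hand, n_components=5)
```

The compressed vectors keep the directions of highest variance in the fused representation, which is the usual rationale for PCA-based selection after fusion.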

    Classification of pulmonary nodules in radiographs based on deep convolutional neural networks

    Advisor: Hélio Pedrini. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Lung cancer, which is characterized by the presence of nodules, is the most common type of cancer around the world, as well as one of the most aggressive and deadliest, accounting for 20% of total cancer mortality. Lung cancer screening can be performed by radiologists analyzing chest X-ray (CXR) images. However, the detection of lung nodules is a difficult task due to their wide variability, human limitations of memory, distraction, and fatigue, among other factors. These difficulties motivate the development of computer-aided diagnosis (CAD) systems to support radiologists in detecting lung nodules. Lung nodule classification is one of the main topics related to CAD systems. Although convolutional neural networks (CNNs) have been demonstrated to perform well on many tasks, there are few explorations of their use for classifying lung nodules in CXR images. In this work, we proposed and analyzed a pipeline for detecting lung nodules in CXR images that includes lung area segmentation, potential nodule localization, and nodule candidate classification. We presented a method for classifying nodule candidates with a CNN trained from scratch. The effectiveness of our method relies on the selection of data augmentation parameters, the design of a specialized CNN architecture, the use of dropout regularization in the network (including in convolutional layers), and addressing the scarcity of nodule samples relative to background samples by balancing mini-batches at each stochastic gradient descent iteration. All model selection decisions were made using a separate CXR subset of the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI) dataset. We then used all images with nodules in the Japanese Society of Radiological Technology (JSRT) dataset for evaluation. Our experiments showed that CNNs were capable of achieving competitive results when compared to state-of-the-art methods. Our proposal obtained an area under the free-response receiver operating characteristic curve (AUC) of 7.51 considering 10 false positives per image (FPPI), and sensitivities of 71.4% and 81.0% at 2 and 5 FPPI, respectively.
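The mini-batch balancing described in this abstract (compensating for scarce nodule samples against abundant background samples) can be sketched as a sampler. This is a hedged illustration with hypothetical index lists and batch sizes, not the dissertation's code; the scarce class is drawn with replacement so every batch stays 50/50.

```python
import random

def balanced_batches(nodule_idx, background_idx, batch_size, n_batches, seed=0):
    """Yield mini-batches holding equal numbers of nodule and background
    sample indices (illustrative sketch of class-balanced sampling)."""
    rng = random.Random(seed)
    half = batch_size // 2
    for _ in range(n_batches):
        # Nodules are scarce: sample with replacement to fill half the batch
        batch = rng.choices(nodule_idx, k=half)
        # Backgrounds are abundant: sample without replacement
        batch += rng.sample(background_idx, k=half)
        rng.shuffle(batch)
        yield batch

nodules = list(range(10))            # hypothetical: 10 nodule patch indices
background = list(range(100, 400))   # hypothetical: 300 background patch indices
batches = list(balanced_batches(nodules, background, batch_size=8, n_batches=5))
```

Each yielded batch then contributes an unbiased 50/50 class mix to every stochastic gradient descent step, regardless of the raw class imbalance.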

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Anatomy-Aware Inference of the 3D Standing Spine Posture from 2D Radiographs

    The balance of the spine is known to be an important factor in the development of spinal degeneration and pain, and in the outcome of spinal surgery. It must be analyzed in an upright, standing position to ensure physiological loading conditions and to visualize load-dependent deformations. Despite the complex 3D shape of the spine, this analysis is currently performed using 2D radiographs, as all frequently used 3D imaging techniques require the patient to be scanned in a prone position. To overcome this limitation, we propose a deep neural network to reconstruct the 3D spinal pose in an upright standing position, under natural loading. Specifically, we propose a novel neural network architecture that takes orthogonal 2D radiographs and infers the spine's 3D posture using vertebral shape priors. In this work, we define vertebral shape priors using an atlas and a spine shape prior, incorporating both into our proposed network architecture. We validate our architecture on digitally reconstructed radiographs, achieving a 3D reconstruction Dice of 0.95, indicating an almost perfect 2D-to-3D domain translation. Validating the reconstruction accuracy of a 3D standing spine on real data is infeasible due to the lack of a valid ground truth. Hence, we design a novel experiment for this purpose, using an orientation-invariant distance metric to evaluate our model's ability to synthesize full-3D, upright, patient-specific spine models. We compare the synthesized spine shapes from clinical upright standing radiographs to the same patient's 3D spinal posture in the prone position from CT.