42 research outputs found

    Feature Detection in Medical Images Using Deep Learning

    This project explores the use of deep learning to predict age from pediatric hand X-rays. Data from the Radiological Society of North America's pediatric bone age challenge were used to train and evaluate a convolutional neural network (CNN). The project used InceptionV3, a CNN developed by Google and pre-trained on ImageNet, a popular online image dataset. Our fine-tuned version of InceptionV3 yielded an average error of less than 10 months between predicted and actual age. This project shows the effectiveness of deep learning in analyzing medical images and the potential for even greater improvements in the future. In addition to the technological and potential clinical benefits of these methods, this project will serve as a useful pedagogical tool for introducing the challenges and applications of deep learning to the Bryant community.
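    The "average error" reported above is presumably the mean absolute error (MAE) between predicted and actual bone ages; a minimal sketch of that metric, using made-up ages rather than the study's data:

    ```python
    import numpy as np

    def mean_absolute_error(y_true, y_pred):
        """Average absolute difference between actual and predicted ages (in months)."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return float(np.mean(np.abs(y_true - y_pred)))

    # Hypothetical bone ages in months (illustrative, not from the study)
    actual    = [120, 96, 150, 72]
    predicted = [112, 104, 141, 80]
    print(mean_absolute_error(actual, predicted))  # 8.25 -> "less than 10 months"
    ```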

    Longitudinal detection of radiological abnormalities with time-modulated LSTM

    Convolutional neural networks (CNNs) have been successfully employed in recent years for the detection of radiological abnormalities in medical images such as plain X-rays. To date, most studies use CNNs on individual examinations in isolation and discard previously available clinical information. In this study we set out to explore whether long short-term memory (LSTM) networks can be used to improve classification performance when modelling the entire sequence of radiographs that may be available for a given patient, including their reports. A limitation of traditional LSTMs, though, is that they implicitly assume equally spaced observations, whereas radiological exams are event-based and therefore irregularly sampled. Using both a simulated dataset and a large-scale chest X-ray dataset, we demonstrate that a simple modification of the LSTM architecture, which explicitly takes into account the time lag between consecutive observations, can boost classification performance. Our empirical results demonstrate improved detection of commonly reported abnormalities on chest X-rays such as cardiomegaly, consolidation, pleural effusion and hiatus hernia.
    Comment: Submitted to 4th MICCAI Workshop on Deep Learning in Medical Imaging Analysis
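    The abstract does not give the exact architectural modification; one common way to make an LSTM time-aware is to discount the carried-over cell memory as a function of the elapsed time between observations. A minimal NumPy sketch under that assumption (the decay function g(dt) = 1/log(e + dt), the dimensions, and the random inputs are all illustrative, not the paper's formulation):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class TimeModulatedLSTMCell:
        """LSTM cell whose memory decays with the time gap between observations.

        Assumption: the previous cell state is multiplied by 1/log(e + dt),
        so long gaps discount old evidence. This is one plausible choice,
        not necessarily the paper's exact modulation.
        """

        def __init__(self, input_dim, hidden_dim, seed=0):
            rng = np.random.default_rng(seed)
            # One stacked weight matrix for the input, forget, cell and output gates
            self.W = rng.normal(scale=0.1, size=(4 * hidden_dim, input_dim + hidden_dim))
            self.b = np.zeros(4 * hidden_dim)
            self.hidden_dim = hidden_dim

        def step(self, x, h, c, dt):
            # Discount the carried-over memory by the elapsed time
            c = c / np.log(np.e + dt)
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
            return h, c

    # Irregularly sampled sequence: three exams with gaps of 0, 30 and 400 days
    cell = TimeModulatedLSTMCell(input_dim=5, hidden_dim=8)
    h, c = np.zeros(8), np.zeros(8)
    for x, dt in zip(np.random.default_rng(1).normal(size=(3, 5)), [0.0, 30.0, 400.0]):
        h, c = cell.step(x, h, c, dt)
    print(h.shape)  # (8,)
    ```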

    Cats or CAT scans: transfer learning from natural or medical image source datasets?

    Transfer learning is a widely used strategy in medical image analysis. Instead of only training a network with a limited amount of data from the target task of interest, we can first train the network with other, potentially larger source datasets, creating a more robust model. The source datasets do not have to be related to the target task. For a classification task in lung CT images, we could use either head CT images or images of cats as the source. While head CT images appear more similar to lung CT images, the number and diversity of cat images might lead to a better model overall. In this survey we review a number of papers that have performed similar comparisons. Although the answer to which strategy is best seems to be "it depends", we discuss a number of research directions we need to take as a community to gain more understanding of this topic.
    Comment: Accepted to Current Opinion in Biomedical Engineering

    Transfer learning for diagnosis of congenital abnormalities of the kidney and urinary tract in children based on Ultrasound imaging data

    Classification of ultrasound (US) kidney images for diagnosis of congenital abnormalities of the kidney and urinary tract (CAKUT) in children is a challenging task. It is desirable to improve existing pattern classification models that are built upon conventional image features. In this study, we propose a transfer learning-based method to extract imaging features from US kidney images in order to improve the CAKUT diagnosis in children. Particularly, a pre-trained deep learning model (imagenet-caffe-alex) is adopted for transfer learning-based feature extraction from 3-channel feature maps computed from US images, including original images, gradient features, and distance transform features. Support vector machine classifiers are then built upon different sets of features, including the transfer learning features, conventional imaging features, and their combination. Experimental results have demonstrated that the combination of transfer learning features and conventional imaging features yielded the best classification performance for distinguishing CAKUT patients from normal controls based on their US kidney images.
    Comment: Accepted paper in IEEE International Symposium on Biomedical Imaging (ISBI), 2018
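    The 3-channel input described above (original image, gradient, distance transform) can be assembled roughly as follows. This sketch uses NumPy gradients and SciPy's Euclidean distance transform on a toy image; the foreground threshold and the toy "kidney" region are assumptions for illustration, and the paper's exact feature definitions may differ:

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def three_channel_features(img):
        """Stack original image, gradient magnitude and distance-transform channels."""
        gy, gx = np.gradient(img.astype(float))
        grad_mag = np.hypot(gx, gy)
        # Distance from each foreground pixel to the nearest background pixel;
        # thresholding at the mean intensity is an assumption for this toy example
        dist = distance_transform_edt(img > img.mean())
        return np.stack([img, grad_mag, dist], axis=0)  # shape: (3, H, W)

    toy = np.zeros((64, 64))
    toy[16:48, 16:48] = 1.0  # hypothetical bright region standing in for a kidney
    features = three_channel_features(toy)
    print(features.shape)  # (3, 64, 64)
    ```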

    An Approach of AlexNet CNN Algorithm Model for Lung Cancer Detection and Classification

    As a reliable tool for identifying and classifying different illnesses, including lung cancer, deep learning has grown significantly in popularity. It is crucial to quickly and accurately diagnose lung cancer because treatment options depend on the type and stage of the disease. Deep learning algorithms (DLA) are used to speed up the critical process of lung cancer detection and lessen the burden on medical professionals. In this study, the feasibility of employing deep learning algorithms for the early detection of lung cancer is explored, using data from the Lung Image Database Consortium (LIDC) database. The study introduces the VGG-16 and AlexNet models specifically to identify the presence of cancer in lung images. The AlexNet model is chosen for additional classification tasks based on performance. The suggested technique displays considerable increases in both the prediction and classification accuracy of cancer. The results from using the AlexNet model show the highest levels of accuracy, with a classification accuracy of 97.76% and a prediction accuracy of 97.02%, both verified using a 5-fold cross-validation method. Moreover, when classifying the forms of cancer, the model achieves a remarkable area under the curve (AUC) value of 1 for the Adenocarcinoma class, signaling extraordinary performance. Notably, the proposed model achieves an accuracy exceeding 90% across all classes.
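    The 5-fold cross-validation protocol mentioned above can be sketched independently of the network itself. Here a trivial nearest-centroid classifier stands in for AlexNet, and the two-class synthetic data are purely illustrative, not from the LIDC study:

    ```python
    import numpy as np

    def k_fold_indices(n, k, seed=0):
        """Shuffle 0..n-1 and split into k roughly equal folds."""
        idx = np.random.default_rng(seed).permutation(n)
        return np.array_split(idx, k)

    def nearest_centroid_accuracy(X, y, train_idx, test_idx):
        """Stand-in classifier: predict the class whose training mean is closest."""
        classes = np.unique(y[train_idx])
        centroids = np.stack([X[train_idx][y[train_idx] == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(X[test_idx][:, None, :] - centroids[None, :, :], axis=2)
        preds = classes[np.argmin(d, axis=1)]
        return float(np.mean(preds == y[test_idx]))

    # Synthetic two-class data standing in for "cancer" vs "normal" image features
    rng = np.random.default_rng(42)
    X = np.concatenate([rng.normal(0, 1, (50, 10)), rng.normal(2, 1, (50, 10))])
    y = np.array([0] * 50 + [1] * 50)

    folds = k_fold_indices(len(X), k=5, seed=42)
    scores = []
    for i in range(5):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != i])
        scores.append(nearest_centroid_accuracy(X, y, train_idx, test_idx))
    print(round(float(np.mean(scores)), 3))  # mean accuracy across the 5 folds
    ```

    Each sample is held out exactly once, so the averaged score estimates generalization without ever testing on training data, which is the point of reporting cross-validated accuracies as the paper does.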