Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists in their daily work routinely find and annotate significant
abnormalities on a large number of radiology images. Such abnormalities, or
lesions, have been collected over the years and stored in hospitals' picture archiving
and communication systems. However, they are basically unsorted and lack
semantic annotations like type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationship, we leverage multiple supervision
information including types, self-supervised location coordinates and sizes.
They require little manual annotation effort but describe useful attributes of
the lesions. Then, a triplet network is utilized to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness. Comment: Accepted by CVPR 2018. DeepLesion URL added.
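The triplet-network training with sequential sampling described above can be sketched minimally. This is an illustrative sketch, not the authors' implementation: the triplet margin loss and the expansion of an ordered similarity sequence into triplets are standard constructions, and the `margin` value is an arbitrary assumption.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on embedding vectors: pull the positive
    toward the anchor, push the negative beyond it by `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def sequential_triplets(seq):
    """Expand an ordered sample sequence (most to least similar to
    seq[0]) into (anchor, positive, negative) triplets, so that each
    closer item is treated as more similar than every later one.
    This encodes a hierarchical similarity ordering."""
    anchor = seq[0]
    return [(anchor, seq[i], seq[j])
            for i in range(1, len(seq) - 1)
            for j in range(i + 1, len(seq))]
```

In practice the embeddings would come from a CNN trained end-to-end; here they are plain vectors so the loss geometry is easy to inspect.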
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed. Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Informative sample generation using class aware generative adversarial networks for classification of chest Xrays
Training robust deep learning (DL) systems for disease detection from medical
images is challenging due to limited images covering different disease types
and severity. The problem is especially acute where there is a severe class
imbalance. We propose an active learning (AL) framework to select the most
informative samples for training our model using a Bayesian neural network.
Informative samples are then used within a novel class aware generative
adversarial network (CAGAN) to generate realistic chest xray images for data
augmentation by transferring characteristics from one class label to another.
Experiments show our proposed AL framework is able to achieve state-of-the-art
performance using only a fraction of the full dataset, thus saving significant
time and effort over conventional methods.
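The Bayesian informativeness ranking this abstract relies on can be illustrated with a common proxy: predictive entropy over Monte-Carlo stochastic forward passes (e.g. MC dropout). The function names and the entropy acquisition criterion below are my assumptions for a minimal sketch; the paper may use a different acquisition function.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean predictive distribution over T stochastic
    forward passes; higher entropy = more uncertain, hence more
    informative to label. mc_probs: shape (T, n_classes)."""
    p = mc_probs.mean(axis=0)
    return float(-(p * np.log(p + 1e-12)).sum())

def select_most_informative(pool_mc_probs, k):
    """Rank unlabeled pool samples by predictive entropy and return
    the indices of the k most uncertain ones.
    pool_mc_probs: shape (n_samples, T, n_classes)."""
    scores = np.array([predictive_entropy(m) for m in pool_mc_probs])
    return np.argsort(scores)[::-1][:k]
```

A real pipeline would obtain `mc_probs` from repeated dropout-enabled forward passes of the Bayesian network, then send the selected samples on for labeling and augmentation.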
PadChest: A large chest x-ray image dataset with multi-label annotated reports
We present a labeled, large-scale, high-resolution chest x-ray dataset for the
automated exploration of medical images along with their associated reports.
This dataset includes more than 160,000 images obtained from 67,000 patients
that were interpreted and reported by radiologists at San Juan Hospital
(Spain) from 2009 to 2017, covering six different position views and
additional information on image acquisition and patient demographics. The reports
were labeled with 174 different radiographic findings, 19 differential
diagnoses and 104 anatomic locations organized as a hierarchical taxonomy and
mapped onto standard Unified Medical Language System (UMLS) terminology. Of
these reports, 27% were manually annotated by trained physicians and the
remaining set was labeled using a supervised method based on a recurrent neural
network with attention mechanisms. The labels generated were then validated in
an independent test set achieving a 0.93 Micro-F1 score. To the best of our
knowledge, this is one of the largest public chest x-ray databases suitable for
training supervised models concerning radiographs, and the first to contain
radiographic reports in Spanish. The PadChest dataset can be downloaded from
http://bimcv.cipf.es/bimcv-projects/padchest/
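The 0.93 Micro-F1 validation score reported above is computed by pooling true-positive, false-positive and false-negative counts across all labels before forming a single precision/recall pair. A minimal numpy sketch of the metric itself (not the PadChest evaluation code):

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1 for multi-label predictions: pool counts over
    all labels, then compute one precision, one recall, one F1.
    y_true, y_pred: binary arrays of shape (n_samples, n_labels)."""
    tp = np.logical_and(y_true == 1, y_pred == 1).sum()
    fp = np.logical_and(y_true == 0, y_pred == 1).sum()
    fn = np.logical_and(y_true == 1, y_pred == 0).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Micro-averaging weights frequent labels more heavily than macro-averaging, which matters for a taxonomy of 174 findings with a long-tailed frequency distribution.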
High resolution propagation-based lung imaging at clinically relevant X-ray dose levels
Absorption-based clinical computed tomography (CT) is the current imaging method of choice in the diagnosis of lung diseases. Many pulmonary diseases affect microscopic structures of the lung, such as terminal bronchi, alveolar spaces, sublobular blood vessels or the pulmonary interstitial tissue. As spatial resolution in CT is limited by the clinically acceptable applied X-ray dose, a comprehensive diagnosis of conditions such as interstitial lung disease, idiopathic pulmonary fibrosis or the characterization of small pulmonary nodules is limited and may require additional validation by invasive lung biopsies. Propagation-based imaging (PBI) is a phase-sensitive X-ray imaging technique capable of reaching high spatial resolutions at relatively low applied radiation dose levels. In this publication, we present technical refinements of PBI for the characterization of different artificial lung pathologies, mimicking clinically relevant patterns in ventilated fresh porcine lungs in a human-scale chest phantom. The combination of a very large propagation distance of 10.7 m and a photon-counting detector with [Formula: see text] pixel size enabled high-resolution PBI CT with significantly improved dose efficiency, measured by thermoluminescence detectors. Image quality was directly compared with state-of-the-art clinical CT. PBI with increased propagation distance was found to provide improved image quality at the same or even lower X-ray dose levels than clinical CT. By combining PBI with iodine k-edge subtraction imaging, we further demonstrate that the calculated iodine concentration maps are of high quality and might be a useful tool for analyzing lung perfusion in great detail. Our results indicate PBI to be of great value for the accurate diagnosis of lung disease in patients, as it allows pathological lesions to be depicted non-invasively at high resolution in 3D.
This will especially benefit patients at high risk of complications from invasive lung biopsies, such as in the setting of suspected idiopathic pulmonary fibrosis (IPF).
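The k-edge subtraction step mentioned above rests on a simple principle: background attenuation varies smoothly across the iodine K-edge (~33.2 keV), while iodine's attenuation jumps, so subtracting log-attenuation images acquired just below and just above the edge leaves approximately an iodine-only map. A schematic numpy sketch under idealized monochromatic assumptions, not the authors' reconstruction pipeline:

```python
import numpy as np

def kedge_subtraction(img_below, img_above, flat_below, flat_above):
    """Isolate the iodine signal from two flat-field-corrected images
    bracketing the iodine K-edge: the smoothly varying background
    attenuation largely cancels in the log-domain difference, while
    the iodine contribution (which jumps at the edge) remains."""
    att_below = -np.log(img_below / flat_below)   # log-attenuation below edge
    att_above = -np.log(img_above / flat_above)   # log-attenuation above edge
    return att_above - att_below                  # approximate iodine map
```

Real data would additionally require registration of the two acquisitions and a calibration factor converting the difference signal to iodine concentration.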
Conditional Generation of Medical Images via Disentangled Adversarial Inference
Synthetic medical image generation has a huge potential for improving
healthcare through many applications, from data augmentation for training
machine learning systems to preserving patient privacy. Conditional Generative
Adversarial Networks (cGANs) use a conditioning factor to generate images and
have shown great success in recent years. Intuitively, the information in an
image can be divided into two parts: 1) content which is presented through the
conditioning vector and 2) style which is the undiscovered information missing
from the conditioning vector. Current practices in using cGANs for medical
image generation only use a single variable for image generation (i.e.,
content) and therefore do not provide much flexibility or control over the
generated image. In this work we propose a methodology to learn from the image
itself, disentangled representations of style and content, and use this
information to impose control over the generation process. In this framework,
style is learned in a fully unsupervised manner, while content is learned
through both supervised learning (using the conditioning vector) and
unsupervised learning (with the inference mechanism). We apply two novel
regularization steps to ensure content-style disentanglement. First, we
minimize the shared information between content and style by introducing a
novel application of the gradient reverse layer (GRL); second, we introduce a
self-supervised regularization method to further separate information in the
content and style variables. We show that in general, two latent variable
models achieve better performance and give more control over the generated
image. We also show that our proposed model (DRAI) achieves the best
disentanglement score and has the best overall performance. Comment: Published in Medical Image Analysis.
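The gradient reverse layer (GRL) used above for content–style disentanglement is the identity in the forward pass and flips (and scales) the gradient in the backward pass, so the upstream encoder is trained to confuse the downstream predictor. A framework-free sketch of the operation itself; in practice it would be registered as a custom autograd function (e.g. in PyTorch), and the scaling factor `lam` is a tunable hyperparameter:

```python
import numpy as np

class GradientReverseLayer:
    """Identity on the forward pass; multiplies the incoming gradient
    by -lam on the backward pass, turning gradient descent for the
    downstream head into gradient *ascent* for the upstream encoder."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # No-op: activations pass through unchanged.
        return x

    def backward(self, grad_output):
        # Reverse and scale the gradient flowing back to the encoder.
        return -self.lam * grad_output
```

This single sign flip is what lets one optimizer simultaneously train the predictor to extract shared information and the encoder to remove it.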
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, considering all types of audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence and cancer diagnosis are gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to work on implementing deep learning and artificial neural networks for cancer diagnosis with from-scratch knowledge of state-of-the-art achievements.
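Two of the segmentation criteria listed in the first part, the Dice coefficient and the Jaccard index, are deterministically related (J = D / (2 − D)), which the following sketch makes concrete. This is generic metric code for binary masks, not the review's own evaluation scripts:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

def jaccard_index(a, b):
    """Jaccard index (IoU) between two binary masks: |A∩B| / |A∪B|.
    Related to Dice by J = D / (2 - D)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0
```

Because of this monotone relationship, the two metrics always rank segmentations identically; they differ only in scale, with Dice the more forgiving of the two.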