Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists routinely find and annotate significant abnormalities on large
numbers of radiology images in their daily work. Such abnormalities, or
lesions, have been collected over the years and stored in hospitals' picture
archiving and communication systems. However, they are largely unsorted and lack
semantic annotations like type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationship, we leverage multiple supervision
information including types, self-supervised location coordinates and sizes.
They require little manual annotation effort but describe useful attributes of
the lesions. Then, a triplet network is utilized to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.
Comment: Accepted by CVPR 2018. DeepLesion URL added
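The triplet network mentioned in the abstract can be illustrated with a minimal sketch of a standard triplet margin loss on embedding vectors; the paper's actual architecture, sequential sampling strategy, and hyperparameters are not reproduced here, and all names below are illustrative:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on embedding vectors: pulls the
    anchor toward a similar lesion (positive) and pushes it away from
    a dissimilar one (negative) until they differ by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to similar lesion
    d_neg = np.linalg.norm(anchor - negative)  # distance to dissimilar lesion
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: anchor is near the positive, far from the negative,
# so the margin constraint is already satisfied and the loss is zero.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
print(triplet_margin_loss(a, p, n))  # 0.0
```

Training minimizes this quantity over many sampled triplets, so that distance in the learned embedding space reflects lesion similarity.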
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
SA-Med2D-20M Dataset: Segment Anything in 2D Medical Imaging with 20 Million masks
Segment Anything Model (SAM) has achieved impressive results for natural
image segmentation with input prompts such as points and bounding boxes. Its
success largely owes to massive labeled training data. However, SAM does not
perform well when applied directly to medical image segmentation, because it
lacks medical knowledge: it was not trained on medical images. To
incorporate medical knowledge into SAM, we introduce SA-Med2D-20M, a
large-scale segmentation dataset of 2D medical images built upon numerous
public and private datasets. It consists of 4.6 million 2D medical images and
19.7 million corresponding masks, covering almost the whole body and showing
significant diversity. This paper describes all the datasets collected in
SA-Med2D-20M and details how each was processed. Furthermore,
comprehensive statistics of SA-Med2D-20M are presented to facilitate better
use of the dataset and to help researchers build medical vision foundation
models or apply their models to downstream medical applications. We
hope that the large scale and diversity of SA-Med2D-20M can be leveraged to
develop medical artificial intelligence for enhancing diagnosis, medical image
analysis, knowledge sharing, and education. The data with the redistribution
license is publicly available at https://github.com/OpenGVLab/SAM-Med2D
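The box prompts that SAM-style models consume can be derived directly from the dataset's segmentation masks. A minimal sketch of that conversion is below, assuming only a binary 2-D mask array; the function name and coordinate convention (x_min, y_min, x_max, y_max) are illustrative, not the dataset's actual schema:

```python
import numpy as np

def bbox_prompt_from_mask(mask):
    """Derive a bounding-box prompt (x_min, y_min, x_max, y_max)
    from a binary 2-D segmentation mask; returns None if empty."""
    ys, xs = np.nonzero(mask)  # row/column indices of mask pixels
    if ys.size == 0:
        return None  # empty mask: no region to prompt on
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

# Toy "lesion" mask occupying rows 2-4 and columns 3-6.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1
print(bbox_prompt_from_mask(mask))  # (3, 2, 6, 4)
```

Pairing each mask with such a box (or with sampled interior points) is how image/mask datasets of this kind are typically turned into prompt-conditioned training examples.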