Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists routinely find and annotate significant abnormalities on a large number of radiology images in their daily work. Such abnormalities, or lesions, have been collected over the years and stored in hospitals' picture archiving and communication systems. However, they are largely unsorted and lack
semantic annotations like type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationships, we leverage multiple sources of supervision, including lesion types, self-supervised location coordinates, and sizes. These cues require little manual annotation effort yet describe useful attributes of the lesions. A triplet network is then used to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.
Comment: Accepted by CVPR 2018. DeepLesion URL added.
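As a rough illustration of the triplet-based embedding idea described above, a minimal PyTorch sketch follows. The backbone, margin, and sampling details are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of triplet-based lesion embedding learning; the backbone,
# margin, and triplet sampling below are illustrative assumptions.
import torch
import torch.nn as nn

class LesionEmbedder(nn.Module):
    """Maps a lesion image patch to a fixed-size embedding vector."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        # L2-normalize so distances live on the unit hypersphere.
        return nn.functional.normalize(self.fc(h), dim=1)

model = LesionEmbedder()
triplet_loss = nn.TripletMarginLoss(margin=0.2)

# Hypothetical batch: anchor/positive share an attribute (e.g. lesion type),
# the negative differs; a sequential strategy would draw triplets at several
# attribute levels (type, then location, then size) to reflect the hierarchy.
anchor, positive, negative = (torch.randn(8, 1, 64, 64) for _ in range(3))
loss = triplet_loss(model(anchor), model(positive), model(negative))
loss.backward()
```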
Hierarchical classification of liver tumor from CT images based on difference-of-features (DOF)
This manuscript presents an automated approach for classifying lesions into four categories of liver disease, based on Computed Tomography (CT) images. The four disease types are Cyst, Hemangioma, Hepatocellular Carcinoma (HCC), and Metastasis.
The novelty of the proposed approach is attributed to utilising the difference of features (DOF) between the lesion area and the surrounding normal liver tissue. The DOF (texture and intensity) is used as the new feature vector that feeds the classifier. The classification system consists of two phases. The first phase differentiates between Benign and Malignant lesions, using a Support Vector Machine (SVM) classifier. The second phase further classifies the Benign lesions into Hemangioma or Cyst and the Malignant ones into Metastasis or HCC, using a Naïve Bayes (NB) classifier. The experimental results show promising improvements in classifying liver lesion types. Furthermore, the proposed approach can overcome the problems of intensity ranges and textures that vary across patients, demographics, and imaging devices and settings.
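A minimal sketch of the DOF idea and the two-phase classifier, using scikit-learn, might look as follows. The feature descriptors, data, and helper names here are placeholders, not the paper's exact pipeline.

```python
# Illustrative sketch of difference-of-features (DOF) with a two-phase
# SVM -> Naive Bayes classifier; features and data are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

def simple_features(region: np.ndarray) -> np.ndarray:
    """Toy intensity/texture descriptors for a grayscale region."""
    gy, gx = np.gradient(region.astype(float))
    return np.array([
        region.mean(),        # mean intensity
        region.std(),         # intensity spread
        np.abs(gy).mean(),    # crude texture proxies
        np.abs(gx).mean(),
    ])

def dof_vector(lesion: np.ndarray, surrounding: np.ndarray) -> np.ndarray:
    # The DOF vector is the feature difference between the lesion ROI
    # and the surrounding normal liver tissue.
    return simple_features(lesion) - simple_features(surrounding)

# Hypothetical training data: X holds DOF vectors, y_bm is benign(0)/
# malignant(1), y_sub is the finer subtype label within each branch.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))
y_bm = rng.integers(0, 2, size=40)
y_sub = rng.integers(0, 2, size=40)

phase1 = SVC().fit(X, y_bm)                                 # benign vs. malignant
phase2_benign = GaussianNB().fit(X[y_bm == 0], y_sub[y_bm == 0])
phase2_malig = GaussianNB().fit(X[y_bm == 1], y_sub[y_bm == 1])

x = dof_vector(rng.normal(size=(16, 16)), rng.normal(size=(16, 16)))[None, :]
branch = phase2_malig if phase1.predict(x)[0] == 1 else phase2_benign
print("subtype:", branch.predict(x)[0])
```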
Recommended from our members
Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
The manual brain tumor annotation process is time-consuming and resource-intensive; therefore, an automated and accurate brain tumor segmentation tool is in great demand. In this paper, we introduce a novel method to integrate location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across brain parcellation regions and that a locality-sensitive segmentation is likely to obtain better accuracy. To this end, we use an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map this atlas to the individual subject data. The mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method. The first level reduces the uncertainty of the same type of model with different seed initializations, and the second level leverages the advantages of different types of neural network models. The proposed location information fusion method improves the segmentation performance of state-of-the-art networks including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance compared to the state-of-the-art networks in BraTS 2017 and rivals state-of-the-art networks in BraTS 2018. Detailed results are provided on the public multimodal brain tumor segmentation (BraTS) benchmarks.
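One simple way to realize the location-information idea is to feed the atlas, mapped into subject space, to the network as an extra input channel alongside the MR modalities. The sketch below assumes this interpretation; the shapes, parcel count, and tiny 3D network are illustrative stand-ins.

```python
# Minimal sketch: parcellation atlas as an extra input channel to a
# patch-based 3D network; shapes and the stand-in net are illustrative.
import torch
import torch.nn as nn

n_modalities = 4          # e.g. T1, T1c, T2, FLAIR
n_parcels = 10            # hypothetical number of atlas regions

net = nn.Sequential(      # stand-in for 3D U-Net / DeepMedic
    nn.Conv3d(n_modalities + 1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 4, 1),  # 4 output classes per voxel
)

mri_patch = torch.randn(2, n_modalities, 32, 32, 32)
# Atlas labels mapped to subject space; normalized to [0, 1] so the
# region index behaves as a well-scaled input feature.
atlas_patch = torch.randint(0, n_parcels, (2, 1, 32, 32, 32)).float() / n_parcels

logits = net(torch.cat([mri_patch, atlas_patch], dim=1))
print(logits.shape)  # torch.Size([2, 4, 32, 32, 32])
```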
CancerUniT: Towards a Single Unified Model for Effective Detection, Segmentation, and Diagnosis of Eight Major Cancers Using a Large Collection of CT Scans
Human readers or radiologists routinely perform full-body multi-organ
multi-disease detection and diagnosis in clinical practice, while most medical
AI systems are built to focus on single organs with a narrow list of a few
diseases. This might severely limit AI's clinical adoption. Many AI models would need to be assembled, non-trivially, to match the diagnostic process of a human reading a CT scan. In this paper, we construct a Unified Tumor
Transformer (CancerUniT) model to jointly detect tumor existence & location and
diagnose tumor characteristics for eight major cancers in CT scans. CancerUniT
is a query-based Mask Transformer model with the output of multi-tumor
prediction. We decouple the object queries into organ queries, tumor detection
queries and tumor diagnosis queries, and further establish hierarchical
relationships among the three groups. This clinically-inspired architecture
effectively assists inter- and intra-organ representation learning of tumors
and facilitates the resolution of these complex, anatomically related
multi-organ cancer image reading tasks. CancerUniT is trained end-to-end on a curated large-scale set of CT images from 10,042 patients, covering eight major types of cancers and co-occurring non-cancer tumors (all pathology-confirmed, with 3D tumor masks annotated by radiologists). On the test set of 631 patients,
CancerUniT has demonstrated strong performance under a set of clinically
relevant evaluation metrics, substantially outperforming both multi-disease
methods and an assembly of eight single-organ expert models in tumor detection,
segmentation, and diagnosis. This moves one step closer to a universal, high-performance cancer screening tool.
Comment: ICCV 2023 Camera Ready Version.
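The decoupled query groups can be pictured as three sets of learned embeddings attending to image features in sequence. The sketch below is one possible reading of that hierarchy for illustration; the dimensions, the single decoder layer, and the additive conditioning are assumptions, not CancerUniT's actual architecture.

```python
# Conceptual sketch of decoupled query groups in a Mask-Transformer-style
# decoder; all dimensions and the conditioning scheme are assumptions.
import torch
import torch.nn as nn

d_model, n_organs = 256, 8
organ_q = nn.Embedding(n_organs, d_model)        # one query per organ
detect_q = nn.Embedding(n_organs, d_model)       # tumor existence/location
diagnose_q = nn.Embedding(n_organs, d_model)     # tumor characteristics

decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)

# Hypothetical image features from a CT encoder, flattened to tokens.
image_tokens = torch.randn(1, 512, d_model)

# Hierarchy: organ queries attend to the image first; tumor detection and
# diagnosis queries are conditioned on their parent group by addition here.
organs = decoder_layer(organ_q.weight[None], image_tokens)
detections = decoder_layer(detect_q.weight[None] + organs, image_tokens)
diagnoses = decoder_layer(diagnose_q.weight[None] + detections, image_tokens)
print(organs.shape, detections.shape, diagnoses.shape)
```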
Volumetric Attention for 3D Medical Image Segmentation and Detection
A volumetric attention (VA) module for 3D medical image segmentation and detection is proposed. Inspired by recent advances in video processing, VA enables 2.5D networks to leverage context information along the z direction and allows the use of pretrained 2D detection models when training data is limited, as is often the case for medical applications. Its integration into Mask R-CNN is shown to enable state-of-the-art performance on the Liver
Tumor Segmentation (LiTS) Challenge, outperforming the previous challenge
winner by 3.9 points and achieving top performance on the LiTS leader board at
the time of paper submission. Detection experiments on the DeepLesion dataset
also show that the addition of VA to existing object detectors enables a 69.1% sensitivity at 0.5 false positives per image, outperforming the best published results by 6.6 points.
Comment: Accepted by MICCAI 2019.
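To make the z-direction idea concrete, the sketch below aggregates a stack of per-slice 2D feature maps with learned softmax weights over neighboring slices. This is one interpretation for illustration only, not the paper's exact VA module; the class name and shapes are hypothetical.

```python
# Rough sketch of attention along the z axis for a 2.5D network: per-slice
# 2D features are fused with learned weights. An illustrative interpretation,
# not the paper's exact VA module.
import torch
import torch.nn as nn

class ZAttention(nn.Module):
    """Aggregates a stack of per-slice feature maps with softmax weights."""
    def __init__(self, channels: int):
        super().__init__()
        # Scores each slice from its globally pooled features.
        self.score = nn.Linear(channels, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, slices, channels, H, W)
        pooled = feats.mean(dim=(3, 4))              # (B, S, C)
        weights = self.score(pooled).softmax(dim=1)  # (B, S, 1)
        # Weighted sum over the slice axis injects z-context into a
        # single fused 2D representation.
        return (weights[..., None, None] * feats).sum(dim=1)

va = ZAttention(channels=64)
slice_feats = torch.randn(2, 9, 64, 32, 32)  # 9 neighboring CT slices
fused = va(slice_feats)
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```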