Automatic nodule identification and differentiation in ultrasound videos to facilitate per-nodule examination
Ultrasound is a vital diagnostic technique in health screening, with the
advantages of being non-invasive, cost-effective, and radiation-free, and is
therefore widely applied in the diagnosis of nodules. However, it relies
heavily on
the expertise and clinical experience of the sonographer. In ultrasound images,
a single nodule might present heterogeneous appearances in different
cross-sectional views which makes it hard to perform per-nodule examination.
Sonographers usually discriminate different nodules by examining the nodule
features and the surrounding structures like gland and duct, which is
cumbersome and time-consuming. To address this problem, we collected hundreds
of breast ultrasound videos and built a nodule re-identification system that
consists of two parts: a deep-learning-based extractor that produces feature
vectors from input video clips, and a real-time clustering
algorithm that automatically groups feature vectors by nodules. The system
obtains satisfactory results and exhibits the capability to differentiate
ultrasound videos. To the best of our knowledge, this is the first attempt to
apply re-identification techniques in the ultrasound field.
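The real-time clustering step described above can be sketched as an online assignment of clip feature vectors to nodule clusters. This is a minimal illustration, not the paper's algorithm: the cosine-similarity threshold and the running-mean centroid update are assumptions chosen for clarity.

```python
import numpy as np

def assign_to_nodule(feature, centroids, threshold=0.7):
    """Assign a clip's feature vector to an existing nodule cluster,
    or start a new cluster if no centroid is similar enough.
    `threshold` is an illustrative value, not from the paper."""
    feature = feature / np.linalg.norm(feature)
    if centroids:
        # cosine similarity to every existing nodule centroid
        sims = [float(feature @ (c / np.linalg.norm(c))) for c in centroids]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            # running-mean update of the matched centroid
            centroids[best] = (centroids[best] + feature) / 2.0
            return best
    # no sufficiently similar nodule seen yet: open a new cluster
    centroids.append(feature)
    return len(centroids) - 1
```

In this sketch each incoming clip either joins the most similar existing nodule or founds a new one, which is what allows grouping to happen in real time as the video is scanned.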
Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists in their daily work routinely find and annotate significant
abnormalities on a large number of radiology images. Such abnormalities, or
lesions, have been collected over the years and stored in hospitals' picture
archiving and communication systems. However, they are largely unsorted and lack
semantic annotations like type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationship, we leverage multiple supervision
information including types, self-supervised location coordinates and sizes.
They require little manual annotation effort but describe useful attributes of
the lesions. Then, a triplet network is utilized to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.
Comment: Accepted by CVPR 2018. DeepLesion URL added.
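The triplet network mentioned above optimizes the standard triplet margin loss: embeddings of the same lesion (anchor and positive) are pulled together, while a different lesion (negative) is pushed away by at least a margin. The sketch below shows only this generic loss; the paper's sequential sampling strategy and supervision signals are not reproduced.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors.
    Zero when the negative is already at least `margin`
    farther (in squared distance) than the positive."""
    d_pos = np.sum((anchor - positive) ** 2)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)
```

During training, the gradient of this loss moves same-lesion embeddings closer and different-lesion embeddings apart, which is what makes the learned space useful for retrieval and clustering.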
Transformer Lesion Tracker
Evaluating lesion progression and treatment response via longitudinal lesion
tracking plays a critical role in clinical practice. Automated approaches for
this task are motivated by prohibitive labor costs and time consumption when
lesion matching is done manually. Previous methods typically lack the
integration of local and global information. In this work, we propose a
transformer-based approach, termed Transformer Lesion Tracker (TLT).
Specifically, we design a Cross Attention-based Transformer (CAT) to capture
and combine both global and local information to enhance feature extraction. We
also develop a Registration-based Anatomical Attention Module (RAAM) to
introduce anatomical information to CAT so that it can focus on useful feature
knowledge. A Sparse Selection Strategy (SSS) is presented for selecting
features and reducing memory footprint in Transformer training. In addition, we
use a global regression to further improve model performance. We conduct
experiments on a public dataset to show the superiority of our method and find
that our model reduces the average Euclidean center error by at least 14.3%
(6 mm vs. 7 mm) compared with the state-of-the-art (SOTA). Code is available at
https://github.com/TangWen920812/TLT.
Comment: Accepted MICCAI 202
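The cross-attention at the heart of the CAT module can be sketched as scaled dot-product attention where queries come from one feature set (e.g. local features) and keys/values from another (e.g. global features). This is a single-head sketch without the learned projections, anatomical attention, or sparse selection of the actual TLT model.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query row attends
    over all key/value rows from the other feature set.
    Single-head, no learned projections (illustration only)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # similarity of queries to keys
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ values                          # attention-weighted values
```

Because the queries and keys come from different sources, this mixes information across the two feature sets, which is how local and global cues get combined.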
Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions
Medical Image Analysis is currently experiencing a paradigm shift due to Deep
Learning. This technology has recently attracted so much interest of the
Medical Imaging community that it led to a specialized conference, 'Medical
Imaging with Deep Learning', in the year 2018. This article surveys the recent
developments in this direction, and provides a critical review of the related
major aspects. We organize the reviewed literature according to the underlying
Pattern Recognition tasks, and further sub-categorize it following a taxonomy
based on human anatomy. This article does not assume prior knowledge of Deep
Learning and makes a significant contribution in explaining the core Deep
Learning concepts to the non-experts in the Medical community. Unique to this
study is the Computer Vision/Machine Learning perspective taken on the advances
of Deep Learning in Medical Imaging. This enables us to single out 'lack of
appropriately annotated large-scale datasets' as the core challenge (among
other challenges) in this research direction. We draw on the insights from the
sister research fields of Computer Vision, Pattern Recognition, and Machine
Learning, where the techniques for dealing with such challenges have already
matured, to provide promising directions for the Medical Imaging community to
fully harness Deep Learning in the future.
U-Net and its variants for medical image segmentation: theory and applications
U-net is an image segmentation technique developed primarily for medical
image analysis that can precisely segment images using a scarce amount of
training data. These traits provide U-net with a very high utility within the
medical imaging community and have resulted in extensive adoption of U-net as
the primary tool for segmentation tasks in medical imaging. The success of
U-net is evident in its widespread use in all major image modalities from CT
scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a
segmentation tool, there have been instances of the use of U-net in other
applications. As the potential of U-net is still increasing, in this review we
look at the various developments that have been made in the U-net architecture
and provide observations on recent trends. We examine the various innovations
that have been made in deep learning and discuss how these tools facilitate
U-net. Furthermore, we look at image modalities and application areas where
U-net has been applied.
Comment: 42 pages, in IEEE Access
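U-net's defining idea, the skip connection that concatenates upsampled decoder features with same-resolution encoder features, can be illustrated as pure data flow. This sketch omits all convolutions and learned weights; the pooling and nearest-neighbour upsampling stand in for a full encoder-decoder, so it shows only the shape bookkeeping, not a working network.

```python
import numpy as np

def downsample(x):
    """2x2 max pooling (one encoder step); assumes even dimensions."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    """2x nearest-neighbour upsampling (one decoder step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_skip(x):
    """Data flow of one U-net level: the decoder output is concatenated
    (here: stacked as channels) with the encoder features at the same
    resolution, so fine spatial detail bypasses the bottleneck."""
    enc = x                      # encoder features at full resolution
    coarse = downsample(enc)     # bottleneck (coarse context)
    dec = upsample(coarse)       # decoder path back to full resolution
    return np.stack([enc, dec])  # skip connection: channel-wise concat
```

The concatenated tensor is what a real U-net would feed into the next convolution block; preserving the high-resolution `enc` features is what lets the network segment precisely from little training data.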
Trends in Computer-Aided Diagnosis Using Deep Learning Techniques: A Review of Recent Studies on Algorithm Development
With the recent focus on deep neural network architectures for the development of algorithms for computer-aided diagnosis (CAD), we provide a review of studies from the last three years (2015-2017) reported in selected top journals and conferences. 29 studies that met our inclusion criteria were reviewed to identify trends in this field and to inform future development. Studies have focused mostly on cancer-related diseases within internal medicine, while diseases within gender-/age-focused fields like gynaecology/pediatrics have not received much attention. All reviewed studies employed image datasets, mostly sourced from publicly available databases (55.2%), with fewer based on data from human subjects (31%) and non-medical datasets (13.8%); CNN architectures were employed in most (70%) of the studies. Confirmation of the effect of data manipulation on the quality of output and the adoption of multi-class rather than binary classification also require more focus. Future studies should leverage collaborations with medical experts to aid actual clinical testing, with reporting based on a generally applicable index to enable comparison. Our next steps for CAD development for osteoarthritis (OA), with plans to consider multi-class classification and comparison across deep learning approaches and unsupervised architectures, are also highlighted.
U-net and its variants for medical image segmentation: A review of theory and applications
U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits provide U-net with a high utility within the medical imaging community and have resulted in extensive adoption of U-net as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use in nearly all major image modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of the use of U-net in other applications. Given that U-net's potential is still increasing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced deep learning and discuss how these tools facilitate U-net. In addition, we review the different image modalities and application areas that have been enhanced by U-net.