Pediatric Bone Age Assessment Using Deep Convolutional Neural Networks
Skeletal bone age assessment is a common clinical practice to diagnose
endocrine and metabolic disorders in child development. In this paper, we
describe a fully automated deep learning approach to the problem of bone age
assessment using data from Pediatric Bone Age Challenge organized by RSNA 2017.
The dataset for this competition consists of 12.6k radiological images of the
left hand, labeled with the bone age and sex of each patient. Our approach utilizes
several deep learning architectures: U-Net, ResNet-50, and custom VGG-style
neural networks trained end-to-end. We use images of whole hands as well as
specific parts of a hand for both training and inference. This approach allows
us to measure the importance of specific hand bones for automated bone age
analysis. We further evaluate the performance of the method in the context of
skeletal development stages. Our approach outperforms other common methods for
bone age assessment.
Comment: 14 pages, 9 figures
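The multi-region inference strategy described above (whole hand plus specific parts of the hand) can be sketched as pooling per-region regressor outputs; the region names, predicted values, and simple averaging below are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

# Hypothetical per-region bone-age predictions (in months) produced by
# separately trained models -- the regions and values are illustrative.
region_predictions = {
    "whole_hand": 132.0,
    "carpal_bones": 128.0,
    "metacarpals_phalanges": 136.0,
}

def ensemble_bone_age(preds):
    """Pool per-region regressor outputs into a single age estimate."""
    return float(np.mean(list(preds.values())))

print(ensemble_bone_age(region_predictions))  # -> 132.0
```

Comparing each region's standalone accuracy against the ensemble is one way to measure the importance of specific hand bones, as the abstract describes.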
Semi-Supervised Self-Taught Deep Learning for Finger Bones Segmentation
Segmentation stands at the forefront of many high-level vision tasks. In this
study, we focus on segmenting finger bones within a newly introduced
semi-supervised self-taught deep learning framework which consists of a student
network and a stand-alone teacher module. The whole system is boosted in a
life-long learning manner in which, at each step, the teacher module provides
refined targets for the student network to learn from on newly unlabeled data.
Experimental results demonstrate the superiority of the proposed method over
conventional supervised deep learning methods.
Comment: Accepted at IEEE BHI 2019
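The student-teacher loop described above can be sketched as iterative pseudo-labeling on unlabeled data. The toy logistic "student", thresholding "teacher", and single gradient step below are assumptions for illustration, not the paper's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def student_predict(weights, x):
    # Toy "student": per-sample logistic scores from one weight vector.
    return 1.0 / (1.0 + np.exp(-x @ weights))

def teacher_refine(probs, threshold=0.5):
    # Toy "teacher": refines the student's soft outputs into hard
    # pseudo-labels that the student will fit next.
    return (probs > threshold).astype(float)

# Self-taught loop: at each step the teacher turns student predictions
# on unlabeled data into refined targets for the student to learn from.
weights = np.zeros(3)
unlabeled = rng.normal(size=(100, 3))
for step in range(5):
    probs = student_predict(weights, unlabeled)
    pseudo = teacher_refine(probs)
    # One logistic-regression gradient step toward the pseudo-labels.
    grad = unlabeled.T @ (probs - pseudo) / len(unlabeled)
    weights -= 1.0 * grad
```

In the paper's setting the student is a segmentation network and the teacher is a stand-alone refinement module, but the alternation of predict, refine, and retrain is the same.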
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes an expanded discussion section and a reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st, 2017
Deep learning-based fully automatic segmentation of wrist cartilage in MR images
The study objective was to investigate the performance of a dedicated
convolutional neural network (CNN) optimized for wrist cartilage segmentation
from 2D MR images. The CNN utilized a planar architecture and a patch-based (PB)
training approach that ensured optimal performance in the presence of a limited
amount of training data. The CNN was trained and validated in twenty
multi-slice MRI datasets acquired with two different coils in eleven subjects
(healthy volunteers and patients). The validation included a comparison with
the alternative state-of-the-art CNN methods for the segmentation of joints
from MR images, as well as with ground-truth manual segmentation. When trained
on the limited training data, the CNN significantly outperformed image-based and
patch-based U-Net networks. Our PB-CNN also demonstrated good agreement with
manual segmentation (Sorensen-Dice similarity coefficient (DSC) = 0.81) in the
representative (central coronal) slices with a large amount of cartilage tissue.
Reduced performance of the network for slices with a very limited amount of
cartilage tissue suggests the need for fully 3D convolutional networks to
provide uniform performance across the joint. The study also assessed inter-
and intra-observer variability of the manual wrist cartilage segmentation
(DSC=0.78-0.88 and 0.9, respectively). The proposed deep-learning-based
segmentation of wrist cartilage from MRI could facilitate research on novel
imaging markers of wrist osteoarthritis, characterizing its progression and
response to therapy.
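The Sorensen-Dice similarity coefficient (DSC) used throughout the validation above can be computed from two binary masks as follows; this is the standard definition, not code from the study:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Example: two 2x2 masks that overlap in exactly one pixel.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 1]])
print(round(dice_coefficient(a, b), 3))  # -> 0.5
```

A DSC of 0.81 against manual segmentation, as reported above, thus sits within the study's measured inter-observer range of 0.78-0.88.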
The Bionic Radiologist: avoiding blurry pictures and providing greater insights
Radiology images and reports have long been digitalized. However, the potential of the more than 3.6 billion radiology
examinations performed annually worldwide has largely gone unused in the effort to digitally transform health care. The Bionic
Radiologist is a concept that combines humanity and digitalization for better health care integration of radiology. At a practical
level, this concept will achieve critical goals: (1) testing decisions being made scientifically on the basis of disease probabilities and
patient preferences; (2) image analysis done consistently at any time and at any site; and (3) treatment suggestions that are closely
linked to imaging results and are seamlessly integrated with other information. The Bionic
Radiologist will thus help avoid missed care opportunities, provide continuous learning in the
work process, and allow more time for radiologists’ primary
roles: interacting with patients and referring physicians. To achieve that potential, one has to cope with many implementation
barriers at both the individual and institutional levels. These include reluctance to delegate
decision making, a possible decrease in image interpretation knowledge, and the perception that
patient safety and trust are at stake. To facilitate implementation of the
Bionic Radiologist the following will be helpful: uncertainty quantifications for suggestions, shared decision making, changes in
organizational culture and leadership style, maintained expertise through continuous learning systems for training, and role
development of the involved experts. With the support of the Bionic Radiologist, disparities can be
reduced and care delivered in a humane and personalized fashion.