Fully Automated Bone Age Assessment On Large-Scale Hand X-Ray Dataset
Bone age assessment (BAA) is an essential task in clinical practice for evaluating the biological maturity of children. Because the manual method is time-consuming and prone to observer variability, computer-aided and automated methods for BAA are attractive. In this paper, we present a fully automatic BAA method. To eliminate noise in a raw X-ray image, we start by using U-Net to precisely segment the hand mask from the raw X-ray image. Although U-Net can perform the segmentation with high precision, it requires a large annotated dataset. To alleviate the annotation burden, we propose using deep active learning (AL) to deliberately select the unlabeled samples that carry the most information. These samples are given to an oracle for annotation and then used for subsequent training. Starting from only 300 manually annotated images, the improved U-Net within the AL framework can robustly segment all 12,611 images in the RSNA dataset; the AL segmentation model achieved a Dice score of 0.95 on the annotated test set. To optimize the learning process, we employ six off-the-shelf deep convolutional neural networks (CNNs) pretrained on ImageNet and use them to extract features from the preprocessed hand images via transfer learning. Finally, a variety of ensemble regression algorithms are applied to perform BAA. We also select a specific CNN for feature extraction and explain the reasons for that choice. Experimental results show that the proposed approach achieves a discrepancy between manual and predicted bone age of about 6.96 and 7.35 months for the male and female cohorts, respectively, on the RSNA dataset. These accuracies are comparable to state-of-the-art performance.
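The active-learning selection step described above, choosing the unlabeled X-rays whose predicted segmentations are least certain, can be sketched as follows. This is a minimal illustration using the mean entropy of per-pixel mask probabilities as the acquisition score; the array shapes and the `budget` parameter are assumptions for illustration, not details from the paper:

```python
import numpy as np

def pixel_entropy(prob):
    # Binary entropy (in nats) of per-pixel foreground probabilities.
    p = np.clip(prob, 1e-7, 1 - 1e-7)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def select_for_annotation(mask_probs, budget):
    """Rank unlabeled images by mean predicted-mask entropy and
    return the indices of the `budget` most uncertain ones."""
    scores = np.array([pixel_entropy(p).mean() for p in mask_probs])
    return np.argsort(scores)[::-1][:budget]

# Toy example: three 4x4 "probability maps" from a segmentation model.
confident = np.full((4, 4), 0.99)   # near-certain prediction
uncertain = np.full((4, 4), 0.5)    # maximally uncertain prediction
mixed = np.full((4, 4), 0.8)
picked = select_for_annotation([confident, uncertain, mixed], budget=1)
print(picked)  # the maximally uncertain map is selected first
```

The most uncertain images are sent to the oracle, annotated, and added to the training set before the next AL round.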
Pediatric Bone Age Assessment Using Deep Convolutional Neural Networks
Skeletal bone age assessment is a common clinical practice to diagnose
endocrine and metabolic disorders in child development. In this paper, we
describe a fully automated deep learning approach to the problem of bone age
assessment using data from Pediatric Bone Age Challenge organized by RSNA 2017.
The dataset for this competition consists of 12.6k radiological images of
left hands, each labeled with the patient's bone age and sex. Our approach utilizes
several deep learning architectures: U-Net, ResNet-50, and custom VGG-style
neural networks trained end-to-end. We use images of whole hands as well as
specific parts of a hand for both training and inference. This approach allows
us to measure importance of specific hand bones for the automated bone age
analysis. We further evaluate performance of the method in the context of
skeletal development stages. Our approach outperforms other common methods for
bone age assessment.
Comment: 14 pages, 9 figures
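The idea of gauging how much specific hand regions contribute to the age estimate can be illustrated with a generic occlusion-sensitivity analysis. Note that `toy_predict`, the patch size, and the grid layout below are stand-ins for illustration, not the paper's actual models or crops:

```python
import numpy as np

def region_importance(image, predict, patch=4):
    """Occlude each patch-sized region in turn and record how much
    the bone-age prediction changes (a generic sensitivity map)."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = abs(predict(occluded) - base)
    return heat

# Toy "model": bone age driven only by the top-left quadrant's intensity.
def toy_predict(img):
    return 10.0 * img[:4, :4].mean()

img = np.ones((8, 8))
heat = region_importance(img, toy_predict, patch=4)
print(heat)  # only occluding the top-left region changes the prediction
```

Regions whose occlusion shifts the prediction most are the ones the model relies on, which mirrors the paper's motivation for training on specific hand parts.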
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
A Survey on Artificial Intelligence Techniques for Biomedical Image Analysis in Skeleton-Based Forensic Human Identification
This paper presents the first survey on the application of AI techniques to the analysis
of biomedical images for forensic human identification. Human identification is of
great relevance in today's society and, in particular, in medico-legal contexts. As a consequence,
all technological advances introduced in this field can help meet the increasing need
for accurate and robust tools for establishing and verifying human identity. We first
describe the importance and applicability of forensic anthropology in many identification scenarios.
Later, we present the main trends related to the application of computer vision, machine learning
and soft computing techniques to the estimation of the biological profile, the identification through
comparative radiography and craniofacial superimposition, traumatism and pathology analysis,
as well as facial reconstruction. The potentialities and limitations of the employed approaches are
described, and we conclude with a discussion about methodological issues and future research.
Funding: Spanish Ministry of Science, Innovation and Universities / European Union (EU), grant PGC2018-101216-B-I00; Regional Government of Andalusia, grant EXAISFI P18-FR-4262; Instituto de Salud Carlos III / European Union (EU), grant DTS18/00136; European Commission H2020-MSCA-IF-2016, Skeleton-ID Marie Curie Individual Fellowship 746592; Spanish Ministry of Science, Innovation and Universities-CDTI, Neotec program 2019, grant EXP-00122609/SNEO-20191236; European Union (EU) / Xunta de Galicia, grant ED431G 2019/01; European Union (EU), grant RTI2018-095894-B-I0
Learning to detect chest radiographs containing lung nodules using visual attention networks
Machine learning approaches hold great potential for the automated detection
of lung nodules in chest radiographs, but training the algorithms requires very
large amounts of manually annotated images, which are difficult to obtain. Weak
labels indicating whether a radiograph is likely to contain pulmonary nodules
are typically easier to obtain at scale by parsing historical free-text
radiological reports associated with the radiographs. Using a repository of
over 700,000 chest radiographs, in this study we demonstrate that promising
nodule detection performance can be achieved using weak labels through
convolutional neural networks for radiograph classification. We propose two
network architectures for the classification of images likely to contain
pulmonary nodules using both weak labels and manually-delineated bounding
boxes, when these are available. Annotated nodules are used at training time to
deliver a visual attention mechanism informing the model about its localisation
performance. The first architecture extracts saliency maps from high-level
convolutional layers and compares the estimated position of a nodule against
the ground truth, when this is available. A corresponding localisation error is
then back-propagated along with the softmax classification error. The second
approach consists of a recurrent attention model that learns to observe a short
sequence of smaller image portions through reinforcement learning. When a
nodule annotation is available at training time, the reward function is
modified accordingly so that exploring portions of the radiographs away from a
nodule incurs a larger penalty. Our empirical results demonstrate the potential
advantages of these architectures in comparison to competing methodologies.
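The first architecture's training signal, a softmax classification error plus a penalty on the distance between the saliency-map peak and the annotated nodule, can be sketched as below. The loss weighting `lam` and the pixel-coordinate convention are assumptions for illustration, not the paper's actual hyperparameters:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def combined_loss(logits, label, saliency, nodule_xy=None, lam=0.5):
    """Cross-entropy on the class logits, plus (when a nodule annotation
    exists) the distance from the saliency-map peak to the nodule."""
    ce = -np.log(softmax(logits)[label])
    if nodule_xy is None:
        return ce  # weakly labeled image: classification loss only
    peak = np.unravel_index(np.argmax(saliency), saliency.shape)
    loc_err = np.linalg.norm(np.subtract(peak, nodule_xy))
    return ce + lam * loc_err

sal = np.zeros((8, 8)); sal[2, 3] = 1.0   # saliency peak at (2, 3)
loss_hit = combined_loss(np.array([0.0, 2.0]), 1, sal, nodule_xy=(2, 3))
loss_miss = combined_loss(np.array([0.0, 2.0]), 1, sal, nodule_xy=(6, 6))
print(loss_hit < loss_miss)  # True: a misplaced saliency peak is penalised
```

Back-propagating this combined term rewards the network for attending near the annotated nodule, while unannotated images still contribute through the classification term alone.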