A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.

Comment: Revised survey includes an expanded discussion section and a reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role for diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve the brain tumor segmentation
accuracy and to obtain uncertainty estimates of the segmentation results.

Comment: 12 pages, 3 figures, MICCAI BrainLes 201
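The averaging scheme this abstract describes can be sketched as follows. This is a minimal illustration of test-time augmentation for a 3D segmentation model, not the paper's implementation: the `model.predict` interface, the function name, and the restriction to flipping and additive noise (the paper also uses 3D rotation and scaling) are all simplifying assumptions.

```python
import numpy as np

def predict_with_tta(model, image, n_aug=8, rng=None):
    """Average a model's predictions over randomly augmented copies of the
    input, undoing each augmentation before averaging. Returns the mean
    prediction (more robust) and the per-voxel variance (an uncertainty
    estimate), in the spirit of the approach described above."""
    if rng is None:
        rng = np.random.default_rng(0)
    preds = []
    for _ in range(n_aug):
        # Randomly flip along each spatial axis (a subset of the paper's
        # augmentations; rotation and scaling would be handled similarly).
        flip_axes = tuple(ax for ax in range(image.ndim) if rng.random() < 0.5)
        aug = np.flip(image, axis=flip_axes) if flip_axes else image
        aug = aug + rng.normal(0.0, 0.01, size=aug.shape)  # random noise
        pred = model.predict(aug)
        # Undo the flip so all predictions align in the original image space.
        pred = np.flip(pred, axis=flip_axes) if flip_axes else pred
        preds.append(pred)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)
```

The variance map falls out of the same averaging loop for free, which is how test-time augmentation yields an uncertainty estimate alongside the segmentation.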
Processing of Electronic Health Records using Deep Learning: A review
The availability of large amounts of clinical data is opening up new research
avenues in a number of fields. An exciting field in this respect is healthcare,
where the secondary use of clinical data is beginning to revolutionize the
field. In addition to the availability of Big Data, both medical data from
healthcare institutions (such as EMR data) and data generated by health and
wellbeing devices (such as personal trackers), a significant contribution to
this trend is also being made by recent advances in machine learning,
specifically deep learning algorithms.
A layered abduction model of perception: Integrating bottom-up and top-down processing in a multi-sense agent
A layered-abduction model of perception is presented which unifies bottom-up and top-down processing in a single logical and information-processing framework. The process of interpreting the input from each sense is broken down into discrete layers of interpretation, where at each layer a best-explanation hypothesis is formed of the data presented by the layer or layers below, with the help of information available laterally and from above. The formation of this hypothesis is treated as a problem of abductive inference, similar to diagnosis and theory formation. Thus, this model brings a knowledge-based problem-solving approach to the analysis of perception, treating perception as a kind of compiled cognition. The bottom-up passing of information from layer to layer defines channels of information flow, which separate and converge in a specific way for any specific sense modality. Multi-modal perception occurs where channels converge from more than one sense. This model has not yet been implemented, though it is based on systems which have been successful in medical and mechanical diagnosis and medical test interpretation.
Mesh-to-raster based non-rigid registration of multi-modal images
Region of interest (ROI) alignment in medical images plays a crucial role in
diagnostics, procedure planning, treatment, and follow-up. Frequently, a model
is represented as triangulated mesh while the patient data is provided from CAT
scanners as pixel or voxel data. Previously, we presented a 2D method for
curve-to-pixel registration. This paper contributes (i) a general
mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a
3D surface-to-voxel application, and (iii) a comprehensive quantitative
evaluation in 2D using ground truth provided by the simultaneous truth and
performance level estimation (STAPLE) method. The registration is formulated as
a minimization problem where the objective consists of a data term, which
involves the signed distance function of the ROI from the reference image, and
a higher order elastic regularizer for the deformation. The evaluation is based
on quantitative light-induced fluoroscopy (QLF) and digital photography (DP) of
decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each
showing one corresponding tooth in both modalities. The ROI in each image is
manually marked by three experts (900 curves in total). In the QLF-DP setting,
our approach significantly outperforms the mutual information-based
registration algorithm implemented with the Insight Segmentation and
Registration Toolkit (ITK) and Elastix.
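The objective described here, a signed-distance data term plus an elastic regularizer over the deformation, might be sketched as follows. Everything in this snippet is a simplifying assumption rather than the paper's method: the function name, the 2D discrete setting, nearest-neighbor sampling of the signed distance image, and a first-order smoothness term in place of the paper's higher-order elastic regularizer.

```python
import numpy as np

def registration_energy(sdf, points, disp, alpha=0.1):
    """Toy mesh-to-raster energy: data term + alpha * regularizer.

    sdf    : 2D signed distance image of the ROI in the reference image
             (zero on the ROI boundary).
    points : (N, 2) mesh vertex coordinates (row, col).
    disp   : (N, 2) per-vertex displacements to be optimized.

    The data term penalizes the squared signed distance at each displaced
    vertex, so it vanishes when the deformed mesh lies on the ROI boundary.
    The regularizer penalizes squared differences between neighboring
    displacements, a first-order stand-in for an elastic term.
    """
    moved = points + disp
    # Nearest-neighbor lookup into the signed distance image, clipped to
    # the image domain (a real implementation would interpolate).
    idx = np.clip(np.round(moved).astype(int), 0, np.array(sdf.shape) - 1)
    data = np.sum(sdf[idx[:, 0], idx[:, 1]] ** 2)
    reg = np.sum(np.diff(disp, axis=0) ** 2)
    return data + alpha * reg
```

Minimizing this energy over `disp` pulls the mesh vertices onto the zero level set of the signed distance function while the regularizer keeps neighboring vertices from moving independently, which is the structure of the minimization problem the abstract describes.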
- …