Intraoperative margin assessment of human breast tissue in optical coherence tomography images using deep neural networks
Objective: In this work, we perform margin assessment of human breast tissue
from optical coherence tomography (OCT) images using deep neural networks
(DNNs). This work simulates an intraoperative setting for breast cancer
lumpectomy. Methods: To train the DNNs, we use both state-of-the-art
methods (weight decay and dropout) and a newly introduced regularization method
based on function norms. Commonly used methods can fail when only a small
database is available. The use of a function norm introduces a direct control
over the complexity of the function with the aim of diminishing the risk of
overfitting. Results: As neither the code nor the data of previous results are
publicly available, the obtained results are compared with reported results in
the literature for a conservative comparison. Moreover, our method is applied
to locally collected data on several data configurations. The reported results
are the average over the different trials. Conclusion: The experimental results
show that the use of DNNs yields significantly better results than other
techniques when evaluated in terms of sensitivity, specificity, F1 score,
G-mean and Matthews correlation coefficient. Function norm regularization
yielded higher and more robust results than competing methods. Significance: We
have demonstrated a system that shows high promise for (partially) automated
margin assessment of human breast tissue. The equal error rate (EER) is reduced
from approximately 12% (the lowest reported in the literature) to 5%, a
58% reduction. The method is computationally feasible for intraoperative
application (less than 2 seconds per image). Comment: 16 pages, 9 figures
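The function-norm penalty described above can be sketched as an extra term added to the data loss, estimated by Monte-Carlo sampling of the network's outputs. This is an illustrative numpy sketch under assumptions of my own, not the authors' code: the tiny network, its weights, and the standard-normal sampling distribution for estimating the norm are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """Tiny two-layer network: ReLU hidden layer, scalar output."""
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2

# Hypothetical weights for a 4-input, 8-hidden-unit network.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def empirical_function_norm(params, n_samples=256):
    """Monte-Carlo estimate of the squared L2 function norm
    ||f||^2 = E_x[f(x)^2], sampled over an assumed input distribution."""
    x = rng.normal(size=(n_samples, 4))
    y = mlp_forward(x, *params)
    return float(np.mean(y ** 2))

def regularized_loss(pred, target, params, lam=1e-3):
    """Data loss plus a function-norm penalty, giving direct control
    over the complexity of the learned function."""
    data_loss = float(np.mean((pred - target) ** 2))
    return data_loss + lam * empirical_function_norm(params)
```

The point of the penalty is that it constrains the function itself rather than the weights, which weight decay and dropout only do indirectly.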
An automated system for lung nodule detection in low-dose computed tomography
A computer-aided detection (CAD) system for the identification of pulmonary
nodules in low-dose multi-detector helical Computed Tomography (CT) images was
developed in the framework of the MAGIC-5 Italian project. One of the main
goals of this project is to build a distributed database of lung CT scans in
order to enable automated image analysis through a data and CPU GRID
infrastructure. The basic modules of our lung-CAD system, a dot-enhancement
filter for nodule candidate selection and a neural classifier for
false-positive finding reduction, are described. The system was designed and
tested for both internal and sub-pleural nodules. The results obtained on the
collected database of low-dose thin-slice CT scans are shown in terms of free
response receiver operating characteristic (FROC) curves and discussed. Comment: 9 pages, 9 figures; Proceedings of the SPIE Medical Imaging
Conference, 17-22 February 2007, San Diego, California, USA, Vol. 6514,
65143
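Dot-enhancement filters for nodule candidate selection are typically built on Hessian eigenvalue analysis: a bright, roughly spherical structure has three large negative eigenvalues. The following is a minimal numpy sketch of that general idea, not the MAGIC-5 implementation; the particular response function is an assumption.

```python
import numpy as np

def hessian_eigenvalues(vol):
    """Per-voxel Hessian eigenvalues (ascending) via finite differences."""
    grads = np.gradient(vol)
    H = np.empty(vol.shape + (3, 3))
    for i, gi in enumerate(grads):
        gi_grads = np.gradient(gi)
        for j in range(3):
            H[..., i, j] = gi_grads[j]
    return np.linalg.eigvalsh(H)

def dot_enhancement(vol):
    """Respond where all three eigenvalues are negative (bright blob);
    the geometric-mean magnitude is one of several plausible scores."""
    lam = hessian_eigenvalues(vol)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    bright_blob = l3 < 0.0   # largest eigenvalue negative => all negative
    return np.where(bright_blob, np.abs(l1 * l2 * l3) ** (1.0 / 3.0), 0.0)
```

In a CAD pipeline, local maxima of this response would become nodule candidates, which the neural classifier then filters to reduce false positives.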
Segmentation of the Proximal Femur from MR Images using Deep Convolutional Neural Networks
Magnetic resonance imaging (MRI) has been proposed as a complementary method
to measure bone quality and assess fracture risk. However, manual segmentation
of MR images of bone is time-consuming, limiting the use of MRI measurements in
the clinical practice. The purpose of this paper is to present an automatic
proximal femur segmentation method that is based on deep convolutional neural
networks (CNNs). This study had institutional review board approval and written
informed consent was obtained from all subjects. A dataset of volumetric
structural MR images of the proximal femur from 86 subjects was
manually segmented by an expert. We performed experiments by training two
different CNN architectures with varying numbers of initial feature maps and
layers, and tested their segmentation performance against the gold standard of
manual segmentations using four-fold cross-validation. Automatic segmentation
of the proximal femur achieved a high Dice similarity score of 0.94±0.05,
with precision = 0.95±0.02 and recall = 0.94±0.08, using a CNN
architecture based on 3D convolution, exceeding the performance of 2D CNNs. The
high segmentation accuracy provided by CNNs has the potential to help bring the
use of structural MRI measurements of bone quality into clinical practice for
management of osteoporosis. Comment: This is a pre-print of an article published in Scientific Reports.
The final authenticated version is available online at:
https://doi.org/10.1038/s41598-018-34817-
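The evaluation metrics quoted above are standard and easy to compute from binary masks; a minimal numpy sketch of the Dice score, precision, and recall used to benchmark segmentations against a manual gold standard:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def precision_recall(pred, gt):
    """Precision = TP / predicted positives, recall = TP / actual positives."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    return tp / pred.sum(), tp / gt.sum()
```

Dice rewards overlap symmetrically, which is why it is the usual headline metric for volumetric segmentation, with precision and recall separating over- from under-segmentation.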
Fast Enhanced CT Metal Artifact Reduction using Data Domain Deep Learning
Filtered back projection (FBP) is the most widely used method for image
reconstruction in X-ray computed tomography (CT) scanners. The presence of
hyper-dense materials in a scene, such as metals, can strongly attenuate
X-rays, producing severe streaking artifacts in the reconstruction. These metal
artifacts can greatly limit subsequent object delineation and information
extraction from the images, restricting their diagnostic value. This problem is
particularly acute in the security domain, where there is great heterogeneity
in the objects that can appear in a scene and highly accurate decisions must be
made quickly. The standard practical approaches to reducing metal artifacts in
CT imagery are either simplistic non-adaptive interpolation-based projection
data completion methods or direct image post-processing methods. These standard
approaches have had limited success. Motivated primarily by security
applications, we present a new deep-learning-based metal artifact reduction
(MAR) approach that tackles the problem in the projection data domain. We treat
the projection data corresponding to metal objects as missing data and train an
adversarial deep network to complete the missing data in the projection domain.
The completed projection data are then used with FBP to reconstruct an
image intended to be free of artifacts. This new approach results in an
end-to-end MAR algorithm that is computationally efficient, and therefore
practical, and fits well into existing CT workflows, allowing easy adoption in existing
scanners. Training deep networks can be challenging, and another contribution
of our work is to demonstrate that training data generated using an accurate
X-ray simulation can be used to successfully train the deep network when
combined with transfer learning using limited real data sets. We demonstrate
the effectiveness and potential of our algorithm on simulated and real
examples. Comment: Accepted for publication in IEEE Transactions on Computational
Imaging
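The "simplistic non-adaptive interpolation-based projection data completion" baseline that the paper improves on can be sketched concretely: treat detector readings inside the metal trace as missing and linearly interpolate across them, row by row. This numpy sketch shows only that baseline (the paper replaces this step with an adversarial deep network, which is not reproduced here).

```python
import numpy as np

def complete_metal_trace(sinogram, metal_mask):
    """Interpolation-based projection completion (the standard baseline).
    `sinogram` is (angles, detectors); `metal_mask` marks readings that
    pass through metal and are treated as missing data."""
    out = sinogram.astype(float).copy()
    idx = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):          # loop over projection angles
        miss = metal_mask[i].astype(bool)
        if miss.any() and not miss.all():
            # Linear interpolation across the metal trace in this row.
            out[i, miss] = np.interp(idx[miss], idx[~miss], out[i, ~miss])
    return out
```

The completed sinogram would then be passed to FBP; the deep-learning approach keeps this overall structure but learns a far better in-painting of the missing trace.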
Convolutional Sparse Coding for Compressed Sensing CT Reconstruction
Over the past few years, dictionary learning (DL)-based methods have been
successfully used in various image reconstruction problems. However,
traditional DL-based computed tomography (CT) reconstruction methods are
patch-based and ignore the consistency of pixels in overlapped patches. In
addition, the features learned by these methods always contain shifted versions
of the same features. In recent years, convolutional sparse coding (CSC) has
been developed to address these problems. In this paper, inspired by several
successful applications of CSC in the field of signal processing, we explore
the potential of CSC in sparse-view CT reconstruction. By working directly on
the whole image, without the need to divide it into overlapping patches as
DL-based methods do, the proposed methods preserve more detail and
avoid artifacts caused by patch aggregation. With predetermined filters, an
alternating scheme is developed to optimize the objective function. Extensive
experiments with simulated and real CT data were performed to validate the
effectiveness of the proposed methods. Qualitative and quantitative results
demonstrate that the proposed methods achieve better performance than several
existing state-of-the-art methods. Comment: Accepted by IEEE TM
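The CSC model with predetermined filters can be sketched in one dimension: approximate a signal as a sum of convolutions of fixed filters with sparse code maps, and optimize the codes by iterative shrinkage-thresholding (ISTA). This is a generic CSC sketch, not the paper's alternating scheme; the filter, step size, and sparsity weight are assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def csc_ista(x, filters, lam=0.05, step=0.1, n_iter=200):
    """ISTA for 1-D convolutional sparse coding with fixed filters:
    minimize 0.5 * ||x - sum_k d_k * z_k||^2 + lam * sum_k ||z_k||_1."""
    codes = [np.zeros_like(x) for _ in filters]
    for _ in range(n_iter):
        recon = sum(np.convolve(z, d, mode="same")
                    for z, d in zip(codes, filters))
        resid = recon - x
        for k, d in enumerate(filters):
            # Correlation with the flipped filter approximates the adjoint.
            grad = np.convolve(resid, d[::-1], mode="same")
            codes[k] = soft_threshold(codes[k] - step * grad, step * lam)
    return codes
```

Because the codes live on the whole signal rather than on patches, there is no patch-aggregation step and hence none of the blocking artifacts the abstract mentions.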
Evaluation of Transfer Learning for Classification of: (1) Diabetic Retinopathy by Digital Fundus Photography and (2) Diabetic Macular Edema, Choroidal Neovascularization and Drusen by Optical Coherence Tomography
Deep learning has been successfully applied to a variety of image
classification tasks. There has been keen interest to apply deep learning in
the medical domain, particularly specialties that heavily utilize imaging, such
as ophthalmology. One issue that may hinder application of deep learning to the
medical domain is the vast amount of data necessary to train deep neural
networks (DNNs). Because of regulatory and privacy issues associated with
medicine, and the generally proprietary nature of data in medical domains,
obtaining large datasets to train DNNs is a challenge, particularly in the
ophthalmology domain.
Transfer learning is a technique developed to address the issue of applying
DNNs for domains with limited data. Prior reports on transfer learning have
either fully trained custom networks or used a particular DNN for transfer
learning. However, to the best of my knowledge, no work has systematically
examined a suite of DNNs for transfer learning for classification of diabetic
retinopathy, diabetic macular edema, and two key features of age-related
macular degeneration. This work attempts to investigate transfer learning for
classification of these ophthalmic conditions. Part I gives a condensed
overview of neural networks and the DNNs under evaluation. Part II gives the
reader the necessary background concerning diabetic retinopathy and prior work
on classification using retinal fundus photographs. The methodology and results
of transfer learning for diabetic retinopathy classification are presented,
showing that transfer learning towards this domain is feasible, with promising
accuracy. Part III gives an overview of diabetic macular edema, choroidal
neovascularization and drusen (features associated with age-related macular
degeneration), and presents results for transfer learning evaluation using
optical coherence tomography to classify these entities.
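The core transfer-learning recipe evaluated here is to reuse a pretrained network as a frozen feature extractor and train only a new classification head on the limited medical data. The numpy sketch below stands in a fixed random projection for the pretrained backbone, purely for illustration; every name and dimension in it is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen pretrained backbone (e.g. an
# ImageNet CNN with its classification layer removed): a fixed,
# never-updated projection followed by a ReLU.
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)

def extract_features(x):
    """Frozen feature extractor: these weights are never updated."""
    return np.maximum(0.0, x @ W_frozen)

def train_head(x, y, lr=0.1, n_iter=500):
    """Train only a new logistic-regression head on frozen features,
    the part of the model adapted to the small medical dataset."""
    feats = extract_features(x)
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid
        grad = p - y
        w -= lr * feats.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b
```

Freezing the backbone means only a few thousand parameters are fit, which is why the approach tolerates the small labeled datasets typical of ophthalmology.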
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient in solving complicated medical tasks or for creating insights using
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus, revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply in medicine in general, while proposing certain
directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy
Over half a million individuals are diagnosed with head and neck cancer each
year worldwide. Radiotherapy is an important curative treatment for this
disease, but it requires manual, time-consuming delineation of radio-sensitive
organs at risk (OARs). This planning process can delay treatment, while also
introducing inter-operator variability with resulting downstream radiation dose
differences. While auto-segmentation algorithms offer a potentially time-saving
solution, the challenges in defining, quantifying and achieving expert
performance remain. Adopting a deep learning approach, we demonstrate a 3D
U-Net architecture that achieves expert-level performance in delineating 21
distinct head and neck OARs commonly segmented in clinical practice. The model
was trained on a dataset of 663 deidentified computed tomography (CT) scans
acquired in routine clinical practice and with both segmentations taken from
clinical practice and segmentations created by experienced radiographers as
part of this research, all in accordance with consensus OAR definitions. We
demonstrate the model's clinical applicability by assessing its performance on
a test set of 21 CT scans from clinical practice, each with the 21 OARs
segmented by two independent experts. We also introduce surface Dice similarity
coefficient (surface DSC), a new metric for the comparison of organ
delineation, to quantify deviation between OAR surface contours rather than
volumes, better reflecting the clinical task of correcting errors in the
automated organ segmentations. The model's generalisability is then
demonstrated on two distinct open source datasets, reflecting different centres
and countries to model training. With appropriate validation studies and
regulatory approvals, this system could improve the efficiency, consistency,
and safety of radiotherapy pathways.
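The surface DSC introduced above can be sketched for point-sampled contours: count the fraction of each surface lying within a tolerance of the other. This is a simplified, equal-weight version; the published metric operates on meshes with per-element surface areas and organ-specific tolerances.

```python
import numpy as np

def surface_dsc(surf_a, surf_b, tol):
    """Surface Dice similarity coefficient (point-based sketch).
    `surf_a`, `surf_b` are (N, 3) arrays of surface points; `tol` is
    the acceptable deviation in the same units (e.g. mm)."""
    # Pairwise distances between the two point sets.
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=-1)
    a_close = (d.min(axis=1) <= tol).sum()    # points of A near B
    b_close = (d.min(axis=0) <= tol).sum()    # points of B near A
    return (a_close + b_close) / (len(surf_a) + len(surf_b))
```

Unlike the volumetric Dice score, this metric directly reflects how much of a contour a clinician would have to redraw, which is the clinical correction task the paper targets.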
Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis
Objectives
The aim of the present systematic review and meta-analysis is to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images.
Methods
PubMed/Medline, IEEE Xplore, Scopus and ArXiv electronic databases were searched. Selection criteria were: ex vivo and in vivo volumetric image data suitable for 3D landmarking (Problem), a minimum of five automated landmarks placed by a deep learning method (Intervention), manual landmarking (Comparison), and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported as outcome the mean and standard deviation of the difference (error) between manual and automated landmarking. Linear regression plots were used to analyze correlations between mean accuracy and year of publication.
Results
The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included in the qualitative synthesis, and 11 studies were used for the meta-analysis. The overall random-effects model revealed a mean error of 2.44 mm, with high heterogeneity (I² = 98.13%, τ² = 1.018, p < 0.001); the risk of bias was high due to issues in several domains per study. Meta-regression indicated a significant relation between mean error and year of publication (p = 0.012).
Conclusion
Deep learning algorithms showed excellent accuracy for automated 3D cephalometric landmarking. In the last two years promising algorithms have been developed, and improvements in landmark annotation accuracy have been achieved.
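The "overall random effect model" with τ² and I² reported above is conventionally computed with the DerSimonian-Laird estimator. The sketch below shows that standard calculation on hypothetical per-study means, standard deviations, and sample sizes; it is not the authors' software.

```python
import numpy as np

def dersimonian_laird(means, sds, ns):
    """DerSimonian-Laird random-effects pooling.
    Returns (pooled mean, tau^2, I^2 in percent)."""
    var = sds ** 2 / ns                    # within-study variances
    w = 1.0 / var                          # fixed-effect weights
    fixed = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - fixed) ** 2)   # Cochran's Q statistic
    df = len(means) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_star = 1.0 / (var + tau2)            # random-effects weights
    pooled = np.sum(w_star * means) / np.sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, tau2, i2
```

An I² near 98%, as reported, means nearly all observed variation in landmarking error reflects genuine between-study differences rather than sampling noise, which justifies the random-effects rather than fixed-effect model.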