NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis, and adapting them for this application requires
substantial implementation effort. As a result, effort has been duplicated and
incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications, including segmentation, regression, image generation and
representation learning. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.

Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge
Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6
figures; update includes additional applications, an updated author list and
formatting for journal submission.
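Overlap-based losses of the kind segmentation pipelines such as NiftyNet provide can be sketched in a few lines. The code below is an illustrative stand-alone version of a soft Dice loss; the function name and signature are our own, not NiftyNet's actual API:

```python
# Illustrative sketch (not NiftyNet's API): a soft Dice loss for one
# foreground class. `pred` holds per-voxel foreground probabilities and
# `target` holds binary ground-truth labels, both flat and equal-length.

def soft_dice_loss(pred, target, eps=1e-6):
    """Return 1 - Dice coefficient; 0 means perfect overlap."""
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

# A perfect prediction gives a loss of 0; a disjoint one a loss near 1.
print(soft_dice_loss([1.0, 0.0, 1.0], [1, 0, 1]))  # 0.0
print(soft_dice_loss([1.0, 1.0, 0.0], [0, 0, 1]))  # ≈ 1.0
```

The `eps` term keeps the ratio defined when both prediction and target are empty, a common edge case in sparse 3D medical masks.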
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound (US), and major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a basis for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Big Data Framework Using Spark Architecture for Dose Optimization Based on Deep Learning in Medical Imaging
Deep learning and machine learning provide consistent and powerful tools for recognition, classification, reconstruction, noise reduction, quantification and segmentation in biomedical image analysis, and have produced several breakthroughs. Recently, applications of deep learning and machine learning to low-dose optimization in computed tomography have been developed. Owing to advances in reconstruction and processing technology, it has become crucial to develop architectures and methods based on deep learning algorithms that minimize radiation during computed tomography scan inspections. This chapter extends the work of Alla et al. (2020). It introduces deep learning for computed tomography scan low-dose optimization, presents examples described in the literature, briefly discusses new methods for computed tomography scan image processing, and provides conclusions. We propose a pipeline for low-dose computed tomography scan image reconstruction based on the literature; it relies on deep learning and big data technology using the Spark framework. We compare it with pipelines proposed in the literature to assess its efficiency and importance. A big data architecture using computed tomography images for low-dose optimization is proposed; it relies on deep learning and enables effective and appropriate methods for dose optimization with computed tomography scan images. A practical realization of the image denoising pipeline shows that the recommended pipeline can reduce the radiation dose while improving the quality of the captured images.
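As an illustration of the map-style parallelism such a Spark pipeline relies on, the sketch below applies a per-slice denoising step over a collection of images. Python's built-in `map` stands in for a Spark RDD's `.map()` (Spark itself is not assumed here), and the box filter is a deliberately trivial placeholder for a learned denoiser:

```python
# Hypothetical sketch: a per-image denoising step that could be mapped
# over a distributed collection (e.g. rdd.map(mean_filter) in Spark);
# here plain Python map stands in for the cluster.

def mean_filter(image):
    """3x3 box filter as a stand-in low-dose denoiser (edges clamped)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [image[ci][cj]
                    for ci in range(max(0, i - 1), min(h, i + 2))
                    for cj in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

noisy_scans = [[[10.0, 0.0], [0.0, 10.0]]]      # one tiny 2x2 "slice"
denoised = list(map(mean_filter, noisy_scans))  # Spark: rdd.map(mean_filter)
print(denoised[0])  # every output pixel is the 2x2 mean, 5.0
```

Because the filter touches each slice independently, the step parallelizes trivially across a cluster, which is the property the big data architecture exploits.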
Pulmonary nodule segmentation in computed tomography with deep learning
Early detection of lung cancer is essential for treating the disease. Lung nodule segmentation systems can be used together with Computer-Aided Detection (CAD) systems, and
help doctors diagnose and manage lung cancer. In this work, we create a lung nodule
segmentation system based on deep learning. Deep learning is a sub-field of machine
learning responsible for state-of-the-art results in several segmentation datasets such as
the PASCAL VOC 2012. Our model is a modified 3D U-Net, trained on the LIDC-IDRI
dataset, using the intersection over union (IoU) loss function. We show that our model works
for multiple types of lung nodules. Our model achieves state-of-the-art performance on
the LIDC test set, using nodules annotated by at least 3 radiologists and with a consensus
truth of 50%.
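The intersection-over-union loss used to train the model can be sketched as follows; this is an illustrative version with invented names, not the authors' implementation:

```python
# Minimal sketch (not the authors' code): an IoU loss for a flat binary
# mask. `pred` are soft foreground probabilities, `target` is 0/1.

def iou_loss(pred, target, eps=1e-6):
    """Return 1 - IoU; 0 means perfect overlap."""
    intersection = sum(p * t for p, t in zip(pred, target))
    union = sum(pred) + sum(target) - intersection
    return 1.0 - (intersection + eps) / (union + eps)

print(iou_loss([1.0, 1.0, 0.0], [1, 1, 0]))  # perfect overlap -> 0.0
print(iou_loss([1.0, 0.0, 0.0], [1, 1, 0]))  # half overlap -> ~0.5
```

Like the Dice loss, the IoU loss optimizes region overlap directly, which is robust to the extreme foreground/background imbalance typical of small nodules.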
Multi-stage Deep Learning Artifact Reduction for Computed Tomography
In Computed Tomography (CT), an image of the interior structure of an object
is computed from a set of acquired projection images. The quality of these
reconstructed images is essential for accurate analysis, but this quality can
be degraded by a variety of imaging artifacts. To improve reconstruction
quality, the acquired projection images are often processed by a pipeline
consisting of multiple artifact-removal steps applied in various image domains
(e.g., outlier removal on projection images and denoising of reconstruction
images). These artifact-removal methods exploit the fact that certain artifacts
are easier to remove in a certain domain compared with other domains.
Recently, deep learning methods have shown promising results for artifact
removal for CT images. However, most existing deep learning methods for CT are
applied as post-processing after reconstruction. Therefore, artifacts
that are relatively difficult to remove in the reconstruction domain may not be
effectively removed by these methods. As an alternative, we propose a
multi-stage deep learning method for artifact removal, in which neural networks
are applied to several domains, similar to a classical CT processing pipeline.
We show that the neural networks can be effectively trained in succession,
resulting in easy-to-use and computationally efficient training. Experiments on
both simulated and real-world experimental datasets show that our method is
effective in reducing artifacts and superior to deep learning-based
post-processing.
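The multi-domain, succession-trained idea can be sketched schematically; the stages below are trivial placeholders with invented names and values, not the paper's networks:

```python
# Hypothetical sketch: denoisers applied in several domains of a CT
# pipeline, each stage consuming the (fixed) output of the previous one,
# as when networks are trained in succession.

def make_stage(offset):
    """Stand-in for a trained network operating in one domain."""
    return lambda x: [v + offset for v in x]

projection_stage = make_stage(-1.0)     # e.g. outlier removal on projections
reconstruction_stage = make_stage(0.5)  # e.g. image-domain denoising

def reconstruct(projections):
    """Placeholder for the tomographic reconstruction operator."""
    return [v * 2.0 for v in projections]

def pipeline(projections):
    # Each artifact-removal step runs in its own domain, mirroring a
    # classical CT processing pipeline.
    cleaned = projection_stage(projections)
    image = reconstruct(cleaned)
    return reconstruction_stage(image)

print(pipeline([2.0, 3.0]))  # [2.5, 4.5]
```

Training the stages one after another, each on the frozen output of its predecessor, is what makes the scheme computationally cheap compared with end-to-end training through the reconstruction operator.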