Self-Attention Convolutional Neural Network for Improved MR Image Reconstruction.
MRI is an advanced imaging modality whose main disadvantage is long data acquisition time. To accelerate MR image acquisition while maintaining high image quality, extensive investigations have been conducted into image reconstruction from sparsely sampled MRI data. Recently, deep convolutional neural networks have achieved promising results, yet their local receptive fields raise concerns regarding signal synthesis and artifact compensation. In this study, we propose a deep learning-based reconstruction framework that provides improved image fidelity for accelerated MRI. We integrated a self-attention mechanism, which captures long-range dependencies across image regions, into a volumetric hierarchical deep residual convolutional neural network. Specifically, a self-attention module was attached to every convolutional layer, where the signal at each position is computed as a weighted sum of the features at all positions. Furthermore, relatively dense shortcut connections were employed, and data consistency was enforced. The proposed network, referred to as SAT-Net, was applied to cartilage MRI acquired with an ultrashort TE sequence and retrospectively undersampled in a pseudo-random Cartesian pattern. The network was trained on 336 three-dimensional images (each containing 32 slices) and tested on 24 images, yielding improved outcomes. The framework is generic and can be extended to various applications.
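The "weighted sum of the features at all positions" described in this abstract can be sketched as a minimal non-local self-attention step. The NumPy sketch below assumes standard query/key/value projections over flattened spatial positions; it is an illustration of the general mechanism, not the authors' SAT-Net code.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Non-local self-attention over flattened spatial positions.

    x          : (n, c) feature map with n positions and c channels
    wq, wk, wv : (c, d) projection matrices for query/key/value

    The response at each position is a weighted sum of the value
    features at ALL positions, with weights given by a softmax over
    query-key similarities -- the long-range dependency the abstract
    contrasts with a convolution's local receptive field.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])            # (n, n) similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ v                                # weighted sum of values

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))                      # 16 positions, 8 channels
w = [rng.standard_normal((8, 8)) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)                                      # (16, 8)
```

In the paper's setting the positions would be voxels of a 3D feature volume; flattening them to rows, as here, is the usual way such a module is applied after a convolutional layer.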
Pattern classification approaches for breast cancer identification via MRI: state-of-the-art and vision for the future
Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multidimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours observed at various stages of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised and self-supervised deep learning strategies, as well as generative adversarial networks and algorithms using generated confrontational learning approaches. To address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks of a generated confrontation learning methodology that can be used for tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions about the rate of proliferation of the disease become possible. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.
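The information-fusion idea in this review can be illustrated in its simplest form: concatenate per-parameter radiomic feature vectors into one descriptor before classification. The sketch below uses synthetic data and a toy nearest-centroid classifier as a stand-in for the multi-parametric CAD models discussed; none of the names or numbers come from the review itself.

```python
import numpy as np

def fuse_features(*feature_sets):
    """Early fusion: concatenate per-parameter radiomic feature
    vectors (one row per lesion) into a single descriptor."""
    return np.concatenate(feature_sets, axis=1)

def nearest_centroid_fit(x, y):
    """Toy classifier standing in for the multi-parametric CAD
    models discussed above (illustrative, not from the review)."""
    return {c: x[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, x):
    labels = np.array(list(centroids))
    d = np.stack([np.linalg.norm(x - centroids[c], axis=1) for c in labels])
    return labels[d.argmin(axis=0)]

# Two synthetic "parameters" per lesion, e.g. kinetic and texture features.
rng = np.random.default_rng(1)
kinetic = rng.standard_normal((20, 4)) + np.repeat([[0.0], [2.0]], 10, axis=0)
texture = rng.standard_normal((20, 6))
y = np.repeat([0, 1], 10)

fused = fuse_features(kinetic, texture)           # (20, 10) fused descriptors
model = nearest_centroid_fit(fused, y)
pred = nearest_centroid_predict(model, fused)
print((pred == y).mean())                          # training-set accuracy
```

The tensorial and Clifford-algebra fusion schemes in the review generalise this concatenation step by preserving the multidimensional structure of the data rather than flattening it.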
Knowledge-driven deep learning for fast MR imaging: undersampled MR image reconstruction from supervised to unsupervised learning
Deep learning (DL) has emerged as a leading approach for accelerating MR imaging. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MR imaging involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. Our review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MR imaging, along with several notable solutions, including the learning of neural networks and the handling of different imaging application scenarios. The traits and trends of these techniques are also described, showing a shift from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, MR vendors' choices of DL reconstruction are surveyed, along with discussions of open questions and future directions that are critical for reliable imaging systems. (46 pages, 5 figures, 1 table.)
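A core piece of the physics-based domain knowledge this review refers to is the data-consistency constraint: wherever k-space was actually measured, the reconstruction must agree with the measurement. A minimal single-coil Cartesian sketch, assuming hard (replacement-style) consistency:

```python
import numpy as np

def data_consistency(x_rec, y_meas, mask):
    """Hard data consistency for Cartesian undersampled MRI:
    keep the network's k-space estimate where no data were acquired,
    and substitute the measured samples where they exist.

    x_rec  : (h, w) complex image estimate from the network
    y_meas : (h, w) measured k-space (zeros where not sampled)
    mask   : (h, w) boolean sampling mask
    """
    k_rec = np.fft.fft2(x_rec)
    k_dc = np.where(mask, y_meas, k_rec)   # trust measurements where sampled
    return np.fft.ifft2(k_dc)

# Sanity check: with a fully sampled "scan", even a useless network
# output (all zeros) is corrected back to the true image.
img = np.outer(np.hanning(8), np.hanning(8)).astype(complex)
full_mask = np.ones_like(img, dtype=bool)
out = data_consistency(np.zeros_like(img), np.fft.fft2(img), full_mask)
print(np.allclose(out, img))               # True
```

Unrolled networks typically interleave such a step (or a soft, weighted variant) between learned denoising blocks, which is how the imaging physics is embedded into otherwise data-driven models.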
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), and the major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a basis for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
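The segmentation papers surveyed here are almost universally evaluated with the Dice overlap coefficient, so a minimal reference implementation is useful context (the formula is standard; the toy shapes below are illustrative):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient between binary masks: 2|A∩B| / (|A|+|B|).
    1.0 means perfect overlap, 0.0 means no overlap; eps guards
    against division by zero when both masks are empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Two 16-pixel squares overlapping in half their area -> Dice 0.5.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[2:6, 4:8] = 1
print(round(dice(a, b), 3))   # 0.5
```

A differentiable "soft Dice" variant of the same expression, applied to network probabilities instead of binary masks, is also the most common training loss in this literature.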
Dual-Domain Multi-Contrast MRI Reconstruction with Synthesis-based Fusion Network
Purpose: To develop an efficient dual-domain reconstruction framework for
multi-contrast MRI, with the focus on minimising cross-contrast misalignment in
both the image and the frequency domains to enhance optimisation. Theory and
Methods: Our proposed framework, based on deep learning, facilitates the
optimisation for under-sampled target contrast using fully-sampled reference
contrast that is quicker to acquire. The method consists of three key steps: 1)
Learning to synthesise data resembling the target contrast from the reference
contrast; 2) Registering the multi-contrast data to reduce inter-scan motion;
and 3) Utilising the registered data for reconstructing the target contrast.
These steps involve learning in both domains with regularisation applied to
ensure their consistency. We also compare the reconstruction performance with
existing deep learning-based methods using a dataset of brain MRI scans.
Results: Extensive experiments demonstrate the superiority of our proposed
framework, for up to an 8-fold acceleration rate, compared to state-of-the-art
algorithms. Comprehensive analysis and ablation studies further present the
effectiveness of the proposed components. Conclusion:Our dual-domain framework
offers a promising approach to multi-contrast MRI reconstruction. It can also
be integrated with existing methods to further enhance the reconstruction
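The dual-domain idea in this abstract, penalising disagreement in both the image and the frequency domain, can be sketched as a combined loss. The form and weighting below are a generic illustration under my own assumptions; the paper's exact losses and regularisers may differ.

```python
import numpy as np

def dual_domain_loss(x_pred, x_ref, lam=0.5):
    """Image-domain L2 term plus a frequency-domain L2 term on the
    2-D FFT of the same images -- a generic cross-domain consistency
    objective (lam balances the two domains; illustrative values).
    """
    img_term = np.mean(np.abs(x_pred - x_ref) ** 2)
    k_pred, k_ref = np.fft.fft2(x_pred), np.fft.fft2(x_ref)
    # Normalise the k-space term by the number of samples so the two
    # domains sit on comparable scales (Parseval's relation for the
    # unnormalised FFT introduces a factor of the array size).
    freq_term = np.mean(np.abs(k_pred - k_ref) ** 2) / x_pred.size
    return img_term + lam * freq_term

rng = np.random.default_rng(2)
ref = rng.standard_normal((16, 16))
print(dual_domain_loss(ref, ref))   # 0.0 when prediction matches reference
```

In a full pipeline such a term would be evaluated between the reconstructed target contrast and either ground truth (training) or the registered, synthesised reference (the consistency regularisation described in the Methods).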
From Fully-Supervised Single-Task to Semi-Supervised Multi-Task Deep Learning Architectures for Segmentation in Medical Imaging Applications
Medical imaging is routinely performed in clinics worldwide for the diagnosis and treatment of numerous medical conditions in children and adults. With these imaging modalities, radiologists can visualize both the structure of the body and the tissues within it. However, analyzing these high-dimensional (2D/3D/4D) images demands a significant amount of time and effort from radiologists. Hence, there is an ever-growing need for medical image computing tools that extract relevant information from the image data to help radiologists work efficiently. Image analysis based on machine learning has pivotal potential to improve the entire medical imaging pipeline, providing support for clinical decision-making and computer-aided diagnosis. Deep learning approaches have shown significant performance improvements on challenging image analysis tasks such as classification, detection, registration, and segmentation in medical imaging applications. While deep learning has shown its potential in a variety of medical image analysis problems, including segmentation and motion estimation, generalizability remains an unsolved problem, and many of these successes are achieved at the cost of large pools of training data. For most practical applications, access to a copious dataset can be very difficult, often impossible. Annotation is tedious and time-consuming, and this cost is further amplified when annotation must be done by a clinical expert, as in medical imaging applications. Additionally, the application of deep learning in real-world clinical settings is still limited by a lack of reliability stemming from the limited prediction capabilities of some deep learning models. Moreover, when using a CNN in an automated image analysis pipeline, it's critical to understand which segmentation results are problematic and require further manual examination.
In this context, the estimation of uncertainty calibration in a semi-supervised setting for medical image segmentation is still rarely reported. This thesis focuses on developing and evaluating optimized machine learning models for a variety of medical imaging applications, ranging from fully-supervised, single-task learning to semi-supervised, multi-task learning that makes efficient use of annotated training data. The contributions of this dissertation are as follows: (1) developing fully-supervised, single-task transfer learning for surgical instrument segmentation from laparoscopic images; (2) utilizing supervised, single-task transfer learning for segmenting and digitally removing surgical instruments from endoscopic/laparoscopic videos to allow visualization of the anatomy obscured by the tool; the tool removal algorithms use a tool segmentation mask and either instrument-free reference frames or previous instrument-containing frames to fill in (inpaint) the instrument segmentation mask; (3) developing fully-supervised, single-task learning via efficient weight pruning and learned group convolution for accurate left ventricle (LV) and right ventricle (RV) blood pool and myocardium localization and segmentation from 4D cine cardiac MR images; (4) demonstrating the use of our fully-supervised, memory-efficient model to generate dynamic patient-specific right ventricle (RV) models from a cine cardiac MRI dataset via an unsupervised learning-based deformable registration field; (5) integrating Monte Carlo dropout into our fully-supervised, memory-efficient model for inherent uncertainty estimation, with the overall goal of estimating the uncertainty and error associated with the obtained segmentation, as a means to flag regions with less than optimal segmentation results; (6) developing semi-supervised, single-task learning via self-training (through meta pseudo-labeling) in concert with a Teacher network that instructs the Student network by generating pseudo-labels from unlabeled input data; (7) proposing largely-unsupervised, multi-task learning to demonstrate the power of a simple combination of a disentanglement block, variational autoencoder (VAE), generative adversarial network (GAN), and a conditioning layer-based reconstructor for performing two of the foremost critical tasks in medical imaging: segmentation of cardiac structures and reconstruction of cine cardiac MR images; and (8) demonstrating the use of 3D semi-supervised, multi-task learning for jointly learning multiple tasks in a single backbone module: uncertainty estimation, geometric shape generation, and segmentation of the left atrial cavity from 3D gadolinium-enhanced magnetic resonance (GE-MR) images. This dissertation summarizes the impact of these contributions, demonstrating the adaptation and use of deep learning architectures featuring different levels of supervision to build a variety of image segmentation tools and techniques that can be used across a wide spectrum of medical image computing applications, centered on facilitating and promoting widespread computer-integrated diagnosis and therapy data science.
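The Monte Carlo dropout mechanism in contribution (5) amounts to keeping dropout active at test time, running several stochastic forward passes, and reading the per-pixel variance as an uncertainty map. The toy single-layer "segmenter" below illustrates only that mechanism; the architecture, shapes, and dropout rate are my own placeholders, not the thesis model.

```python
import numpy as np

def mc_dropout_predict(x, w, p=0.5, n_samples=50, rng=None):
    """Monte Carlo dropout for a toy one-layer sigmoid segmenter.

    Dropout stays ON at inference: each pass samples a random mask
    over the weights, and across passes the mean gives the prediction
    while the variance gives a per-pixel uncertainty map that can be
    used to flag unreliable segmentation regions.
    """
    rng = rng or np.random.default_rng(0)
    preds = []
    for _ in range(n_samples):
        keep = rng.random(w.shape) > p               # random dropout mask
        logits = x @ (w * keep / (1.0 - p))          # inverted-scale dropout
        preds.append(1.0 / (1.0 + np.exp(-logits)))  # sigmoid probabilities
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)

rng = np.random.default_rng(3)
x = rng.standard_normal((10, 6))     # 10 "pixels", 6 features each
w = rng.standard_normal((6, 1))
mean, var = mc_dropout_predict(x, w, rng=rng)
print(mean.shape, var.shape)         # (10, 1) (10, 1)
```

In the thesis setting the same recipe is applied to a full segmentation CNN; pixels whose variance exceeds a chosen threshold are the ones routed for manual review.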