A Deep Cascade of Convolutional Neural Networks for MR Image Reconstruction
The acquisition of Magnetic Resonance Imaging (MRI) is inherently slow.
Inspired by recent advances in deep learning, we propose a framework for
reconstructing MR images from undersampled data using a deep cascade of
convolutional neural networks to accelerate the data acquisition process. We
show that for Cartesian undersampling of 2D cardiac MR images, the proposed
method outperforms the state-of-the-art compressed sensing approaches, such as
dictionary learning-based MRI (DLMRI) reconstruction, in terms of
reconstruction error, perceptual quality and reconstruction speed for both
3-fold and 6-fold undersampling. Compared to DLMRI, the proposed method roughly
halves the reconstruction error, preserving anatomical structures more
faithfully. Using our method, each image can be reconstructed in 23 ms, which
is fast enough to enable real-time applications.
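A cascade of this kind alternates CNN de-aliasing with data-consistency steps that re-impose the acquired k-space samples on the network's estimate. The following is a minimal sketch of such a data-consistency step; the function name and the NumPy formulation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def data_consistency(x_rec, k_sampled, mask):
    """Enforce consistency with acquired k-space samples.

    x_rec:     current image estimate (2D complex array)
    k_sampled: acquired undersampled k-space data (zeros where unsampled)
    mask:      boolean sampling mask in k-space
    """
    k_rec = np.fft.fft2(x_rec)
    # Keep the measured k-space values wherever they were acquired,
    # and the network's prediction everywhere else
    k_dc = np.where(mask, k_sampled, k_rec)
    return np.fft.ifft2(k_dc)
```

With a fully sampled mask this step simply returns the measured image, while an empty mask leaves the network's estimate untouched; in between, it guarantees the reconstruction never contradicts the acquired data.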
Coil combination using linear deconvolution in k-space for phase imaging
Background: The combination of multi-channel data is a critical step for the imaging of phase and susceptibility contrast in magnetic resonance imaging (MRI). Magnitude-weighted phase combination methods often produce noise and aliasing artifacts in the magnitude images in accelerated imaging scenarios. To address this issue, an optimal coil combination method based on deconvolution in k-space is proposed in this paper.
Methods: The proposed method first employs the sum-of-squares and phase-aligning method to yield a complex reference coil image, which is then used to calculate the coil sensitivity and its Fourier transform. The coil k-space combining weights are then computed, taking into account the truncated frequency data of the coil sensitivity and the acquired k-space data. Finally, combining the coil k-space data with the computed weights generates the k-space data of the proton distribution, from which both phase and magnitude information can be obtained straightforwardly. Both phantom and in vivo imaging experiments were conducted to evaluate the performance of the proposed method.
Results: Compared with the magnitude-weighted method and MCPC-C, the proposed method alleviates phase cancellation during coil combination, resulting in less phase wrapping.
Conclusions: The proposed method provides an effective and efficient approach to combining multiple coil images in parallel MRI reconstruction, and has the potential to benefit routine clinical practice in the future.
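As a rough illustration of the reference-image step described in the Methods, a sum-of-squares magnitude can be paired with a phase alignment to produce a complex reference coil image. The sketch below uses the phase of the first coil as the alignment reference, which is one common choice; the function name and this particular alignment are assumptions, not the paper's exact procedure:

```python
import numpy as np

def sos_reference(coil_imgs):
    """Build a complex reference image from multi-coil data.

    coil_imgs: complex array of shape (n_coils, ny, nx)

    Magnitude: root-sum-of-squares over coils.
    Phase: coils are aligned to the first coil's phase before summing,
    so destructive interference between coils is reduced.
    """
    mag = np.sqrt(np.sum(np.abs(coil_imgs) ** 2, axis=0))
    # Remove the first coil's phase from every coil, then sum
    aligned = coil_imgs * np.exp(-1j * np.angle(coil_imgs[0]))
    phase = np.angle(np.sum(aligned, axis=0))
    return mag * np.exp(1j * phase)
```

For a single coil this reduces to the coil magnitude with zero phase, i.e. all phases are expressed relative to the reference coil.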
Deep learning for fast and robust medical image reconstruction and analysis
Medical imaging is an indispensable component of modern medical research as well as clinical practice. Nevertheless, imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) are costly and less accessible to the majority of the world. To make medical devices more accessible, affordable and efficient, it is crucial to re-calibrate our current imaging paradigm for smarter imaging. In particular, as medical imaging techniques have highly structured forms in the way they acquire data, they provide us with an opportunity to optimise the imaging techniques holistically by leveraging data. The central theme of this thesis is to explore different opportunities where we can exploit data and deep learning to improve the way we extract information for better, faster and smarter imaging.
This thesis explores three distinct problems. The first problem is the time-consuming nature of dynamic MR data acquisition and reconstruction. We propose deep learning methods for accelerated dynamic MR image reconstruction, resulting in up to 10-fold reduction in imaging time. The second problem is the redundancy in our current imaging pipeline. Traditionally, the imaging pipeline has treated acquisition, reconstruction and analysis as separate steps. However, we argue that one can approach them holistically and optimise the entire pipeline jointly for a specific target goal. To this end, we propose deep learning approaches for obtaining high-fidelity cardiac MR segmentation directly from significantly undersampled data, greatly exceeding the undersampling limit for image reconstruction. The final part of this thesis tackles the problem of interpretability of deep learning algorithms. We propose attention models that can implicitly focus on salient regions in an image to improve accuracy for ultrasound scan plane detection and CT segmentation. More crucially, these models can provide explainability, which is a crucial stepping stone for the harmonisation of smart imaging and current clinical practice.
SIRF: Synergistic Image Reconstruction Framework
The combination of positron emission tomography (PET) with magnetic resonance (MR) imaging opens the way to more accurate diagnosis and improved patient management. At present, the data acquired by PET-MR scanners are essentially processed separately, but the opportunity to improve accuracy of the tomographic reconstruction via synergy of the two imaging techniques is an active area of research. In this paper, we present Release 2.1.0 of the CCP-PETMR Synergistic Image Reconstruction Framework (SIRF) software suite, providing an open-source software platform for efficient implementation and validation of novel reconstruction algorithms. SIRF provides user-friendly Python and MATLAB interfaces built on top of C++ libraries. SIRF uses advanced PET and MR reconstruction software packages and tools. Currently, for PET this is Software for Tomographic Image Reconstruction (STIR); for MR, Gadgetron and ISMRMRD; and for image registration tools, NiftyReg. The software aims to be capable of reconstructing images from acquired scanner data, whilst being simple enough to be used for educational purposes.
Respiratory-induced organ motion compensation for MRgHIFU
Summary: High Intensity Focused Ultrasound is an emerging non-invasive technology for the precise
thermal ablation of pathological tissue deep within the body. The fitful, respiratory-induced
motion of abdominal organs, such as the liver, renders targeting challenging.
The work at hand describes methods for imaging, modelling and managing respiratory-induced
organ motion. The main objective is to enable 3D motion prediction of liver
tumours for the treatment with Magnetic Resonance guided High Intensity Focused Ultrasound
(MRgHIFU).
To model and predict respiratory motion, the liver motion is initially observed in 3D
space. Fast acquired 2D magnetic resonance images are retrospectively reconstructed
to time-resolved volumes, thus called 4DMRI (3D + time). From these volumes, dense
deformation fields describing the motion from time-step to time-step are extracted using
an intensity-based non-rigid registration algorithm. 4DMRI sequences of 20 subjects,
providing long-term recordings of the variability in liver motion under free breathing,
serve as the basis for this study.
Based on the obtained motion data, three main types of models were investigated and
evaluated in clinically relevant scenarios. In particular, subject-specific motion models,
inter-subject population-based motion models and the combination of both are compared
in comprehensive studies. The analysis of the prediction experiments showed that
statistical models based on Principal Component Analysis are well suited to describe
the motion of a single subject as well as of a population of different and unobserved
subjects. In order to enable target prediction, the respiratory state of the respective
organ is tracked in near-real-time and a temporal prediction of its future position is
estimated. The time span provided by the prediction is used to calculate the new target
position and to readjust the treatment focus. In addition, novel methods for faster
acquisition of subject-specific 3D data based on a manifold learner are presented and
compared to the state-of-the-art 4DMRI method.
The developed methods provide motion compensation techniques for the non-invasive
and radiation-free treatment of pathological tissue in moving abdominal organs for
MRgHIFU.
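The PCA-based statistical motion models discussed in the summary can be sketched in a few lines: deformation fields from the 4DMRI volumes are flattened into vectors, a low-dimensional basis is extracted, and new fields are reconstructed from a handful of coefficients. Function names and the flattened-field representation below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def fit_pca_motion_model(fields, n_components=3):
    """Fit a PCA motion model to dense deformation fields.

    fields: (n_timesteps, n_points * 3) array, each row a flattened
            deformation field extracted by non-rigid registration.
    Returns the mean motion, the leading principal components, and
    their singular values.
    """
    mean = fields.mean(axis=0)
    centered = fields - mean
    # SVD of the centered data yields the principal motion modes
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components], s[:n_components]

def reconstruct_field(mean, components, coeffs):
    """Rebuild a deformation field from model coefficients."""
    return mean + coeffs @ components
```

A subject-specific model fits this to one subject's recordings, while a population model pools fields across subjects; the prediction step then amounts to estimating the coefficients for the current respiratory state.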
Transfer learning of deep neural network representations for fMRI decoding
Background:
Deep neural networks have revolutionised machine learning, with unparalleled performance in object classification. However, in brain imaging (e.g., fMRI), the direct application of Convolutional Neural Networks (CNN) to decoding subject states or perception from imaging data seems impractical given the scarcity of available data.
New method:
In this work we propose a robust method to transfer information from deep learning (DL) features to brain fMRI data with the goal of decoding. By adopting Reduced Rank Regression with Ridge Regularisation we establish a multivariate link between imaging data and the fully connected layer (fc7) of a CNN. We exploit the reconstructed fc7 features by performing an object image classification task on two datasets: one of the largest fMRI databases, taken from different scanners from more than two hundred subjects watching different movie clips, and another with fMRI data taken while watching static images.
Results:
The fc7 features could be significantly reconstructed from the imaging data, and led to significant decoding performance.
Comparison with existing methods:
The decoding based on reconstructed fc7 outperformed the decoding based on imaging data alone.
Conclusion:
In this work we show how to improve fMRI-based decoding by exploiting the mapping between functional data and CNN features. The potential advantage of the proposed method is twofold: the extraction of stimulus representations by means of an automatic (unsupervised) procedure, and the embedding of high-dimensional neuroimaging data into a space designed for visual object discrimination, which is more manageable from a dimensionality point of view.
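The multivariate link between imaging data and fc7 features via Reduced Rank Regression with Ridge Regularisation can be sketched as below. A standard formulation first solves the ridge problem in closed form and then truncates the coefficient matrix to a given rank via an SVD of the fitted values; the function name, variable names, and this particular estimator are assumptions, not the authors' code:

```python
import numpy as np

def reduced_rank_ridge(X, Y, rank, alpha=1.0):
    """Reduced-rank regression with ridge regularisation.

    X: (n, p) imaging data (e.g. fMRI voxels)
    Y: (n, q) target features (e.g. CNN fc7 activations)
    Returns a (p, q) coefficient matrix of rank at most `rank`.
    """
    p = X.shape[1]
    # Closed-form ridge solution: (X'X + alpha I)^-1 X'Y
    B = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ Y)
    # Truncate rank using the SVD of the fitted values X @ B:
    # project B onto the leading right singular directions
    _, _, vt = np.linalg.svd(X @ B, full_matrices=False)
    proj = vt[:rank].T @ vt[:rank]
    return B @ proj
```

With `rank` equal to the number of target features, the projector becomes the identity and the estimator reduces to plain ridge regression; smaller ranks constrain the map to a low-dimensional subspace of the feature space.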