Accelerated imaging of rest and stress myocardial perfusion MRI using multi-coil k-t SLR: a feasibility study
Modeling human observer detection in undersampled magnetic resonance imaging (MRI) reconstruction with total variation and wavelet sparsity regularization
Purpose: Task-based assessment of image quality in undersampled magnetic
resonance imaging provides a way of evaluating the impact of regularization on
task performance. In this work, we evaluated the effect of total variation (TV)
and wavelet regularization on human detection of signals with a varying
background and validated a model observer in predicting human performance.
Approach: Human observer studies used two-alternative forced choice (2-AFC)
trials in a signal-known-exactly task with a small signal and varying
backgrounds, for fluid-attenuated inversion recovery images reconstructed from
undersampled multi-coil data. We used an undersampling factor of 3.48 with TV
and wavelet sparsity constraints. The sparse difference-of-Gaussians (S-DOG)
observer with internal noise was used to model human observer detection.
Results: We observed a trend that the human observer detection performance
remained fairly constant for a broad range of values in the regularization
parameter before decreasing at large values. A similar result was found for the
normalized ensemble root mean squared error. Without changing the internal
noise, the model observer tracked the performance of the human observers as the
regularization was increased, but overestimated the percent correct (PC) for
large amounts of TV and wavelet sparsity regularization, as well as for the
combination of both parameters.
Conclusions: For the task we studied, the S-DOG observer was able to
reasonably predict human performance with both TV and wavelet sparsity
regularizers over a broad range of regularization parameters. We observed a
trend that task performance remained fairly constant for a range of
regularization parameters before decreasing for large amounts of
regularization.
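The 2-AFC detection experiment and a channelized DOG observer with internal noise can be sketched as below. This is a minimal illustrative stand-in, not the paper's S-DOG implementation: the spatial-domain channel definitions, channel widths, internal-noise scaling, signal, and noise backgrounds are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss2d(n, sigma):
    """Normalized 2D Gaussian on a centered n x n grid."""
    ax = np.arange(n) - (n - 1) / 2
    r2 = ax[None, :] ** 2 + ax[:, None] ** 2
    return np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

def dog_channel(n, sigma, ratio=1.66):
    """Difference-of-Gaussians channel: narrow Gaussian minus a wider one."""
    return gauss2d(n, sigma) - gauss2d(n, ratio * sigma)

def two_afc_pc(signal, backgrounds, sigmas, internal_noise_sd, trials=2000):
    """Estimate percent correct (PC) over simulated 2-AFC trials for a
    channelized DOG observer with additive internal noise on the decision
    variable; the observer picks the alternative with the larger response."""
    n = signal.shape[0]
    channels = np.stack([dog_channel(n, s).ravel() for s in sigmas])  # (C, n*n)
    template = channels @ signal.ravel()          # signal's channel response
    noise_sd = internal_noise_sd * np.linalg.norm(template)
    correct = 0
    for _ in range(trials):
        bg_p = backgrounds[rng.integers(len(backgrounds))]  # signal-present
        bg_a = backgrounds[rng.integers(len(backgrounds))]  # signal-absent
        t_p = template @ (channels @ (bg_p + signal).ravel()) + rng.normal(0, noise_sd)
        t_a = template @ (channels @ bg_a.ravel()) + rng.normal(0, noise_sd)
        correct += t_p > t_a
    return correct / trials

# Toy experiment: a small Gaussian signal on random noise backgrounds.
n = 32
signal = 5.0 * gauss2d(n, 2.0)                    # hypothetical signal
backgrounds = [rng.normal(0.0, 0.2, (n, n)) for _ in range(50)]
pc = two_afc_pc(signal, backgrounds, sigmas=(1.0, 2.0, 4.0),
                internal_noise_sd=0.1)
```

Increasing `internal_noise_sd` degrades PC toward the 0.5 chance level, which is the knob such models use to match human rather than ideal performance.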
Rapid dynamic speech imaging at 3 Tesla using combination of a custom vocal tract coil, variable density spirals and manifold regularization
Purpose: To improve dynamic speech imaging at 3 Tesla.
Methods: A novel scheme combining a 16-channel vocal tract coil, variable
density spirals (VDS), and manifold regularization was developed. Short readout
duration spirals (1.3 ms long) were used to minimize sensitivity to
off-resonance. The manifold model leveraged similarities between frames sharing
similar vocal tract postures without explicit motion binning. Reconstruction
was posed as a SENSE-based non-local soft weighted temporal regularization
scheme. The self-navigating capability of VDS was leveraged to learn the
structure of the manifold. Our approach was compared against low-rank and
finite difference reconstruction constraints on two volunteers performing
repetitive and arbitrary speaking tasks. Blinded image quality evaluation in
the categories of alias artifacts, spatial blurring, and temporal blurring was
performed by three experts in voice research.
Results: We achieved a spatial resolution of 2.4 mm²/pixel and a temporal
resolution of 17.4 ms/frame for single slice imaging, and 52.2 ms/frame for
concurrent 3-slice imaging. Implicit motion binning of the manifold scheme for
both repetitive and fluent speaking tasks was demonstrated. The manifold scheme
provided superior fidelity in modeling articulatory motion compared to low rank
and temporal finite difference schemes. This was reflected by higher image
quality scores in spatial and temporal blurring categories. Our technique
exhibited faint alias artifacts, but offered a reduced interquartile range of
scores compared to the other methods in the alias artifact category.
Conclusion: Synergistic combination of a custom vocal-tract coil, variable
density spirals and manifold regularization enables robust dynamic speech
imaging at 3 Tesla.
Comment: 30 pages, 10 figures
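The manifold idea, softly tying together frames whose self-navigation signals indicate similar vocal-tract postures, can be sketched as follows. Everything here (the Gaussian kernel, the k-nearest-neighbor truncation, and the quadratic penalty form) is an illustrative stand-in for the paper's actual non-local soft-weighted regularization, with invented navigator data.

```python
import numpy as np

def manifold_weights(navigators, sigma=1.0, k=3):
    """Soft neighbor weights between frames from navigator-signal distances.

    navigators: (T, d) array, one self-navigation signal per frame. Frames
    with similar posture get large weights; each frame keeps only its k
    strongest neighbors, and the weight graph is then symmetrized.
    """
    d2 = ((navigators[:, None, :] - navigators[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    mask = np.zeros_like(w, dtype=bool)
    nearest = np.argsort(-w, axis=1)[:, :k]
    np.put_along_axis(mask, nearest, True, axis=1)
    return np.where(mask | mask.T, w, 0.0)

def manifold_penalty(frames, w):
    """Non-local soft-weighted penalty sum_ij w_ij * ||x_i - x_j||^2, which a
    reconstruction would trade off against SENSE data consistency."""
    x = frames.reshape(len(frames), -1)
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return float((w * d2).sum())

# Two made-up posture clusters: frames 0-3 vs. frames 4-7.
navs = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
                 [10.0, 10.0], [10.1, 10.0], [10.0, 10.1], [10.1, 10.1]])
w = manifold_weights(navs, sigma=1.0, k=3)
```

Because the weights come only from navigator distances, frames sharing a posture are coupled regardless of when they occur, which is the "implicit motion binning" effect described above.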
High-resolution three-dimensional hybrid MRI + low-dose CT vocal tract modeling: A cadaveric pilot study
Objectives: MRI-based vocal tract models have many applications in voice research and education. These models do not adequately capture bony structures (e.g., teeth, mandible), and spatial resolution is often relatively low in order to minimize scanning time. Most MRI sequences achieve 3D vocal tract coverage at gross resolutions of 2 mm³ within a scan time of <20 seconds. Computed tomography (CT) is well suited for vocal tract imaging, but is infrequently used due to the risk of ionizing radiation. In this cadaveric study, a single, extremely low-dose CT scan of the bony structures is blended with accelerated high-resolution (1 mm³) MRI scans of the soft tissues, creating a high-resolution hybrid CT-MRI vocal tract model.
Methods: Minimum CT dosages were determined, and a custom 16-channel airway receiver coil for accelerated high-resolution (1 mm³) MRI was evaluated. A rigid-body landmark-based partial volume registration scheme was then applied to the images, creating a hybrid CT-MRI model that was segmented in Slicer.
Results: Ultra-low-dose CT produced images with sufficient quality to clearly visualize the bone, and exposed the cadaver to 0.06 mSv, comparable to the atmospheric exposure during a round-trip transatlantic flight. The custom 16-channel vocal tract coil produced acceptable image quality at 1 mm³ resolution when reconstructed from ∼6-fold undersampled data. High-resolution (1 mm³) MR imaging of short (<10 seconds) sustained sounds was achieved. The feasibility of hybrid CT-MRI vocal tract modeling was successfully demonstrated using the rigid-body landmark-based partial volume registration scheme. Segmentations of CT and hybrid CT-MRI images provided more detailed 3D representations of the vocal tract than 2 mm³ MRI-based segmentations.
Conclusions: The method described in this study indicates that high-resolution CT and MR image sets can be combined so that structures such as teeth and bone are accurately represented in vocal tract reconstructions. Such scans will aid learning and deepen understanding of anatomical features that relate to voice production, as well as furthering knowledge of the static and dynamic functioning of individual structures relating to voice production.
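The rigid-body, landmark-based registration step can be illustrated with a standard Kabsch/Procrustes least-squares fit of paired landmark coordinates. This is a generic sketch of that class of method, not the study's actual registration pipeline, and the landmark arrays below are invented.

```python
import numpy as np

def rigid_landmark_fit(src, dst):
    """Least-squares rigid-body transform (rotation R, translation t) mapping
    paired landmarks src -> dst, i.e. dst ~= src @ R.T + t (Kabsch method).

    src, dst: (N, 3) arrays of corresponding landmark coordinates, e.g.
    points picked on the same bony features in the CT and MRI volumes.
    """
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    H = (src - src_mean).T @ (dst - dst_mean)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if the determinant is negative
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_mean - src_mean @ R.T
    return R, t

# Hypothetical check: landmarks rotated 30 degrees about z and shifted.
rng = np.random.default_rng(1)
landmarks = rng.normal(size=(6, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
moved = landmarks @ R_true.T + t_true
R_est, t_est = rigid_landmark_fit(landmarks, moved)
```

With noiseless correspondences the fit recovers the transform exactly; in practice landmark-picking error makes this a least-squares estimate, which is why a reflection guard on the SVD solution matters.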
A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images
Real-time magnetic resonance imaging (RT-MRI) of human speech production is
enabling significant advances in speech science, linguistics, bio-inspired
speech technology development, and clinical applications. Easy access to RT-MRI
is however limited, and comprehensive datasets with broad access are needed to
catalyze research across numerous domains. The imaging of the rapidly moving
articulators and dynamic airway shaping during speech demands high
spatio-temporal resolution and robust reconstruction methods. Further, while
reconstructed images have been published, to date there is no open dataset
providing raw multi-coil RT-MRI data from an optimized speech production
experimental setup. Such datasets could enable new and improved methods for
dynamic image reconstruction, artifact correction, feature extraction, and
direct extraction of linguistically-relevant biomarkers. The present dataset
offers a unique corpus of 2D sagittal-view RT-MRI videos along with
synchronized audio for 75 subjects performing linguistically motivated speech
tasks, alongside the corresponding first-ever public domain raw RT-MRI data.
The dataset also includes 3D volumetric vocal tract MRI during sustained speech
sounds and high-resolution static anatomical T2-weighted upper airway MRI for
each subject.
Comment: 27 pages, 6 figures, 5 tables, submitted to Nature Scientific Data
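As a toy illustration of the kind of reconstruction such raw multi-coil data enables, the sketch below reconstructs a Cartesian multi-coil acquisition by per-coil inverse FFT and root-sum-of-squares (RSS) combination. Real RT-MRI data like this dataset's is acquired on non-Cartesian (e.g., spiral) trajectories and needs gridding/NUFFT and coil-sensitivity handling; the phantom, coil count, and sensitivity maps here are all made up.

```python
import numpy as np

def rss_reconstruct(kspace):
    """Per-coil inverse 2D FFT followed by root-sum-of-squares combination.

    kspace: (coils, ny, nx) complex Cartesian k-space, DC at the center.
    Returns a (ny, nx) magnitude image.
    """
    imgs = np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1))
    imgs = np.fft.fftshift(imgs, axes=(-2, -1))
    return np.sqrt((np.abs(imgs) ** 2).sum(axis=0))

# Made-up disc phantom and smooth Gaussian coil sensitivities.
ny = nx = 64
yy, xx = np.mgrid[:ny, :nx]
phantom = ((yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2).astype(float)
coils = np.stack([np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 2000.0)
                  for cy, cx in [(0, 0), (0, 63), (63, 0), (63, 63)]])
kspace = np.stack([np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(c * phantom)))
                   for c in coils])
img = rss_reconstruct(kspace)
```

RSS discards coil phase, so it serves only as a baseline; the model-based reconstructions discussed above operate on the complex multi-coil data directly.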