In vivo 1H-MR spectroscopy of the human heart
Conclusions: Combined respiratory and cardiac triggering dramatically improves localization accuracy and spectral quality in cardiac 1H-MRS, leading to substantially increased spectral reproducibility. The best practical realization of double triggering turned out to be the use of the ECG amplitude, exploiting the fact that it is modulated by respiration. Despite the spectral quality achieved in most subjects, we still fail to record satisfactory spectra in a minority of subjects. The reasons for this are not understood at present but must lie in particulars of either the given subject or the experimental setup. The cardiac 1H-MR spectra contain quantifiable contributions from creatine, TMA, lipids, and probably taurine. It is possible that the spectral contributions of creatine are subject to dipolar coupling, similar to the observations for skeletal muscle.
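The double-triggering idea (accept a cardiac trigger only when the respiration-modulated ECG amplitude indicates the desired respiratory phase) can be sketched in a few lines. The following Python sketch is purely illustrative: the helper names (`detect_r_peaks`, `double_trigger`), the threshold-based peak detector, and the choice of respiratory phase are assumptions, not the acquisition software used in the study.

```python
import numpy as np

def detect_r_peaks(ecg, fs, min_rr_s=0.4):
    """Very simple R-peak detector: local maxima above a fixed fraction of the signal maximum."""
    thr = 0.5 * np.max(ecg)
    peaks, last = [], -np.inf
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thr and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if (i - last) / fs >= min_rr_s:
                peaks.append(i)
                last = i
    return np.array(peaks)

def double_trigger(ecg, fs, accept_fraction=0.3):
    """Keep only the R-waves whose amplitude lies at one extreme of the respiratory modulation.

    Because the R-wave amplitude is modulated by respiration, accepting only the
    beats with the highest amplitudes approximately selects one respiratory
    phase (e.g. end-expiration; which extreme is appropriate depends on the setup).
    """
    peaks = detect_r_peaks(ecg, fs)
    amplitudes = ecg[peaks]
    n_accept = max(1, int(round(accept_fraction * len(peaks))))
    order = np.argsort(amplitudes)[::-1]          # highest amplitudes first
    return np.sort(peaks[order[:n_accept]])

# Toy usage: a synthetic beat train whose amplitude is modulated by respiration.
fs = 250
t = np.arange(0, 30, 1 / fs)
beats = np.zeros_like(t)
beats[(np.arange(len(t)) % fs) == 0] = 1.0               # one "R-wave" per second
respiration = 1.0 + 0.3 * np.sin(2 * np.pi * 0.25 * t)   # ~15 breaths/min
ecg = beats * respiration
triggers = double_trigger(ecg, fs)
print(f"{len(detect_r_peaks(ecg, fs))} beats detected, {len(triggers)} accepted")
```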
First attempt to motion corrected flow encoding using free-breathing phase-contrast CINE MRI
This study demonstrates the feasibility of free-breathing phase-contrast CINE MRI without averaging. A new version of the CINE GRICS algorithm [1] was used to correct for motion.
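GRICS-type reconstructions jointly estimate the image and a motion model driven by physiological signals. The toy sketch below only illustrates that general alternating-estimation idea, in 1D with a single shift coefficient; it is not the CINE GRICS algorithm of [1], and all names, signals, and numbers are assumptions.

```python
import numpy as np

def shift(x, k):
    """Circularly shift a 1D 'image' by k samples (toy motion operator)."""
    return np.roll(x, k)

def reconstruct(data, resp_signal, alphas, n_iter=5):
    """Alternate between (1) motion-compensated averaging of the data given the
    current motion-model coefficient and (2) re-estimating the coefficient that
    best explains the data, by grid search over `alphas`."""
    alpha = 0.0
    for _ in range(n_iter):
        # (1) image update: undo the modeled motion and average
        image = np.mean([shift(d, -int(round(alpha * s)))
                         for d, s in zip(data, resp_signal)], axis=0)

        # (2) motion-model update: coefficient with the smallest data residual
        def residual(a):
            return sum(np.sum((shift(image, int(round(a * s))) - d) ** 2)
                       for d, s in zip(data, resp_signal))
        alpha = min(alphas, key=residual)
    return image, alpha

# Toy data: a 1D object acquired repeatedly while "breathing" displaces it.
rng = np.random.default_rng(2)
x_true = np.zeros(64)
x_true[20:30] = 1.0
resp = np.sin(np.linspace(0, 4 * np.pi, 16))             # surrogate respiratory signal
alpha_true = 6.0                                          # samples of shift per unit signal
data = [shift(x_true, int(round(alpha_true * s))) + 0.02 * rng.standard_normal(64)
        for s in resp]
image, alpha_hat = reconstruct(data, resp, alphas=np.arange(0, 10.5, 0.5))
print(f"estimated motion coefficient: {alpha_hat:.1f} (true {alpha_true})")
```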
Saccadic eye movement changes in Parkinson's disease dementia and dementia with Lewy bodies
Neurodegeneration in Parkinson's disease dementia (PDD) and dementia with Lewy bodies (DLB) affects cortical and subcortical networks involved in saccade generation. We therefore expected impairments in saccade performance in both disorders. In order to improve the pathophysiological understanding and to investigate the usefulness of saccades for differential diagnosis, saccades were tested in age- and education-matched patients with PDD (n = 20) and DLB (n = 20), Alzheimer's disease (n = 22) and Parkinson's disease (n = 24), and controls (n = 24). Reflexive (gap, overlap) and complex saccades (prediction, decision and antisaccade) were tested with electro-oculography. PDD and DLB patients had similar impairment in all tasks (P > 0.05, not significant). Compared with controls, they were impaired in both reflexive saccade execution (gap and overlap latencies) and complex saccade performance (P < 0.05). Patients with Parkinson's disease had, compared with controls, similar complex saccade performance (for all, P > 0.05) and only minimal impairment in reflexive tasks, i.e. hypometric gain in the gap task (P = 0.04). Impaired saccade execution in reflexive tasks allowed discrimination between DLB versus Alzheimer's disease (sensitivity ≥60%, specificity ≥77%) and between PDD versus Parkinson's disease (sensitivity ≥60%, specificity ≥88%) when ±1.5 standard deviations was used for group discrimination. We conclude that impairments in reflexive saccades may be helpful for differential diagnosis and are minimal when cortical (Alzheimer's disease) or nigrostriatal (Parkinson's disease) neurodegeneration exists in isolation; however, they become prominent with combined cortical and subcortical neurodegeneration in PDD and DLB. The similarities in saccade performance in PDD and DLB underline the overlap between these conditions and underscore differences from Alzheimer's disease and Parkinson's disease.
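Reading the ±1.5 standard-deviation criterion as "flag a patient whose saccade measure deviates from the control mean by more than 1.5 control SDs", the resulting sensitivity and specificity can be computed as below. A minimal sketch under that reading; the numbers and variable names are illustrative only, not data from the study.

```python
import numpy as np

def flag_abnormal(values, control_mean, control_sd, k=1.5):
    """Flag measurements lying more than k control SDs from the control mean."""
    z = (np.asarray(values) - control_mean) / control_sd
    return np.abs(z) > k

def sensitivity_specificity(flags_cases, flags_controls):
    """Sensitivity = flagged cases / all cases; specificity = unflagged controls / all controls."""
    sens = np.mean(flags_cases)
    spec = np.mean(~np.asarray(flags_controls))
    return sens, spec

# Illustrative numbers only (not data from the study).
rng = np.random.default_rng(0)
control_latency = rng.normal(250, 30, size=24)   # ms, e.g. a gap-task latency
dlb_latency = rng.normal(330, 50, size=20)
mu, sd = control_latency.mean(), control_latency.std(ddof=1)
sens, spec = sensitivity_specificity(
    flag_abnormal(dlb_latency, mu, sd),
    flag_abnormal(control_latency, mu, sd),
)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```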
Vocal synchronization and motion-compensated reconstruction for speech cine MRI
Dynamic imaging of the vocal tract makes it possible to study and model speech production. The average duration of each sound is about 80 ms. The movement of each articulator, in particular the tongue, must be measured with sufficient precision. X-ray fluoroscopy is currently used clinically. Real-time MRI allows direct visualization of articulatory movements [1] but remains limited in resolution. Synchronizing MRI with an acoustic system is possible [2] but requires articulatory movements to be performed with perfect reproducibility. In this work we propose a setup optimized for dynamic MRI of speech with high spatial and temporal resolution. It relies on the combined use of an MR-compatible microphone recording speech during the MRI acquisition and an a posteriori synchronized image reconstruction that includes motion compensation. This reconstruction accounts for the variability in the repetition of the sentence during the acquisition.
Sound synchronization and motion compensated reconstruction for speech Cine MRI
Dynamic imaging of the vocal tract is important for modeling speech through the acoustic-articulatory relation. The average duration of each sound is about 80 ms. Movements of each articulator, in particular the tongue, should be captured with sufficient precision. Current clinical techniques use X-ray videofluoroscopy, which involves ionizing radiation. Real-time MRI allows direct recording of speech motion [1] but is intrinsically limited in terms of resolution and SNR. Synchronization of MRI with an acoustic device is possible [2] but requires the motion of the vocal system to be highly reproducible. In this work we propose an optimized setup for achieving dynamic MRI of speech with high spatial and temporal resolution based on a combination of: an MR-compatible acoustic device allowing simultaneous recording of speech during MRI; and a retrospectively gated, motion-compensated image reconstruction that can deal with the variability of the subject repeating the same sentence over the acquisition.
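One way to realize the retrospective gating described here is to estimate, for each repetition, its time offset relative to a reference recording by cross-correlating audio envelopes, and then to bin the acquired k-space segments on the reference timeline. The sketch below illustrates only that alignment/binning step, under assumed signal and function names; it is not the authors' reconstruction pipeline.

```python
import numpy as np

def envelope(audio, fs, win_s=0.02):
    """Short-time energy envelope of an audio signal."""
    win = max(1, int(win_s * fs))
    return np.convolve(np.abs(audio), np.ones(win) / win, mode="same")

def align_offset(reference, repetition, fs):
    """Estimate the delay (seconds) of `repetition` relative to `reference`
    by cross-correlating their envelopes."""
    a = envelope(reference, fs)
    b = envelope(repetition, fs)
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)
    return lag / fs

def bin_kspace_segments(segment_times, offset, frame_duration):
    """Assign each k-space segment (acquired `segment_times` seconds after the
    repetition started) to a cine frame of the reference utterance."""
    aligned = np.asarray(segment_times) - offset
    return np.floor(aligned / frame_duration).astype(int)

# Toy usage with synthetic audio: the repetition starts 0.15 s late.
fs = 8000
t = np.arange(0, 1.5, 1 / fs)
reference = np.sin(2 * np.pi * 120 * t) * (t > 0.30) * (t < 1.00)
repetition = np.sin(2 * np.pi * 120 * (t - 0.15)) * (t > 0.45) * (t < 1.15)
offset = align_offset(reference, repetition, fs)
frames = bin_kspace_segments(segment_times=[0.57, 0.76, 0.98], offset=offset,
                             frame_duration=0.08)
print(f"estimated offset = {offset*1000:.0f} ms, frame indices = {frames}")
```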
Free-breathing myocardial T2 measurements at 1.5T
Poster presentation.
Speech Cine SSFP with optical microphone synchronization and motion compensated reconstruction
Dynamic imaging of the vocal tract is important for modeling speech through the acoustic-articulatory relation. The average duration of each sound is about 80 ms. Movements of each articulator, in particular the tongue, should be captured with sufficient precision. Current clinical techniques use X-ray videofluoroscopy, which involves ionizing radiation. Real-time MRI allows direct recording of speech motion [1] but is intrinsically limited in terms of resolution and SNR. Synchronization of MRI with an acoustic device is possible [2] but requires the motion of the vocal system to be highly reproducible. In this work we propose an optimized setup for achieving dynamic MRI of speech with high spatial and temporal resolution based on a combination of: an MR-compatible acoustic device allowing simultaneous recording of speech during MRI; and a retrospectively gated, motion-compensated image reconstruction that can deal with the variability of the subject repeating the same sentence over the acquisition.
Vocal tract sagittal slices estimation from MRI midsagittal slices during speech production of CV
Synthesize MRI vocal tract data during CV production
A set of rtMR image transformations across time is computed during the production of CV (consonant-vowel) sequences and is afterwards applied to a new speaker in order to synthesize his/her pseudo-rtMRI CV data. Synthesized images are compared with the original ones using image cross-correlation. Purpose: to be able to enlarge the MRI speech corpus by synthesizing data.
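The comparison step ("image cross-correlation") can be read as a zero-mean normalized cross-correlation coefficient between a synthesized frame and the corresponding original frame. A minimal sketch of that metric; the array names and toy data are illustrative, not the authors' code.

```python
import numpy as np

def normalized_cross_correlation(img_a, img_b):
    """Zero-mean normalized cross-correlation between two images of equal shape.

    Returns a value in [-1, 1]; 1 means the images are identical up to
    brightness/contrast scaling.
    """
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Toy usage: compare a "synthesized" frame against an "original" one.
rng = np.random.default_rng(1)
original = rng.random((64, 64))
synthesized = 0.8 * original + 0.1 + 0.05 * rng.random((64, 64))  # similar frame
print(f"NCC = {normalized_cross_correlation(original, synthesized):.3f}")
```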
A Multimodal Real-Time MRI Articulatory Corpus of French for Speech Research
In this work we describe the creation of ArtSpeechMRIfr: a real-time as well as static magnetic resonance imaging (rtMRI, 3D MRI) database of the vocal tract. The database also contains processed data: denoised audio, its phonetically aligned annotation, articulatory contours, and vocal tract volume information, which provides a rich resource for speech research. The database is built on data from two male speakers of French. It covers a number of phonetic contexts in the controlled part, as well as spontaneous speech, 3D MRI scans of sustained vocalic articulations, and dental casts of the subjects. The rtMRI corpus consists of 79 synthetic sentences constructed from a phonetized dictionary, which makes it possible to shorten the duration of the acquisitions while keeping very good coverage of the phonetic contexts that exist in French. The 3D MRI includes acquisitions for 12 French vowels and 10 consonants, each of which was pronounced in several vocalic contexts. Articulatory contours (tongue, jaw, epiglottis, larynx, velum, lips) as well as 3D volumes were manually drawn for a part of the images.
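One common way to build a compact sentence set with good phonetic-context coverage from a phonetized dictionary is a greedy set-cover selection over diphones. The generic sketch below illustrates that idea only; it is not the procedure actually used to construct the ArtSpeechMRIfr corpus, and the toy phonetizations are made up.

```python
def diphones(phonemes):
    """Set of adjacent phoneme pairs (diphones) in a phonetized sentence."""
    return {(phonemes[i], phonemes[i + 1]) for i in range(len(phonemes) - 1)}

def greedy_sentence_selection(phonetized_sentences, max_sentences):
    """Greedy set cover: repeatedly pick the sentence adding the most uncovered diphones."""
    covered, selected = set(), []
    remaining = dict(phonetized_sentences)          # sentence id -> phoneme list
    while remaining and len(selected) < max_sentences:
        best_id, best_gain = None, 0
        for sid, phon in remaining.items():
            gain = len(diphones(phon) - covered)
            if gain > best_gain:
                best_id, best_gain = sid, gain
        if best_id is None:                         # nothing new left to cover
            break
        selected.append(best_id)
        covered |= diphones(remaining.pop(best_id))
    return selected, covered

# Toy usage with made-up phonetizations (purely illustrative).
corpus = {
    "s1": ["l", "a", "p", "a", "R", "o", "l"],
    "s2": ["p", "a", "R", "i"],
    "s3": ["i", "l", "a", "d", "i"],
}
selected, covered = greedy_sentence_selection(corpus, max_sentences=2)
print(selected, len(covered), "diphones covered")
```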