
    Reference-free removal of EEG-fMRI ballistocardiogram artifacts with harmonic regression

    Combining electroencephalogram (EEG) recording and functional magnetic resonance imaging (fMRI) offers the potential for imaging brain activity with high spatial and temporal resolution. This potential remains limited by the significant ballistocardiogram (BCG) artifacts induced in the EEG by cardiac pulsation-related head movement within the magnetic field. We model the BCG artifact using a harmonic basis, pose the artifact removal problem as a local harmonic regression analysis, and develop an efficient maximum likelihood algorithm to estimate and remove BCG artifacts. Our analysis paradigm accounts for time-frequency overlap between the BCG artifacts and neurophysiologic EEG signals, and tracks the spatiotemporal variations in both the artifact and the signal. We evaluate performance on simulated oscillatory and evoked responses constructed with realistic artifacts, actual anesthesia-induced oscillatory recordings, and actual visual evoked potential recordings. In each case, the local harmonic regression analysis effectively removes the BCG artifacts and recovers the neurophysiologic EEG signals. We further show that our algorithm outperforms commonly used reference-based and component analysis techniques, particularly in low-SNR conditions, in the presence of significant time-frequency overlap between the artifact and the signal, and/or when there are large spatiotemporal variations in the BCG. Because our algorithm does not require reference signals and has low computational complexity, it offers a practical tool for removing BCG artifacts from EEG data recorded in combination with fMRI.
    Funding: National Institutes of Health (U.S.) (Award DP1-OD003646); National Institutes of Health (U.S.) (Award TR01-GM104948); National Institutes of Health (U.S.) (Grant R44NS071988); National Institute of Neurological Diseases and Stroke (U.S.) (Grant R44NS071988).
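    The core idea can be illustrated with a short sketch: within each window, regress the EEG channel onto sines and cosines at harmonics of the cardiac frequency and subtract the fitted component. This is only a least-squares illustration of local harmonic regression, not the authors' maximum-likelihood estimator; the function name and parameters (sampling rate fs, cardiac frequency f_cardiac, window length) are assumptions for the example.

        import numpy as np

        def remove_bcg_harmonics(eeg, fs, f_cardiac, n_harmonics=5, win_sec=4.0):
            """Illustrative local harmonic regression (least squares, fixed windows)."""
            eeg = np.asarray(eeg, dtype=float)
            cleaned = eeg.copy()
            win = int(win_sec * fs)
            for start in range(0, len(eeg) - win + 1, win):
                t = np.arange(start, start + win) / fs
                # Design matrix: intercept plus cosine/sine pairs at each cardiac harmonic
                cols = [np.ones_like(t)]
                for k in range(1, n_harmonics + 1):
                    cols.append(np.cos(2 * np.pi * k * f_cardiac * t))
                    cols.append(np.sin(2 * np.pi * k * f_cardiac * t))
                X = np.column_stack(cols)
                beta, *_ = np.linalg.lstsq(X, eeg[start:start + win], rcond=None)
                artifact = X @ beta - beta[0]  # keep the local mean in the signal
                cleaned[start:start + win] -= artifact
            return cleaned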

    Serial Correlations in Single-Subject fMRI with Sub-Second TR

    When performing statistical analysis of single-subject fMRI data, serial correlations need to be taken into account to allow for valid inference. Otherwise, the variability in the parameter estimates might be underestimated, resulting in increased false-positive rates. Serial correlations in fMRI data are commonly characterized in terms of a first-order autoregressive (AR) process and then removed via pre-whitening. The required noise model for the pre-whitening depends on a number of parameters, particularly the repetition time (TR). Here we investigate how the sub-second temporal resolution provided by simultaneous multislice (SMS) imaging changes the noise structure in fMRI time series. We fit a higher-order AR model and then estimate the optimal AR model order for a sequence with a TR of less than 600 ms that provides whole-brain coverage. We show that physiological noise modelling successfully reduces the required AR model order, but remaining serial correlations necessitate an advanced noise model. We conclude that commonly used noise models, such as the AR(1) model, are inadequate for modelling serial correlations in fMRI using sub-second TRs. Rather, physiological noise modelling in combination with advanced pre-whitening schemes enables valid inference in single-subject analysis using fast fMRI sequences.
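    As a rough illustration of the pre-whitening step, the sketch below fits an AR(p) model to a residual time series by least squares and applies the resulting whitening filter; in a GLM analysis the same filter would also be applied to every column of the design matrix. The function names and the least-squares AR fit are assumptions for the example, not the estimators used in the study.

        import numpy as np

        def fit_ar(resid, order):
            """Least-squares estimate of AR(order) coefficients from a residual series."""
            resid = np.asarray(resid, dtype=float)
            X = np.column_stack([resid[order - k - 1 : len(resid) - k - 1]
                                 for k in range(order)])
            y = resid[order:]
            phi, *_ = np.linalg.lstsq(X, y, rcond=None)
            return phi

        def prewhiten(series, phi):
            """Apply the whitening filter w[t] = y[t] - sum_k phi[k] * y[t - k - 1]."""
            series = np.asarray(series, dtype=float)
            order = len(phi)
            w = series.copy()
            for k in range(order):
                w[order:] -= phi[k] * series[order - k - 1 : len(series) - k - 1]
            return w[order:]  # drop the first `order` samples, which cannot be whitened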

    ECG electrode placements for magnetohydrodynamic voltage suppression

    This study aimed to identify a set of electrocardiogram (ECG) electrode lead locations that improve the quality of four-lead ECG signals acquired during magnetic resonance imaging (MRI). This was achieved by identifying electrode placements that minimize the induced magnetohydrodynamic voltages (VMHD) in the ECG signals. Reducing VMHD can improve the accuracy of QRS complex detection in the ECG as well as heartbeat synchronization between MRI and ECG during the acquisition of cardiac cine. A vector model based on thoracic geometry was developed to predict induced VMHD and to optimize four-lead ECG electrode placement for improved MRI gating. Four human subjects were recruited for vector model establishment (Group 1), and five human subjects were recruited for validation of VMHD reduction in the proposed four-lead ECG (Group 2). The vector model was established using 12-lead ECG data recorded from the four healthy subjects in Group 1 at 3 Tesla, and a gradient descent optimization routine was used to predict the optimal four-lead ECG placement based on VMHD vector alignment. The optimized four-lead ECG was then validated in the five healthy subjects of Group 2 by comparing the standard and proposed lead placements. A 43.41% reduction in VMHD was observed in ECGs using the proposed electrode placement, and the QRS complex was preserved. A VMHD-minimized electrode placement for four-lead ECG gating was thus presented and shown to reduce induced magnetohydrodynamic (MHD) signals, potentially allowing for improved physiological monitoring during cardiac MRI.
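    The optimization step can be caricatured as follows: given a (hypothetical) vector model that yields, for a candidate lead configuration, an MHD voltage vector v_mhd and a QRS vector v_qrs, score each configuration by the projected MHD voltage it would pick up, penalizing placements that also lose QRS amplitude. This brute-force scoring is only a stand-in for the gradient-descent search over the fitted thoracic vector model; all names and the objective weighting are assumptions.

        import numpy as np

        def placement_score(lead_vecs, v_mhd, v_qrs, alpha=1.0):
            """Hypothetical objective: total projected MHD voltage minus a reward
            for retained projected QRS amplitude (so gating information survives)."""
            mhd = sum(abs(np.dot(l, v_mhd)) for l in lead_vecs)
            qrs = sum(abs(np.dot(l, v_qrs)) for l in lead_vecs)
            return mhd - alpha * qrs

        def best_placement(candidate_placements, v_mhd, v_qrs):
            """Exhaustive search over candidate four-lead placements (a stand-in
            for the paper's gradient-descent optimization)."""
            return min(candidate_placements,
                       key=lambda leads: placement_score(leads, v_mhd, v_qrs))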

    Deep Learning in Cardiology

    The medical field is creating a large amount of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks and at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal modalities, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use.
    Comment: 27 pages, 2 figures, 10 tables.

    User-initialized active contour segmentation and golden-angle real-time cardiovascular magnetic resonance enable accurate assessment of LV function in patients with sinus rhythm and arrhythmias.

    Background: Data obtained during arrhythmia are retained in real-time cardiovascular magnetic resonance (rt-CMR), but there is limited and inconsistent evidence that rt-CMR can accurately assess beat-to-beat variation in left ventricular (LV) function during an arrhythmia.
    Methods: Multi-slice, short-axis cine and real-time golden-angle radial CMR data were collected in 22 clinical patients (18 in sinus rhythm and 4 with arrhythmia). A user-initialized active contour segmentation (ACS) software package was validated via comparison to manual segmentation on clinically accepted software. For each image in the 2D acquisitions, slice volume was calculated, and global LV volumes were estimated via summation across the LV using multiple slices. Real-time imaging data were reconstructed using different image exposure times and frame rates to evaluate the effect of temporal resolution on measured function in each slice via ACS. Finally, global volumetric function of ectopic and non-ectopic beats was measured using ACS in patients with arrhythmias.
    Results: ACS provides global LV volume measurements that are not significantly different from manual quantification of retrospectively gated cine images in sinus rhythm patients. With an exposure time of 95.2 ms and a frame rate of >89 frames per second, golden-angle real-time imaging accurately captures hemodynamic function over a range of patient heart rates. In four patients with frequent ectopic contractions, initial quantification of the impact of ectopic beats on hemodynamic function was demonstrated.
    Conclusion: User-initialized active contours and golden-angle real-time radial CMR can be used to determine time-varying LV function in patients. These methods will be very useful for the assessment of LV function in patients with frequent arrhythmias.
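    The volume estimation described in the Methods is straightforward to sketch: each segmented short-axis contour contributes its area times the slice thickness, and the end-diastolic and end-systolic volumes give the ejection fraction. This covers only the slice-summation bookkeeping, not the active contour segmentation itself; the names and units are assumptions for the example.

        import numpy as np

        def lv_volume_ml(slice_areas_mm2, slice_thickness_mm):
            """Summation-of-slices LV volume (mL) from segmented short-axis areas."""
            return float(np.sum(slice_areas_mm2)) * slice_thickness_mm / 1000.0

        def ejection_fraction_pct(edv_ml, esv_ml):
            """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
            return 100.0 * (edv_ml - esv_ml) / edv_ml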

    Multimodal approaches in human brain mapping


    Improving Engagement Assessment by Model Individualization and Deep Learning

    This dissertation studies methods that improve engagement assessment for pilots. The major work addresses two challenging problems involved in the assessment: individual variation among pilots and the lack of labeled data for training assessment models. Task engagement is usually assessed by analyzing physiological measurements collected from subjects who are performing a task. However, physiological measurements such as electroencephalography (EEG) vary from subject to subject, so an assessment model trained for one subject may not be applicable to other subjects. We proposed a dynamic classifier selection algorithm for model individualization and compared it to two other methods: baseline normalization and similarity-based model replacement. Experimental results showed that baseline normalization and dynamic classifier selection can significantly improve cross-subject engagement assessment. For complex tasks such as piloting an airplane, labeling engagement levels for pilots is challenging. Without enough labeled data, it is very difficult for traditional methods to train valid models for effective engagement assessment. This dissertation proposed to utilize deep learning models to address this challenge. Deep learning models are capable of learning valuable feature hierarchies by taking advantage of both labeled and unlabeled data. Our results showed that deep models are better tools for engagement assessment when label information is scarce. To further verify the power of deep learning techniques with scarce labeled data, we applied the deep learning algorithm to another small data set, the ADNI data set, a public data set containing MRI and PET scans of Alzheimer's disease (AD) patients for AD diagnosis. We developed a robust deep learning system incorporating dropout and stability selection techniques to identify the different progression stages of AD patients. The experimental results showed that deep learning is very effective in AD diagnosis. In addition, we studied several imbalanced learning techniques that are useful when data are highly unbalanced, i.e., when majority classes have many more training samples than minority classes. Conventional machine learning techniques tend to classify all data samples into the majority classes and to perform poorly on minority classes. Imbalanced learning techniques can balance data sets before training and can improve learning performance.
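    One simple way to realize the dynamic classifier selection idea is sketched below: keep one classifier per training subject together with a summary of that subject's feature distribution, and route each new sample to the classifier whose training centroid is nearest. The centroid-distance rule and the dictionary interface are assumptions for illustration; the dissertation's selection criterion may differ.

        import numpy as np

        def dcs_predict(x, subject_models):
            """Dynamic classifier selection: use the per-subject classifier whose
            training-feature centroid is closest to the new sample x.
            Each entry of subject_models is assumed to look like
            {"centroid": np.ndarray, "clf": fitted sklearn-style classifier}."""
            nearest = min(subject_models,
                          key=lambda m: np.linalg.norm(x - m["centroid"]))
            return nearest["clf"].predict(x.reshape(1, -1))[0]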

    A new anisotropic diffusion method, application to partial volume effect reduction

    The partial volume effect is a significant limitation in medical imaging that results in blurring when the boundary between two structures of interest falls in the middle of a voxel. A new anisotropic diffusion method allows one to create interpolated 3D images corrected for partial volume without enhancing noise. After a zero-order interpolation, we apply a modified version of the anisotropic diffusion approach wherein the diffusion coefficient becomes negative for high gradient values. As a result, the new scheme restores edges between regions that have been blurred by partial voluming, but it acts as normal anisotropic diffusion in flat regions, where it reduces noise. We add constraints to stabilize the method and to model partial volume: the sum of neighboring voxels must equal the signal in the original low-resolution voxel, and the signal in a voxel is kept within its neighbors' limits. The method performed well on a variety of synthetic images and MRI scans. No noticeable artifact was induced by interpolation with partial volume correction, and noise was much reduced in homogeneous regions. We validated the method using the BrainWeb project database: the partial volume effect was simulated, and restored brain volumes were compared to the original ones. Errors due to the partial volume effect were reduced by 28% and 35% for the 5% and 0% noise cases, respectively. The method was applied to in vivo "thick" MRI carotid artery images for atherosclerosis detection, where it gave a marked improvement in the delineation of the lumen of the carotid artery.
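    The sign-flipping diffusion coefficient can be sketched in a few lines: use an ordinary Perona-Malik conduction term in flat regions, but switch to a negative coefficient where the local gradient is large, so that blurred boundaries are sharpened instead of smoothed. The thresholds, the constant negative coefficient, and the omission of the block-sum and range constraints are simplifications for illustration, not the published scheme.

        import numpy as np

        def modified_aniso_diffusion(img, n_iter=20, kappa=10.0, edge_grad=25.0, dt=0.1):
            """2-D sketch of diffusion with a negative coefficient at strong edges."""
            u = np.asarray(img, dtype=float).copy()
            for _ in range(n_iter):
                flux = np.zeros_like(u)
                # Differences toward the four in-plane neighbours
                for axis, shift in ((0, -1), (0, 1), (1, -1), (1, 1)):
                    d = np.roll(u, shift, axis=axis) - u
                    c = np.exp(-(d / kappa) ** 2)                  # smoothing in flat regions
                    c = np.where(np.abs(d) > edge_grad, -0.2, c)   # sharpening at strong edges
                    flux += c * d
                u += dt * flux
            return u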