
    Methods for cleaning the BOLD fMRI signal

    Available online 9 December 2016. http://www.sciencedirect.com/science/article/pii/S1053811916307418?via%3Dihub
    Blood oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI) has rapidly become a popular technique for the investigation of brain function in healthy individuals and patients, as well as in animal studies. However, the BOLD signal arises from a complex mixture of neuronal, metabolic and vascular processes; it is therefore an indirect measure of neuronal activity, which is further severely corrupted by multiple non-neuronal fluctuations of instrumental, physiological or subject-specific origin. This review aims to provide a comprehensive summary of existing methods for cleaning the BOLD fMRI signal. The description is given from a methodological point of view, focusing on the operation of the different techniques in addition to pointing out the advantages and limitations of their application. Since motion-related and physiological noise fluctuations are two of the main noise components of the signal, techniques targeting their removal are primarily addressed, including both data-driven approaches and approaches based on external recordings. Data-driven approaches, which are less specific in the assumed model and can simultaneously reduce multiple noise fluctuations, are mainly based on data decomposition techniques such as principal and independent component analysis. Importantly, the usefulness of strategies that benefit from the information available in the phase component of the signal, or in multiple signal echoes, is also highlighted. The use of global signal regression for denoising is also addressed. Finally, practical recommendations regarding the optimization of the preprocessing pipeline for the purpose of denoising and future avenues of research are indicated.
Through the review, we summarize the importance of signal denoising as an essential step in the analysis pipeline of task-based and resting-state fMRI studies. This work was supported by the Spanish Ministry of Economy and Competitiveness [Grant PSI 2013-42343 Neuroimagen Multimodal] and the Severo Ochoa Programme for Centres/Units of Excellence in R & D [SEV-2015-490]; the research and writing of the paper were supported by the NIMH and NINDS Intramural Research Programs (ZICMH002888) of the NIH/HHS.
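As a concrete illustration of the data-driven decomposition techniques the review covers, the sketch below implements an aCompCor-style cleanup: principal components extracted from a noise region (e.g., white matter or CSF) are regressed out of the time series of interest. The synthetic data, component count, and region choices are illustrative assumptions; this is a minimal sketch of the idea, not a validated pipeline.

```python
import numpy as np

def compcor_denoise(data_gm, data_noise_roi, n_comp=2):
    """Regress the top principal components of noise-ROI time series
    out of gray-matter time series (aCompCor-style sketch).

    data_gm        : (T, V) time series of voxels of interest
    data_noise_roi : (T, W) time series from a 'noise' region (e.g. WM/CSF)
    """
    # Center the noise time series and extract temporal principal components
    X = data_noise_roi - data_noise_roi.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    confounds = U[:, :n_comp]                      # (T, n_comp) noise regressors
    # Add an intercept and project the regressors out via least squares
    C = np.column_stack([np.ones(len(confounds)), confounds])
    beta, *_ = np.linalg.lstsq(C, data_gm, rcond=None)
    return data_gm - C @ beta                      # residuals = denoised signal

# Synthetic demo: a shared slow drift contaminates both compartments
rng = np.random.default_rng(0)
T = 200
drift = np.sin(np.linspace(0, 4 * np.pi, T))
gm = 0.5 * rng.standard_normal((T, 10)) + drift[:, None]
wm = 0.1 * rng.standard_normal((T, 30)) + drift[:, None]
clean = compcor_denoise(gm, wm, n_comp=1)
# The drift's correlation with the cleaned data should collapse
r_before = np.corrcoef(drift, gm.mean(axis=1))[0, 1]
r_after = np.corrcoef(drift, clean.mean(axis=1))[0, 1]
```

In practice such components are estimated per subject after motion correction, and the number retained is itself a tuning choice discussed in the denoising literature.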

    Functional Imaging of Autonomic Regulation: Methods and Key Findings.

    Central nervous system processing of autonomic function involves a network of regions throughout the brain that can be visualized and measured with neuroimaging techniques, notably functional magnetic resonance imaging (fMRI). The development of fMRI procedures has both confirmed and extended earlier findings from animal models, and human stroke and lesion studies. Assessments with fMRI can elucidate interactions between different central sites in regulating normal autonomic patterning, and demonstrate how disturbed systems can interact to produce aberrant regulation during autonomic challenges. Understanding autonomic dysfunction in various illnesses reveals mechanisms that potentially lead to interventions in the impairments. The objectives here are to: (1) describe the fMRI neuroimaging methodology for assessment of autonomic neural control, (2) outline the widespread, lateralized distribution of function in autonomic sites in the normal brain, which includes structures from the neocortex through the medulla and cerebellum, (3) illustrate the importance of the time course of neural changes when coordinating responses, and how those patterns are impacted in conditions of sleep-disordered breathing, and (4) highlight opportunities for future research studies with emerging methodologies. Methodological considerations specific to autonomic testing include the timing of challenges relative to the underlying fMRI signal, spatial resolution sufficient to identify autonomic brainstem nuclei, blood pressure and blood oxygenation influences on the fMRI signal, and the sustained timing of challenge periods and recovery, often measured in minutes. Key findings include the lateralized nature of autonomic organization, which is reminiscent of asymmetric motor, sensory, and language pathways.
Testing brain function during autonomic challenges demonstrates closely integrated timing of responses in connected brain areas, and the involvement of brain regions mediating postural and motor actions, including respiration and cardiac output. The study of pathological processes associated with autonomic disruption shows susceptibilities of different brain structures to altered timing of neural function, notably in sleep-disordered breathing conditions such as obstructive sleep apnea and congenital central hypoventilation syndrome. The cerebellum, in particular, serves coordination roles for vestibular stimuli and blood pressure changes, and shows both injury and substantially altered timing of responses to pressor challenges in sleep-disordered breathing conditions. The insights into central autonomic processing provided by neuroimaging have assisted understanding of such regulation, and may lead to new treatment options for conditions with disrupted autonomic function.

    Characterization, Classification, and Genesis of Seismocardiographic Signals

    Seismocardiographic (SCG) signals are the acoustic and vibration signals induced by cardiac activity, measured non-invasively at the chest surface. These signals may offer a method for diagnosing and monitoring heart function. Successful classification of SCG signals in health and disease depends on accurate signal characterization and feature extraction. In this study, SCG signal features were extracted in the time, frequency, and time-frequency domains. Different methods for estimating time-frequency features of SCG were investigated. Results suggested that the polynomial chirplet transform outperformed wavelet and short-time Fourier transforms. Many factors may contribute to increased intrasubject SCG variability, including subject posture and respiratory phase. In this study, the effect of respiration on SCG signal variability was investigated. Results suggested that SCG waveforms can vary with lung volume, respiratory flow direction, or a combination of these criteria. SCG events were classified into groups belonging to these different respiration phases using classifiers including artificial neural networks, support vector machines, and random forests. Categorizing SCG events into different groups containing similar events allows more accurate estimation of SCG features. SCG feature points were also identified from simultaneous measurements of SCG and other well-known physiologic signals, including electrocardiography, phonocardiography, and echocardiography. Future work may use this information to gain more insight into the genesis of SCG.
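To make the feature-extraction-plus-classification pipeline concrete, the sketch below computes simple time-frequency features (per-frame spectral centroid and bandwidth from a short-time Fourier transform, used as a stand-in since the polynomial chirplet transform is not available in standard libraries) and separates two synthetic "respiration phase" classes with a nearest-centroid rule. The signals, sampling rate, and classifier are illustrative assumptions, not the study's actual data or models.

```python
import numpy as np
from scipy.signal import stft

def tf_features(sig, fs=500):
    """Summarize a cardiac-cycle segment by its spectrogram:
    spectral centroid and bandwidth averaged over time frames."""
    f, t, Z = stft(sig, fs=fs, nperseg=64)
    P = np.abs(Z) ** 2
    P = P / P.sum(axis=0, keepdims=True)           # normalize each frame
    centroid = (f[:, None] * P).sum(axis=0)        # per-frame spectral centroid
    bw = np.sqrt(((f[:, None] - centroid) ** 2 * P).sum(axis=0))
    return np.array([centroid.mean(), bw.mean()])

def nearest_centroid_fit(X, y):
    # One mean feature vector per class
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    classes = list(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Synthetic segments: one class oscillates at a lower frequency than the other
rng = np.random.default_rng(1)
fs, n = 500, 512
def make(freq):
    tgrid = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * tgrid) + 0.2 * rng.standard_normal(n)

X = np.array([tf_features(make(f), fs) for f in [20] * 20 + [60] * 20])
y = np.array([0] * 20 + [1] * 20)
model = nearest_centroid_fit(X[::2], y[::2])        # train on half the segments
acc = (nearest_centroid_predict(model, X[1::2]) == y[1::2]).mean()
```

The study's actual classifiers (neural networks, SVMs, random forests) would replace the nearest-centroid step, operating on richer feature sets.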

    3D single breath-hold MR methodology for measuring cardiac parametric mapping at 3T

    International Mention in the doctoral degree. One of the foremost and challenging subfields of MRI is cardiac magnetic resonance imaging (CMR). CMR is becoming an indispensable tool in cardiovascular medicine by acquiring data about anatomy and function simultaneously. For instance, it allows the non-invasive characterization of myocardial tissues via parametric mapping techniques. These mapping techniques provide a spatial visualization of quantitative changes in the myocardial parameters. Inspired by the need to develop novel high-quality parametric sequences for 3T, this thesis's primary goal is to introduce an accurate and efficient 3D single breath-hold MR methodology for measuring cardiac parametric mapping at 3T. This thesis is divided into two main parts: i) research and development of a new 3D T1 saturation recovery mapping technique (3D SACORA), together with a feasibility study regarding the possibility of adding a T2 mapping feature to 3D SACORA concepts, and ii) research and implementation of a deep learning-based post-processing method to improve the T1 maps obtained with 3D SACORA. In the first part of the thesis, 3D SACORA was developed as a new 3D T1 mapping sequence to speed up T1 mapping acquisition of the whole heart. The proposed sequence was validated in phantoms against the gold standard technique IR-SE and in-vivo against the reference sequence 3D SASHA. The 3D SACORA pulse sequence design was focused on acquiring the entire left ventricle in a single breath-hold while achieving good-quality T1 mapping and stability over a wide range of heart rates (HRs). The precision and accuracy of 3D SACORA were assessed in phantom experiments. Reference T1 values were obtained using IR-SE. In order to further validate 3D SACORA T1 estimation accuracy and precision, T1 values were also estimated using an in-house version of 3D SASHA. For in-vivo validation, seven large healthy pigs were scanned with 3D SACORA and 3D SASHA.
In all pigs, images were acquired before and after administration of an MR contrast agent. The phantom results showed good agreement and no significant bias between methods. In the in-vivo experiments, all T1-weighted images showed good contrast and quality, and the T1 maps correctly represented the information contained in the T1-weighted images. Septal T1s and coefficients of variation did not considerably differ between the two sequences, confirming good accuracy and precision. 3D SACORA images showed good contrast and homogeneity and were comparable to corresponding 3D SASHA images, despite the shorter acquisition time (15 s vs. 188 s, for a heart rate of 60 bpm). In conclusion, the proposed 3D SACORA successfully acquired a whole-heart 3D T1 map in a single breath-hold at 3T, estimating T1 values in agreement with those obtained with the IR-SE and 3D SASHA sequences. Following the successful validation of 3D SACORA, a feasibility study was performed to assess the potential of modifying the acquisition scheme of 3D SACORA in order to obtain T1 and T2 maps simultaneously in a single breath-hold. This 3D T1/T2 sequence was named 3D dual saturation-recovery compressed SENSE rapid acquisition (3D dual-SACORA). A phantom of eight tubes was built to validate the proposed sequence. The phantom was scanned with 3D dual-SACORA with a simulated heart rate of 60 bpm. Reference T1 and T2 values were estimated using IR-SE and GraSE sequences, respectively. An in-vivo study was performed with a healthy volunteer to evaluate the image quality of the parametric maps obtained with the 3D dual-SACORA sequence. T1 and T2 maps of the phantom were successfully obtained with the 3D dual-SACORA sequence. The results show that the proposed sequence achieved good precision and accuracy for most values. A volunteer was successfully scanned with the proposed sequence (acquisition duration of approximately 20 s) in a single breath-hold.
The saturation time images and the parametric maps obtained with the 3D dual-SACORA sequence showed good contrast and homogeneity. The septal T1 and T2 values are in good agreement with reference sequences and published work. In conclusion, this feasibility study's findings open the door to the possibility of using 3D SACORA concepts to develop a successful 3D T1/T2 sequence. In the second part of the thesis, a deep learning-based super-resolution model was implemented to improve the image quality of the T1 maps of 3D SACORA, and a comprehensive study of the performance of the model on different MR image datasets and sequences was performed. After careful consideration, the convolutional neural network selected to improve the image quality of the T1 maps was the Residual Dense Network (RDN). This network has shown outstanding performance against state-of-the-art methods on benchmark datasets; however, it had not been validated on MR datasets. Accordingly, the RDN model was initially validated on cardiac and brain benchmark datasets. After this validation, the model was validated on a self-acquired cardiac dataset and on improving T1 maps. The RDN model improved the images successfully for the two benchmark datasets, achieving better performance with the brain dataset than with the cardiac dataset. This result was expected, as the brain images have more well-defined edges than the cardiac images, making the resolution enhancement more evident. On the self-acquired cardiac dataset, the model also achieved improved image quality assessment metrics and visual assessment, particularly on well-defined edges. Regarding the T1 mapping sequences, the model improved the image quality of the saturation time images and the T1 maps. The model was able to enhance the T1 maps analytically and visually. Analytically, the model did not considerably modify the T1 values while improving the standard deviation in both myocardium and blood.
Visually, the model improved the T1 maps by removing noise and motion artifacts without losing resolution at the edges. In conclusion, the RDN model was validated on three different MR datasets and used to improve the image quality of the T1 maps obtained with 3D SACORA and 3D SASHA. In summary, a 3D single breath-hold MR methodology was introduced, including a ready-to-go 3D single breath-hold T1 mapping sequence for 3T (3D SACORA), together with the ideas for a new 3D T1/T2 mapping sequence (3D dual-SACORA), and a deep learning-based post-processing implementation capable of improving the image quality of 3D SACORA T1 maps. This thesis has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement N722427. Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Chair: Carlos Alberola López. Secretary: María Jesús Ledesma Carbayo. Member: Nathan Mewto
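The core numerical step behind any saturation-recovery T1 mapping sequence like those above is a pixel-wise fit of the recovery curve S(TS) = A·(1 − exp(−TS/T1)). The sketch below performs this fit on a synthetic pixel; the saturation times, noise level, and T1 value are illustrative assumptions, not parameters of 3D SACORA itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def sr_model(ts, a, t1):
    """Ideal saturation-recovery signal: S(TS) = A * (1 - exp(-TS / T1))."""
    return a * (1.0 - np.exp(-ts / t1))

def fit_t1(ts, signal):
    """Pixel-wise two-parameter fit of the saturation-recovery curve."""
    p0 = [signal.max(), 1000.0]                 # initial guess: amplitude, T1 in ms
    popt, _ = curve_fit(sr_model, ts, signal, p0=p0)
    return popt[1]                              # estimated T1 (ms)

# Synthetic myocardium-like pixel: T1 of 1200 ms, a few saturation times (ms)
rng = np.random.default_rng(2)
ts = np.array([120.0, 300.0, 600.0, 1000.0, 1600.0, 2500.0, 4000.0])
true_t1 = 1200.0
sig = sr_model(ts, 1.0, true_t1) + 0.005 * rng.standard_normal(ts.size)
t1_hat = fit_t1(ts, sig)
```

A whole-heart map repeats this fit for every voxel; real sequences must additionally account for imperfect saturation and readout effects, which the two-parameter model above ignores.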

    Multiparametric measurement of cerebral physiology using calibrated fMRI

    The ultimate goal of calibrated fMRI is the quantitative imaging of oxygen metabolism (CMRO2), and this has been the focus of numerous methods and approaches. However, one underappreciated aspect of this quest is that in the drive to measure CMRO2, many other physiological parameters of interest are often acquired along the way. This can significantly increase the value of the dataset, providing greater information that is clinically relevant, or detail that can disambiguate the cause of signal variations. This can also be somewhat of a double-edged sword: calibrated fMRI experiments combine multiple parameters into a physiological model that requires multiple steps, thereby providing more opportunity for error propagation and increasing the noise and error of the final derived values. As with all measurements, there is a trade-off between imaging time, spatial resolution, coverage, and accuracy. In this review, we provide a brief overview of the benefits and pitfalls of extracting multiparametric measurements of cerebral physiology through calibrated fMRI experiments.
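As a worked example of the multi-step model inversion the review refers to, the sketch below uses the widely cited Davis model, ΔS/S0 = M·(1 − f^(α−β)·r^β), where f is the CBF ratio and r the CMRO2 ratio: the calibration constant M is first estimated from an assumed iso-metabolic hypercapnia challenge (r = 1), then the model is inverted for r during a task. All numbers and the α, β values are illustrative assumptions.

```python
# Davis-model calibration sketch (assumed parameter values)
ALPHA, BETA = 0.38, 1.5   # Grubb flow-volume exponent and field-dependent constant

def calibrate_m(dbold_hc, f_hc):
    """Estimate the calibration constant M from an iso-metabolic
    hypercapnia challenge: dS/S0 = M * (1 - f**(ALPHA - BETA))."""
    return dbold_hc / (1.0 - f_hc ** (ALPHA - BETA))

def cmro2_ratio(dbold_task, f_task, m):
    """Invert the Davis model for the task-induced CMRO2 ratio r."""
    return ((1.0 - dbold_task / m) * f_task ** (BETA - ALPHA)) ** (1.0 / BETA)

# Plausible (hypothetical) numbers: ~3% BOLD and 40% CBF rise in hypercapnia,
# then ~1% BOLD and 30% CBF rise during the task of interest
m = calibrate_m(dbold_hc=0.03, f_hc=1.4)
r = cmro2_ratio(dbold_task=0.01, f_task=1.3, m=m)
```

Note how M, f, and the exponents each enter the final r: an error in any one of them propagates into the derived CMRO2 change, which is exactly the error-propagation concern the review raises.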

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase survival rates and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals are required to be collected. In practice, several heart monitoring methods, such as electrocardiogram (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research: The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the power of the noise is greater than that of the signal). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure.
Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the final goal of this step is to develop a robust algorithm that can estimate noise, even when the SNR is negative, using the BSS method, and remove it with an adaptive filter. The second objective is defined for monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) for extracting the fetal ECG (FECG). These methods need to be calibrated to generalize well. In other words, for each new subject, a calibration with a trusted device is required, which makes them difficult and time-consuming to use. The calibration is also susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks, to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that the proposed method generalizes well to unseen subjects. Moreover, it does not need calibration and is not sensitive to the heart rate variability of mother and fetus; it can also handle low signal-to-noise ratio (SNR) conditions. Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes those issues and uses only the PPG signal.
Using only PPG for blood pressure measurement is more convenient, since it requires only a single sensor on the finger, whose acquisition is more resilient to motion-related error. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for fetal ECG (FECG), where there are few publicly available FECG datasets annotated for each FECG beat. Therefore, we utilize active learning and transfer learning concepts to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is trained for detecting ECG anomalies in adults. Later, this model is trained to detect anomalies in FECG. We select only the most influential samples from the training set for training, which leads to training with the least effort. Because of physician shortages and rural geography, pregnant women's ability to get prenatal care might be improved through remote monitoring, especially when access to prenatal care is limited. Increased compliance with prenatal treatment and linked care amongst various providers are two possible benefits of remote monitoring. If recorded signals are transmitted correctly, maternal and fetal remote monitoring can be effective. Therefore, the last objective is to design a compression algorithm that can compress signals (like ECG) with a higher ratio than the state of the art and perform decompression quickly without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, the stochastic optimization is designed to retain signal quality and does not distort the signal for diagnostic purposes while achieving a high compression ratio.
In summary, the components for creating an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a mix of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Then, compression can be employed for transmitting the signals. The trained CycleGAN model can be used for extracting FECG from MECG. The model trained using active transfer learning can then detect anomalies in both MECG and FECG. Simultaneously, maternal BP is retrieved from the PPG signal. This information can be used for monitoring the cardiac status of mother and fetus, and also for filling in reports such as the partogram.
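The B-spline compression objective described above can be sketched with a standard smoothing spline: the knot vector and coefficients form the compressed representation, and evaluating the spline recovers (and can directly visualize) the signal. The synthetic ECG-like trace and smoothing level below are illustrative assumptions; the thesis's actual stochastic optimization of the spline parameters is not reproduced here.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def bspline_compress(t, sig, smooth):
    """Fit a smoothing cubic B-spline; the knot vector and coefficients
    are the compressed representation of the sampled signal."""
    tck = splrep(t, sig, s=smooth, k=3)
    knots, coeffs, k = tck
    stored = knots.size + coeffs.size           # numbers actually kept
    return tck, sig.size / stored               # (spline, compression ratio)

def bspline_decompress(tck, t):
    """Evaluate the stored spline back on the original sample grid."""
    return splev(t, tck)

# Synthetic ECG-like trace: slow baseline plus sharp periodic bumps
t = np.linspace(0.0, 2.0, 1000)
sig = 0.2 * np.sin(2 * np.pi * t) + np.exp(-((t % 1.0) - 0.5) ** 2 / 0.002)
tck, ratio = bspline_compress(t, sig, smooth=1e-3)
rec = bspline_decompress(tck, t)
rmse = float(np.sqrt(np.mean((rec - sig) ** 2)))
```

The smoothing parameter trades reconstruction error against the number of knots, which is the same quality-versus-ratio trade-off the abstract's stochastic optimization is designed to manage.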

    Fully Automated Myocardial Strain Estimation from Cardiovascular MRI–tagged Images Using a Deep Learning Framework in the UK Biobank

    Purpose: To demonstrate the feasibility and performance of a fully automated deep learning framework to estimate myocardial strain from short-axis cardiac magnetic resonance tagged images. Methods and Materials: In this retrospective cross-sectional study, 4508 cases from the UK Biobank were split randomly into 3244 training and 812 validation cases, and 452 test cases. Ground truth myocardial landmarks were defined and tracked by manual initialization and correction of deformable image registration using previously validated software with five readers. The fully automatic framework consisted of 1) a convolutional neural network (CNN) for localization, and 2) a combination of a recurrent neural network (RNN) and a CNN to detect and track the myocardial landmarks through the image sequence for each slice. Radial and circumferential strain were then calculated from the motion of the landmarks and averaged on a slice basis. Results: Within the test set, myocardial end-systolic circumferential Green strain errors were −0.001 ± 0.025, −0.001 ± 0.021, and 0.004 ± 0.035 in basal, mid, and apical slices, respectively (mean ± standard deviation of differences between predicted and manual strain). The framework reproduced significant reductions in circumferential strain in diabetics, hypertensives, and participants with previous heart attack. Typical processing time was ~260 frames (~13 slices) per second on an NVIDIA Tesla K40 with 12 GB RAM, compared with 6-8 minutes per slice for the manual analysis. Conclusions: The fully automated RNN-CNN framework for analysis of myocardial strain enabled unbiased strain evaluation in a high-throughput workflow, with similar ability to distinguish impairment due to diabetes, hypertension, and previous heart attack. Comment: accepted in Radiology: Cardiothoracic Imaging
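For reference, the Green strain reported above follows from tracked landmark positions via E = (L² − L0²) / (2·L0²), applied to segment lengths around the myocardial circumference. The sketch below demonstrates this on a synthetic ring of landmarks undergoing uniform 10% shortening; the landmark layout is an illustrative assumption, not the paper's tracking output.

```python
import numpy as np

def green_strain(p0, p1):
    """Segment-wise Lagrangian Green strain between tracked landmarks.

    p0, p1 : (N, 2) landmark coordinates at the reference (end-diastole)
             and deformed (e.g. end-systole) frames, ordered around the
             myocardial circumference (closed contour).
    """
    # Segment lengths of the closed polygon in each configuration
    l0 = np.linalg.norm(np.roll(p0, -1, axis=0) - p0, axis=1)
    l1 = np.linalg.norm(np.roll(p1, -1, axis=0) - p1, axis=1)
    # Green strain per segment: E = (L^2 - L0^2) / (2 * L0^2)
    return (l1 ** 2 - l0 ** 2) / (2.0 * l0 ** 2)

# Demo: uniform 10% circumferential shortening of a ring of 24 landmarks
theta = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
ring_ed = np.column_stack([np.cos(theta), np.sin(theta)])   # end-diastole
ring_es = 0.9 * ring_ed                                     # end-systole
ecc = green_strain(ring_ed, ring_es).mean()
# Uniform scaling by 0.9 gives E = (0.81 - 1) / 2 = -0.095 exactly
```

Slice-averaged circumferential strain as reported in the paper corresponds to averaging such segment strains over the contour.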

    Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease

    Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that attempt to respond to some of these challenges. The first piece of work aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated on a population of 10 patients with CHD, and areas of improvement were identified. The second piece of work is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing. The third piece of work aims to reduce contrast dose in contrast-enhanced MR angiography (MRA). This would reduce risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD. The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD.

    Optimization techniques in respiratory control system models

    The respiratory control system is one of the most complex physiological systems, and its modeling is still an open problem; different models have been proposed based on the criterion of minimizing the work of breathing (WOB). The aim of this study is twofold: to compare two known models of the respiratory control system, which set the breathing pattern based on quantifying the respiratory work; and to assess the influence of using direct-search or evolutionary optimization algorithms on the adjustment of model parameters. This study was carried out using experimental data from a group of healthy volunteers under incremental CO2 inhalation, which were used to adjust the model parameters and to evaluate how closely the WOB equations follow a real breathing pattern. This breathing pattern was characterized by the following variables: tidal volume, inspiratory and expiratory durations, and total minute ventilation. Different optimization algorithms were considered to determine the most appropriate model from a physiological viewpoint. Algorithms were used for a double optimization: first, to minimize the WOB, and second, to adjust the model parameters. The performance of the optimization algorithms was also evaluated in terms of convergence rate, solution accuracy, and precision. Results showed strong differences in the performance of the optimization algorithms according to the constraints and topological features of the function to be optimized. In breathing pattern optimization, the sequential quadratic programming technique (SQP) showed the best performance and convergence speed when respiratory work was low. In addition, SQP allowed multiple non-linear constraints to be implemented most easily through mathematical expressions. Regarding adjustment of the model parameters to experimental data, the covariance matrix adaptation evolution strategy (CMA-ES) provided the best-quality solutions, with fast convergence and the best accuracy and precision in both models.
CMA-ES reached the best adjustment because of its good performance on noisy and multi-peaked fitness functions. Although one of the studied models has been much more commonly used to simulate the respiratory response to CO2 inhalation, the results showed that an alternative model has a more appropriate cost function for minimizing WOB from a physiological viewpoint according to the experimental data. Postprint (author's final draft).
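To illustrate the kind of constrained minimization SQP performs in this setting, the sketch below minimizes a generic Otis-style work-of-breathing objective (elastic plus flow-resistive terms for sinusoidal flow) over tidal volume and breathing frequency, subject to a fixed alveolar-ventilation constraint, using SciPy's SLSQP implementation. The objective and all constants are illustrative assumptions and not either of the models compared in the study.

```python
import numpy as np
from scipy.optimize import minimize

# Toy work-of-breathing model (hypothetical constants, sinusoidal flow):
E, R, VD = 10.0, 2.0, 0.15       # elastance, resistance, dead-space volume
VA_TARGET = 0.1                   # required alveolar ventilation (L/s)

def work_of_breathing(x):
    vt, f = x                     # tidal volume (L), frequency (breaths/s)
    elastic = 0.5 * E * vt ** 2 * f
    resistive = R * (np.pi ** 2 / 4.0) * vt ** 2 * f ** 2
    return elastic + resistive

# Equality constraint: alveolar ventilation (VT - VD) * f must hit the target
cons = [{"type": "eq", "fun": lambda x: (x[0] - VD) * x[1] - VA_TARGET}]
res = minimize(work_of_breathing, x0=[0.5, 0.25], method="SLSQP",
               bounds=[(0.2, 3.0), (0.05, 1.0)], constraints=cons)
vt_opt, f_opt = res.x
```

This is the first of the two optimization levels the abstract describes (pattern selection by WOB minimization); the second level, fitting model parameters to volunteer data, would wrap a search such as CMA-ES around repeated solutions of this inner problem.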