66 research outputs found
Rapid quantitative pharmacodynamic imaging with Bayesian estimation
We recently described rapid quantitative pharmacodynamic imaging, a novel method for estimating the sensitivity of a biological system to a drug. We tested its accuracy in simulated biological signals with varying receptor sensitivity and varying levels of random noise, and presented initial proof-of-concept data from functional MRI (fMRI) studies in primate brain. However, the initial simulation testing used a simple iterative approach to estimate pharmacokinetic-pharmacodynamic (PKPD) parameters, an approach that was computationally efficient but returned parameters only from a small, discrete set of values chosen a priori. Here we revisit the simulation testing using a Bayesian method to estimate the PKPD parameters. This improved accuracy compared with our previous method, and noise without intentional signal was never interpreted as signal. We also reanalyze the fMRI proof-of-concept data. The success with the simulated data, and with the limited fMRI data, is a necessary first step toward further testing of rapid quantitative pharmacodynamic imaging.
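To illustrate the contrast between picking parameters from a small discrete a-priori set and estimating a Bayesian posterior over a fine grid, here is a minimal sketch; the sigmoid Emax model, noise level, and parameter values are illustrative assumptions, not the paper's actual PKPD model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sigmoid Emax pharmacodynamic model (illustrative, not the paper's model)
def emax_response(conc, ec50, emax=1.0):
    return emax * conc / (conc + ec50)

conc = np.linspace(0.1, 10, 50)           # simulated drug concentrations
true_ec50 = 2.0
data = emax_response(conc, true_ec50) + rng.normal(0, 0.05, conc.size)

# Posterior over EC50 on a fine grid (Gaussian noise likelihood, flat prior),
# in contrast to selecting from a handful of discrete candidate values
ec50_grid = np.linspace(0.5, 5.0, 1000)
resid = data[None, :] - emax_response(conc[None, :], ec50_grid[:, None])
log_like = -0.5 * np.sum(resid**2, axis=1) / 0.05**2
post = np.exp(log_like - log_like.max())
post /= post.sum()
ec50_est = np.sum(ec50_grid * post)       # posterior mean estimate
```

The posterior also quantifies uncertainty, which is what lets pure noise be recognized as carrying no signal rather than being forced onto the nearest candidate value.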
Modeling the R2* relaxivity of blood at 1.5 Tesla
BOLD (Blood Oxygenation Level Dependent) imaging is used in fMRI to show differences in activation of the brain based on relative changes in the T2* (= 1/R2*) signal of the blood. However, quantification of blood oxygenation level from the T2* signal has been hindered by the lack of a predictive model that accurately relates the T2* signal to the oxygenation level of blood. The T2* signal decay in BOLD imaging arises because blood contains paramagnetic deoxyhemoglobin (in contrast to diamagnetic oxyhemoglobin). This generates local field inhomogeneities, which cause protons to experience different phase shifts, leading to dephasing and MR signal decay. The blood T2* signal has been shown to decay with a complex, non-Lorentzian behavior, and thus is not adequately described by the traditional model of simple mono-exponential decay. Theoretical calculations show that diffusion narrowing substantially affects signal loss in our data. Over the past decade, several theoretical models have been proposed to describe this non-Lorentzian behavior in the blood T2* signal in BOLD fMRI. The goal of this project was to investigate the different models proposed over the years and determine a semi-phenomenological model for the T2* behavior using actual MR blood data.
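As a baseline for comparison, the traditional mono-exponential model can be fit by log-linear least squares; this sketch uses assumed illustrative values and, by construction, omits exactly the non-Lorentzian behavior the project set out to model:

```python
import numpy as np

# Mono-exponential decay S(t) = S0 * exp(-R2star * t): the traditional model
# that real blood data deviate from. Values below are illustrative assumptions.
r2star_true = 30.0                        # 1/s, a plausible blood R2* at 1.5 T
t = np.linspace(0.002, 0.04, 20)          # echo times in seconds
s0 = 100.0
signal = s0 * np.exp(-r2star_true * t)

# For a mono-exponential, log-linear least squares recovers R2* as the slope
slope, intercept = np.polyfit(t, np.log(signal), 1)
r2star_fit = -slope
```

Systematic residuals from such a fit on measured blood data are one way the non-Lorentzian character shows itself.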
Quantum System Identification by Bayesian Analysis of Noisy Data: Beyond Hamiltonian Tomography
We consider how to characterize the dynamics of a quantum system from a
restricted set of initial states and measurements using Bayesian analysis.
Previous work has shown that Hamiltonian systems can be well estimated from
analysis of noisy data. Here we show how to generalize this approach to systems
with moderate dephasing in the eigenbasis of the Hamiltonian. We illustrate the
process for a range of three-level quantum systems. The results suggest that
the Bayesian estimation of the frequencies and dephasing rates is generally
highly accurate, and that the main source of error is error in the reconstructed
Hamiltonian basis.
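A minimal numpy sketch of the assumed data model (not the authors' estimation code): coherences oscillate at eigenfrequency differences and are damped by dephasing in the Hamiltonian eigenbasis; the energies and dephasing rate below are illustrative:

```python
import numpy as np

# Illustrative three-level system: eigenvalues and a uniform dephasing rate
energies = np.array([0.0, 1.0, 2.6])
gamma = 0.05
t = np.linspace(0, 50, 500)

# Initial state: equal superposition of the three eigenstates
c0 = np.ones(3) / np.sqrt(3)
rho0 = np.outer(c0, c0.conj())

# Survival probability Tr[rho0 rho(t)]: off-diagonal terms oscillate at
# eigenfrequency differences and decay exponentially by dephasing -- the kind
# of trace the Bayesian analysis fits for frequencies and dephasing rates
signal = np.zeros(t.size)
for j in range(3):
    for k in range(3):
        w = energies[j] - energies[k]
        damp = np.exp(-gamma * t) if j != k else 1.0
        signal += np.real(rho0[j, k] * rho0[k, j] * np.exp(-1j * w * t) * damp)
```

At long times the coherences die out and the signal settles near the incoherent value, so the frequencies and dephasing rates are encoded in the early, oscillatory part of the trace.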
Renal DCE-MRI model selection using Bayesian probability theory
The goal of this work was to demonstrate the utility of Bayesian probability theory-based model selection for choosing the optimal mathematical model from among 4 competing models of renal dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data. DCE-MRI data were collected on 21 mice with high (n = 7), low (n = 7), or normal (n = 7) renal blood flow (RBF). Model parameters and posterior probabilities of 4 renal DCE-MRI models were estimated using Bayesian-based methods. Models investigated included (1) an empirical model that contained a monoexponential decay (washout) term and a constant offset, (2) an empirical model with a biexponential decay term (empirical/biexponential model), (3) the Patlak–Rutland model, and (4) the 2-compartment kidney model. Joint Bayesian model selection/parameter estimation demonstrated that the empirical/biexponential model was strongly favored for all 3 cohorts, the modeled DCE signals that characterized each of the 3 cohorts were distinctly different, and individual empirical/biexponential model parameter values clearly distinguished cohorts of low and high RBF from one another. The Bayesian methods can be readily extended to a variety of model analyses, making them versatile and valuable tools for model selection and parameter estimation.
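As a toy analogue of the model comparison (not the paper's joint Bayesian selection, and with the decay rates fixed so both fits are linear), even the Bayesian Information Criterion already prefers the biexponential model when the data are biexponential:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 100)

# Simulated biexponential washout (illustrative rates and amplitudes)
data = 2.0 * np.exp(-0.3 * t) + 1.0 * np.exp(-2.0 * t) + rng.normal(0, 0.02, t.size)

def bic(design, y):
    """BIC for a linear model: a simple stand-in for the Bayesian evidence."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ coef) ** 2)
    n, k = design.shape
    return n * np.log(rss / n) + k * np.log(n)

# Model 1: monoexponential + constant offset; Model 2: biexponential
# (rates fixed here so only amplitudes are fit; the full analysis fits all)
X1 = np.column_stack([np.exp(-0.3 * t), np.ones_like(t)])
X2 = np.column_stack([np.exp(-0.3 * t), np.exp(-2.0 * t)])
bic_mono, bic_bi = bic(X1, data), bic(X2, data)  # lower BIC is favored
```

The full joint Bayesian treatment integrates over all parameters rather than penalizing a point fit, but the Ockham trade-off it encodes is the same.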
Dynamic susceptibility contrast MRI with localized arterial input functions
Compared to gold-standard measurements of cerebral perfusion with positron emission tomography (PET) using H2[15O] tracers, measurements with dynamic susceptibility contrast (DSC) MR are more accessible, less expensive and less invasive. However, existing methods for analyzing and interpreting data from DSC MR have characteristic disadvantages that include sensitivity to incorrectly modeled delay and dispersion in a single, global arterial input function (AIF). We describe a model of tissue microcirculation derived from tracer kinetics which estimates for each voxel a unique, localized AIF (LAIF). Parameters of the model were estimated using Bayesian probability theory and Markov-chain Monte Carlo, circumventing difficulties arising from numerical deconvolution. Applying the new method to imaging studies from a cohort of fourteen patients with chronic, atherosclerotic, occlusive disease showed strong correlations between perfusion measured by DSC MR with LAIF and perfusion measured by quantitative PET with H2[15O]. Regression to PET measurements enabled conversion of DSC MR to a physiological scale. Regression analysis for LAIF gave estimates of a scaling factor for quantitation which described perfusion accurately in patients with substantial variability in hemodynamic impairment
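A sketch of the tracer-kinetic forward model underlying such analyses, using a gamma-variate bolus and an exponential residue function as common illustrative assumptions rather than the paper's exact LAIF parameterization:

```python
import numpy as np

# Each voxel's tissue curve is modeled as the convolution of a (possibly
# local) arterial input function with a flow-scaled residue function.
dt = 0.5                                  # s, sampling interval
t = np.arange(0, 60, dt)

def gamma_variate(t, t0=5.0, alpha=3.0, beta=1.5):
    """Gamma-variate bolus shape, zero before the arrival time t0."""
    s = np.clip(t - t0, 0, None)
    return s**alpha * np.exp(-s / beta)

aif = gamma_variate(t)
aif /= aif.sum() * dt                     # normalize to unit area

cbf, mtt = 0.01, 4.0                      # illustrative flow and mean transit time
residue = np.exp(-t / mtt)                # exponential residue function
tissue = cbf * np.convolve(aif, residue)[: t.size] * dt
```

Estimating the AIF parameters per voxel with Markov-chain Monte Carlo, as the paper does, sidesteps the numerically ill-posed deconvolution of the tissue curve by a single global AIF.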
Accuracy and reliability of diffusion imaging models
Diffusion imaging aims to non-invasively characterize the anatomy and integrity of the brain's white matter fibers. We evaluated the accuracy and reliability of commonly used diffusion imaging methods as a function of data quantity and analysis method, using both simulations and highly sampled individual-specific data (927-1442 diffusion weighted images [DWIs] per individual). Diffusion imaging methods that allow for crossing fibers (FSL's BedpostX [BPX], DSI Studio's Constant Solid Angle Q-Ball Imaging [CSA-QBI], MRtrix3's Constrained Spherical Deconvolution [CSD]) estimated excess fibers when insufficient data were present and/or when the data did not match the model priors. To reduce such overfitting, we developed a novel Bayesian Multi-tensor Model-selection (BaMM) method and applied it to the popular ball-and-stick model used in BedpostX within the FSL software package. BaMM was robust to overfitting and showed high reliability and the best crossing-fiber accuracy with increasing amounts of diffusion data. Thus, sufficient data and an overfitting-resistant analysis method enhance precision diffusion imaging. For potential clinical applications of diffusion imaging, such as neurosurgical planning and deep brain stimulation (DBS), the quantities of data required to achieve diffusion imaging reliability are lower than those needed for functional MRI.
Bayesian Analysis. II. Signal Detection and Model Selection
In the preceding paper, Bayesian analysis was applied to the parameter estimation problem, given quadrature NMR data. Here Bayesian analysis is extended to the problem of selecting the model that is most probable in view of the data and all the prior information. In addition to the analytic calculation, two examples are given. The first example demonstrates how to use Bayesian probability theory to detect small signals in noise. The second example uses Bayesian probability theory to compute the probability of the number of decaying exponentials in simulated T1 data. The Bayesian answer to this question is essentially a microcosm of the scientific method and a quantitative statement of Ockham's razor: theorize about possible models, compare them to experiment, and select the simplest model that "best" fits the data. Introduction: The parameter estimation problem discussed in the preceding paper [1] tells how to estimate the parameters given a model; but it does not tell how to select the model.
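For the small-signal-detection example, the evidence ratio between "signal present" and "noise only" has a closed form when the amplitude prior is Gaussian; this sketch assumes a known signal shape and illustrative noise levels, not the paper's NMR models:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, delta = 200, 1.0, 1.0           # noise std and amplitude-prior std

t = np.linspace(0, 1, n)
s = np.sin(2 * np.pi * 7 * t)             # assumed known signal shape

def log_odds(y):
    """log evidence('signal present') - log evidence('noise only') for
    y = A*s + noise with prior A ~ N(0, delta^2): the Sherman-Morrison
    identity reduces the marginal likelihood ratio to a closed form."""
    snorm = s @ s
    c = 1.0 + (delta / sigma) ** 2 * snorm
    return 0.5 * (delta**2 / sigma**4) * (s @ y) ** 2 / c - 0.5 * np.log(c)

y_noise = rng.normal(0, sigma, n)
y_signal = 0.5 * s + rng.normal(0, sigma, n)
odds_noise, odds_signal = log_odds(y_noise), log_odds(y_signal)
```

The -0.5*log(c) term is the Ockham penalty: the more flexible "signal present" model loses evidence unless the data actually project onto the signal shape.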
Bayesian Analysis. I. Parameter Estimation Using Quadrature NMR Models
In the analysis of magnetic resonance data, a great deal of prior information is available which is ordinarily not used. For example, in high-resolution NMR spectroscopy one knows in general terms what functional form the signal will take (e.g., a sum of exponentially decaying sinusoids) and that, for quadrature measurements, it will be the same in both channels except for a 90° phase shift. When prior information is incorporated into the analysis of time-domain data, the frequencies, decay rate constants, and amplitudes may be estimated much more precisely than by direct use of discrete Fourier transforms. Here, Bayesian probability theory is used to estimate parameters using quadrature models of NMR data. The calculation results in an interpretation of the quadrature model fitting that allows one to understand on an intuitive level what frequencies and decay rates will be estimated and why. Introduction: Probability theory, when interpreted as logic, is a quantitative theory of inference.
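For a single quadrature sinusoid with uniform priors, the Bayesian frequency posterior is a monotonic function of the periodogram, so the periodogram peak gives the point estimate; the signal parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Quadrature (complex) FID: one exponentially decaying sinusoid, identical in
# both channels up to the 90-degree phase shift, i.e. a complex exponential.
n, dt = 512, 1e-3
t = np.arange(n) * dt
f_true, decay = 55.0, 20.0                # Hz and 1/s, illustrative values
fid = np.exp(2j * np.pi * f_true * t - decay * t)
fid += rng.normal(0, 0.1, n) + 1j * rng.normal(0, 0.1, n)

# Locate the frequency at the periodogram maximum
freqs = np.fft.fftfreq(n, dt)
periodogram = np.abs(np.fft.fft(fid)) ** 2 / n
f_est = freqs[np.argmax(periodogram)]
```

Incorporating the known decay model sharpens the estimate further: the posterior width then scales with the signal-to-noise ratio rather than the discrete Fourier transform's fixed 1/(n*dt) bin spacing.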