
    Inferring Trajectories of Psychotic Disorders Using Dynamic Causal Modeling

    INTRODUCTION: Illness course plays a crucial role in delineating psychiatric disorders. However, existing nosologies consider only its most basic features (e.g., symptom sequence, duration). We developed a Dynamic Causal Model (DCM) that characterizes course patterns more fully using dense timeseries data. This foundational study introduces the new modeling approach and evaluates its validity using empirical and simulated data. METHODS: A three-level DCM was constructed to model how latent dynamics produce symptoms of depression, mania, and psychosis. This model was fit to symptom scores of nine patients collected prospectively over four years following first hospitalization. Simulated subjects based on these empirical data were used to evaluate model parameters at the subject level. At the group level, we tested the accuracy with which the DCM can estimate the latent course patterns using Parametric Empirical Bayes (PEB) and leave-one-out cross-validation. RESULTS: Analyses of empirical data showed that the DCM accurately captured symptom trajectories for all nine subjects. Simulation results showed that parameters could be estimated accurately (correlations between generative and estimated parameters >= 0.76). Moreover, the model could distinguish different latent course patterns, with PEB correctly assigning simulated patients for eight of nine course patterns. When testing pairs of specific course patterns using leave-one-out cross-validation, 30 of 36 pairs showed a moderate or high out-of-sample correlation between the true and estimated group-membership values. CONCLUSION: DCM has been widely used in neuroscience to infer latent neuronal processes from neuroimaging data. Our findings highlight the potential of adopting this methodology for modeling symptom trajectories to explicate nosologic entities and the temporal patterns that define them, and to facilitate personalized treatment.
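
    As a rough illustration of the underlying idea (and not the paper's DCM/PEB machinery), the sketch below lets a small latent linear system generate depression, mania, and psychosis scores and then recovers its parameters by fitting the simulated trajectory to the observed one. The two-state dynamics, the loading matrix, and the least-squares fit are all illustrative assumptions.

    ```python
    # Minimal sketch, not the paper's DCM/PEB implementation: a two-state
    # linear system stands in for the latent course dynamics, a fixed loading
    # matrix maps it to depression, mania, and psychosis scores, and the
    # dynamics parameters are recovered by least squares. All names and
    # dimensions here are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    T = 200                                # length of the dense symptom timeseries

    def simulate(a, b, noise=0.0):
        """Simulate symptom scores driven by two latent states."""
        A = np.array([[a, -b], [b, a]])    # latent dynamics: decaying oscillation
        C = np.array([[1.0, 0.0],          # loading onto depression
                      [0.0, 1.0],          # loading onto mania
                      [0.5, 0.5]])         # loading onto psychosis
        x = np.array([1.0, 0.0])
        Y = np.zeros((T, 3))
        for t in range(T):
            x = A @ x
            Y[t] = C @ x + noise * rng.standard_normal(3)
        return Y

    y_obs = simulate(a=0.97, b=0.10, noise=0.05)   # stand-in "observed" trajectory

    def loss(theta):
        """Mean squared mismatch between simulated and observed scores."""
        return np.mean((simulate(*theta) - y_obs) ** 2)

    fit = minimize(loss, x0=[0.90, 0.05], method="Nelder-Mead")
    print("true (a, b) = (0.97, 0.10), estimated:", np.round(fit.x, 3))
    ```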

    Calibration and Uncertainty Quantification of Convective Parameters in an Idealized GCM

    Parameters in climate models are usually calibrated manually, exploiting only small subsets of the available data. This precludes both optimal calibration and quantification of uncertainties. Traditional Bayesian calibration methods that allow uncertainty quantification are too expensive for climate models; they are also not robust in the presence of internal climate variability. For example, Markov chain Monte Carlo (MCMC) methods typically require O(10^5) model runs and are sensitive to internal variability noise, rendering them infeasible for climate models. Here we demonstrate an approach to model calibration and uncertainty quantification that requires only O(10^2) model runs and can accommodate internal climate variability. The approach consists of three stages: (i) a calibration stage uses variants of ensemble Kalman inversion to calibrate a model by minimizing mismatches between model and data statistics; (ii) an emulation stage emulates the parameter-to-data map with Gaussian processes (GP), using the model runs in the calibration stage for training; (iii) a sampling stage approximates the Bayesian posterior distributions by sampling the GP emulator with MCMC. We demonstrate the feasibility and computational efficiency of this calibrate-emulate-sample (CES) approach in a perfect-model setting. Using an idealized general circulation model, we estimate parameters in a simple convection scheme from synthetic data generated with the model. The CES approach generates probability distributions of the parameters that are good approximations of the Bayesian posteriors, at a fraction of the computational cost usually required to obtain them. Sampling from this approximate posterior allows the generation of climate predictions with quantified parametric uncertainties.
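
    For intuition, the three CES stages can be run end to end on a toy problem: ensemble Kalman inversion provides a small budget of forward-model runs, a Gaussian process trained on those runs emulates the parameter-to-data map, and MCMC samples the cheap emulator. The forward map, prior ensemble, noise level, and step sizes below are illustrative assumptions, not the idealized-GCM convection experiment of the abstract.

    ```python
    # Minimal calibrate-emulate-sample (CES) sketch on a toy forward model.
    # The forward map, prior, ensemble size, and MCMC settings are assumptions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)

    def forward(theta):
        """Toy parameter-to-data map standing in for the model statistics."""
        a, b = theta
        return np.array([a + b, a * b, np.sin(a) + b ** 2])

    theta_true = np.array([1.2, 0.7])
    noise_cov = 0.02 ** 2 * np.eye(3)
    y_obs = forward(theta_true) + rng.multivariate_normal(np.zeros(3), noise_cov)

    # --- (i) Calibrate: basic ensemble Kalman inversion (EKI) iterations ---
    J = 40                                    # ensemble size; 5 x 40 = O(10^2) runs
    ensemble = rng.normal([1.0, 1.0], 0.5, size=(J, 2))   # prior ensemble
    runs_theta, runs_g = [], []               # keep all runs for emulator training
    for _ in range(5):
        G = np.array([forward(th) for th in ensemble])
        runs_theta.append(ensemble.copy()); runs_g.append(G)
        dth = ensemble - ensemble.mean(0)
        dG = G - G.mean(0)
        C_thg = dth.T @ dG / J                # parameter-data cross-covariance
        C_gg = dG.T @ dG / J + noise_cov      # data covariance plus noise
        K = C_thg @ np.linalg.inv(C_gg)       # Kalman-style gain
        perturbed = y_obs + rng.multivariate_normal(np.zeros(3), noise_cov, size=J)
        ensemble = ensemble + (perturbed - G) @ K.T

    # --- (ii) Emulate: GP regression of the parameter-to-data map ---
    X = np.vstack(runs_theta)
    Y = np.vstack(runs_g)
    gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-4), normalize_y=True)
    gp.fit(X, Y)

    # --- (iii) Sample: random-walk Metropolis on the cheap emulator ---
    def log_post(theta):
        resid = gp.predict(theta.reshape(1, -1))[0] - y_obs
        return -0.5 * resid @ np.linalg.solve(noise_cov, resid)   # flat prior

    theta = ensemble.mean(0)
    lp = log_post(theta)
    samples = []
    for _ in range(5000):
        prop = theta + 0.02 * rng.standard_normal(2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    print("posterior mean ~", np.round(np.mean(samples, axis=0), 3), "vs true", theta_true)
    ```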

    Pressure and saturation estimation from PRM time-lapse seismic data for a compacting reservoir

    Observed 4D effects are influenced by a combination of changes in both pressure and saturation in the reservoir. Decomposition of pressure and saturation changes is crucial to explain the different physical variables that have contributed to the 4D seismic responses. This thesis addresses the challenges of pressure and saturation decomposition from such time-lapse seismic data in a compacting chalk reservoir. The technique employed integrates reservoir engineering concepts and geophysical knowledge. The innovation in this methodology is the ability to capture the complicated water-weakening behaviour of the chalk as a non-linear proxy model controlled by only three constants. Thus, changes in pressure and saturation are estimated via a Bayesian inversion employing compaction curves derived from the laboratory, constraints from the simulation model predictions, time-strain information, and the observed fractional changes. The approach is tested on both synthetic and field data from the Ekofisk field in the North Sea. The results are in good agreement with well production data and help explain strong localized anomalies in both the Ekofisk and Tor formations. These results also suggest updates to the reservoir simulation model. The second part of the thesis focuses on the geomechanics of the overburden and the opportunity to use time-lapse time-shifts to estimate pore pressure changes in the reservoir. To achieve this, a semi-analytical approach by Geertsma is used, which numerically integrates the displacements from a nucleus of strain. This model relates the overburden time-lapse time-shifts to reservoir pressure change. The existing method by Hodgson (2009) is modified to estimate reservoir pressure change and also the average dilation factor, or R-factor, for both the reservoir and overburden. The R-factors can be quantified when prior constraints are available from a well history-matched simulation model, and their uncertainty can be defined. The results indicate that the magnitude of R is a function of strain-change polarity, and that this asymmetry is required to match the observed time-shifts. The recovered average R-factor is 16, using the permanent reservoir monitoring (PRM) data. The streamer data yield average R-factors in the range of 7.2 to 18.4. Despite the limiting assumption of a homogeneous medium, the method is beneficial: it treats arbitrary subsurface geometries and, in contrast to complex numerical approaches, is simple to parameterise and computationally fast. Finally, the aims and objectives of this research have been met predominantly through the use of PRM data. These applications could not have been achieved without such highly repeatable, short-repeat-period acquisitions, which points to the value of using these data in reservoir characterisation, inversion, and history matching.
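
    The overburden analysis hinges on relating vertical strain to time-lapse time-shifts through a dilation factor. A commonly used form of that relation is the Hatchell-Bourne expression dt/t = (1 + R) * e_zz, and the short sketch below evaluates it for an overburden layer that stretches and a reservoir layer that compacts, echoing the strain-polarity asymmetry noted above. Apart from the overburden R of 16 quoted in the abstract, the thicknesses, velocities, strains, and reservoir R are illustrative assumptions; this is not the thesis's Geertsma-based inversion.

    ```python
    # Hedged sketch of the time-shift / strain relation (Hatchell-Bourne style
    # dt/t = (1 + R) * e_zz). Layer thicknesses, velocities, strains, and the
    # reservoir R value are illustrative assumptions; only the overburden R of 16
    # is taken from the abstract.

    def timeshift_ms(strain_zz, thickness_m, v0_ms, R):
        """Two-way time-lapse time-shift (ms) for one layer.

        strain_zz > 0 means vertical stretching (velocity slowdown),
        strain_zz < 0 means compaction (velocity speedup)."""
        t0_ms = 2.0 * thickness_m / v0_ms * 1e3   # baseline two-way time in ms
        return (1.0 + R) * strain_zz * t0_ms      # dt = (1 + R) * e_zz * t0

    # Overburden stretches above a compacting reservoir, while the reservoir
    # itself compacts; a different R is used for each strain polarity.
    dt_ob = timeshift_ms(strain_zz=+2e-4, thickness_m=2500.0, v0_ms=2200.0, R=16.0)
    dt_res = timeshift_ms(strain_zz=-1e-3, thickness_m=150.0, v0_ms=3000.0, R=5.0)
    print(f"overburden time-shift ~ +{dt_ob:.2f} ms, reservoir time-shift ~ {dt_res:.2f} ms")
    ```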