
    A computational framework for infinite-dimensional Bayesian inverse problems: Part II. Stochastic Newton MCMC with application to ice sheet flow inverse problems

    We address the numerical solution of infinite-dimensional inverse problems in the framework of Bayesian inference. In the Part I companion to this paper (arXiv.org:1308.1313), we considered the linearized infinite-dimensional inverse problem. Here in Part II, we relax the linearization assumption and consider the fully nonlinear infinite-dimensional inverse problem using a Markov chain Monte Carlo (MCMC) sampling method. To address the challenges of sampling high-dimensional pdfs arising from Bayesian inverse problems governed by PDEs, we build on the stochastic Newton MCMC method. This method exploits problem structure by taking as a proposal density a local Gaussian approximation of the posterior pdf, whose construction is made tractable by invoking a low-rank approximation of the data misfit component of the Hessian. Here we introduce an approximation of the stochastic Newton proposal in which we compute the low-rank-based Hessian at just the MAP point, and then reuse this Hessian at each MCMC step. We compare the performance of the proposed method to the original stochastic Newton MCMC method and to an independence sampler. The comparison of the three methods is conducted on a synthetic ice sheet inverse problem. For this problem, the stochastic Newton MCMC method with a MAP-based Hessian converges at least as rapidly as the original stochastic Newton MCMC method, but is far cheaper since it avoids recomputing the Hessian at each step. On the other hand, it is more expensive per sample than the independence sampler; however, its convergence is significantly more rapid, and thus overall it is much cheaper. Finally, we present extensive analysis and interpretation of the posterior distribution, and classify directions in parameter space based on the extent to which they are informed by the prior or the observations.
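    As a minimal sketch of the sampler described above (not the authors' implementation), the following uses a Gaussian proposal whose covariance is the inverse Hessian of the negative log posterior evaluated once at the MAP point, with a Metropolis-Hastings correction for the asymmetric Newton-type proposal. The function names neg_log_post and grad_neg_log_post are hypothetical stand-ins, and dense linear algebra replaces the paper's low-rank, PDE-based machinery.

    import numpy as np

    def mcmc_map_hessian(neg_log_post, grad_neg_log_post, m_map, H_map,
                         n_samples, rng=None):
        """Metropolis-Hastings with a Newton-type Gaussian proposal whose
        covariance is the inverse Hessian evaluated once at the MAP point."""
        rng = np.random.default_rng(0) if rng is None else rng
        H_inv = np.linalg.inv(H_map)       # stands in for the low-rank machinery
        L = np.linalg.cholesky(H_inv)      # for drawing proposal samples

        def newton_mean(m):
            # proposal mean: one Newton step from m using the fixed MAP Hessian
            return m - H_inv @ grad_neg_log_post(m)

        def log_q(x, mean):
            # Gaussian proposal log-density (up to an additive constant)
            d = x - mean
            return -0.5 * d @ (H_map @ d)

        m = np.array(m_map, dtype=float)
        samples = []
        for _ in range(n_samples):
            mean_m = newton_mean(m)
            y = mean_m + L @ rng.standard_normal(m.size)
            mean_y = newton_mean(y)
            # Metropolis-Hastings acceptance ratio for the asymmetric proposal
            log_alpha = (neg_log_post(m) - neg_log_post(y)
                         + log_q(m, mean_y) - log_q(y, mean_m))
            if np.log(rng.uniform()) < log_alpha:
                m = y
            samples.append(m.copy())
        return np.array(samples)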

    Putting Cells into Context

    Opinion article (excerpt): Cells Live in a Complex World. It may sound blatantly obvious, but we have to remind ourselves occasionally that in vivo cells experience an environment with a level of complexity far beyond experimental reach. The developing organism is a highly complex system, where each cell receives a multitude of cues of diverse nature at any given time point. Only the comprehensive integration of all these multivalent interactions determines the actual signaling state and hence the behavior of a cell. The analysis of biological questions is mainly inspired by a reductionist approach adopted from the “exact sciences,” where it has proven immensely successful. That is, we are used to breaking our experimental setup down to a manageable number of variables. This, of course, is inherently at odds with the complexity of biological systems. While simplification may be the only viable option for the experimenter to dissect biological function in detail, it has also shaped our perspective on the experimental systems we apply. For example, studies of intracellular signaling pathways are typically performed with cultured cells. Culturing cells in an in vitro setting has become a standard model system in biomedical research, and with it in cell and developmental biology. These simplified systems allow for the dissection of molecular interactions and pathways and aim to provide a deeper, mechanistic understanding of cellular behavior. While cell cultures have generated a wealth of information about cellular function, the data obtained in vitro frequently conflict with in vivo observations. One reason for this discrepancy is that these analyses treat the cell as a closed functional system, thus conceptually unhinging it from its environment.

    Modelling and validation of hygrothermal conditions in the air gap behind wood cladding and BIPV in the building envelope

    Materials used in the building envelope are exposed to a wide range of varying and harsh conditions over extended periods. Knowledge about these precise conditions allows for improving the design of testing schemes and eventually extending the durability of building materials. In this study, a numerical model in WUFI-Pro Ver. 6.5 is calibrated with field measurements in the ventilated air gap of a Zero Emission Building located in Trondheim, Norway. Measurements were taken from 01.09.2020 until 31.08.2022 and involved recording the surface temperature of the wind barrier (19 locations) and the relative humidity of the air (11 locations) in the middle of the air gap behind wood cladding and building-integrated photovoltaics. Several different air change rates in the air gap are investigated in the model. Using a constant air change rate of 100 h⁻¹ showed the overall best performance (R² = 0.940 for the wind barrier’s surface temperatures and R² = 0.806 for the relative humidity of air in the middle of the air gap). The largest deviations between simulation results and measurements, however, can be attributed to uncertainty in the climate data input. The developed model can be used in future studies to establish better testing schemes and test conditions for building materials such as wind barriers and adhesive tapes, and eventually to improve the long-term airtightness of buildings.
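    The calibration step described above amounts to comparing simulated and measured series for each candidate air change rate (ACH) and keeping the one with the highest coefficient of determination. The sketch below illustrates that comparison only; the file names, the load_series helper, and the candidate ACH values are placeholders, and the simulated series themselves would be exported from WUFI-Pro.

    import numpy as np

    def load_series(path):
        """Load a single-column time series from a CSV export (placeholder paths below)."""
        return np.loadtxt(path, delimiter=",")

    def r_squared(measured, simulated):
        """Coefficient of determination between measured and simulated series."""
        ss_res = np.sum((measured - simulated) ** 2)
        ss_tot = np.sum((measured - np.mean(measured)) ** 2)
        return 1.0 - ss_res / ss_tot

    candidate_ach = [10, 50, 100, 200]                          # air change rates in 1/h (assumed values)
    measured_T = load_series("windbarrier_T_measured.csv")      # hypothetical measurement file

    scores = {}
    for ach in candidate_ach:
        simulated_T = load_series(f"wufi_T_ach{ach}.csv")       # hypothetical WUFI export per ACH
        scores[ach] = r_squared(measured_T, simulated_T)

    best_ach = max(scores, key=scores.get)
    print(f"Best ACH = {best_ach} 1/h with R^2 = {scores[best_ach]:.3f}")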

    Risk Stratification in Post-MI Patients Based on Left Ventricular Ejection Fraction and Heart-Rate Turbulence

    Objectives: Development of risk stratification criteria for predicting mortality in post-infarction patients, taking into account LVEF and heart-rate turbulence (HRT). Methods: Based on previous results, the two parameters LVEF (continuous) and turbulence slope (TS), an indicator of HRT, were combined for risk stratification. The method was applied to two independent data sets (the MPIP trial and the EMIAT study). Results: The criteria were defined so that their sensitivity matches that of applying LVEF ≤ 30 % alone. In the MPIP trial the optimal criteria selected are TS normal and LVEF ≤ 21 %, or TS abnormal and LVEF ≤ 40 %. Within the placebo group of the EMIAT study the corresponding criteria are TS normal and LVEF ≤ 23 %, or TS abnormal and LVEF ≤ 40 %. Combining both studies, the following criteria were obtained: TS normal and LVEF ≤ 20 %, or TS abnormal and LVEF ≤ 40 %. In the MPIP study, 83 of the 581 patients (14.3 %) fulfil these criteria; within this group, 30 patients died during follow-up. In the EMIAT trial, 218 of the 591 patients (37.9 %) are classified as high-risk patients, with 53 deaths. Combining both studies, the high-risk group contains 301 patients with 83 deaths (ppv = 27.7 %). Using the MADIT criterion as classification rule (LVEF ≤ 30 %), a sample of 375 patients with 85 deaths (ppv = 24 %) is selected. Conclusions: The stratification rule based on LVEF and TS is able to select high-risk patients suitable for ICD implantation. The rule performs better than the classical one based on LVEF alone: the high-risk group under the new criteria is smaller with about the same number of deaths, and therefore has a higher positive predictive value. The classification criteria were validated in a bootstrap study with 100 replications. In all samples the rule based on TS and LVEF (NEW) was superior to LVEF alone; the high-risk group was smaller (mean ± s: 301 ± 14.5 (NEW) vs. 375 ± 14.5 (LVEF)) and the positive predictive value was larger (mean ± s: 27.2 ± 2.6 % (NEW) vs. 23.3 ± 2.2 % (LVEF)). The new criteria are also less expensive, since fewer patients are selected as high risk.
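    As a minimal illustration of the combined rule quoted above, the following sketch classifies a patient as high risk when TS is normal and LVEF ≤ 20 %, or TS is abnormal and LVEF ≤ 40 % (the thresholds from combining both studies), and computes the positive predictive value of the selected group. The array names and the small example data are made up for illustration; this is not the study's code or data.

    import numpy as np

    def high_risk(lvef, ts_abnormal, thr_normal=20.0, thr_abnormal=40.0):
        """Boolean mask of high-risk patients under the combined LVEF/TS rule."""
        lvef = np.asarray(lvef, dtype=float)
        ts_abnormal = np.asarray(ts_abnormal, dtype=bool)
        return np.where(ts_abnormal, lvef <= thr_abnormal, lvef <= thr_normal)

    def positive_predictive_value(selected, died):
        """Share of selected (high-risk) patients who died during follow-up."""
        selected = np.asarray(selected, dtype=bool)
        died = np.asarray(died, dtype=bool)
        return died[selected].mean()

    # Illustrative example only (not study data):
    lvef = [18, 35, 45, 22, 55]
    ts_abnormal = [False, True, True, False, False]
    died = [True, True, False, False, False]

    mask = high_risk(lvef, ts_abnormal)
    print("high-risk patients selected:", int(mask.sum()))
    print("ppv:", positive_predictive_value(mask, died))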

    A Statistical Model for Risk Stratification on the Basis of Left Ventricular Ejection Fraction and Heart-Rate Turbulence

    The MPIP data set was used to obtain a model for mortality risk stratification of acute myocardial infarction patients. The predictors heart-rate turbulence (HRT) and left ventricular ejection fraction (LVEF) were employed. HRT was a categorical variable with three levels; LVEF was continuous, and its influence on the relative risk was described by the natural logarithm transformation (identified using fractional polynomials). A Cox proportional hazards (PH) model with HRT and ln(LVEF) was constructed and used for risk stratification. The model can be used to divide the patients into two or more groups according to mortality risk. It also describes the relationship between risk and predictors by a (continuous) function, which allows the calculation of individual mortality risk.
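    The model described above can be approximated with standard survival-analysis tooling. The sketch below fits a Cox PH model with dummy-coded HRT and ln(LVEF) using the lifelines package; the column names and the use of lifelines are assumptions for illustration, not part of the original analysis.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    def fit_risk_model(df):
        """Fit a Cox PH model; df needs columns: followup_days, died,
        lvef (in %), hrt_category (three levels, e.g. 0/1/2)."""
        data = pd.get_dummies(df, columns=["hrt_category"],
                              drop_first=True, dtype=float)
        data["ln_lvef"] = np.log(df["lvef"])     # log transform of continuous LVEF
        data = data.drop(columns=["lvef"])

        cph = CoxPHFitter()
        cph.fit(data, duration_col="followup_days", event_col="died")
        return cph

    # Individual relative risk can then be read off the fitted model, e.g.
    # via cph.predict_partial_hazard(new_patients).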