
    On Stochastic Error and Computational Efficiency of the Markov Chain Monte Carlo Method

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by an ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the ensemble-average estimate, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the number of cycles between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while the corresponding increase in variance remains negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. The variance rules are derived for the MCMC method but are also valid for correlated samples obtained with other Monte Carlo methods. The main contribution of this work is the theoretical proof of these numerical observations together with the set of assumptions that lead to them.
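    The trade-off described above (a larger sampling interval shrinks the sample size while barely changing the estimator variance when samples are strongly correlated) can be illustrated numerically. The following minimal Python sketch is not from the paper; it uses an AR(1) process as a stand-in for a correlated MCMC chain, and the helper names (`ar1_chain`, `estimator_variance`) are hypothetical.

    ```python
    # Illustrative sketch only: an AR(1) process mimics consecutive,
    # highly correlated MCMC samples; we compare the variance of the
    # sample-mean estimator at two sampling intervals for a fixed
    # number of total cycles (fixed CPU budget).
    import numpy as np

    def ar1_chain(n, rho, rng):
        """Generate n correlated samples from an AR(1) process with
        lag-1 autocorrelation rho."""
        x = np.empty(n)
        x[0] = rng.standard_normal()
        for i in range(1, n):
            x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
        return x

    def estimator_variance(rho, total_cycles, interval, n_reps=200, seed=0):
        """Empirical variance of the ensemble average when one sample is
        kept every `interval` cycles out of `total_cycles` cycles."""
        rng = np.random.default_rng(seed)
        means = [ar1_chain(total_cycles, rho, rng)[::interval].mean()
                 for _ in range(n_reps)]
        return np.var(means)

    # With rho close to 1, thinning by 5 keeps 5x fewer samples, yet the
    # variance of the estimator stays nearly unchanged.
    v1 = estimator_variance(0.95, 2000, 1)
    v5 = estimator_variance(0.95, 2000, 5)
    print(v1, v5)  # comparable magnitudes despite 5x fewer samples
    ```
    
    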

    Exploiting the Kronecker product structure of φ-functions in exponential integrators

    Exponential time integrators are well-established discretization methods for semilinear systems of ordinary differential equations. These methods use φ-functions, which are matrix functions related to the exponential. This work introduces an algorithm to speed up the computation of the action of φ-functions on vectors for two-dimensional (2D) matrices expressed as a Kronecker sum. To that end, we present an auxiliary exponential-related matrix function that we express using Kronecker products of one-dimensional matrices. We exploit state-of-the-art implementations of φ-functions to compute this auxiliary function's action and then recover the original φ action by solving a Sylvester equation system. Our approach allows us to save memory and solve exponential integrators of 2D+time problems in a fraction of the time traditional methods need. We analyze the method's performance considering different linear operators and with the nonlinear 2D+time Allen–Cahn equation.
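    The Kronecker-sum structure the abstract exploits can be sketched for the plain matrix exponential (the simplest φ-function). This Python example is an assumption-laden illustration, not the paper's algorithm: it uses the exact identity exp(A ⊕ B) = exp(A) ⊗ exp(B), whereas the paper handles general φ-functions through an auxiliary function and a Sylvester solve.

    ```python
    # Sketch: for a Kronecker-sum operator M = A⊗I + I⊗B, the exponential
    # factorizes as exp(M) = exp(A) ⊗ exp(B) (A⊗I and I⊗B commute), so its
    # action on a vector needs only the small 1D exponentials and two dense
    # matrix products, never expm of the full 2D operator.
    import numpy as np
    from scipy.linalg import expm

    def kron_sum_expm_action(A, B, v):
        """Compute exp(A⊗I + I⊗B) @ v using only the 1D factors.
        Uses the row-major identity (P⊗Q) vec(V) = vec(P V Q^T)."""
        m, n = A.shape[0], B.shape[0]
        V = v.reshape(m, n)
        return (expm(A) @ V @ expm(B).T).reshape(-1)

    rng = np.random.default_rng(1)
    A, B = rng.standard_normal((4, 4)), rng.standard_normal((5, 5))
    v = rng.standard_normal(20)

    fast = kron_sum_expm_action(A, B, v)
    # Reference: build the full 20x20 Kronecker sum and exponentiate it.
    M = np.kron(A, np.eye(5)) + np.kron(np.eye(4), B)
    ref = expm(M) @ v
    print(np.allclose(fast, ref))  # True
    ```

    The memory saving is the point: the fast path stores only 4x4 and 5x5 exponentials, while the reference path materializes the full 20x20 operator; for 2D grids the gap grows quadratically.
    
    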

    Global-local nonlinear model reduction for flows in heterogeneous porous media

    In this paper, we combine discrete empirical interpolation techniques, global mode decomposition methods, and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM), to reduce the computational complexity associated with nonlinear flows in highly heterogeneous porous media. To solve the nonlinear governing equations, we employ the GMsFEM to represent the solution on a coarse grid with multiscale basis functions and apply proper orthogonal decomposition on a coarse grid. Computing the GMsFEM solution involves calculating the residual and the Jacobian on a fine grid. As such, we use local and global empirical interpolation concepts to circumvent performing these computations on the fine grid. The resulting reduced-order approach significantly reduces the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider several numerical examples of nonlinear multiscale partial differential equations that are numerically integrated using fully-implicit time marching schemes to demonstrate the capability of the proposed model reduction approach to speed up simulations of nonlinear flows in high-contrast porous media.
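    As a hedged illustration of the global ingredient of this reduction (the proper orthogonal decomposition step), and not the authors' GMsFEM code, the following sketch extracts a few dominant modes from a synthetic snapshot matrix and projects a state onto them.

    ```python
    # Sketch of POD-based reduction: the left singular vectors of a
    # snapshot matrix give an orthonormal reduced basis; projecting a
    # state onto a few modes captures it well if snapshots are
    # (approximately) low rank.
    import numpy as np

    def pod_basis(snapshots, n_modes):
        """POD modes = leading left singular vectors of the snapshot matrix."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :n_modes], s

    rng = np.random.default_rng(0)
    # Synthetic snapshots living (almost) in a 3-dimensional subspace.
    modes_true = rng.standard_normal((100, 3))
    coeffs = rng.standard_normal((3, 40))
    snapshots = modes_true @ coeffs + 1e-6 * rng.standard_normal((100, 40))

    U, s = pod_basis(snapshots, 3)
    # Reconstruct one snapshot from its 3 POD coefficients.
    x = snapshots[:, 0]
    x_r = U @ (U.T @ x)
    rel_err = np.linalg.norm(x - x_r) / np.linalg.norm(x)
    print(rel_err)  # tiny: 3 modes capture nearly all the energy
    ```

    In the paper's setting the snapshots come from coarse-grid GMsFEM solves, and the empirical interpolation step additionally avoids assembling the nonlinear residual and Jacobian on the fine grid.
    
    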

    Depopulation of dense α-synuclein aggregates is associated with rescue of dopamine neuron dysfunction and death in a new Parkinson's disease model.

    Parkinson's disease (PD) is characterized by the presence of α-synuclein aggregates known as Lewy bodies and Lewy neurites, whose formation is linked to disease development. The causal relation between α-synuclein aggregates and PD is not well understood. We generated a new transgenic mouse line (MI2) expressing human, aggregation-prone truncated 1-120 α-synuclein under the control of the tyrosine hydroxylase promoter. MI2 mice exhibit progressive aggregation of α-synuclein in dopaminergic neurons of the substantia nigra pars compacta and their striatal terminals. This is associated with a progressive reduction of striatal dopamine release, reduced striatal innervation, and significant nigral dopaminergic nerve cell death starting at 6 and 12 months of age, respectively. In the MI2 mice, gait alterations can be detected by the DigiGait test from 9 months of age, while gross motor deficit was detected by the rotarod test at 20 months of age, when 50% of dopaminergic neurons in the substantia nigra pars compacta are lost. These changes were associated with an increase in the number and density of 20-500 nm α-synuclein species as shown by dSTORM. Treatment with the oligomer modulator anle138b, from 9 to 12 months of age, restored striatal dopamine release and prevented dopaminergic cell death and gait impairment. These effects were associated with a reduction of the inner density of large α-synuclein aggregates and an increase in dispersed small α-synuclein species as revealed by dSTORM. The MI2 mouse model recapitulates the progressive dopaminergic deficit observed in PD, showing that early synaptic dysfunction is associated with fine behavioral motor alterations and precedes the dopaminergic axonal loss and neuronal death that, beyond a certain threshold, become associated with a more consistent motor deficit. Our data also provide new mechanistic insight into the effect of anle138b in vivo, supporting that targeting α-synuclein aggregation is a promising therapeutic approach for PD.

    Reducing spatial discretization error on coarse CFD simulations using an OpenFOAM-embedded deep learning framework

    We propose a method for reducing the spatial discretization error of coarse computational fluid dynamics (CFD) problems by enhancing the quality of low-resolution simulations using deep learning. We feed the model with fine-grid data after projecting it onto the coarse-grid discretization. We substitute the default differencing scheme for the convection term with a feed-forward neural network that interpolates velocities from cell centers to face values, producing velocities that closely approximate the down-sampled fine-grid data. The deep learning framework incorporates the open-source CFD code OpenFOAM, resulting in an end-to-end differentiable model. We automatically differentiate the CFD physics using a discrete adjoint code version. We present a fast communication method between TensorFlow (Python) and OpenFOAM (C++) that accelerates the training process. We applied the model to the flow past a square cylinder problem, reducing the velocity error from 120% to 25% for simulations inside the training distribution, compared with the traditional solver on an 8× coarser mesh. For simulations outside the training distribution, the error reduction in the velocities was about 50%. The training is affordable in terms of time and data samples, since the architecture exploits the local features of the physics.
    PID2023-146678OB-I00 PRE2020-09309
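    A small, hypothetical sketch of the fine-to-coarse projection step mentioned above (not OpenFOAM code, and not the authors' implementation): on a uniform grid, restricting fine-grid data onto a coarser mesh amounts to block averaging, shown here for the 8× coarsening factor cited in the abstract.

    ```python
    # Sketch of a restriction operator: average factor-by-factor blocks of
    # a 2D fine-grid field onto the coarse grid. Assumes a uniform mesh
    # with dimensions divisible by the coarsening factor.
    import numpy as np

    def restrict(fine, factor):
        """Block-average a 2D fine-grid field onto the coarse grid."""
        ny, nx = fine.shape
        return fine.reshape(ny // factor, factor,
                            nx // factor, factor).mean(axis=(1, 3))

    fine = np.arange(64.0).reshape(8, 8)   # toy "fine-grid" field
    coarse = restrict(fine, 8)             # one coarse cell: global mean
    print(coarse)  # [[31.5]]
    ```

    In the paper's pipeline, data restricted this way serves as the training target that the learned face-interpolation scheme tries to reproduce on the coarse mesh.
    
    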