    Neural Network Gradient Hamiltonian Monte Carlo

    Hamiltonian Monte Carlo is a widely used algorithm for sampling from posterior distributions of complex Bayesian models. It can efficiently explore high-dimensional parameter spaces guided by simulated Hamiltonian flows. However, the algorithm requires repeated gradient calculations, and these computations become increasingly burdensome as data sets scale. We present a method that substantially reduces the computational burden by using a neural network to approximate the gradient. First, we prove that the proposed method still converges to the true distribution even though the approximated gradient no longer comes from a Hamiltonian system. Second, we conduct experiments on synthetic examples and real data sets to validate the proposed method.
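
    The abstract describes the algorithm only at a high level. As a rough illustration (not the authors' implementation), here is a minimal Python sketch of one HMC step in which the leapfrog integrator is driven by a surrogate (e.g. neural-network) gradient while the accept/reject test still evaluates the exact log posterior; log_post and surrogate_grad are hypothetical placeholders:

        import numpy as np

        def nn_grad_hmc_step(theta, log_post, surrogate_grad,
                             eps=0.01, n_leapfrog=20, rng=np.random):
            """One HMC step: a cheap approximate gradient drives the leapfrog
            trajectory, while the exact log posterior in the Metropolis test
            corrects for the approximation (consistent with the convergence
            result the abstract states)."""
            p = rng.standard_normal(theta.shape)         # fresh momentum draw
            th, q = theta.copy(), p.copy()
            q += 0.5 * eps * surrogate_grad(th)          # half step for momentum
            for _ in range(n_leapfrog - 1):
                th += eps * q
                q += eps * surrogate_grad(th)
            th += eps * q
            q += 0.5 * eps * surrogate_grad(th)          # final half step
            log_alpha = (log_post(th) - 0.5 * q @ q) - (log_post(theta) - 0.5 * p @ p)
            return th if np.log(rng.uniform()) < log_alpha else theta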

    A P2P Computing System for Overlay Networks

    A distributed computing system is able to perform data computation and distribution of results at the same time. The input task is divided into blocks, which are then sent to system participants that offer their resources to perform the calculations. Next, each participant sends its partial result back to the task manager (usually one central node). When system participants want to obtain the final result, the central node may become overloaded, especially if many nodes request the result at the same time. In this paper we propose a novel distributed computation system that does not use the central node as the source of the final result, but instead assumes that partial results are exchanged between system participants. In this way we avoid overloading the central node as well as network congestion. There are two major types of distributed computing systems: grids and Peer-to-Peer (P2P) computing systems. In this work we focus on the latter, and consequently assume that the computing system works on top of an overlay network. We present a complete description of the P2P computing system, considering both computation and result distribution. To verify the proposed architecture we developed our own simulator. The obtained results show the system performance, expressed as the operation cost, for various types of network flows: unicast, anycast, and Peer-to-Peer. Moreover, the simulations show that our computing system achieves about 66% lower cost compared to a centralized computing system.
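
    To make the decentralized result-distribution idea concrete, here is a hypothetical toy model in Python (not the authors' simulator): peers forward their accumulated partial results around a logical ring until every participant holds the complete result, so no peer ever queries a central node:

        def ring_distribute(partials):
            """Each peer starts with its own partial result; in every round
            all peers forward everything they hold to their ring successor.
            After n-1 rounds every peer holds all n partials."""
            n = len(partials)
            held = [{i: partials[i]} for i in range(n)]
            for _ in range(n - 1):
                # Snapshot before updating so all peers send simultaneously.
                snapshot = [held[(i - 1) % n].copy() for i in range(n)]
                for i in range(n):
                    held[i].update(snapshot[i])      # merge blocks from predecessor
            return held

        peers = ring_distribute(["block-%d" % i for i in range(4)])
        assert all(len(p) == 4 for p in peers)       # no central node was queried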

    Computational study of red cell distribution in simple networks

    The distribution of red blood cells (RBC) across the vessel lumen is disturbed when blood flows through a junction. As the blood flows downstream from the junction, the RBC distribution corrects itself to regain its original symmetric character. A dispersion-type process has been used to model this rearrangement in three-dimensional branching tubes. In this study, the disturbance in the RBC profile is quantified by tracing streamlines through the junction. The tracing technique is based on scaled-up dye studies. The computation starts at a location where the velocity profile is fully developed. Both uniform and parabolic RBC profiles are examined as possible final symmetric distributions for the computations, and three different velocity profiles are used in turn. The convection-dispersion equation of continuity in cylindrical geometry is solved with the method of finite differences. The resulting RBC concentration profiles are then used to compute flux-flow curves, which are frequently used to examine plasma skimming phenomena. The numerically computed flux-flow curves are compared to in vitro experimental data from 50 μm serial bifurcation replicas. The dispersion coefficient is used as an adjustable parameter to give the best match between computation and measurement. The averaged dispersion coefficients obtained agree with previous experimental data and show an enhanced dispersion. Simple vascular networks are generated and the dispersion model is further applied to them. By calculating the discharge hematocrit of each branch vessel in the network, the network Fahraeus effect is observed. The influence of flow disturbances on the downstream hematocrit is examined, and the effects of flow heterogeneity and the dispersion model on hematocrit heterogeneity are presented.
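
    The abstract does not reproduce the governing equation. One plausible form of the axisymmetric convection-dispersion equation it refers to, written in assumed notation rather than the paper's own, is:

        \begin{equation}
          u(r)\,\frac{\partial H}{\partial z}
          = \frac{D}{r}\,\frac{\partial}{\partial r}\!\left( r\,\frac{\partial H}{\partial r} \right)
        \end{equation}

    where H(r, z) is the local RBC concentration, u(r) the axial velocity profile, z the distance downstream of the junction, and D the adjustable dispersion coefficient mentioned in the abstract.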

    Simulation-based Inference: From Approximate Bayesian Computation and Particle Methods to Neural Density Estimation

    This doctoral thesis in computational statistics utilizes both Monte Carlo methods (approximate Bayesian computation and sequential Monte Carlo) and machine-learning methods (deep learning and normalizing flows) to develop novel algorithms for inference in implicit Bayesian models. Implicit models are those for which calculating the likelihood function is very challenging (and often impossible), but model simulation is feasible. The inference methods developed in the thesis are simulation-based inference methods, since they leverage the possibility of simulating data from the implicit models. Several approaches are considered: Papers II and IV focus on classical methods (sequential Monte Carlo-based methods), while Papers I and III focus on more recent machine-learning methods (deep learning and normalizing flows, respectively).

    Paper I constructs novel deep learning methods for learning summary statistics for approximate Bayesian computation (ABC). To achieve this, Paper I introduces the partially exchangeable network (PEN), a deep learning architecture specifically designed for Markovian data (i.e., partially exchangeable data).

    Paper II considers Bayesian inference in stochastic differential equation mixed-effects models (SDEMEM). Bayesian inference for SDEMEMs is challenging due to their intractable likelihood function. Paper II addresses this problem by designing a novel Gibbs-blocking strategy in combination with correlated pseudo-marginal methods. The paper also discusses how custom particle filters can be adapted to the inference procedure.

    Paper III introduces the novel inference method sequential neural posterior and likelihood approximation (SNPLA). SNPLA is a simulation-based inference algorithm that utilizes normalizing flows for learning both the posterior distribution and the likelihood function of an implicit model via a sequential scheme. By learning both the likelihood and the posterior, and by leveraging the reverse Kullback-Leibler (KL) divergence, SNPLA avoids ad-hoc correction steps and Markov chain Monte Carlo (MCMC) sampling.

    Paper IV introduces the accelerated-delayed acceptance (ADA) algorithm. ADA can be viewed as an extension of the delayed-acceptance (DA) MCMC algorithm that leverages connections between the two likelihood ratios of DA to further accelerate MCMC sampling from the posterior distribution of interest, although our approach introduces an approximation. The main case study of Paper IV is a double-well potential stochastic differential equation (DWP-SDE) model for protein-folding data (reaction coordinate data).
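
    As a point of reference for the ABC setting that Paper I builds on, here is a minimal, generic rejection-ABC sketch in Python (illustrative only; simulate, summary, and prior_sample are hypothetical user-supplied functions, not the thesis code):

        import numpy as np

        def abc_rejection(observed, simulate, summary, prior_sample,
                          n_draws=10_000, tol=0.1):
            """Accept prior draws whose simulated summary statistics fall
            within distance `tol` of the observed summaries; the accepted
            draws form an approximate posterior sample."""
            s_obs = summary(observed)
            accepted = []
            for _ in range(n_draws):
                theta = prior_sample()               # draw from the prior
                s_sim = summary(simulate(theta))     # simulate the implicit model
                if np.linalg.norm(s_sim - s_obs) < tol:
                    accepted.append(theta)
            return np.array(accepted)

    The quality of the approximation hinges on the choice of summary statistics, which is exactly the problem the PEN architecture of Paper I targets.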

    Parallel Computation of Unsteady Flows on a Network of Workstations

    Parallel computation of unsteady flows requires significant computational resources. Utilizing a network of workstations is an attractive solution, since large problems can be treated at a reasonable cost. This approach requires solving several problems: 1) partitioning and distributing the problem over a network of workstations, 2) providing efficient communication tools, and 3) managing the system efficiently for a given problem. There is also the question of how efficiently any given numerical algorithm maps onto such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code, both two- and three-dimensional problems were studied, covering both steady and unsteady cases. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and communicate efficiently at each node, and 3) how to balance the load distribution. In the following, a summary of these activities is presented; details of the work have been presented and published as referenced.
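
    As a modern analogue of the data-distribution and communication issues listed above (an assumption for illustration; the original work used the NPARC code and its own communication tools, not MPI), here is a minimal Python sketch of a 1-D domain decomposition with halo exchange using mpi4py:

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        local = np.zeros(1024 // size + 2)           # local block plus two halo cells
        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        for _ in range(100):
            # Exchange halo cells with both neighbours before each explicit update.
            comm.Sendrecv(sendbuf=local[1:2], dest=left, recvbuf=local[-1:], source=right)
            comm.Sendrecv(sendbuf=local[-2:-1], dest=right, recvbuf=local[0:1], source=left)
            # Explicit stencil update on interior points (diffusion-like model problem).
            local[1:-1] += 0.1 * (local[:-2] - 2.0 * local[1:-1] + local[2:])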

    Sensitivity analysis of the variable demand probit stochastic user equilibrium with multiple user classes

    This paper presents a formulation of the multiple-user-class, variable-demand, probit stochastic user equilibrium model. Sufficient conditions are stated for differentiability of the equilibrium flows of this model. This justifies the derivation of sensitivity expressions for the equilibrium flows, which are presented in a format that can be implemented in commercially available software. A numerical example verifies the sensitivity expressions and demonstrates that the formulation is applicable to large networks.
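
    As a generic illustration of how such sensitivity expressions typically arise (assumed notation, not the paper's own derivation), differentiating a fixed-point characterization of equilibrium via the implicit function theorem gives:

        \begin{equation}
          x^{*}(\varepsilon) = f\bigl(x^{*}(\varepsilon),\, \varepsilon\bigr)
          \quad\Longrightarrow\quad
          \frac{\partial x^{*}}{\partial \varepsilon}
          = \left( I - \nabla_{x} f \right)^{-1} \nabla_{\varepsilon} f
        \end{equation}

    where x* denotes the equilibrium flows, ε a perturbed input such as a demand parameter, and f the (differentiable) equilibrium map.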