24 research outputs found

    Variational Hamiltonian Monte Carlo via Score Matching

    Traditionally, the field of computational Bayesian statistics has been divided into two main subfields: variational methods and Markov chain Monte Carlo (MCMC). In recent years, however, several methods have been proposed based on combining variational Bayesian inference and MCMC simulation in order to improve their overall accuracy and computational efficiency. This marriage of fast evaluation and flexible approximation provides a promising means of designing scalable Bayesian inference methods. In this paper, we explore the possibility of incorporating variational approximation into a state-of-the-art MCMC method, Hamiltonian Monte Carlo (HMC), to reduce the required gradient computation in the simulation of Hamiltonian flow, which is the bottleneck for many applications of HMC in big data problems. To this end, we use a free-form approximation induced by a fast and flexible surrogate function based on single-hidden-layer feedforward neural networks. The surrogate provides sufficiently accurate approximation while allowing for fast exploration of parameter space, resulting in an efficient approximate inference algorithm. We demonstrate the advantages of our method on both synthetic and real data problems.
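    The core idea above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: one HMC step where the Hamiltonian flow is simulated with a cheap surrogate gradient, while the Metropolis correction still uses the exact log-density, so the chain keeps the true target. The Gaussian target and the quadratic stand-in for the neural-network surrogate are assumptions for the sketch.

```python
import numpy as np

def grad_log_target(q):
    return -q  # exact gradient for a standard normal target

def grad_surrogate(q):
    # Stands in for the gradient of a trained neural-network surrogate;
    # here it happens to coincide with the exact gradient.
    return -q

def leapfrog(q, p, step, n_steps, grad):
    # Standard leapfrog integrator for the Hamiltonian flow.
    q, p = q.copy(), p.copy()
    p += 0.5 * step * grad(q)
    for _ in range(n_steps - 1):
        q += step * p
        p += step * grad(q)
    q += step * p
    p += 0.5 * step * grad(q)
    return q, p

def hmc_step(q, step=0.1, n_steps=20, rng=np.random.default_rng(0)):
    p = rng.standard_normal(q.shape)
    q_new, p_new = leapfrog(q, p, step, n_steps, grad_surrogate)
    # Metropolis correction with the *exact* Hamiltonian: this is what
    # lets a surrogate-driven flow still target the true distribution.
    log_accept = (-0.5 * q_new @ q_new - 0.5 * p_new @ p_new) \
               - (-0.5 * q @ q - 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < log_accept else q

q = np.zeros(2)
for _ in range(100):
    q = hmc_step(q)
```

    In the paper's setting the savings come from `grad_surrogate` being far cheaper than the true gradient on big data, since the expensive model is evaluated only once per trajectory in the accept/reject step.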

    Using Numerical Dynamic Programming to Compare Passive and Active Learning in the Adaptive Management of Nutrients in Shallow Lakes

    This paper illustrates the use of dual/adaptive control methods to compare passive and active adaptive management decisions in the context of an ecosystem with a threshold effect. Using discrete-time dynamic programming techniques, we model optimal phosphorus loadings under both uncertainty about natural loadings and uncertainty regarding the critical level of phosphorus concentrations beyond which nutrient recycling begins. Active management is modeled by including the anticipated value of information (or learning) in the structure of the problem, so that the agent can perturb the system (experiment), update beliefs, and learn about the uncertain parameter. Using this formulation, we define and value optimal experimentation both ex ante and ex post. Our simulation results show that experimentation is optimal over a large range of the phosphorus concentration and belief space, though ex ante benefits are small. Furthermore, realized benefits may depend critically on the true underlying parameters of the problem.
    Keywords: adaptive control, adaptive management, dynamic programming, value of experimentation, value of information, nonpoint source pollution, learning, decisions under uncertainty, Resource/Energy Economics and Policy
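    The discrete-time dynamic programming backbone of such a model can be sketched as plain value iteration. All dynamics, payoffs, and parameter values below are hypothetical illustrations, not the paper's calibration: a loading decision earns a benefit, crossing a threshold concentration triggers recycling and a damage cost, and the Bellman recursion is solved on a discretized state grid.

```python
import numpy as np

grid = np.linspace(0.0, 2.0, 21)      # phosphorus concentration states
loads = np.linspace(0.0, 0.5, 6)      # candidate loading decisions
beta, threshold = 0.95, 1.0           # discount factor, recycling threshold

def reward(x, a):
    benefit = 10.0 * a                # economic benefit of loading
    damage = 20.0 if x > threshold else 0.0
    return benefit - damage

def next_state(x, a):
    decay = 0.6 * x                   # natural outflow / sedimentation
    recycle = 0.8 if x > threshold else 0.0
    return np.clip(decay + a + recycle, grid[0], grid[-1])

V = np.zeros_like(grid)
for _ in range(500):                  # value iteration to convergence
    Q = np.empty((grid.size, loads.size))
    for i, x in enumerate(grid):
        for j, a in enumerate(loads):
            xn = next_state(x, a)
            Vn = np.interp(xn, grid, V)       # interpolate off-grid states
            Q[i, j] = reward(x, a) + beta * Vn
    V = Q.max(axis=1)
policy = loads[Q.argmax(axis=1)]      # optimal loading at each state
```

    The paper's active-learning variant would extend the state with a belief distribution over the uncertain threshold and include the value of the information gained from perturbing the system, which this deterministic sketch omits.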

    Hamiltonian Monte Carlo Acceleration Using Surrogate Functions with Random Bases

    For big data analysis, the high computational cost of Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo (MCMC) method, namely Hamiltonian Monte Carlo (HMC). The key idea is to explore and exploit the structure and regularity in the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions, such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
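    The random-bases idea can be illustrated with random Fourier-style features: fit a surrogate to evaluations of a log-density by ridge regression over randomly drawn cosine bases, so the "optimization process" is a single convex solve. Everything here (the toy 1-D target, the number of bases, the regularization) is an assumption for the sketch, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):                        # toy 1-D log-density to approximate
    return -0.5 * x**2

x_train = rng.uniform(-3, 3, size=200)
y_train = target(x_train)

K = 50
omega = rng.standard_normal(K)        # random frequencies (the random bases)
b = rng.uniform(0, 2 * np.pi, K)      # random phases

def features(x):
    return np.cos(np.outer(x, omega) + b)

Phi = features(x_train)
# Ridge regression: a cheap, convex optimization over the basis weights.
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(K), Phi.T @ y_train)

def surrogate(x):
    return features(np.atleast_1d(x)) @ w

err = np.max(np.abs(surrogate(x_train) - y_train))
```

    Once fitted, the surrogate's gradient is analytic and cheap, which is what an HMC trajectory would use in place of the expensive exact gradient.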

    Depth Recovery of Complex Surfaces from Texture-less Pairs of Stereo Images

    In this paper, a novel framework is presented to recover the 3D shape information of a complex surface from its texture-less stereo images. First, a linear and generalized Lambertian model is proposed to obtain depth information by shape from shading (SfS) using one image of the stereo pair. This depth data is then corrected by integrating scale-invariant feature transform (SIFT) indexes, which are defined by the disparity between matching invariant features in the rectified stereo images. The integration process corrects the 3D visible surfaces obtained from SfS using these SIFT indexes, and this SIFT-index-based improvement of the depth values obtained from the generalized Lambertian reflectance model is performed by a feed-forward neural network. Experiments are performed to demonstrate the usability and accuracy of the proposed framework.
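    The stereo cue that anchors the correction is the standard rectified-camera relation between disparity and depth: Z = f·B/d, with focal length f, baseline B, and disparity d between matched keypoints. The camera parameters and disparities below are hypothetical values for illustration only.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    # Rectified stereo: metric depth Z = f * B / d at each matched feature.
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d

# Disparities (pixels) of three matched SIFT keypoints -> metric depths (m).
depths = depth_from_disparity([35.0, 70.0, 14.0])
# -> [2.4, 1.2, 6.0]
```

    These sparse metric depths at feature locations are the kind of anchor values a network could use to correct the dense but scale-ambiguous SfS surface.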

    ARTIFICIAL NEURAL NETWORKS: FUNCTIONING AND APPLICATIONS IN PHARMACEUTICAL INDUSTRY

    Artificial Neural Network (ANN) technology is a family of computer algorithms designed to simulate neurological processing, handling information and producing outcomes in a way that resembles human thinking in learning, decision making, and problem solving. A distinctive strength of ANNs is their ability to deliver useful results even from incomplete or historical data, without the need for a structured experimental design, through modeling and pattern recognition. Like humans, an ANN absorbs data through repetition under a suitable learning model rather than through explicit programming. Its processing elements receive user-supplied inputs, transform them through activation functions, and produce outputs; the current output is a combined effect of previously presented data and the present responsiveness of the system. Technically, an ANN is typically a supervised network trained with a back-propagation learning rule. Owing to its exceptional predictive ability, ANN methodology can be applied across scientific disciplines that require multivariate data analysis. In pharmaceutical processes, this flexible tool is used to model various non-linear relationships. It is also applied to the optimization of pre-formulation parameters and the prediction of physicochemical properties of drug substances, and it finds further use in pharmaceutical research, medicinal chemistry, QSAR studies, and pharmaceutical instrumental engineering. Its multi-objective concurrent optimization is also adopted in drug discovery, protein structure analysis, and rational data analysis.
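    The supervised back-propagation training described above can be shown concretely with a minimal single-hidden-layer network learning a classic non-linear relationship (XOR). This is a generic sketch of the mechanism, not a pharmaceutical model; the layer sizes, learning rate, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)        # forward pass: output layer
    # Back-propagation: error gradients flow from output to hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)
```

    The repeated presentation of the same data with weight corrections after each pass is exactly the "learning through repetition" the abstract describes.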

    Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and of a newly proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, the general guidelines presented allow suitable algorithms to be chosen for a given application.
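    Two of the baseline detectors in such benchmarks are easy to sketch: the centroid method and a Gaussian fit obtained from the log-parabola through the three samples around the maximum. The synthetic spectrum, grid, and peak width below are assumptions for illustration, not the paper's data.

```python
import numpy as np

wl = np.linspace(1549.0, 1551.0, 201)         # wavelength grid (nm)
true_peak = 1550.07
spectrum = np.exp(-((wl - true_peak) / 0.15) ** 2)  # noiseless FBG-like peak

def centroid(wl, s, frac=0.5):
    # Weighted mean of samples above a threshold near the peak.
    mask = s >= frac * s.max()
    return np.sum(wl[mask] * s[mask]) / np.sum(s[mask])

def gaussian_fit(wl, s):
    # A Gaussian is a parabola in log space; the vertex of the parabola
    # through the three log-samples around the maximum locates the peak.
    i = int(np.argmax(s))
    y0, y1, y2 = np.log(s[i - 1]), np.log(s[i]), np.log(s[i + 1])
    dx = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return wl[i] + dx * (wl[1] - wl[0])
```

    On this symmetric, noiseless peak both estimators recover the true wavelength; the systematic errors the paper measures appear once the real (asymmetric, noisy) FBG reflection profile replaces the ideal Gaussian.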

    Natural Gas Properties and Flow Computation
