    Estimating statistical distributions using an integral identity

    We present an identity for an unbiased estimate of a general statistical distribution. The identity computes the distribution density by dividing a histogram sum over a local window by a correction factor obtained from a mean-force integral, where the mean force can be evaluated as a configuration average. We show that the optimal window size is roughly the inverse of the local mean-force fluctuation. The new identity offers a more robust and precise estimate than a previous one by Adib and Jarzynski [J. Chem. Phys. 122, 014114 (2005)]. It also allows a straightforward generalization to an arbitrary ensemble and to a joint distribution of multiple variables. In particular, we derive a mean-force-enhanced version of the weighted histogram analysis method (WHAM). The method can be used to improve distributions computed from molecular simulations. We illustrate its use in computing a potential energy distribution, a volume distribution in a constant-pressure ensemble, a radial distribution function, and a joint distribution of amino acid backbone dihedral angles. Comment: 45 pages, 7 figures; simplified derivation, a more general mean-force formula, added discussion of the window size, extensions to WHAM, and 2D distributions
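
    A minimal numerical sketch of the idea described above, assuming the mean force has already been estimated at each sample; the helper name mean_force_density, the locally constant mean-force model, and the crude window integration are illustrative choices, not the paper's exact identity.

```python
import numpy as np

def mean_force_density(samples, forces, x, half_width):
    """Windowed-histogram density estimate with a mean-force correction (sketch).

    samples    : 1D array of sampled values of the variable of interest
    forces     : estimated mean force at each sample (assumed given)
    x          : point at which to estimate the density
    half_width : half the window size; the abstract suggests a window roughly
                 the inverse of the local mean-force fluctuation
    """
    lo, hi = x - half_width, x + half_width
    in_window = (samples >= lo) & (samples <= hi)
    count = int(in_window.sum())
    if count == 0:
        return 0.0

    # Correction factor: integrate exp(-integral of the mean force) over the window,
    # approximating the mean force by its average over the samples in the window.
    fbar = forces[in_window].mean()
    grid = np.linspace(lo, hi, 201)
    weights = np.exp(-fbar * (grid - x))
    correction = weights.mean() * (hi - lo)

    # Density at x: histogram count in the window divided by the correction factor,
    # normalised by the total number of samples.
    return count / (len(samples) * correction)
```

    With a flat mean force (fbar = 0) the correction reduces to the window width, and the estimate falls back to an ordinary histogram.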

    Investigating the Role of Prior Disambiguation in Deep-learning Compositional Models of Meaning

    This paper aims to explore the effect of prior disambiguation on neural network-based compositional models, with the hope that better semantic representations for text compounds can be produced. We disambiguate the input word vectors before they are fed into a compositional deep net. A series of evaluations shows the positive effect of prior disambiguation for such deep models. Comment: NIPS 201
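
    A small sketch of what "disambiguate before composing" can look like in practice, assuming each word comes with several candidate sense vectors; the cosine-similarity selection rule and the averaging composition below are stand-ins for the compositional deep net, not the paper's architecture.

```python
import numpy as np

def disambiguate(sense_vectors, context_vector):
    """Pick the candidate sense vector most similar (cosine) to the context vector."""
    sims = sense_vectors @ context_vector / (
        np.linalg.norm(sense_vectors, axis=1) * np.linalg.norm(context_vector) + 1e-9
    )
    return sense_vectors[int(np.argmax(sims))]

def compose_phrase(word_senses, context_vector):
    """Disambiguate each word first, then compose the chosen sense vectors.
    The plain average here stands in for the compositional deep net."""
    chosen = [disambiguate(s, context_vector) for s in word_senses]
    return np.mean(chosen, axis=0)
```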

    Counting Solutions for the N-queens and Latin Square Problems by Efficient Monte Carlo Simulations

    We apply Monte Carlo simulations to count the numbers of solutions of two well-known combinatorial problems: the N-queens problem and the Latin square problem. The original system is first converted to a general thermodynamic system, from which the number of solutions of the original system is obtained by computing the partition function. Collective moves are used to further accelerate sampling: swap moves are used in the N-queens problem, and a cluster algorithm is developed for the Latin squares. The method can handle systems of 10^4 degrees of freedom with more than 10^10000 solutions. We also observe a distinct finite-size effect in the Latin square system: its heat capacity gradually develops a second maximum as the size increases. Comment: 10 pages, 4 figures
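
    A sketch of the thermodynamic mapping and the swap move for the N-queens case, assuming an energy equal to the number of attacking diagonal pairs so that solutions are the zero-energy states; the actual counting step (estimating the number of ground states from the partition function) is omitted here.

```python
import math
import random

def energy(cols):
    """Number of attacking queen pairs for a permutation placement (one queen per row,
    one per column), so only diagonal conflicts can occur; solutions have energy 0."""
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(cols[i] - cols[j]) == j - i)

def swap_step(cols, beta):
    """One Metropolis step using a collective swap move: exchange the columns of two
    rows and accept with probability min(1, exp(-beta * dE))."""
    i, j = random.sample(range(len(cols)), 2)
    old = energy(cols)
    cols[i], cols[j] = cols[j], cols[i]
    delta = energy(cols) - old
    if delta > 0 and random.random() >= math.exp(-beta * delta):
        cols[i], cols[j] = cols[j], cols[i]   # reject: undo the swap
    return cols
```

    Recomputing the full energy after each swap keeps the sketch short; an efficient implementation would update only the diagonals touched by the two swapped rows.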

    Neural Summarization by Extracting Sentences and Words

    Traditional approaches to extractive summarization rely heavily on human-engineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large-scale corpora containing hundreds of thousands of document-summary pairs. Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation. Comment: ACL 2016 conference paper with appendix
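
    A toy illustration of attention-based sentence extraction, assuming sentence vectors are already available (e.g. averaged word embeddings); the mean-pooled document encoding and dot-product scoring below replace the paper's trained hierarchical encoder and extractor.

```python
import numpy as np

def extract_top_sentences(sentence_vectors, k):
    """Score each sentence against a crude document encoding and return the indices
    of the k highest-scoring sentences (a stand-in for the learned extractor)."""
    doc = sentence_vectors.mean(axis=0)            # mean-pooled document encoding
    scores = sentence_vectors @ doc                # dot-product attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over sentences
    return np.argsort(-weights)[:k]
```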

    A generative parser with a discriminative recognition algorithm

    Generative models defining joint distributions over parse trees and sentences are useful for parsing and language modeling, but impose restrictions on the scope of features and are often outperformed by discriminative models. We propose a framework for parsing and language modeling which marries a generative model with a discriminative recognition model in an encoder-decoder setting. We provide interpretations of the framework based on expectation maximization and variational inference, and show that it enables parsing and language modeling within a single implementation. On the English Penn Treebank, our framework obtains competitive performance on constituency parsing while matching the state-of-the-art single-model language modeling score. Comment: ACL 201
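
    The variational-inference reading mentioned above has the familiar evidence-lower-bound form; the notation here (s for a sentence, t for a parse tree, p for the generative parser, q for the discriminative recognition model) is assumed for illustration rather than copied from the paper.

```latex
% Standard evidence lower bound, with t a parse tree and s a sentence:
\log p_\theta(s) = \log \sum_{t} p_\theta(t, s)
  \ge \mathbb{E}_{q_\phi(t \mid s)}\bigl[\log p_\theta(t, s) - \log q_\phi(t \mid s)\bigr]
```

    Under this reading, maximizing the right-hand side trains the generative model and the recognition model together, with the recognition model providing the parses and the generative model providing the language-model score.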