71 research outputs found

    A Super-Polynomial Separation Between Resolution and Cut-Free Sequent Calculus

    We show a quadratic separation between resolution width and cut-free sequent calculus width. We use this gap to get, first, a super-polynomial separation between resolution and cut-free sequent calculus for refuting CNF formulas, and secondly, a quadratic separation between resolution width and monomial space in polynomial calculus with resolution. Our super-polynomial separation between resolution and cut-free sequent calculus only applies when clauses are seen as disjunctions of unbounded arity; our examples have linear-size cut-free sequent calculus proofs that write their clauses, in a particular way, using binary disjunctions. Interestingly, this shows that the complexity of sequent calculus depends on how disjunctions are represented.
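    The final sentence is the crux: the same clause can be read as one wide disjunction or as a nesting of binary ones. As a toy illustration only (the tuple encoding and `binary_or` helper are hypothetical, not the paper's syntax):

```python
# Toy illustration: one clause, two representations. The ("or", a, b)
# tuple encoding and the helper below are hypothetical, not from the paper.

def binary_or(literals):
    """Nest a clause as balanced binary disjunctions: ((x1 v x2) v (x3 v x4)) ..."""
    if len(literals) == 1:
        return literals[0]
    mid = len(literals) // 2
    return ("or", binary_or(literals[:mid]), binary_or(literals[mid:]))

clause = [f"x{i}" for i in range(1, 9)]
print(("or", *clause))    # unbounded arity: a single 8-ary disjunction
print(binary_or(clause))  # binary: nesting of depth log2(8) = 3
```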

    Estimating Gibbs free energies via isobaric-isothermal flows

    We present a machine-learning model based on normalizing flows that is trained to sample from the isobaric-isothermal ensemble. In our approach, we approximate the joint distribution of a fully flexible triclinic simulation box and particle coordinates to achieve a desired internal pressure. This novel extension of flow-based sampling to the isobaric-isothermal ensemble yields direct estimates of Gibbs free energies. We test our NPT-flow on monatomic water in the cubic and hexagonal ice phases and find excellent agreement of Gibbs free energies and other observables compared with established baselines. Comment: 19 pages, 7 figures.
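    A minimal sketch of the kind of estimator such a flow enables, in reduced units: importance weights against the NPT Boltzmann factor exp(-beta(U + PV)) give a direct Gibbs free energy estimate. The `sample_flow` and `potential_energy` functions are placeholders, not the paper's actual model.

```python
import numpy as np

kT, P = 1.0, 1.0   # thermal energy and target pressure (reduced units, assumed)
beta = 1.0 / kT

def gibbs_free_energy_estimate(sample_flow, potential_energy, n=10_000):
    # sample_flow: placeholder returning coordinates x, triclinic boxes h
    # (shape (n, 3, 3)), and the flow's log-density at each sample.
    x, h, log_q = sample_flow(n)
    V = np.abs(np.linalg.det(h))              # box volume from the box matrix
    log_w = -beta * (potential_energy(x, h) + P * V) - log_q
    # log-mean-exp of the importance weights estimates the NPT partition function
    log_Z = np.logaddexp.reduce(log_w) - np.log(n)
    return -kT * log_Z                        # G = -kT log Delta, up to constants
```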

    On Contrastive Learning for Likelihood-free Inference

    Likelihood-free methods perform parameter inference in stochastic simulator models where evaluating the likelihood is intractable but sampling synthetic data is possible. One class of methods for this likelihood-free problem uses a classifier to distinguish between pairs of parameter-observation samples generated using the simulator and pairs sampled from some reference distribution, which implicitly learns a density ratio proportional to the likelihood. Another popular class of methods fits a conditional distribution to the parameter posterior directly, and a particular recent variant allows for the use of flexible neural density estimators for this task. In this work, we show that both of these approaches can be unified under a general contrastive learning scheme, and clarify how they should be run and compared. Comment: Appeared at ICML 2020.
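    A hedged sketch of the first class of methods described above: a binary classifier on (theta, x) pairs, with negatives drawn from the product of marginals, whose logit at the optimum recovers log p(x|theta) - log p(x). The architecture and names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class RatioClassifier(nn.Module):
    """Illustrative classifier whose optimal logit is the log density ratio."""
    def __init__(self, theta_dim, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(theta_dim + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def logit(self, theta, x):                 # log-ratio estimate at optimum
        return self.net(torch.cat([theta, x], dim=-1)).squeeze(-1)

def contrastive_loss(model, theta, x):
    # Positive pairs come from the simulator's joint; negatives pair theta
    # with x shuffled across the batch (samples from the product of marginals).
    pos = model.logit(theta, x)
    neg = model.logit(theta, x[torch.randperm(len(x))])
    bce = nn.functional.binary_cross_entropy_with_logits
    return bce(pos, torch.ones_like(pos)) + bce(neg, torch.zeros_like(neg))
```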

    Cubic-Spline Flows

    A normalizing flow models a complex probability density as an invertible transformation of a simple density. The invertibility means that we can evaluate densities and generate samples from a flow. In practice, autoregressive flow-based models are slow to invert, making either density estimation or sample generation slow. Flows based on coupling transforms are fast for both tasks, but have previously performed less well at density estimation than autoregressive flows. We stack a new coupling transform, based on monotonic cubic splines, with LU-decomposed linear layers. The resulting cubic-spline flow retains an exact one-pass inverse, can be used to generate high-quality images, and closes the gap with autoregressive flows on a suite of density-estimation tasks. Comment: Appeared at the 1st Workshop on Invertible Neural Networks and Normalizing Flows at ICML 2019.
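    A hedged sketch of the coupling structure only, assuming inputs in [0, 1]: half the dimensions pass through unchanged and parameterize an elementwise monotonic map of the rest. For brevity the map below is piecewise-linear, a stand-in for the paper's monotonic cubic splines; all names are illustrative.

```python
import torch
import torch.nn as nn

class MonotonicCoupling(nn.Module):
    """Coupling layer with a piecewise-linear monotonic map (cubic-spline stand-in)."""
    def __init__(self, dim, bins=8, hidden=64):
        super().__init__()
        self.d, self.bins = dim // 2, bins
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, (dim - self.d) * bins),
        )

    def forward(self, x):                        # x in [0, 1]^dim
        x1, x2 = x[:, :self.d], x[:, self.d:]
        w = self.net(x1).view(x1.shape[0], -1, self.bins).softmax(-1)
        cdf = w.cumsum(-1)                       # monotonic piecewise-linear CDF
        idx = (x2 * self.bins).long().clamp(max=self.bins - 1)
        left = torch.cat([torch.zeros_like(cdf[..., :1]), cdf[..., :-1]], -1)
        slope = w.gather(-1, idx.unsqueeze(-1)).squeeze(-1) * self.bins
        y2 = left.gather(-1, idx.unsqueeze(-1)).squeeze(-1) \
             + slope * (x2 - idx.float() / self.bins)
        logdet = slope.log().sum(-1)             # exact one-pass Jacobian term
        return torch.cat([x1, y2], -1), logdet
```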

    Compositional Score Modeling for Simulation-based Inference

    Neural Posterior Estimation methods for simulation-based inference can be ill-suited for dealing with posterior distributions obtained by conditioning on multiple observations, as they tend to require a large number of simulator calls to learn accurate approximations. In contrast, Neural Likelihood Estimation methods can handle multiple observations at inference time after learning from individual observations, but they rely on standard inference methods, such as MCMC or variational inference, which come with certain performance drawbacks. We introduce a new method based on conditional score modeling that enjoys the benefits of both approaches. We model the scores of the (diffused) posterior distributions induced by individual observations, and introduce a way of combining the learned scores to approximately sample from the target posterior distribution. Our approach is sample-efficient, can naturally aggregate multiple observations at inference time, and avoids the drawbacks of standard inference methods.
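    The combination step rests on a Bayes-rule identity for conditionally independent observations: grad log p(theta | x_1..x_n) = sum_i grad log p(theta | x_i) - (n - 1) grad log p(theta). The sketch below shows the undiffused version of that identity inside a plain unadjusted Langevin sampler; the paper's diffused posteriors need extra care, and all function names here are assumptions.

```python
import torch

def combined_score(theta, obs, posterior_score, prior_score):
    # Sum of per-observation posterior scores, correcting for the
    # (n - 1) extra copies of the prior score (undiffused identity).
    scores = torch.stack([posterior_score(theta, x) for x in obs])
    return scores.sum(0) - (len(obs) - 1) * prior_score(theta)

def langevin_sample(obs, posterior_score, prior_score, dim,
                    steps=1_000, step_size=1e-3):
    theta = torch.randn(dim)                   # crude initialization
    for _ in range(steps):
        g = combined_score(theta, obs, posterior_score, prior_score)
        theta = theta + step_size * g \
                + (2 * step_size) ** 0.5 * torch.randn(dim)
    return theta
```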

    Masked Autoregressive Flow for Density Estimation

    Autoregressive models are among the best-performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks. Comment: Section 4.3 is corrected since the previous version.
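    A hedged sketch of the density computation in such a flow: each layer maps x back to the "random numbers" u = (x - mu(x_<i)) / sigma(x_<i) it would have used to generate x, and the log-density accumulates the Jacobian term -sum_i log sigma_i. Here `made` stands for a masked network whose outputs for dimension i depend only on x_<i; it is a placeholder, not the paper's exact parameterization.

```python
import torch

def maf_log_prob(x, layers):
    """Log-density under a stack of Gaussian autoregressive layers."""
    log_det = torch.zeros(x.shape[0])
    for made in layers:                     # stack of autoregressive layers
        mu, log_sigma = made(x)             # masked autoregressive conditioners
        x = (x - mu) * torch.exp(-log_sigma)   # recover the "random numbers"
        log_det = log_det - log_sigma.sum(-1)  # accumulate the Jacobian term
    base = torch.distributions.Normal(0.0, 1.0)
    return base.log_prob(x).sum(-1) + log_det  # standard-normal base density
```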
    • …