
    Towards analytic description of a transition from weak to strong coupling regime in correlated electron systems. I. Systematic diagrammatic theory with two-particle Green functions

    We analyze the behavior of correlated electrons described by Hubbard-like models at intermediate and strong coupling. We show that with increasing interaction a pole in a generic two-particle Green function is approached. The pole signals the metal-insulator transition at half filling and gives rise to a new vanishing "Kondo" scale that causes the breakdown of weak-coupling perturbation theory. To describe the critical behavior at the metal-insulator transition, a novel self-consistent diagrammatic technique with two-particle Green functions is developed. The theory is based on the linked-cluster expansion for the thermodynamic potential with the electron-electron interaction as propagator. Parquet diagrams with a generating functional are derived. Numerical instabilities due to the metal-insulator transition are demonstrated on simplifications of the parquet algebra with ring and ladder series only. A stable numerical solution in the critical region is reached by factorizing the singular terms via a low-frequency expansion of the vertex function. We stress the necessity of dynamical vertex renormalizations, missing in the simple approximations, for describing the critical, strong-coupling behavior correctly. We propose a simplification of the full parquet approximation that keeps only the most divergent terms in the asymptotic strong-coupling region. A qualitatively new, feasible approximation suitable for describing the transition from weak to strong coupling is obtained.
    Comment: 17 pages, 4 figures, REVTeX
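    As a schematic illustration of the pole mechanism (our sketch, not taken from the paper; here U denotes the local interaction and \chi(\omega) a two-particle bubble), the ladder series alone already produces a vertex of the form

        \Gamma(\omega) = \frac{U}{1 - U\,\chi(\omega)},

    which develops a pole as U grows and U\chi(\omega) \to 1; the small distance of the denominator from zero plays the role of the vanishing "Kondo" scale at which weak-coupling perturbation theory breaks down.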

    Renormalization in Self-Consistent Approximations schemes at Finite Temperature I: Theory

    Within finite-temperature field theory, we show that truncated non-perturbative self-consistent Dyson resummation schemes can be renormalized with local counter-terms defined at the vacuum level. The requirements are that the underlying theory is renormalizable and that the self-consistent scheme follows Baym's Φ-derivable concept. The scheme generates both the renormalized self-consistent equations of motion and the closed equations for the infinite set of counter-terms. At the same time the corresponding 2PI generating functional and the thermodynamic potential can be renormalized consistently with the equations of motion. This guarantees that the standard Φ-derivable properties, such as thermodynamic consistency and exact conservation laws, also hold for the renormalized approximation schemes. The proof uses the techniques of BPHZ renormalization to cope with the explicit and the hidden overlapping vacuum divergences.
    Comment: 22 pages, 1 figure, uses RevTeX4. The revision corrects some minor typos, adds a clarification concerning the real-time contour structure of renormalization parts, and adds some comments concerning symmetries in the conclusions and outlook.
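    For orientation, the Φ-derivable structure invoked above can be summarized (in standard notation, not quoted from the paper) by generating the self-energy from the 2PI functional Φ[G] and closing it with the Dyson equation:

        \Sigma[G] = \frac{\delta \Phi[G]}{\delta G}, \qquad G^{-1} = G_0^{-1} - \Sigma[G].

    It is this functional relation between Σ and the full propagator G that underlies the thermodynamic consistency and exact conservation laws mentioned in the abstract.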

    Hierarchical Implicit Models and Likelihood-Free Variational Inference

    Implicit probabilistic models are a flexible class of models defined by a simulation process for data. They form the basis for theories which encompass our understanding of the physical world. Despite this fundamental nature, the use of implicit models remains limited due to challenges in specifying complex latent structure in them and in performing inference in such models with large data sets. In this paper, we first introduce hierarchical implicit models (HIMs). HIMs combine the idea of implicit densities with hierarchical Bayesian modeling, thereby defining models via simulators of data with rich hidden structure. Next, we develop likelihood-free variational inference (LFVI), a scalable variational inference algorithm for HIMs. Key to LFVI is specifying a variational family that is also implicit. This matches the model's flexibility and allows for accurate approximation of the posterior. We demonstrate diverse applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian generative adversarial network for discrete data; and a deep implicit model for text generation.
    Comment: Appears in Neural Information Processing Systems, 2017
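    As a hedged sketch of the density-ratio idea that makes an implicit variational family workable (our illustration under simple Gaussian assumptions, not the authors' implementation; all names are illustrative), a probabilistic classifier trained to separate samples from two sample-only distributions yields an estimate of their log-density ratio:

        # Density-ratio trick: with balanced classes, the logit of a classifier
        # trained to distinguish samples of p from samples of q approximates
        # log p(x)/q(x). Illustrative sketch only.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Two "implicit" distributions we can only sample from. Gaussians are
        # used here so the exact log-ratio, log p(x)/q(x) = x - 1/2, is known.
        samples_p = rng.normal(loc=1.0, scale=1.0, size=(5000, 1))
        samples_q = rng.normal(loc=0.0, scale=1.0, size=(5000, 1))

        X = np.vstack([samples_p, samples_q])
        y = np.concatenate([np.ones(5000), np.zeros(5000)])  # 1 = p, 0 = q

        clf = LogisticRegression().fit(X, y)

        x = np.array([[0.5]])
        print("estimated log-ratio:", clf.decision_function(x)[0])  # ~ 0.0
        print("analytic log-ratio: ", x[0, 0] - 0.5)                # = 0.0

    In LFVI such an estimator stands in for the intractable density terms of the variational objective, which is what lets both the model and the variational family remain simulation-only.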

    Entropy-based parametric estimation of spike train statistics

    We consider the evolution of a network of neurons, focusing on the asymptotic behavior of spike dynamics instead of membrane potential dynamics. In this context the spike response is not sought as a deterministic response but as a conditional probability: "reading out the code" consists of inferring such a probability. This probability is computed from empirical raster plots, using the framework of thermodynamic formalism in ergodic theory. This gives us a parametric statistical model in which the probability has the form of a Gibbs distribution. In this respect, the approach generalizes the seminal and profound work of Schneidman and collaborators. A minimal presentation of the formalism is reviewed here, and a general algorithmic estimation method is proposed, yielding fast convergent implementations. It is also made explicit how several spike observables (entropy, rate, synchronizations, correlations) are given in closed form from the parametric estimation. This paradigm not only allows us to estimate the spike statistics, given a design choice, but also to compare different models, thus answering comparative questions about the neural code such as: "are correlations (or time synchrony, or a given set of spike patterns, ...) significant with respect to rate coding only?" A numerical validation of the method is proposed, and perspectives regarding spike-train code analysis are discussed.
    Comment: 37 pages, 8 figures, submitted
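    Schematically (our notation, not the paper's), the Gibbs form in question reads

        P_\lambda(\omega) = \frac{1}{Z(\lambda)} \exp\Big( \sum_i \lambda_i \phi_i(\omega) \Big),

    where the \phi_i(\omega) are spike observables on raster blocks \omega (rates, pairwise correlations, synchronizations, ...) and the parameters \lambda_i are fitted by matching model averages to empirical ones, \partial \log Z(\lambda) / \partial \lambda_i = \langle \phi_i \rangle_{\mathrm{emp}}; closed-form expressions for entropy and the other observables then follow from derivatives of \log Z.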

    Optimal Belief Approximation

    In Bayesian statistics, probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically, and approximations of beliefs are needed. We seek a loss function that quantifies how "embarrassing" it is to communicate a given approximation. We reproduce and discuss an old proof showing that there is only one such ranking under the requirements that (1) the best-ranked approximation is the non-approximated belief and (2) the ranking judges approximations only by their predictions for actual outcomes. The loss function obtained in the derivation is equal to the Kullback-Leibler divergence when normalized. This loss function is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. The correct order ensures that the recipient of a communication is deprived of only the minimal amount of information. We hope that the elementary derivation settles the apparent confusion. For example, when approximating beliefs with Gaussian distributions, the optimal approximation is given by moment matching. This is in contrast to many suggested computational schemes.
    Comment: made improvements on the proof and the language
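    To make the argument-order question concrete (a standard identity, stated here in our notation): writing p for the non-approximated belief and q for its approximation, the loss is

        D_{\mathrm{KL}}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx,

    and minimizing it over Gaussian q reproduces the mean and covariance of p, i.e. moment matching; the reversed order D_{\mathrm{KL}}(q \,\|\, p) is mode-seeking and generally yields a different, narrower approximation, which is the source of the confusion the abstract addresses.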