
    From Entropic Dynamics to Quantum Theory

    Non-relativistic quantum theory is derived from information codified into an appropriate statistical model. The basic assumption is that there is an irreducible uncertainty in the location of particles: positions constitute a configuration space and the corresponding probability distributions constitute a statistical manifold. The dynamics follows from a principle of inference, the method of Maximum Entropy. The concept of time is introduced as a convenient way to keep track of change. A welcome feature is that the entropic dynamics notion of time incorporates a natural distinction between past and future. The statistical manifold is assumed to be a dynamical entity: its curved and evolving geometry determines the evolution of the particles which, in their turn, react back and determine the evolution of the geometry. Imposing that the dynamics conserve energy leads to the Schroedinger equation and to a natural explanation of its linearity, its unitarity, and of the role of complex numbers. The phase of the wave function is explained as a feature of purely statistical origin. There is a quantum analogue to the gravitational equivalence principle. Comment: Extended and corrected version of a paper presented at MaxEnt 2009, the 29th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (July 5-10, 2009, Oxford, Mississippi, USA). In version v3 I corrected a mistake and considerably simplified the argument. The overall conclusions remain unchanged.
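
    A schematic of the inference step the abstract invokes, written as a generic maximum-entropy update (a sketch only, not the paper's construction or notation):

```latex
% Generic maximum-entropy update: select the transition probability P(x'|x)
% that maximizes the relative entropy
\[
  S[P] = -\int dx'\, P(x'|x)\,\log\frac{P(x'|x)}{Q(x'|x)}
\]
% relative to a prior Q, subject to normalization and to whatever constraints
% codify the relevant information, enforced with Lagrange multipliers:
\[
  \delta\Big\{ S[P] - \alpha\Big(\textstyle\int dx'\,P(x'|x) - 1\Big)
               - \beta\,\langle f(x')\rangle \Big\} = 0
  \quad\Longrightarrow\quad
  P(x'|x) \propto Q(x'|x)\, e^{-\beta f(x')} .
\]
```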

    Jaynes' MaxEnt, Steady State Flow Systems and the Maximum Entropy Production Principle

    Jaynes' maximum entropy (MaxEnt) principle was recently used to give a conditional, local derivation of the "maximum entropy production" (MEP) principle, which states that a flow system with fixed flow(s) or gradient(s) will converge to a steady state of maximum production of thermodynamic entropy (R.K. Niven, Phys. Rev. E, in press). The analysis provides a steady state analog of the MaxEnt formulation of equilibrium thermodynamics, applicable to many complex flow systems at steady state. The present study examines the classification of physical systems, with emphasis on the choice of constraints in MaxEnt. The discussion clarifies the distinction between equilibrium, fluid flow, source/sink, flow/reactive and other systems, leading into an appraisal of the application of MaxEnt to steady state flow and reactive systems. Comment: 6 pages; paper for MaxEnt09.
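
    A schematic of the kind of MaxEnt formulation the abstract describes, with flux constraints in place of the usual equilibrium constraints (a sketch suggested by the abstract, not Niven's exact formulation):

```latex
% Steady-state MaxEnt setup (schematic): maximize the Shannon entropy of a
% distribution p_i over flux states,
\[
  H = -\sum_i p_i \ln p_i ,
\]
% subject to normalization and to constraints on the mean fluxes or gradients,
\[
  \sum_i p_i = 1, \qquad \sum_i p_i\, J_{k,i} = \bar{J}_k ,
\]
% which gives the usual exponential form with Lagrange multipliers \lambda_k:
\[
  p_i = \frac{1}{Z}\,\exp\!\Big(-\sum_k \lambda_k J_{k,i}\Big),
  \qquad
  Z = \sum_i \exp\!\Big(-\sum_k \lambda_k J_{k,i}\Big).
\]
```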

    TI-Stan: Model comparison using thermodynamic integration and HMC

    © 2019 by the authors. We present a novel implementation of the adaptively annealed thermodynamic integration technique using Hamiltonian Monte Carlo (HMC). Thermodynamic integration with importance sampling and adaptive annealing is an especially useful method for estimating model evidence for problems that use physics-based mathematical models. Because it is based on importance sampling, this method requires an efficient way to refresh the ensemble of samples. Existing successful implementations use binary slice sampling on the Hilbert curve to accomplish this task. This implementation works well if the model has few parameters or if it can be broken into separate parts with identical parameter priors that can be refreshed separately. However, for models that are not separable and have many parameters, a different method for refreshing the samples is needed. HMC, in the form of the MC-Stan package, is effective for jointly refreshing the ensemble under a high-dimensional model. MC-Stan uses automatic differentiation to compute the gradients of the likelihood that HMC requires in about the same amount of time as it computes the likelihood function itself, easing the programming burden compared to implementations of HMC that require explicitly specified gradient functions. We present a description of the overall TI-Stan procedure and results for representative example problems.
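
    The identity behind thermodynamic integration is that the log evidence equals the integral, over an inverse temperature beta from 0 to 1, of the expected log-likelihood under the beta-tempered posterior. A minimal sketch of that estimator on a conjugate Gaussian toy model (illustrative only; the model, temperature ladder, and exact-sampling shortcut are assumptions, and TI-Stan would instead draw the tempered samples with HMC via Stan):

```python
# Thermodynamic-integration sketch: log Z = ∫_0^1 E_β[log L] dβ, estimated on a
# temperature ladder. Conjugate Gaussian toy model so each tempered posterior
# can be sampled exactly; TI-Stan would draw these samples with HMC via Stan.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
sigma, tau = 1.0, 3.0                      # noise sd and prior sd (assumed)
y = rng.normal(1.5, sigma, size=20)        # synthetic data
n = y.size

def log_like(mu):
    """log p(y | mu) for an array of mu values."""
    sq = ((y[None, :] - mu[:, None]) ** 2).sum(axis=1)
    return -0.5 * sq / sigma**2 - 0.5 * n * np.log(2 * np.pi * sigma**2)

betas = np.linspace(0.0, 1.0, 31) ** 5     # ladder, denser near the prior end
mean_ll = []
for b in betas:
    # Tempered posterior ∝ prior(mu) * like(mu)^b is Gaussian here (conjugacy).
    prec = 1.0 / tau**2 + b * n / sigma**2
    mean = (b * y.sum() / sigma**2) / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec), size=4000)
    mean_ll.append(log_like(mu).mean())

m = np.asarray(mean_ll)                    # trapezoid rule over the ladder
log_Z_ti = np.sum(np.diff(betas) * (m[:-1] + m[1:]) / 2)

# Closed-form log evidence for this conjugate model, as a sanity check.
cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
log_Z_exact = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)
print(f"TI estimate: {log_Z_ti:.3f}   exact: {log_Z_exact:.3f}")
```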

    Entropic Priors and Bayesian Model Selection

    We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor". This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst cosmologists: is dark energy a cosmological constant, or has it evolved with time in some way? And how shall we decide, when the data are in? Comment: Presented at MaxEnt 2009, the 29th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (July 5-10, 2009, Oxford, Mississippi, USA).
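
    A toy version of the "sure thing" situation, showing how a model-level prior that penalizes sharp prior predictive distributions can offset the Bayes factor's automatic preference for the sharp hypothesis. The entropy-weighted prior below is one simple illustrative choice, not the paper's exact entropic-prior construction; the ticket count and winner are assumed values:

```python
# Toy "rigged lottery": a "sure thing" hypothesis that predicted the winning
# ticket exactly versus a uniform model. A model prior weighted by the entropy
# of each model's predictive distribution offsets the Bayes factor's automatic
# preference for the sharp prediction. Illustration of the flavor of the
# argument only, not the paper's entropic-prior formalism.
import numpy as np

N = 1000                                   # number of tickets (assumed)
winner = 417                               # ticket that actually won (assumed)

pred_A = np.zeros(N); pred_A[winner] = 1.0 # model A: "sure thing"
pred_B = np.full(N, 1.0 / N)               # model B: all tickets equally likely

like_A, like_B = pred_A[winner], pred_B[winner]
print("Bayes factor A/B with a flat model prior:", like_A / like_B)  # = N

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# One simple entropy-based weighting: prior(model) ∝ exp(H[predictive]), so
# sharper predictive distributions receive less prior mass.
prior_A, prior_B = np.exp(entropy(pred_A)), np.exp(entropy(pred_B))
post_odds = (like_A * prior_A) / (like_B * prior_B)
print("Posterior odds A/B with the entropy-weighted prior:", post_odds)  # = 1
```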

    Computational methods for Bayesian model choice

    In this note, we briefly survey some recent approaches to the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective. Comment: 12 pages, 4 figures, submitted to the proceedings of MaxEnt 2009, July 05-10, 2009, to be published by the American Institute of Physics.
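
    A minimal sketch of two of the surveyed estimators on a beta-binomial toy model whose evidence is known in closed form (the data, prior, and importance-sampling proposal are assumptions for illustration, and nested sampling is omitted):

```python
# Two evidence estimators on a beta-binomial toy model where the marginal
# likelihood is available in closed form. Illustrative only.
import numpy as np
from scipy import stats
from scipy.special import betaln, gammaln

rng = np.random.default_rng(1)
n, k = 50, 36                            # 36 successes in 50 trials (assumed)
a, b = 1.0, 1.0                          # Beta(1,1) prior on the success rate

def log_like(theta):
    return stats.binom.logpmf(k, n, theta)

def log_mean_exp(x):
    m = x.max()
    return m + np.log(np.mean(np.exp(x - m)))

# Exact log evidence: log ∫ Binom(k | n, θ) Beta(θ | a, b) dθ
log_Z_exact = (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
               + betaln(a + k, b + n - k) - betaln(a, b))

S = 100_000

# (1) Importance sampling with a broad Beta(2,2) proposal over (0,1):
#     Z ≈ mean of prior * likelihood / proposal under proposal draws.
theta_q = rng.beta(2.0, 2.0, S)
log_w = (stats.beta(a, b).logpdf(theta_q) + log_like(theta_q)
         - stats.beta(2.0, 2.0).logpdf(theta_q))
log_Z_is = log_mean_exp(log_w)

# (2) Harmonic mean of the likelihood over posterior draws (available here in
#     closed form by conjugacy); notoriously unstable in general.
theta_post = rng.beta(a + k, b + n - k, S)
log_Z_hm = -log_mean_exp(-log_like(theta_post))

print(f"exact {log_Z_exact:.4f}   IS {log_Z_is:.4f}   HM {log_Z_hm:.4f}")
```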

    Measuring on Lattices

    Previous derivations of the sum and product rules of probability theory relied on the algebraic properties of Boolean logic. Here they are derived within a more general framework based on lattice theory. The result is a new foundation of probability theory that encompasses and generalizes both the Cox and Kolmogorov formulations. In this picture probability is a bi-valuation defined on a lattice of statements that quantifies the degree to which one statement implies another. The sum rule is a constraint equation that ensures that valuations are assigned so as not to violate associativity of the lattice join and meet. The product rule is much more interesting in that there are actually two product rules: one is a constraint equation that arises from associativity of the direct products of lattices, and the other is a constraint equation derived from associativity of changes of context. The generality of this formalism enables one to derive the traditionally assumed condition of additivity in measure theory, as well as to introduce a general notion of product. To illustrate the generic utility of this novel lattice-theoretic foundation of measure, the sum and product rules are applied to number theory. Further application of these concepts to understand the foundation of quantum mechanics is described in a joint paper in this proceedings. Comment: 13 pages, 7 figures, Presented at the 29th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering: MaxEnt 2009.
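
    For reference, the two rules the abstract describes, written schematically (a sketch of the standard statements, not the paper's derivation):

```latex
% Schematic statements for a valuation m and a bi-valuation p on a lattice.

% Sum rule: consistency with associativity of the join (\vee) and meet (\wedge)
% forces valuations to satisfy
\[
  m(x \vee y) + m(x \wedge y) = m(x) + m(y),
\]
% which reduces to ordinary additivity, m(x \vee y) = m(x) + m(y), whenever
% x \wedge y is the null element (mutually exclusive statements).

% Product rule from associativity of changes of context, with p(x \mid t) read
% as the degree to which t implies x:
\[
  p(x \wedge y \mid t) = p(x \mid t)\; p(y \mid x \wedge t).
\]
% (The second product rule, from associativity of direct products of lattices,
% governs how valuations on independent spaces combine.)
```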

    Evaluation of decay times in coupled spaces: Bayesian decay model selection

    Determination of sound decay times in coupled spaces often demands considerable effort. When using Schroeder's backward integration of room impulse responses, it is often difficult to distinguish different portions of multirate sound energy decay functions. A model-based parameter estimation method, using Bayesian probabilistic inference, proves to be a powerful tool for evaluating decay times. A decay model due to one of the authors [N. Xiang, J. Acoust. Soc. Am. 98, 2112-2121 (1995)] is extended to multirate decay functions. Following a summary of Bayesian model-based parameter estimation, the present paper discusses estimates in terms of both synthesized and measured decay functions. No careful estimation of initial values is required, in contrast to gradient-based approaches. The resulting robust algorithmic estimation of more than one decay time, from experimentally measured decay functions, is clearly superior to the existing nonlinear regression approach.
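
    A sketch of the kind of multirate decay model and likelihood involved. The parameterization below (background-noise term plus exponentials with 60 dB decay times) is a common one and is assumed here for illustration; the paper builds on Xiang's 1995 model and estimates all parameters jointly, whereas this toy only scans the two decay times on a grid:

```python
# Multirate (two-slope) energy-decay sketch with a Gaussian likelihood in dB.
import numpy as np

def decay_model(t, A0, amps, T, t_end):
    """Background-noise term plus a sum of exponential decays; 13.8 ≈ ln(1e6)
    converts a 60 dB decay time T_i into a decay rate."""
    out = A0 * (t_end - t)
    for a, Ti in zip(amps, T):
        out = out + a * np.exp(-13.8 * t / Ti)
    return out

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.5, 400)                        # time in seconds
A0, amps, T_true = 1e-4, (1.0, 0.05), (0.30, 1.20)    # decay times: 0.3 s, 1.2 s
data_db = 10 * np.log10(decay_model(t, A0, amps, T_true, t[-1]))
data_db += 0.1 * rng.standard_normal(t.size)          # measurement noise (dB)

def log_like(T1, T2, sigma_db=0.1):
    model_db = 10 * np.log10(decay_model(t, A0, amps, (T1, T2), t[-1]))
    return -0.5 * np.sum((data_db - model_db) ** 2) / sigma_db**2

T1_grid = np.linspace(0.15, 0.60, 90)
T2_grid = np.linspace(0.70, 2.00, 130)
ll = np.array([[log_like(T1, T2) for T2 in T2_grid] for T1 in T1_grid])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print(f"posterior peak near T1 = {T1_grid[i]:.2f} s, T2 = {T2_grid[j]:.2f} s")
```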

    Multiobjective Design of Linear Antenna Arrays Using Bayesian Inference Framework

    The Bayesian inference framework for design introduced in Chan and Goggans ["Using Bayesian inference for linear antenna array design," IEEE Trans. Antennas Propag., vol. 59, no. 9, pp. 3211-3217, Sep. 2011] is applied to design linear antenna arrays capable of realizing multiple radiation patterns while satisfying various design requirements. Many design issues are involved when designing a linear antenna array. This paper focuses on four practical design issues: the need for minimum spacing between two adjacent array elements, limitations in the dynamic range and accuracy of the current amplitudes and phases, the ability to produce multiple desired radiation patterns using a single array, and the ability to maintain a desired radiation pattern over a certain frequency band. We present an implementation of these practical design requirements based on the Bayesian inference framework, together with representative examples. Our results demonstrate the capability and robustness of the Bayesian method in incorporating real-world design requirements into the design of linear antenna arrays.
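
    A sketch of the forward model such a design framework rests on: the array factor of a linear array plus a misfit against a desired pattern mask. The element count, spacing, and sidelobe mask below are illustrative assumptions, not the authors' formulation:

```python
# Array factor of an N-element linear array with complex excitations, and a
# simple misfit against an upper sidelobe mask. Illustrative sketch only.
import numpy as np

def array_factor_db(amps, phases, d_lambda, theta):
    """Normalized array factor in dB. amps, phases: element excitations;
    d_lambda: element spacing in wavelengths; theta: angles from broadside."""
    n = np.arange(amps.size)
    psi = 2 * np.pi * d_lambda * np.sin(theta)      # inter-element phase shift
    af = (amps * np.exp(1j * phases)) @ np.exp(1j * np.outer(n, psi))
    return 20 * np.log10(np.abs(af) / np.abs(af).max() + 1e-12)

def misfit(amps, phases, d_lambda, theta, upper_db):
    """Sum of squared violations of an upper pattern mask; a likelihood for
    design inference could be built from a statistic like this."""
    excess = array_factor_db(amps, phases, d_lambda, theta) - upper_db
    return np.sum(np.maximum(excess, 0.0) ** 2)

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
upper = np.where(np.abs(np.degrees(theta)) < 10, 0.0, -25.0)  # -25 dB sidelobes
amps = np.ones(10)                                  # 10-element uniform array
print("mask violation (uniform weights):",
      misfit(amps, np.zeros(10), 0.5, theta, upper))
```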

    Using Bayesian Inference for the Design of FIR Filters with Signed Power-of-Two Coefficients

    The design approach presented in this paper applies Bayesian inference to the design of finite impulse response (FIR) filters with signed power-of-two (SPoT) coefficients. Given a desired frequency magnitude response specified by upper and lower bounds in decibels, Bayesian parameter estimation and model selection are adapted to produce a distribution of potential designs, all of which perform at or close to the specified standard. In the process, having incorporated prior information such as the maximum acceptable number of SPoT terms and filter length, and the practical design requirement to use the fewest bits possible, the total number of bits, filter taps and SPoT terms, and the filter length required in a design are automatically determined. The produced design candidates have design complexity appropriate to the design specifications and requirements, as designs with higher design complexity than required are rendered less probable by the embedded Ockham's razor. This innate ability is a prominent advantage of the newly developed framework over many optimization-based techniques, as it leads to designs that require fewer SPoT terms and filter taps. Most importantly, it avoids the intricacy and effort involved in devising an appropriate scheme for balancing design performance against design complexity. © 2012 Elsevier B.V.
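
    A sketch of the signed power-of-two coefficient representation and its effect on the frequency response. The greedy quantizer and reference filter below are illustrative assumptions; the paper searches the space of SPoT designs with Bayesian inference rather than quantizing a reference design:

```python
# Represent FIR coefficients as sums of signed power-of-two (SPoT) terms and
# check the effect on the frequency response. Illustration of the coefficient
# representation only.
import numpy as np
from scipy.signal import firwin, freqz

def to_spot(x, max_terms=3, min_exp=-8):
    """Greedy SPoT approximation of a real coefficient, as (sign, exponent) pairs."""
    terms, r = [], float(x)
    for _ in range(max_terms):
        if abs(r) < 2.0 ** (min_exp - 1):       # residual below resolution
            break
        e = int(np.clip(np.round(np.log2(abs(r))), min_exp, 0))
        s = 1 if r > 0 else -1
        terms.append((s, e))
        r -= s * 2.0 ** e
    return terms

def from_spot(terms):
    return sum(s * 2.0 ** e for s, e in terms)

h = firwin(21, 0.3)                             # reference low-pass design
h_spot = np.array([from_spot(to_spot(c)) for c in h])

w, H = freqz(h, worN=1024)
_, H_spot = freqz(h_spot, worN=1024)
print("max |H - H_spot| over frequency:", np.max(np.abs(H - H_spot)))
print("SPoT terms for the center tap:", to_spot(h[10]))
```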