8 research outputs found

    Functorial Statistical Physics: Feynman--Kac Formulae and Information Geometries

    The main results of this paper comprise proofs of the following two related facts: (i) the Feynman--Kac formula is a functor $F_*$, namely, between a stochastic differential equation and a dynamical system on a statistical manifold, and (ii) a statistical manifold is a sheaf generated by this functor with a canonical gluing condition. Using a particular locality property for $F_*$, recognised from functorial quantum field theory as a 'sewing law,' we then extend our results to the Chapman--Kolmogorov equation via a time-dependent generalisation of the principle of maximum entropy. This yields a partial formalisation of a variational principle which takes us beyond Feynman--Kac measures driven by Wiener laws. Our construction offers a robust glimpse at a deeper theory which we argue re-imagines time-dependent statistical physics and information geometry alike.
    Comment: 8+1 pages. Announcement.
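    For reference, the classical formula being functorialised here is standard textbook material (this statement is ours, not taken from the paper): for a Wiener process $W$ started at $x$ and a potential $V$, the function

        u(x,t) = \mathbb{E}_x\!\left[ \exp\!\left( -\int_0^t V(W_s)\,ds \right) \psi(W_t) \right]

    solves $\partial_t u = \tfrac{1}{2}\Delta u - Vu$ with $u(x,0) = \psi(x)$. This is the 'Feynman--Kac measure driven by Wiener laws' that the abstract's variational principle is designed to move beyond.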

    Formalising the Use of the Activation Function in Neural Inference

    We investigate how the activation function can be used to describe neural firing in an abstract way, and in turn, why it works well in artificial neural networks. We discuss how a spike in a biological neurone belongs to a particular universality class of phase transitions in statistical physics. We then show that the artificial neurone is, mathematically, a mean-field model of biological neural membrane dynamics, which arises from modelling spiking as a phase transition. This allows us to treat selective neural firing in an abstract way, and formalise the role of the activation function in perceptron learning. The resultant statistical physical model allows us to recover the expressions for some known activation functions as various special cases. Along with deriving this model and specifying the analogous neural case, we analyse the phase transition to understand the physics of neural network learning. Together, it is shown that there is not only a biological meaning, but a physical justification, for the emergence and performance of typical activation functions; implications for neural learning and inference are also discussed.
    Comment: Six pages and one of references, two figures.
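    As a hedged illustration of how familiar activation functions drop out of such statistical-physics models (a textbook Gibbs-average computation, not the paper's own derivation): for a two-state unit $s \in \{0,1\}$ with energy $E(s) = -xs$ at inverse temperature $\beta$, the thermal average is

        \langle s \rangle = \frac{e^{\beta x}}{1 + e^{\beta x}} = \sigma(\beta x),

    the logistic function; for $s \in \{-1,+1\}$ the same average gives $\tanh(\beta x)$, and the mean-field self-consistency condition $m = \tanh(\beta(Jm + h))$ is the Curie--Weiss analogue for a recurrently coupled unit.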

    A Worked Example of the Bayesian Mechanics of Classical Objects

    Bayesian mechanics is a new approach to studying the mathematics and physics of interacting stochastic processes. Here, we provide a worked example of a physical mechanics for classical objects, which derives from a simple application thereof. We summarise the current state of the art of Bayesian mechanics in doing so. We also give a sketch of its connections to classical chaos, owing to a particular $\mathcal{N}=2$ supersymmetry.
    Comment: 34 pages. Presentation revised and expository material added (v2). Condensed version to appear in The 3rd International Workshop on Active Inference.
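    For orientation, Bayesian-mechanical treatments of classical objects typically start from a stationary diffusion whose flow splits into dissipative and conservative parts (a standard decomposition in this literature; the notation is ours, and sign conventions for $Q$ vary):

        dx = f(x)\,dt + \omega(t), \qquad f(x) = (\Gamma + Q)\,\nabla \ln p(x),

    where $p$ is the stationary density, $\Gamma = \sigma\sigma^\top/2$ the diffusion tensor of the fluctuations $\omega$, and $Q$ an antisymmetric matrix; the $\Gamma$-term is a gradient (dissipative) flow on the surprisal $-\ln p$, while the $Q$-term circulates conservatively along its level sets.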

    Path integrals, particular kinds, and strange things

    This paper describes a path integral formulation of the free energy principle. The ensuing account expresses the paths or trajectories that a particle takes as it evolves over time. The main result is a method or principle of least action that can be used to emulate the behaviour of particles in open exchange with their external milieu. Particles are defined by a particular partition, in which internal states are individuated from external states by active and sensory blanket states. The variational principle at hand allows one to interpret internal dynamics - of certain kinds of particles - as inferring external states that are hidden behind blanket states. We consider different kinds of particles, and to what extent they can be imbued with an elementary form of inference or sentience. Specifically, we consider the distinction between dissipative and conservative particles, inert and active particles and, finally, ordinary and strange particles. Strange particles (look as if they) infer their own actions, endowing them with apparent autonomy or agency. In short - of the kinds of particles afforded by a particular partition - strange kinds may be apt for describing sentient behaviour.
    Comment: 31 pages (excluding references), 6 figures.
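    As background for the path-integral language (a heuristic, standard construction with illustrative symbols; we suppress the Jacobian correction term): for a diffusion $dx = f(x)\,dt + \omega$ with fluctuation covariance $2\Gamma$, the probability of a path $x(\cdot)$ is

        p[x] \propto \exp(-\mathcal{A}[x]), \qquad \mathcal{A}[x] = \frac{1}{4}\int_0^T \big(\dot{x} - f(x)\big)^\top \Gamma^{-1} \big(\dot{x} - f(x)\big)\,dt,

    so the most likely path is the one of least action; principles of least action for particles of the kind discussed here are built from Lagrangians of this sort.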

    Designing Ecosystems of Intelligence from First Principles

    This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants -- what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world -- also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing -- leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first -- and key -- step towards such an ecology.
    Comment: 23+18 pages, one figure, one six-page appendix.
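    The mention of belief propagation on factor graphs invites a concrete toy. The sketch below is a minimal illustration of the generic sum-product update on a two-factor graph, not the communication protocol proposed in the white paper; all names and numbers are invented for the example.

        import numpy as np

        # Toy factor graph:  (prior factor) --- X --- (likelihood factor, clamped to y)
        # X is a discrete hidden state with 3 values; Y an observation with 2 values.
        prior = np.array([0.5, 0.3, 0.2])           # p(x)
        likelihood = np.array([[0.9, 0.1],          # p(y|x): rows index x, columns y
                               [0.4, 0.6],
                               [0.1, 0.9]])

        def posterior(y: int) -> np.ndarray:
            """Sum-product at the variable node X: multiply incoming messages,
            then normalise. With the observation clamped, the likelihood
            factor's message to X is just its y-th column."""
            msg_prior = prior                       # message: prior factor -> X
            msg_lik = likelihood[:, y]              # message: likelihood factor -> X
            belief = msg_prior * msg_lik            # pointwise product of messages
            return belief / belief.sum()            # p(x | y)

        print(posterior(1))                         # belief over X after seeing y = 1

    On a tree-structured graph, iterating this product-and-normalise update edge by edge yields exact marginals; variational message passing replaces the exact messages with ones derived from a factorised approximate posterior.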

    A Constraint Geometry for Inference and Integration

    We use the geometric framework describing gauge theories to enrich our understanding of the principle of maximum entropy, a variational method appearing in statistical inference and the analysis of stochastic dynamical systems. Using the connection on a principal $G$-bundle, the gradient flows found in the calculus of functional optimisation are grounded in a geometric picture of constraint functions interacting with the dynamics of the probabilistic degrees of freedom of the process. From this, we can describe the point of maximum entropy as parallel transport over the state space. Beyond stochastic analysis, we suggest a collection of geometric structures surrounding energy-based inference.
    Comment: 13 pages and one of references.
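    To fix ideas, the variational problem being geometrised is the textbook one (standard material, not specific to this paper): maximise $H[p] = -\int p \ln p$ subject to normalisation and an expectation constraint $\mathbb{E}_p[f] = F$. Introducing Lagrange multipliers and setting the functional derivative to zero gives

        p(x) = \frac{e^{-\lambda f(x)}}{Z(\lambda)}, \qquad Z(\lambda) = \int e^{-\lambda f(x)}\,dx,

    the Gibbs distribution; the abstract's claim is that this point, and the gradient flow toward it, can be recast as parallel transport with respect to a connection on a principal $G$-bundle.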

    On Bayesian Mechanics: A Physics of and by Beliefs

    The aim of this paper is to introduce a field of study that has emerged over the last decade, called Bayesian mechanics. Bayesian mechanics is a probabilistic mechanics, comprising tools that enable us to model systems endowed with a particular partition (i.e., into particles), where the internal states (or the trajectories of internal states) of a particular system encode the parameters of beliefs about quantities that characterise the system. These tools allow us to write down mechanical theories for systems that look as if they are estimating posterior probability distributions over the causes of their sensory states, providing a formal language to model the constraints, forces, fields, and potentials that determine how the internal states of such systems move in a space of beliefs (i.e., on a statistical manifold). Here we will review the state of the art in the literature on the free energy principle, distinguishing between three ways in which Bayesian mechanics has been applied to particular systems (i.e., path-tracking, mode-tracking, and mode-matching). We will go on to examine the duality of the free energy principle and the constrained maximum entropy principle, both of which lie at the heart of Bayesian mechanics. We also discuss the implications of this duality for Bayesian mechanics and limitations of current treatments.
    Comment: 51 pages, six figures. Invited contribution to "Making and Breaking Symmetries in Mind and Life," a special issue of the journal Royal Society Interface. First two authors have contributed equally.
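    For completeness, the quantity at the centre of the free energy principle obeys the following standard identity (generic symbols, not this paper's notation): for sensory states $s$, hidden states $\psi$, and an approximate posterior $q$,

        F(q, s) = \mathbb{E}_q[\ln q(\psi) - \ln p(\psi, s)] = D_{\mathrm{KL}}[q(\psi)\,\|\,p(\psi \mid s)] - \ln p(s) \geq -\ln p(s),

    so minimising $F$ both drives $q$ toward the exact posterior (inference) and bounds the surprisal of sensory states (self-evidencing); the duality with constrained maximum entropy discussed here can be read as rewriting the same optimisation as entropy maximisation under expected-energy constraints.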