
    Gibbs conditioning extended, Boltzmann conditioning introduced

    Conditional Equi-concentration of Types on I-projections (ICET) and the Extended Gibbs Conditioning Principle (EGCP) extend the Conditioned Weak Law of Large Numbers and the Gibbs Conditioning Principle to the case of a non-unique Relative Entropy Maximizing (REM) distribution (also known as I-projection). ICET and EGCP give a probabilistic justification of REM under rather general conditions. mu-projection variants of the results are introduced; they provide a probabilistic justification of the Maximum Probability (MaxProb) method. The question 'REM/MaxEnt or MaxProb?' is discussed briefly, and Jeffreys' Conditioning Principle is mentioned. Comment: Three major changes: 1) The definition of proper I-projection has been changed. 2) An argument preceding Eq. (7) in the proof of ICET is now correctly stated. 3) The abstract was rewritten. To appear in the Proceedings of the MaxEnt 2004 workshop.
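    The REM distribution (I-projection) discussed above can be illustrated numerically: maximizing entropy on a finite alphabet subject to a mean constraint yields an exponentially tilted (Gibbs) distribution. A minimal sketch, where the six-point alphabet, the target mean of 4.5, and the bisection solver are all illustrative assumptions rather than material from the paper:

```python
import math

# Maximum-entropy (I-projection of the uniform prior) distribution on
# {1,...,6} subject to a mean constraint E[X] = m: the solution is the
# exponentially tilted family p_beta(x) proportional to exp(beta * x).

values = [1, 2, 3, 4, 5, 6]

def tilted(beta):
    # Exponentially tilted distribution, normalized to sum to 1.
    w = [math.exp(beta * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

def mean(p):
    return sum(x * pi for x, pi in zip(values, p))

def solve_beta(target, lo=-10.0, hi=10.0, iters=100):
    # Bisection works because mean(tilted(beta)) is strictly increasing.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(tilted(mid)) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta = solve_beta(4.5)    # constrain the mean to 4.5
p = tilted(beta)
print(round(mean(p), 6))  # mean constraint is satisfied
```

    The same tilting structure is what the conditioning principles above justify probabilistically: conditioned on the empirical mean, the type of the sample concentrates on this I-projection.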

    Energy Quantisation and Time Parameterisation

    We show that if space is compact, then trajectories cannot be defined in the framework of the quantum Hamilton--Jacobi equation. The starting point is the simple observation that when the energy is quantized it is not possible to make variations with respect to the energy, so the time parameterisation t - t_0 = \partial_E S_0, implied by Jacobi's theorem and leading to the group velocity, is ill defined. It should be stressed that this follows directly from the quantum HJ equation without any axiomatic assumption concerning the standard formulation of quantum mechanics. This provides a stringent connection between the quantum HJ equation and the Copenhagen interpretation. Together with tunneling and the energy quantization theorem for confining potentials, formulated in the framework of the quantum HJ equation, it leads to the main features of the axioms of quantum mechanics from a unique geometrical principle. As in the case of the classical HJ equation, this fixes its quantum analog by requiring that there exist point transformations, rather than canonical ones, leading to the trivial Hamiltonian. This is equivalent to a basic cocycle condition on the states. Such a cocycle condition can be implemented on compact spaces, so that continuous energy spectra are allowed only as a limiting case. Remarkably, a compact space would also imply that the Dirac and von Neumann formulations of quantum mechanics essentially coincide. We suggest that there is a definition of time parameterisation, leading to trajectories in the context of the quantum HJ equation, that has the probabilistic interpretation of the Copenhagen school. Comment: 11 pages. The main addition concerns a discussion of the variational principle in the case of discrete energy spectra (Jacobi's theorem). References added.
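    The time parameterisation invoked above is Jacobi's theorem from classical Hamilton--Jacobi theory; as a reminder of the standard (textbook) construction, not taken from this paper:

```latex
% With S_0(q; E) the reduced (time-independent) characteristic function,
% the full Hamilton--Jacobi action separates as
\[
  S(q, t) = S_0(q; E) - E\,t ,
  \qquad
  t - t_0 = \frac{\partial S_0}{\partial E} ,
\]
% and the second relation defines time along a trajectory. It requires
% varying E continuously, which is exactly what fails when the spectrum
% is discrete -- the observation the abstract builds on.
```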

    Partial information decomposition as a unified approach to the specification of neural goal functions

    In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of the six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a ‘goal function’, of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain-specific language (e.g. ‘edge filtering’, ‘working memory’). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon’s mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a recent extension of Shannon information theory called partial information decomposition (PID). PID makes it possible to quantify the information that several inputs provide individually (unique information), redundantly (shared information), or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to re-evaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and it also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called ‘coding with synergy’, which combines external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.
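    The synergy term that PID isolates has a classic illustration: in an XOR gate each input alone carries zero Shannon information about the output, yet the two inputs jointly determine it completely. A minimal sketch computing only the plain mutual-information quantities (the full PID lattice and redundancy measure are not implemented here):

```python
import math
from collections import Counter

def mutual_information(pairs):
    # I(A;B) in bits from a list of equiprobable (a, b) samples.
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# XOR gate: all four input patterns equally likely.
table = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]

i1 = mutual_information([(x1, y) for x1, _, y in table])             # I(X1;Y)
i2 = mutual_information([(x2, y) for _, x2, y in table])             # I(X2;Y)
ijoint = mutual_information([((x1, x2), y) for x1, x2, y in table])  # I(X1,X2;Y)

print(i1, i2, ijoint)  # 0.0 0.0 1.0 -> the single bit is purely synergistic
```

    Since I(X1;Y) = I(X2;Y) = 0 while I(X1,X2;Y) = 1 bit, any PID assigns the whole bit to the synergistic term; a goal function like ‘coding with synergy’ would reward exactly this kind of joint dependence.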

    Beyond the classical strong maximum principle: forcing changing sign near the boundary and flat solutions

    We show that the classical strong maximum principle, concerning positive supersolutions of linear elliptic equations vanishing on the boundary of the domain Ω, can be extended, under suitable conditions, to the case in which the forcing term f(x) changes sign. In addition, in the case of solutions, the normal derivative may also vanish on the boundary (the definition of flat solutions). This leads to examples in which the unique continuation property fails. As a first application, we show the existence of positive solutions for a sublinear semilinear elliptic problem of indefinite sign. A second application, concerning the positivity of solutions of the linear heat equation for some large values of time, with forcing and/or initial datum changing sign, is also given. Comment: 20 pages, 2 figures.
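    For orientation, the classical statement being relaxed is the standard strong maximum principle together with the Hopf boundary-point lemma (textbook material, not taken from the paper):

```latex
% For a second-order uniformly elliptic operator L on a smooth bounded
% domain \Omega, the classical result reads:
\[
  -Lu = f \ge 0 \ \text{in } \Omega, \qquad
  u = 0 \ \text{on } \partial\Omega, \qquad
  u \not\equiv 0
  \;\Longrightarrow\;
  u > 0 \ \text{in } \Omega
  \ \text{and}\
  \frac{\partial u}{\partial \nu} < 0 \ \text{on } \partial\Omega ,
\]
% with \nu the outward unit normal. The paper relaxes the sign
% condition f \ge 0 near the boundary, and "flat solutions" are those
% for which the Hopf conclusion \partial u / \partial \nu < 0 fails.
```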