
    Comment on "Consistency, amplitudes, and probabilities in quantum theory"

    In a recent article [Phys. Rev. A 57, 1572 (1998)] Caticha has concluded that ``nonlinear variants of quantum mechanics are inconsistent.'' In this note we identify what it is that nonlinear quantum theories have been shown to be inconsistent with. Comment: LaTeX, 5 pages, no figures

    Opinion Dynamics of Learning Agents: Does Seeking Consensus Lead to Disagreement?

    We study opinion dynamics in a population of interacting adaptive agents voting on a set of complex multidimensional issues. We consider agents that classify each issue as for or against, arriving at their opinions through an adaptive algorithm. Adaptation comes from learning, and the information for the learning process comes from interacting with neighboring agents and trying to change the internal state so as to concur with their opinions. The change in the internal state is driven by the information contained in the issue and in the opinion of the other agent. We present results in a simple yet rich context where each agent uses a Boolean Perceptron to state its opinion. If there is no internal clock, so that updates occur through asynchronously exchanged information among pairs of agents, then the typical case, if the number of issues is kept small, is evolution into a society torn by the emergence of factions with extreme opposite beliefs. This occurs even when agents seek consensus with those holding opposite opinions. The curious result is that it is learning from those that hold the same opinions that drives the emergence of factions; indeed, factions are prevented by not learning at all from agents that hold the same opinion. If the number of issues is large, the dynamics becomes trapped, the society does not evolve into factions, and a distribution of moderate opinions is observed. We also study the less realistic but technically simpler synchronous case, showing that global consensus is a fixed point. However, the approach to this consensus is glassy in the limit of large societies if agents adapt even in the case of agreement. Comment: 16 pages, 10 figures, revised version
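    A minimal sketch of the kind of dynamics described above, assuming Hebbian-style Boolean-perceptron agents on a ring; the update rule, the parameters, and the LEARN_FROM_AGREEMENT switch are illustrative stand-ins, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, DIM, N_ISSUES, STEPS, ETA = 50, 20, 5, 20000, 0.05
LEARN_FROM_AGREEMENT = True   # per the abstract, setting this False prevents factions

issues = rng.standard_normal((N_ISSUES, DIM))    # fixed set of issues
weights = rng.standard_normal((N_AGENTS, DIM))   # each agent's Boolean perceptron

def opinion(w, x):
    """Agent's for/against vote on issue x."""
    return np.sign(w @ x)

for _ in range(STEPS):
    a = rng.integers(N_AGENTS)                   # asynchronous update: agent a
    b = (a + rng.choice([-1, 1])) % N_AGENTS     # talks to a ring neighbor b
    x = issues[rng.integers(N_ISSUES)]           # about a random issue
    sa, sb = opinion(weights[a], x), opinion(weights[b], x)
    if LEARN_FROM_AGREEMENT or sa != sb:
        # Hebbian-style step toward reproducing b's opinion on this issue.
        weights[a] += ETA * sb * x / DIM

# Polarization diagnostic: pairwise agreement across all issues.
votes = np.sign(weights @ issues.T)
overlap = (votes @ votes.T) / N_ISSUES
print("mean pairwise agreement:", overlap[np.triu_indices(N_AGENTS, 1)].mean())
```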

    Learning a spin glass: determining Hamiltonians from metastable states

    We study the problem of determining the Hamiltonian of a fully connected Ising spin glass of $N$ units from a set of measurements whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms. Comment: 5 pages, 1 figure, to appear in Physica
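    A minimal sketch of the student-teacher setup, under the assumption that "learning from metastable states" means enforcing, with a perceptron-style rule, the single-spin stability conditions $s_i (J s)_i > 0$ that hold at each measured local minimum; the quench and the update rule are our illustrative choices, not the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, EPOCHS = 30, 200, 50                    # spins, measured minima, passes

def random_couplings(n):
    """Symmetric Gaussian couplings with zero diagonal (the teacher)."""
    J = rng.standard_normal((n, n)) / np.sqrt(n)
    J = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)
    return J

def quench(J, s):
    """Greedy single-spin-flip descent to a local minimum of -s.J.s/2."""
    while True:
        unstable = np.where(s * (J @ s) < 0)[0]
        if unstable.size == 0:
            return s
        s[rng.choice(unstable)] *= -1

teacher = random_couplings(N)
minima = np.array([quench(teacher, rng.choice([-1, 1], N)) for _ in range(P)])

# Student: perceptron-style enforcement of s_i * (J s)_i > 0 at each minimum.
student = np.zeros((N, N))
for _ in range(EPOCHS):
    for s in minima:
        h = student @ s
        for i in np.where(s * h <= 0)[0]:     # violated stability condition
            student[i] += s[i] * s / N
            student[i, i] = 0.0

student = (student + student.T) / 2           # symmetrize before comparing
iu = np.triu_indices(N, 1)
corr = np.corrcoef(teacher[iu], student[iu])[0, 1]
print("teacher-student coupling correlation:", round(float(corr), 3))
```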

    Dynamical transitions in the evolution of learning algorithms by selection

    We study the evolution of artificial learning systems by means of selection. Genetic programming is used to generate a sequence of populations of algorithms which can be used by neural networks for supervised learning of a rule that generates examples. Rather than concentrating on final results, which would be the natural aim when designing good learning algorithms, we study the evolution process, paying particular attention to the temporal order of appearance of the functional structures responsible for improvements in the learning process, as measured by the generalization capabilities of the resulting algorithms. The effect of such appearances can be described as dynamical phase transitions. The concepts of phenotypic and genotypic entropies, which describe the distribution of fitness in the population and the distribution of symbols, respectively, are used to monitor the dynamics. In different runs the phase transitions may or may not be present, with the system either finding good solutions or staying in poor regions of algorithm space. Whenever phase transitions occur, the sequence of appearances is the same. We identify combinations of variables and operators which are useful in measuring experience or performance in rule extraction and can thus implement useful annealing of the learning schedule. Comment: 11 pages, 11 figures, 2 tables
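    The two monitoring quantities named above can be sketched directly. A minimal version, assuming Shannon entropies over binned fitness values (phenotypic) and over symbol frequencies in the population's programs (genotypic); the binning choices are ours:

```python
import math
from collections import Counter

def shannon_entropy(counts):
    """Shannon entropy of a list of occurrence counts."""
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c)

def phenotypic_entropy(fitnesses, n_bins=20):
    """Entropy of the fitness distribution across the population."""
    lo, hi = min(fitnesses), max(fitnesses)
    width = (hi - lo) / n_bins or 1.0
    bins = Counter(min(int((f - lo) / width), n_bins - 1) for f in fitnesses)
    return shannon_entropy(bins.values())

def genotypic_entropy(programs):
    """Entropy of the symbol distribution over all programs in the population."""
    symbols = Counter(tok for prog in programs for tok in prog)
    return shannon_entropy(symbols.values())

# Toy population: programs as token sequences with made-up fitnesses.
population = [["add", "w", "x"], ["mul", "eta", "err", "x"], ["add", "w", "x"]]
fitnesses = [0.4, 0.9, 0.4]
print(phenotypic_entropy(fitnesses), genotypic_entropy(population))
```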

    Maximum Entropy and Bayesian Data Analysis: Entropic Priors

    The problem of assigning probability distributions which objectively reflect the prior information available about experiments is one of the major stumbling blocks in the use of Bayesian methods of data analysis. In this paper the method of Maximum (relative) Entropy (ME) is used to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The argument is inspired and guided by intuition gained from the successful use of ME methods in statistical mechanics. For experiments that cannot be repeated the resulting "entropic prior" is formally identical with the Einstein fluctuation formula. For repeatable experiments, however, the expected value of the entropy of the likelihood turns out to be relevant information that must be included in the analysis. The important case of a Gaussian likelihood is treated in detail. Comment: 23 pages, 2 figures
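    A hedged numerical sketch of the general recipe, assuming an entropic prior of the form pi(theta) ∝ exp[alpha S(theta)], with S(theta) the entropy of the likelihood relative to a uniform measure (alpha and the grids below are illustrative choices, not the paper's). For a Gaussian likelihood over its location parameter, S does not depend on theta, so the prior comes out flat:

```python
import numpy as np

ALPHA, SIGMA = 1.0, 1.0
x = np.linspace(-10, 10, 2001)                # data grid
theta = np.linspace(-3, 3, 61)                # parameter grid
dx = x[1] - x[0]

def likelihood(th):
    """Gaussian likelihood p(x|theta), normalized on the grid."""
    p = np.exp(-0.5 * ((x - th) / SIGMA) ** 2)
    return p / (p.sum() * dx)

def entropy(th):
    """Entropy of the likelihood relative to a uniform measure."""
    p = likelihood(th)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz])) * dx

S = np.array([entropy(th) for th in theta])
prior = np.exp(ALPHA * S)
prior /= prior.sum() * (theta[1] - theta[0])  # normalized entropic prior
print("entropy spread over theta:", S.max() - S.min())   # ~0: flat prior
```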

    Gradient descent learning in and out of equilibrium

    Relations between the off-thermal-equilibrium dynamical process of on-line learning and the thermally equilibrated off-line learning are studied for potential gradient descent learning. The approach of Opper for studying on-line Bayesian algorithms is extended to potential-based or maximum-likelihood learning. We look at the on-line learning algorithm that best approximates the off-line algorithm in the sense of least Kullback-Leibler information loss. It works by updating the weights along the gradient of an effective potential different from the parent off-line potential. The interpretation of this off-equilibrium dynamics bears some similarities to the cavity approach of Griniasty. We are able to analyze networks with non-smooth transfer functions and transfer the smoothness requirement to the potential. Comment: 8 pages, submitted to the Journal of Physics
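    A minimal sketch contrasting the two dynamics being related: off-line (batch) gradient descent on the full potential versus on-line updates on one example at a time. The quadratic per-example potential and the learning rate are illustrative; the paper's effective-potential construction is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
P, DIM, ETA, STEPS = 100, 10, 0.02, 3000

w_teacher = rng.standard_normal(DIM)
X = rng.standard_normal((P, DIM))
y = X @ w_teacher                             # realizable, noiseless data

def grad(w, x, t):
    """Gradient of the per-example potential V = (w.x - t)^2 / 2."""
    return (w @ x - t) * x

w_off, w_on = np.zeros(DIM), np.zeros(DIM)
for _ in range(STEPS):
    # Off-line: equilibrium picture, gradient of the potential over all data.
    w_off -= ETA * np.mean([grad(w_off, x, t) for x, t in zip(X, y)], axis=0)
    # On-line: out-of-equilibrium, one randomly drawn example per step.
    k = rng.integers(P)
    w_on -= ETA * grad(w_on, X[k], y[k])

for name, w in [("off-line", w_off), ("on-line", w_on)]:
    err = float(np.sum((w - w_teacher) ** 2))
    print(name, "squared distance to teacher:", round(err, 5))
```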

    Entropy Distance: New Quantum Phenomena

    We study a curve of Gibbsian families of complex 3x3-matrices and point out new features, absent in commutative finite-dimensional algebras: a discontinuous maximum-entropy inference, a discontinuous entropy distance and non-exposed faces of the mean value set. We analyze these problems from various aspects including convex geometry, topology and information geometry. This research is motivated by a theory of info-max principles, where we contribute by computing first order optimality conditions of the entropy distance. Comment: 34 pages, 5 figures
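    A minimal numpy sketch of maximum-entropy inference over a Gibbsian family of 3x3 density matrices: given one Hermitian observable A and a target mean value, find rho(lambda) = exp(lambda A)/Z matching it. The observable and target are illustrative, and the paper's curve of families and its discontinuities are not reproduced:

```python
import numpy as np

A = np.diag([1.0, 0.0, -1.0]).astype(complex)
A[0, 1] = A[1, 0] = 0.5                       # illustrative Hermitian observable

def gibbs_state(lam):
    """Max-entropy state exp(lam*A)/Z in the Gibbsian family of A."""
    evals, U = np.linalg.eigh(A)
    w = np.exp(lam * evals)
    rho = (U * w) @ U.conj().T
    return rho / np.trace(rho).real

def mean_value(lam):
    return np.trace(gibbs_state(lam) @ A).real

def infer(target, lo=-50.0, hi=50.0, tol=1e-10):
    """Bisection on lam; mean_value is monotone (its derivative is a variance)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mean_value(mid) < target else (lo, mid)
    return gibbs_state((lo + hi) / 2)

rho = infer(0.3)
p = np.linalg.eigvalsh(rho)
p = p[p > 1e-12]
print("achieved mean value:", round(float(np.trace(rho @ A).real), 6))
print("von Neumann entropy:", round(float(-(p * np.log(p)).sum()), 6))
```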

    The XY Spin-Glass with Slow Dynamic Couplings

    We investigate an XY spin-glass model in which both spins and couplings evolve in time: the spins change rapidly according to Glauber-type rules, whereas the couplings evolve slowly with a dynamics involving spin correlations and Gaussian disorder. For large times the model can be solved using replica theory. In contrast to the XY-model with static disordered couplings, solving the present model requires two levels of replicas, one for the spins and one for the couplings. Relevant order parameters are defined and a phase diagram is obtained upon making the replica-symmetric Ansatz. The system exhibits two different spin-glass phases, with distinct de Almeida-Thouless lines, marking continuous replica-symmetry breaking: one describing freezing of the spins only, and one describing freezing of both spins and couplings. Comment: 7 pages, Latex, 3 eps figures
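    A minimal sketch of the two-timescale idea, substituting Langevin dynamics for the Glauber-type spin rules: XY phases relax quickly in the potential -sum J_ij cos(theta_i - theta_j), while each coupling drifts slowly toward the instantaneous spin correlation around a quenched Gaussian disorder term. Timescales and strengths are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, DT, EPS, STEPS = 40, 0.2, 0.05, 1e-3, 20000

theta = rng.uniform(0, 2 * np.pi, N)
K = rng.standard_normal((N, N)) / np.sqrt(N)       # quenched Gaussian disorder
K = (K + K.T) / 2
J = K.copy()                                       # couplings start at disorder

for _ in range(STEPS):
    # Fast spin dynamics: Langevin step at temperature T on the XY potential.
    diff = theta[:, None] - theta[None, :]
    torque = -(J * np.sin(diff)).sum(axis=1)
    theta += DT * torque + np.sqrt(2 * T * DT) * rng.standard_normal(N)
    # Slow coupling dynamics: relax toward spin correlations plus disorder.
    C = np.cos(diff)
    J += EPS * DT * (C + K - J)
    np.fill_diagonal(J, 0.0)

m = np.abs(np.exp(1j * theta).mean())              # magnetization-like order
print("order parameter |m|:", round(float(m), 3))
```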