    Bounds of optimal learning.

    Learning is considered as a dynamic process described by a trajectory on a statistical manifold, and a topology is introduced defining trajectories continuous in information. The analysis generalises the application of Orlicz spaces in non-parametric information geometry to topological function spaces with asymmetric gauge functions (e.g. quasi-metric spaces defined in terms of the KL divergence). Optimality conditions are formulated for dynamical constraints, and two main results are outlined: 1) parametrisation of optimal learning trajectories from empirical constraints using generalised characteristic potentials; 2) a gradient theorem for the potentials defining the optimal utility and information bounds of a learning system. These results not only generalise some known relations of statistical mechanics and variational methods in information theory, but can also be used to optimise the exploration-exploitation balance in online learning systems.
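    The paper itself is not reproduced here, but one of the "known relations" the abstract alludes to is standard: maximising expected utility subject to a bound on KL divergence from a prior yields a Gibbs (exponential-family) distribution, with the Lagrange multiplier beta acting as an inverse temperature that trades exploration against exploitation. The Python sketch below is a minimal illustration of that relation, not the paper's construction; the prior, the utility values, and the chosen betas are all hypothetical. It also demonstrates the asymmetry of the KL divergence, which is why spaces defined in terms of it are quasi-metric.

    ```python
    import numpy as np

    def kl(p, q):
        """KL divergence D(p || q); asymmetric, hence only a quasi-metric."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    def optimal_update(prior, utility, beta):
        """Maximise E_p[utility] subject to KL(p || prior) <= c.
        The solution is p(x) ~ prior(x) * exp(beta * utility(x)),
        where beta is the Lagrange multiplier fixed by the bound c."""
        w = prior * np.exp(beta * np.asarray(utility, float))
        return w / w.sum()

    prior = np.full(4, 0.25)             # hypothetical uniform prior
    u = np.array([1.0, 0.5, 0.0, -0.5])  # hypothetical utilities

    for beta in (0.0, 1.0, 5.0):         # beta -> 0 explores, beta -> inf exploits
        p = optimal_update(prior, u, beta)
        print(f"beta={beta:3.1f}  E[u]={p @ u:.3f}  KL(p||prior)={kl(p, prior):.3f}")

    p = optimal_update(prior, u, 1.0)
    print("asymmetry:", kl(p, prior), "!=", kl(prior, p))
    ```

    Larger beta buys higher expected utility at the price of more information (larger KL divergence from the prior), which is the kind of utility-information trade-off the abstract's gradient theorem bounds.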

    A model of probability matching in a two-choice task based on stochastic control of learning in neural cell-assemblies.

    Donald Hebb proposed the hypothesis that specialised groups of neurons, called cell-assemblies (CAs), form the basis for neural encoding of symbols in the human mind. It is not clear, however, how CAs can be re-used and combined to form new representations as in classical symbolic systems. We demonstrate that Hebbian learning of synaptic weights alone is not adequate for all tasks, and that additional meta-control processes should be involved. We describe a previously proposed architecture (Belavkin, 2008) implementing such a process, and then evaluate it by modelling the probability matching phenomenon in a classic two-choice task. The model and its results are discussed in view of the mathematical theory of learning, existing cognitive architectures, and some hypotheses about neural functioning in the brain.
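    As a purely illustrative aside (this is not the cell-assembly architecture the paper describes), the sketch below shows why plain Hebbian weight updates need an extra regulatory process: with correlated inputs the Hebbian weight vector aligns with the dominant input direction but its norm grows without bound, whereas adding a normalising term (Oja's rule here, standing in for a generic meta-control process) keeps it stable. The covariance, learning rate, and sample count are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    eta = 0.01                                    # hypothetical learning rate

    # Correlated 2-D inputs drawn from a hypothetical covariance.
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    xs = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

    w_hebb = rng.normal(size=2) * 0.01
    w_oja = w_hebb.copy()
    for x in xs:
        y_h, y_o = w_hebb @ x, w_oja @ x
        w_hebb += eta * y_h * x                   # plain Hebbian rule: diverges
        w_oja += eta * y_o * (x - y_o * w_oja)    # Oja's rule: self-normalising

    print("plain Hebbian |w| =", np.linalg.norm(w_hebb))  # grows without bound
    print("Oja           |w| =", np.linalg.norm(w_oja))   # close to 1
    ```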

    Conflict resolution and learning probability matching in a neural cell-assembly architecture.

    Donald Hebb proposed the hypothesis that specialised groups of neurons, called cell-assemblies (CAs), form the basis for neural encoding of symbols in the human mind. It is not clear, however, how CAs can be re-used and combined to form new representations as in classical symbolic systems. We demonstrate that Hebbian learning of synaptic weights alone is not adequate for all tasks, and that additional meta-control processes should be involved. We describe a previously proposed architecture implementing an adaptive conflict resolution process between CAs, and then evaluate it by modelling the probability matching phenomenon in a classic two-choice task. The model and its results are discussed in view of the mathematical theory of learning and existing cognitive architectures.
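    For readers unfamiliar with the phenomenon both papers model: in a two-choice task where the options are rewarded with unequal probabilities, humans and animals typically choose each option at roughly the rate at which it pays off ("probability matching") instead of always picking the better one. The architecture-agnostic sketch below (not the paper's CA model; the reward probabilities and learning rate are hypothetical) shows one classic way matching arises: choosing in proportion to incrementally estimated reward rates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p_reward = np.array([0.7, 0.3])   # hypothetical reward probabilities
    v = np.ones(2) * 0.5              # running estimate of each option's reward rate
    counts = np.zeros(2)
    eta = 0.05                        # hypothetical learning rate

    for _ in range(20000):
        p_choice = v / v.sum()        # choose in proportion to estimated value
        a = rng.choice(2, p=p_choice)
        r = float(rng.random() < p_reward[a])
        v[a] += eta * (r - v[a])      # incremental estimate of reward rate
        counts[a] += 1

    print("reward probabilities:", p_reward)               # [0.7, 0.3]
    print("choice frequencies  :", counts / counts.sum())  # close to matching
    ```

    A pure maximiser would instead converge to choosing the better option almost always; reproducing matching rather than maximising is the behavioural benchmark the CA architecture is evaluated against.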
