
    Contextually Guided Unsupervised Learning Using Local Multivariate Binary Processors

    We consider the role of contextual guidance in learning and processing within multi-stream neural networks. Earlier work (Kay & Phillips, 1994, 1996; Phillips et al., 1995) showed how the goals of feature discovery and associative learning could be fused within a single objective, and made precise using information theory, in such a way that local binary processors could extract a single feature that is coherent across streams. In this paper we consider multi-unit local processors with multivariate binary outputs that enable a greater number of coherent features to be extracted. Using the Ising model, we define a class of information-theoretic objective functions and also local approximations, and derive the learning rules in both cases. These rules have similarities to, and differences from, the celebrated BCM rule. Local and global versions of Infomax appear as by-products of the general approach, as well as multivariate versions of Coherent Infomax. Focusing on the more biologically plausible local rules, we describe some computational experiments designed to investigate specific properties of the processors. The main conclusions are: 1. The local methodology introduced in the paper has the required functionality. 2. Different units within the multi-unit processors learned to respond to different aspects of their receptive fields. 3. The units within each processor generally produced a distributed code in which the outputs were correlated, and which was robust to damage; in the special case where the number of units available was only just sufficient to transmit the relevant information, a form of competitive learning was produced. 4. The contextual connections enabled the information correlated across streams to be extracted, and, by improving feature detection with weak or noisy inputs, they played a useful role in short-term processing and in improving generalization. 5. The methodology allows the statistical associations between distributed self-organizing population codes to be learned.
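
    The abstract above centers on multi-unit processors whose joint binary output is modelled with Ising-form distributions, with contextual input modulating rather than driving the response. The sketch below only illustrates that general idea, not the objective functions or learning rules derived in the paper; the weights, couplings, and the way drive and context enter the fields are assumed values chosen for the example.

```python
# Illustrative sketch only: a 3-unit local processor whose joint binary output
# follows an Ising-form conditional distribution given a scalar driving
# (receptive-field) input r and a scalar contextual input c.  All parameter
# values, and the choice h = w_drive*r + w_ctx*r*c (so context modulates the
# drive but cannot drive the output by itself), are assumptions for this demo.
import itertools
import numpy as np

n_units = 3
w_drive = np.array([1.0, 0.5, -0.8])                        # assumed driving weights
w_ctx = np.array([0.6, 0.6, 0.6])                           # assumed contextual weights
J = 0.3 * (np.ones((n_units, n_units)) - np.eye(n_units))   # assumed pairwise couplings

def output_distribution(r, c):
    """Return all 2^n output patterns in {-1,+1}^n and their probabilities."""
    h = w_drive * r + w_ctx * r * c                          # contextual term scales with the drive
    states = np.array(list(itertools.product([-1, 1], repeat=n_units)))
    energy = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(energy)
    return states, p / p.sum()

for r, c in [(1.0, 1.0), (1.0, -1.0), (0.2, 1.0)]:
    states, p = output_distribution(r, c)
    best = states[np.argmax(p)]
    print(f"r={r:+.1f}, c={c:+.1f}  most likely pattern {best}  with p={p.max():.3f}")
```

    With these assumed values, context that agrees with the drive concentrates probability on a single output pattern, whereas conflicting context spreads it out, which is the qualitative role the abstract assigns to contextual guidance.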

    Partial information decomposition as a unified approach to the specification of neural goal functions

    In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a ‘goal function’, of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain-specific language (e.g. ‘edge filtering’, ‘working memory’). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon’s mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID makes it possible to quantify the information that several inputs provide individually (unique information), redundantly (shared information), or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared within a common framework, and also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called ‘coding with synergy’, which combines external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.
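
    As a minimal numerical illustration of why classical mutual information alone is insufficient here (and not an implementation of any particular PID measure), consider the standard XOR example: neither input by itself carries Shannon information about the output, yet the two inputs together determine it completely.

```python
# Minimal illustration (not an implementation of any specific PID measure):
# for Y = XOR(X1, X2) with independent uniform binary inputs, each input alone
# carries zero Shannon information about Y, yet both together determine Y
# completely -- the textbook case of purely synergistic information.
from itertools import product
from math import log2

# Joint distribution p(x1, x2, y) with y = x1 XOR x2.
joint = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def mutual_information(joint, x_idx, y_idx=2):
    """I(X; Y) in bits, where X is the tuple of key positions listed in x_idx."""
    px, py, pxy = {}, {}, {}
    for key, pr in joint.items():
        x, y = tuple(key[i] for i in x_idx), key[y_idx]
        px[x] = px.get(x, 0.0) + pr
        py[y] = py.get(y, 0.0) + pr
        pxy[x, y] = pxy.get((x, y), 0.0) + pr
    return sum(pr * log2(pr / (px[x] * py[y])) for (x, y), pr in pxy.items() if pr > 0)

print("I(X1;Y)    =", mutual_information(joint, (0,)))    # 0.0 bits
print("I(X2;Y)    =", mutual_information(joint, (1,)))    # 0.0 bits
print("I(X1,X2;Y) =", mutual_information(joint, (0, 1)))  # 1.0 bit
```

    PID goes beyond these classical quantities by decomposing the joint information into unique, shared, and synergistic parts; in this example the whole bit is assigned to synergy under commonly used PID measures.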

    Self-Organized Complexity and Coherent Infomax from the Viewpoint of Jaynes's Probability Theory

    This paper discusses concepts of self-organized complexity and the theory of Coherent Infomax in the light of Jaynes’s probability theory. Coherent Infomax shows, in principle, how adaptively self-organized complexity can be preserved and improved by using probabilistic inference that is context-sensitive. It argues that neural systems do this by combining local reliability with flexible, holistic context-sensitivity. Jaynes argued that the logic of probabilistic inference shows it to be based upon Bayesian and Maximum Entropy methods, or special cases of them. He presented his probability theory as the logic of science; here it is considered as the logic of life. It is concluded that the theory of Coherent Infomax specifies a general objective for probabilistic inference, and that contextual interactions in neural systems perform functions required of the scientist within Jaynes’s theory.
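
    For readers unfamiliar with the Maximum Entropy side of Jaynes's argument, the sketch below works through his classic 'Brandeis dice' example (an illustration of the method, not material from the paper): among all distributions over die faces with a prescribed mean, the maximum-entropy distribution has exponential form, and its parameter can be found numerically.

```python
# Sketch of Jaynes-style Maximum Entropy inference using his classic
# 'Brandeis dice' example (an illustration of the method, not content from
# the paper above): among all distributions over die faces 1..6 with mean 4.5,
# the maximum-entropy one is p_i proportional to exp(lam * i); we find lam
# by bisection, since the implied mean increases monotonically with lam.
import numpy as np

faces = np.arange(1, 7)
target_mean = 4.5          # Jaynes's illustrative constraint

def mean_of(lam):
    p = np.exp(lam * faces)
    p /= p.sum()
    return p @ faces

lo, hi = -10.0, 10.0       # bracket for the Lagrange multiplier
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_of(mid) < target_mean:
        lo = mid
    else:
        hi = mid

lam = 0.5 * (lo + hi)
p = np.exp(lam * faces)
p /= p.sum()
print("lambda   =", round(lam, 4))
print("maxent p =", np.round(p, 4), "  mean =", round(float(p @ faces), 4))
```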

    Unlocking the Potential of Two-Point Cells for Energy-Efficient and Resilient Training of Deep Nets


    The effects of arousal on apical amplification and conscious state

    Neocortical pyramidal cells can integrate two classes of input separately and use one to modulate response to the other. Their tuft dendrites are electrotonically separated from basal dendrites and soma by the apical dendrite, and apical hyperpolarization-activated currents (Ih) further isolate subthreshold integration of tuft inputs. When apical depolarization exceeds a threshold, however, it can enhance response to the basal inputs that specify the cell’s selective sensitivity. This process is referred to as apical amplification (AA). We review evidence suggesting that, by regulating Ih in the apical compartments, adrenergic arousal controls the coupling between apical and somatic integration zones, thus modifying cognitive capabilities closely associated with consciousness. Evidence relating AA to schizophrenia, sleep, and anesthesia is reviewed, and we assess theories that emphasize the relevance of AA to consciousness. Implications for theories of neocortical computation that emphasize context-sensitive modulation are summarized. We conclude that the findings concerning AA and its regulation by arousal offer a new perspective on states of consciousness, the function and evolution of neocortex, and psychopathology. Many issues worthy of closer examination arise.
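
    A minimal sketch of the drive-versus-modulation distinction described above follows; the transfer function, the rectifications, and the 'coupling' parameter standing in for the Ih-dependent apical-somatic coupling are illustrative assumptions, not a biophysical model of the pyramidal cell.

```python
# Toy sketch of apical amplification: apical (contextual) input scales the
# gain of the response to basal (driving) input but cannot drive the output
# on its own.  The functional form and the 'coupling' parameter, standing in
# for the arousal-controlled (Ih-dependent) apical-somatic coupling, are
# assumptions for illustration only.
import numpy as np

def two_point_output(basal, apical, coupling=1.0):
    """Firing-rate-like output of a two-compartment ('two-point') unit."""
    drive = np.maximum(basal, 0.0)                        # basal input alone sets selectivity
    gain = 1.0 + coupling * np.tanh(np.maximum(apical, 0.0))
    return drive * gain                                   # apical input amplifies, never drives

for basal, apical in [(1.0, 0.0), (1.0, 2.0), (0.0, 2.0), (0.3, 2.0)]:
    out = two_point_output(basal, apical)
    print(f"basal={basal:.1f}  apical={apical:.1f}  ->  output={out:.2f}")
```

    Setting coupling towards zero reproduces the decoupled regime in which apical input cannot amplify the somatic response, mirroring the Ih-mediated isolation described above.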

    Dynamic coordination in brain and mind

    Our goal here is to clarify the concept of 'dynamic coordination', and to note major issues that it raises for the cognitive neurosciences. In general, coordinating interactions are those that produce coherent and relevant overall patterns of activity, while preserving the essential individual identities and functions of the activities coordinated. 'Dynamic coordination' is the coordination that is created on a moment-by-moment basis so as to deal effectively with unpredictable aspects of the current situation. We distinguish different computational goals for dynamic coordination, and outline issues that arise concerning local cortical circuits, brain systems, cognition, and evolution. Our focus here is on dynamic coordination by widely distributed processes of self-organisation, but we also discuss the role of central executive processes.

    The coherent organization of mental life depends on mechanisms for context-sensitive gain-control that are impaired in schizophrenia

    There is rapidly growing evidence that schizophrenia involves changes in context-sensitive gain-control and probabilistic inference. In addition to the well-known cognitive disorganization to which these changes lead, basic aspects of vision are also impaired, as discussed by other papers on this Frontiers Research Topic. The aim of this paper is to contribute to our understanding of such findings by examining five central hypotheses. First, context-sensitive gain-control is fundamental to brain function and mental life. Second, it occurs in many different regions of the cerebral cortex of many different mammalian species. Third, it has several computational functions, each with wide generality. Fourth, it is implemented by several neural mechanisms at cellular and circuit levels. Fifth, impairments of context-sensitive gain-control produce many of the well-known symptoms of schizophrenia and change basic processes of visual perception. These hypotheses suggest why disorders of vision in schizophrenia may provide insights into the nature and mechanisms of impaired reality testing and thought disorder in psychosis. They may also cast light on normal mental function and its neural bases. Limitations of these hypotheses, and ways in which they need further testing and development, are outlined.