
    Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop

    The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques specifically developed for analyzing and understanding the inner workings of, and the representations acquired by, neural models of language. Approaches included: systematically manipulating the input to neural networks and investigating the impact on their performance; testing whether interpretable knowledge can be decoded from the intermediate representations acquired by neural networks; proposing modifications to neural network architectures that make their knowledge state or generated output more explainable; and examining the performance of networks on simplified or formal languages. Here we review a number of representative studies in each category.
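    One of the approaches the abstract lists, decoding interpretable knowledge from intermediate representations, is often realized as a "probing classifier." The following is a minimal, self-contained sketch of that idea using synthetic data in place of a real network's hidden states (the data, dimensions, and training settings are all illustrative assumptions, not from the workshop papers):

```python
import numpy as np

# Hypothetical probing sketch: test whether a binary property can be
# linearly decoded from "intermediate representations" (here, random
# vectors standing in for a network's hidden states) by training a
# logistic-regression probe with plain gradient descent.

rng = np.random.default_rng(0)

n, d = 200, 16                      # examples, hidden dimension
w_true = rng.normal(size=d)         # direction along which the property is encoded
H = rng.normal(size=(n, d))         # synthetic "hidden states"
y = (H @ w_true > 0).astype(float)  # property labels encoded in the representations

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)                     # probe weights
for _ in range(500):                # gradient descent on the logistic loss
    p = sigmoid(H @ w)
    w -= 0.1 * H.T @ (p - y) / n

accuracy = ((sigmoid(H @ w) > 0.5) == y).mean()
# High probe accuracy suggests the property is linearly decodable from
# the representations; chance level for this balanced setup is ~0.5.
```

    In practice a probe trained on real hidden states is compared against such a chance baseline (or a control task) before concluding that the network encodes the property.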

    Oscillations, metastability and phase transitions in brain and models of cognition

    Neuroscience is practiced in many different forms and at many different organizational levels of the nervous system. Which of these levels, and which of the associated conceptual frameworks, is most informative for elucidating the association of neural processes with processes of cognition is an empirical question, subject to pragmatic validation. In this essay, I select the framework of dynamical systems theory. In recent years, several investigators have applied tools and concepts of this theory to the interpretation of observational data and to the design of neuronal models of cognitive functions. I will first trace the essentials of the conceptual developments and hypotheses separately, to discern observational tests and criteria for the functional realism and conceptual plausibility of the alternatives they offer. I will then show that the statistical mechanics of phase transitions in brain activity, and some of its models, provides a new and possibly revealing perspective on brain events in cognition.

    Redistribution of Synaptic Efficacy Supports Stable Pattern Learning in Neural Networks

    Markram and Tsodyks, by showing that the elevated synaptic efficacy observed with single-pulse LTP measurements disappears with higher-frequency test pulses, have critically challenged the conventional assumption that LTP reflects a general gain increase. Redistribution of synaptic efficacy (RSE) is here seen as the local realization of a global design principle in a neural network for pattern coding. As is typical of many coding systems, the network learns by dynamically balancing a pattern-independent increase in strength against a pattern-specific increase in selectivity. This computation is implemented by a monotonic long-term memory process that has a bidirectional effect on the postsynaptic potential via functionally complementary signal components. These frequency-dependent and frequency-independent components realize the balance between specific and nonspecific functions at each synapse. This synaptic balance suggests a functional purpose for RSE: by dynamically bounding total memory change, it implements a distributed coding scheme that is stable with fast as well as slow learning. Although RSE would seem to make it impossible to code high-frequency input features, a network preprocessing step called complement coding symmetrizes the input representation, which allows the system to encode high-frequency as well as low-frequency features in an input pattern. A possible physical model interprets the two synaptic signal components in terms of ligand-gated and voltage-gated receptors, where learning converts channels from one type to another.
    Office of Naval Research and the Defense Advanced Research Projects Agency (N00014-95-1-0409, N00014-1-95-0657)
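    The complement-coding preprocessing step mentioned in the abstract can be sketched in a few lines. In the standard formulation used in ART-family networks, each input pattern a (with components in [0, 1]) is concatenated with its complement 1 - a, so the coded vector has a constant L1 norm and high-activity and low-activity features are represented symmetrically. The example values below are purely illustrative:

```python
import numpy as np

# Complement coding: symmetrize an input pattern a in [0, 1]^n by
# concatenating it with its complement 1 - a. The result always has
# L1 norm equal to n, so "high-frequency" (high-activity) features are
# encoded on an equal footing with low-activity ones.

def complement_code(a):
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

a = np.array([0.9, 0.1, 0.0])       # illustrative input pattern
coded = complement_code(a)
# coded is [0.9, 0.1, 0.0, 0.1, 0.9, 1.0]; its components sum to 3
# (the length of a) no matter how active the original pattern is.
```

    Because the coded norm is fixed, total activity carries no information by itself, which is what lets the network bound memory change while still distinguishing active from inactive features.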