
    Deterministic networks for probabilistic computing

    Neural-network models of high-level brain functions such as memory recall and reasoning often rely on the presence of stochasticity. Most of these models assume that each neuron in the functional network is equipped with its own private source of randomness, often in the form of uncorrelated external noise. However, both in vivo and in silico, the number of noise sources is limited by space and bandwidth constraints. Hence, neurons in large networks usually need to share noise sources. Here, we show that the resulting shared-noise correlations can significantly impair the performance of stochastic network models. We demonstrate that this problem can be overcome by using deterministic recurrent neural networks as sources of uncorrelated noise, exploiting the decorrelating effect of inhibitory feedback. Consequently, even a single recurrent network of a few hundred neurons can serve as a natural noise source for large ensembles of functional networks, each comprising thousands of units. We successfully apply the proposed framework to a diverse set of binary-unit networks with different dimensionalities and entropies, as well as to a network reproducing handwritten digits with distinct predefined frequencies. Finally, we show that the same design transfers to functional networks of spiking neurons.
    Comment: 22 pages, 11 figures
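
    As a rough illustration of the architecture described above (not the authors' implementation), the following NumPy sketch runs a small deterministic network with purely inhibitory recurrence and measures how correlated its unit activities are; the abstract's claim is that inhibitory feedback keeps these correlations low enough for the activity to serve as a noise source. Network size, connectivity, thresholds, and the update rule are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        N, p, T = 200, 0.1, 1000          # units, connection probability, steps (assumptions)

        # Quenched random structure: sparse, purely inhibitory recurrent weights.
        W = -(rng.random((N, N)) < p).astype(float)
        np.fill_diagonal(W, 0.0)
        # Heterogeneous thresholds near half the typical inhibition a unit receives.
        theta = 0.5 * W.sum(axis=1) + rng.normal(0.0, 0.5, N)

        x = (rng.random(N) < 0.5).astype(float)   # arbitrary initial state
        states = np.empty((T, N))
        for t in range(T):
            for i in range(N):                    # fixed scan order: fully deterministic
                x[i] = 1.0 if W[i] @ x > theta[i] else 0.0
            states[t] = x

        # Pairwise correlations of the unit activities; values near zero mean the
        # units behave like (almost) independent binary noise streams.
        c = np.corrcoef(states.T)
        off_diag = c[~np.eye(N, dtype=bool)]
        print(f"mean activity {states.mean():.2f}, "
              f"mean |corr| {np.nanmean(np.abs(off_diag)):.3f}")

    In the paper's scheme, a functional network would then read disjoint subsets of these units as its "private" randomness; that readout step is omitted here.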

    Abstract Syntax Networks for Code Generation and Semantic Parsing

    Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs. We introduce abstract syntax networks, a modeling framework for these problems. The outputs are represented as abstract syntax trees (ASTs) and constructed by a decoder with a dynamically determined modular structure paralleling the structure of the output tree. On the benchmark Hearthstone dataset for code generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy, compared to previous state-of-the-art values of 67.1 BLEU and 6.1%. Furthermore, we perform competitively on the ATIS, JOBS, and GEO semantic parsing datasets with no task-specific engineering.
    Comment: ACL 2017. MR and MS contributed equally.
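
    To make the "dynamically determined modular structure" concrete, here is a toy, non-neural sketch: one module per AST constructor, selected at decoding time, so the call graph of the decoder mirrors the output tree. The grammar and the stub module (which just picks the first production instead of scoring with a network) are assumptions for illustration, not the paper's setup.

        GRAMMAR = {
            # node type -> list of (constructor name, child node types)
            "expr": [("Num", ["int"]), ("Add", ["expr", "expr"])],
            "int":  [("Lit", [])],
        }

        def choose_constructor(node_type, context):
            # Stand-in for a learned module: in the paper, a neural module scores
            # the constructors available for this node type given the encoded
            # input and decoder state; here we deterministically pick the first.
            return GRAMMAR[node_type][0]

        def decode(node_type, context=()):
            # The recursion is assembled on the fly: which modules run, and in
            # what shape, depends on the constructors chosen so far, so the
            # computation parallels the output AST.
            ctor, child_types = choose_constructor(node_type, context)
            children = [decode(t, context + (ctor,)) for t in child_types]
            return (ctor, children)

        print(decode("expr"))   # -> ('Num', [('Lit', [])])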

    The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study

    High-level brain functions such as memory, classification, or reasoning can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for implementing such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks often depends critically on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear sub-threshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with non-linear, conductance-based synapses. Emulations of these networks on the analog neuromorphic hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm ...
    Comment: 20 pages, 10 figures, supplement
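
    A minimal software-only sketch of the quantity under study (far simpler than the Spikey emulations): binned "spikes" driven by a slowly varying shared input, with and without a delayed inhibitory population feedback that can track and cancel the common drive. All model details, gains, and time scales are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 100, 5000                        # neurons, time bins (assumptions)

        # Slowly varying shared drive (boxcar-filtered white noise, unit variance).
        raw = rng.normal(size=T + 99)
        shared = np.convolve(raw, np.ones(100) / 10.0, mode="valid")
        private = rng.normal(size=(T, N))       # independent drive per neuron
        drive = 0.8 * shared[:, None] + private # shared input induces correlations

        def mean_corr(s):
            c = np.corrcoef(s.T)
            return c[~np.eye(N, dtype=bool)].mean()

        # Feedforward case: shared input alone.
        ff = (drive > 1.0).astype(float)

        # With inhibitory feedback: each neuron is additionally inhibited by the
        # population activity of the previous bin, which tracks the slow shared
        # drive and partially cancels it.
        fb = np.empty((T, N))
        s = np.zeros(N)
        for t in range(T):
            s = ((drive[t] - 4.0 * s.mean()) > 1.0).astype(float)
            fb[t] = s

        print(f"mean pairwise correlation, shared input only: {mean_corr(ff):.3f}")
        print(f"mean pairwise correlation, with inhibition:   {mean_corr(fb):.3f}")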

    Conedy: a scientific tool to investigate Complex Network Dynamics

    We present Conedy, a performant scientific tool for numerically investigating dynamics on complex networks. Conedy allows users to create networks and provides automatic code generation and compilation to ensure efficient treatment of arbitrary node dynamics. Conedy can be interfaced via an internal script interpreter or via a Python module.
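
    I won't guess at Conedy's actual API here; as a stand-in, the following plain NumPy sketch shows the kind of computation such a tool automates: integrating user-defined node dynamics (Kuramoto phase oscillators, an arbitrary choice) on a random network. In Conedy, the node dynamics would instead be specified once and compiled automatically.

        import numpy as np

        rng = np.random.default_rng(0)
        N, K, dt, steps = 50, 0.5, 0.01, 5000    # nodes, coupling, step size, steps

        # Random directed network (illustrative; a real tool would offer builders).
        A = (rng.random((N, N)) < 0.1).astype(float)
        np.fill_diagonal(A, 0.0)

        omega = rng.normal(0.0, 1.0, N)           # natural frequencies
        theta = rng.uniform(0.0, 2.0 * np.pi, N)  # initial phases

        for _ in range(steps):
            # Kuramoto coupling: each node is pulled toward its neighbours' phases.
            coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            theta += dt * (omega + K * coupling)

        # Order parameter r in [0, 1]: degree of global phase synchrony.
        r = abs(np.exp(1j * theta).mean())
        print(f"order parameter r = {r:.3f}")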

    Frequency dependence of signal power and spatial reach of the local field potential

    The first recording of electrical potential from brain activity was reported as early as 1875, but the interpretation of the signal is still debated. To take full advantage of the new generation of microelectrodes with hundreds or even thousands of electrode contacts, an accurate quantitative link between what is measured and the underlying neural circuit activity is needed. Here we address the question of how the observed frequency dependence of recorded local field potentials (LFPs) should be interpreted. Using a well-established biophysical modeling scheme, combined with detailed reconstructed neuronal morphologies, we find that correlations in the synaptic inputs onto a population of pyramidal cells may significantly boost the low-frequency components of the generated LFP. We further find that these low-frequency components may be less 'local' than the high-frequency LFP components in the sense that (1) the size of the signal-generating region of the LFP recorded at an electrode is larger, and (2) the LFP generated by a synaptically activated population spreads further beyond the population edge due to volume conduction.
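
    The low-frequency boost follows from a simple scaling argument, sketched below under toy assumptions: the power of a sum of N sources grows like N when the sources are uncorrelated but like N squared in the correlated part, and if the shared (correlated) input component is itself slow, the boost lands in the low-frequency band. Signal shapes and band edges are illustrative, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 200, 2**14                       # sources and samples (assumptions)

        private = rng.normal(size=(N, T))       # independent source signals
        # Shared component is low-pass filtered: correlations between inputs are
        # assumed strongest at low frequencies.
        raw = rng.normal(size=T + 63)
        shared = np.convolve(raw, np.ones(64) / 8.0, mode="valid")  # unit variance

        def band_power(x, lo_cut=0.05, hi_cut=0.25):
            # Mean spectral power below lo_cut and above hi_cut (cycles/sample).
            f = np.fft.rfftfreq(T)
            p = np.abs(np.fft.rfft(x)) ** 2 / T
            return p[(f > 0) & (f < lo_cut)].mean(), p[f > hi_cut].mean()

        for c, label in ((0.0, "uncorrelated"), (0.05, "correlated")):
            # Each source has unit variance and pairwise correlation ~c.
            lfp = (np.sqrt(1 - c) * private + np.sqrt(c) * shared).sum(axis=0)
            low, high = band_power(lfp)
            print(f"{label:>12}: low-band power {low:10.0f}, "
                  f"high-band power {high:8.0f}")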

    Evolution of Symbolisation in Chimpanzees and Neural Nets

    From the Introduction: Animal communication systems and human languages can be characterised by the type of cognitive abilities they require. If we consider the main semiotic distinction between communication using icons, signals, or symbols (Peirce, 1955; Harnad, 1990; Deacon, 1997), we can identify a different cognitive load for each type of reference. The use and understanding of icons require instinctive behaviour (e.g. emotions) or simple perceptual processes (e.g. visual similarities between an icon and its meaning). Communication systems that use signals are characterised by referential associations between objects and visual or auditory signals; they require the cognitive ability to learn stimulus associations, as in conditional learning. Symbols involve double associations: first, symbolic systems require the establishment of associations between signals and objects; second, further relationships are learned between the signals themselves. The use of rules for the logical combination of symbols is an example of such a symbolic relationship. Symbolisation is the ability to acquire and handle symbols and symbolic relationships.