
    Neural Expectation Maximization

    Many real-world tasks such as reasoning and physical interaction require the identification and manipulation of conceptual entities. A first step towards solving these tasks is the automated discovery of distributed symbol-like representations. In this paper, we explicitly formalize this problem as inference in a spatial mixture model where each component is parametrized by a neural network. Based on the Expectation Maximization framework, we then derive a differentiable clustering method that simultaneously learns how to group and represent individual entities. We evaluate our method on the (sequential) perceptual grouping task and find that it is able to accurately recover the constituent objects. We demonstrate that the learned representations are useful for next-step prediction. Comment: Accepted to NIPS 201
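    The E-step/M-step alternation that the paper turns into a differentiable clustering procedure can be illustrated with classic EM on a one-dimensional Gaussian mixture. This is a minimal sketch: each component is summarized by a plain mean rather than the paper's neural-network parametrization, and the data, fixed variance, and quantile initialization are illustrative assumptions.

```python
import numpy as np

def em_mixture(x, k=2, iters=50, sigma=0.5):
    """EM for a 1-D mixture of k fixed-variance Gaussians."""
    mu = np.quantile(x, np.linspace(0, 1, k))   # spread initial means over the data range
    for _ in range(iters):
        # E-step: soft assignment (responsibility) of each point to each component
        logp = -0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2
        gamma = np.exp(logp - logp.max(axis=1, keepdims=True))
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: re-estimate each mean as the responsibility-weighted average
        mu = (gamma * x[:, None]).sum(axis=0) / gamma.sum(axis=0)
    return np.sort(mu), gamma

# Two well-separated clusters around 0 and 5
x = np.concatenate([np.random.default_rng(1).normal(0, 0.3, 100),
                    np.random.default_rng(2).normal(5, 0.3, 100)])
mu, gamma = em_mixture(x)
```

    The soft responsibilities `gamma` play the role of the grouping the paper learns; making both steps differentiable is what lets the component parameters be produced by a trained network instead.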

    A Neural Model of How Horizontal and Interlaminar Connections of Visual Cortex Develop into Adult Circuits that Carry Out Perceptual Grouping and Learning

    A neural model suggests how horizontal and interlaminar connections in visual cortical areas V1 and V2 develop within a laminar cortical architecture and give rise to adult visual percepts. The model suggests how mechanisms that control cortical development in the infant lead to properties of adult cortical anatomy, neurophysiology, and visual perception. The model clarifies how excitatory and inhibitory connections can develop stably by maintaining a balance between excitation and inhibition. The growth of long-range excitatory horizontal connections between layer 2/3 pyramidal cells is balanced against that of short-range disynaptic interneuronal connections. The growth of excitatory on-center connections from layer 6-to-4 is balanced against that of inhibitory interneuronal off-surround connections. These balanced connections interact via intracortical and intercortical feedback to realize properties of perceptual grouping, attention, and perceptual learning in the adult, and help to explain the observed variability in the number and temporal distribution of spikes emitted by cortical neurons. The model replicates cortical point spread functions and psychophysical data on the strength of real and illusory contours. The on-center off-surround layer 6-to-4 circuit enables top-down attentional signals from area V2 to modulate, or attentionally prime, layer 4 cells in area V1 without fully activating them. This modulatory circuit also enables adult perceptual learning within cortical areas V1 and V2 to proceed in a stable way. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)
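    The on-center off-surround motif the abstract describes can be illustrated with the textbook difference-of-Gaussians operator: a narrow excitatory center minus a broader inhibitory surround, with the two normalized so excitation and inhibition balance. This is a generic stand-in for the circuit motif, not the paper's laminar layer 6-to-4 model; all sizes and widths here are illustrative.

```python
import numpy as np

def dog_kernel(size=9, sigma_c=1.0, sigma_s=3.0):
    """1-D difference-of-Gaussians: excitatory center minus inhibitory surround."""
    r = np.arange(size) - size // 2
    center = np.exp(-r**2 / (2 * sigma_c**2))
    surround = np.exp(-r**2 / (2 * sigma_s**2))
    center /= center.sum()        # normalizing both halves balances
    surround /= surround.sum()    # total excitation against total inhibition
    return center - surround

k = dog_kernel()
signal = np.zeros(50)
signal[20:30] = 1.0               # a bar of input
response = np.convolve(signal, k, mode='same')   # edges are enhanced, interior suppressed
```

    Because both halves are normalized, the kernel sums to zero: a uniform input produces no net response, while contrast edges do, which is the modulatory rather than driving behavior the abstract attributes to the balanced circuit.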


    Does money matter in inflation forecasting?

    This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely, recurrent neural networks and kernel recursive least squares regression - techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
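    A non-linear autoregressive kernel forecaster of the kind the abstract favors can be sketched as follows. For simplicity, batch kernel ridge regression with an RBF kernel stands in for the recursive (online) kernel least-squares method, and a chaotic logistic map stands in for the inflation series; the lag structure, kernel width, and ridge penalty are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf(a, b, gamma=5.0):
    """RBF (Gaussian) kernel matrix between two 1-D sample vectors."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def kernel_ar_forecast(series, lam=1e-4, gamma=5.0):
    """One-step-ahead forecast from a one-lag kernel ridge autoregression."""
    x, y = series[:-1], series[1:]                         # (input, next-value) pairs
    K = rbf(x, x, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)   # ridge regression solution
    return (rbf(np.array([series[-1]]), x, gamma) @ alpha)[0]

# Logistic map: the next value is a deterministic non-linear function of the
# current one, which a naive random walk cannot capture.
series = np.empty(200)
series[0] = 0.3
for t in range(199):
    series[t + 1] = 3.9 * series[t] * (1 - series[t])

truth = 3.9 * series[-1] * (1 - series[-1])   # true next value
pred = kernel_ar_forecast(series)
naive = series[-1]                            # random-walk forecast: repeat last value
```

    The random-walk baseline simply repeats the last observation, so any forecaster that learns the non-linear autoregressive structure can beat it on such a series; the paper's comparison across money measures follows the same template with real inflation data.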

    CondenseNet: An Efficient DenseNet using Learned Group Convolutions

    Deep neural networks are increasingly used on mobile devices, where computational resources are limited. In this paper we develop CondenseNet, a novel network architecture with unprecedented efficiency. It combines dense connectivity with a novel module called learned group convolution. The dense connectivity facilitates feature re-use in the network, whereas learned group convolutions remove connections between layers for which this feature re-use is superfluous. At test time, our model can be implemented using standard group convolutions, allowing for efficient computation in practice. Our experiments show that CondenseNets are far more efficient than state-of-the-art compact convolutional networks such as MobileNets and ShuffleNets.
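    The standard group convolution that CondenseNet reduces to at test time can be sketched for the 1x1 case: input channels are split into g groups, each group gets its own small weight matrix, and the multiply count drops by a factor of g versus a dense 1x1 convolution. The shapes below are illustrative, and the learned pruning that assigns channels to groups is omitted.

```python
import numpy as np

def group_conv1x1(x, weights):
    """1x1 group convolution.

    x: array of shape (C_in, H, W)
    weights: list of g arrays, each of shape (C_out // g, C_in // g)
    """
    g = len(weights)
    chunks = np.split(x, g, axis=0)   # split input channels into g groups
    # Each group is transformed independently by its own weight matrix
    outs = [np.einsum('oc,chw->ohw', w, c) for w, c in zip(weights, chunks)]
    return np.concatenate(outs, axis=0)

x = np.random.default_rng(0).normal(size=(8, 4, 4))
weights = [np.ones((2, 2)) for _ in range(4)]   # g=4 groups mapping 8 -> 8 channels
y = group_conv1x1(x, weights)
```

    With g=4 here, each output channel only sees 2 of the 8 input channels, so the layer costs a quarter of the multiplies of a dense 1x1 convolution with the same channel counts.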

    Multi-space Variational Encoder-Decoders for Semi-supervised Labeled Sequence Transduction

    Labeled sequence transduction is a task of transforming one sequence into another sequence that satisfies desiderata specified by a set of labels. In this paper we propose multi-space variational encoder-decoders, a new model for labeled sequence transduction with semi-supervised learning. The generative model can use neural networks to handle both discrete and continuous latent variables to exploit various features of data. Experiments show that our model provides not only a powerful supervised framework but also can effectively take advantage of the unlabeled data. On the SIGMORPHON morphological inflection benchmark, our model outperforms single-model state-of-the-art results by a large margin for the majority of languages. Comment: Accepted by ACL 201
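    The two latent-variable types such a model combines can be sketched with their standard sampling tricks: a continuous Gaussian latent via the reparameterization trick, and a discrete label-like latent via the Gumbel-softmax relaxation. This shows only the sampling step; the encoder and decoder networks are omitted, and all names and sizes are illustrative rather than the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_latent(mu, log_var):
    """Reparameterized Gaussian sample: z = mu + sigma * eps.

    Writing the sample this way keeps it differentiable w.r.t. mu and log_var.
    """
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gumbel_softmax(logits, tau=0.5):
    """Relaxed one-hot sample over discrete labels; smaller tau sharpens it."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))   # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()                                     # softmax -> relaxed one-hot

z_cont = gaussian_latent(np.zeros(4), np.zeros(4))     # continuous latent
z_disc = gumbel_softmax(np.array([2.0, 0.1, 0.1]))     # discrete (label) latent
```

    Both tricks keep sampling differentiable, which is what lets gradients flow through the latent variables during semi-supervised training when some labels are missing.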