
    Intrinsic dimension of data representations in deep neural networks

    Deep neural networks progressively transform their inputs across multiple processing layers. What are the geometrical properties of the representations learned by these networks? Here we study the intrinsic dimensionality (ID) of data representations, i.e., the minimal number of parameters needed to describe a representation. We find that, in a trained network, the ID is orders of magnitude smaller than the number of units in each layer. Across layers, the ID first increases and then progressively decreases in the final layers. Remarkably, the ID of the last hidden layer predicts classification accuracy on the test set. These results cannot be obtained from linear dimensionality estimates (e.g., principal component analysis) or from representations that have been artificially linearized, and they are found neither in untrained networks nor in networks trained on randomized labels. This suggests that neural networks that generalize are those that transform the data into low-dimensional, but not necessarily flat, manifolds.
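
    The abstract does not name the ID estimator. As an illustration only, here is a minimal sketch (Python, toy data) of the TwoNN estimator of Facco et al., a common nonlinear ID estimator for exactly this setting: it uses only the ratio μ = r2/r1 of each point's distances to its two nearest neighbors, with the maximum-likelihood estimate ID ≈ N / Σ log μ.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_intrinsic_dimension(X):
    """TwoNN maximum-likelihood estimate of the intrinsic dimension of X
    (n_samples, n_features)."""
    # Distances to the two nearest neighbors (column 0 is the point itself).
    dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dists[:, 1], dists[:, 2]
    valid = (r1 > 0) & (r2 > r1)      # drop duplicate points and exact ties
    mu = r2[valid] / r1[valid]        # neighbor distance ratios, mu > 1
    # Under the TwoNN model, log(mu) ~ Exponential(ID), so the MLE is:
    return len(mu) / np.sum(np.log(mu))

# Toy check: a 3-D Gaussian cloud embedded linearly in a 50-D ambient space.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 50))
print(twonn_intrinsic_dimension(X))   # close to 3, far below the 50 ambient dims
```

    Applied layer by layer to a network's activations, an estimator like this yields the "hunchback" ID profile (rise, then fall) the abstract describes.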

    Retinal metric: a stimulus distance measure derived from population neural responses

    The ability of the organism to distinguish between various stimuli is limited by the structure and noise in the population code of its sensory neurons. Here we infer a distance measure on the stimulus space directly from the recorded activity of 100 neurons in the salamander retina. In contrast to previously used measures of stimulus similarity, this "neural metric" tells us how distinguishable a pair of stimulus clips is to the retina, given the noise in the neural population response. We show that the retinal distance strongly deviates from Euclidean, or any static metric, yet has a simple structure: we identify the stimulus features that the neural population is jointly sensitive to, and show the SVM-like kernel function relating the stimulus and neural response spaces. We show that the non-Euclidean nature of the retinal distance has important consequences for neural decoding.
    Comment: 5 pages, 4 figures; to appear in Phys. Rev. Lett.
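
    The paper infers its metric from the recorded population responses themselves; as a simpler illustration of what a noise-aware stimulus distance means, here is a minimal sketch of a Mahalanobis-style discriminability between the response distributions evoked by two stimulus clips (toy data; not the authors' estimator):

```python
import numpy as np

def neural_discriminability(R_a, R_b):
    """Mahalanobis-style distinguishability of two stimuli given noisy
    population responses. R_a, R_b: (n_trials, n_neurons) arrays of
    repeated responses to stimulus a and stimulus b."""
    mu_a, mu_b = R_a.mean(axis=0), R_b.mean(axis=0)
    # Pooled trial-to-trial noise covariance, ridge-regularized for stability.
    cov = 0.5 * (np.cov(R_a, rowvar=False) + np.cov(R_b, rowvar=False))
    cov += 1e-6 * np.eye(cov.shape[0])
    diff = mu_a - mu_b
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Toy example: 100 neurons, 200 repeats per stimulus clip (hypothetical data).
rng = np.random.default_rng(1)
R_a = rng.poisson(5.0, size=(200, 100)).astype(float)
R_b = rng.poisson(5.5, size=(200, 100)).astype(float)
print(neural_discriminability(R_a, R_b))
```

    The key property this shares with the paper's metric is that the distance depends on the noise covariance, not only on mean responses, so it need not agree with any Euclidean distance on the stimuli.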

    Quantitative determination of bond order and lattice distortions in nickel oxide heterostructures by resonant x-ray scattering

    We present a combined study of Ni K-edge resonant x-ray scattering and density functional calculations to probe and distinguish electronically driven ordering and lattice distortions in nickelate heterostructures. We demonstrate that, due to the low crystal symmetry, structural distortions can contribute significantly to the energy-dependent Bragg peak intensities of a bond-ordered NdNiO3 reference film. For a LaNiO3-LaAlO3 superlattice that exhibits magnetic order, we establish a rigorous upper bound on the bond-order parameter. We thus conclusively confirm predictions of a dominant spin density wave order parameter in metallic nickelates with a quasi-two-dimensional electronic structure.
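
    To make the energy-dependent Bragg intensities concrete: in the kinematic approximation, the intensity of a reflection is |F(Q,E)|², with structure factor F(Q,E) = Σ_j f_j(E) exp(2πi Q·r_j) and energy-dependent (anomalous) form factors f_j(E). A minimal sketch with a hypothetical two-site cell, showing how a superstructure reflection becomes allowed once the two Ni sites are inequivalent, whether through bond order (different f_j) or lattice distortion (different r_j):

```python
import numpy as np

def bragg_intensity(hkl, positions, form_factors):
    """Kinematic Bragg intensity |F(Q, E)|^2 for one reflection.
    positions: fractional coordinates (n_atoms, 3);
    form_factors: complex f_j(E) = f0 + f'(E) + i f''(E), shape (n_atoms,)."""
    phases = np.exp(2j * np.pi * positions @ np.asarray(hkl))
    F = np.sum(form_factors * phases)
    return float(np.abs(F) ** 2)

# Toy two-site example: the (0 0 1) reflection cancels exactly when the two
# Ni sites are identical, but lights up when their resonant form factors
# differ (all numbers hypothetical).
ni_sites = np.array([[0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.5]])
f_equal   = np.array([10 + 1j, 10 + 1j])
f_ordered = np.array([10 + 1j, 11 + 2j])
print(bragg_intensity((0, 0, 1), ni_sites, f_equal))    # 0: forbidden
print(bragg_intensity((0, 0, 1), ni_sites, f_ordered))  # > 0: bond order
```

    The paper's point is that in a low-symmetry structure, site displacements alone can also switch such peaks on, so the electronic and structural contributions must be disentangled, here via density functional calculations.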

    Neural population coding: combining insights from microscopic and mass signals

    Panzeri S, Macke JH, Gross J, Kayser C. Neural population coding: combining insights from microscopic and mass signals. Trends Cogn Sci. 2015;19(3):162-172.

    Former title: A theory for the emergence of neocortical network architecture

    Developmental programs that guide neurons and their neurites into specific subvolumes of the mammalian neocortex give rise to lifelong constraints for the formation of synaptic connections. To what degree do these constraints affect cortical wiring diagrams? Here we introduce an inverse modeling approach to show how cortical networks would appear if they were due solely to the spatial distributions of neurons and neurites. We find that neurite packing density and morphological diversity inevitably translate into non-random pairwise and higher-order connectivity statistics. More importantly, we show that these non-random wiring properties are not arbitrary, but instead reflect the specific structural organization of the underlying neuropil. Our predictions are consistent with the empirically observed wiring specificity from subcellular to network scales. Thus, independently of learning and genetically encoded wiring rules, many of the properties that define the neocortex's characteristic network architecture may emerge as a result of neuron and neurite development.
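
    The idea of "networks as they would appear solely from spatial distributions" can be illustrated with a Peters'-rule-style null model (an assumption for illustration, not necessarily the authors' exact formulation): expected connectivity between two neurons follows from the voxel-wise overlap of presynaptic axon density and postsynaptic dendrite density.

```python
import numpy as np

def expected_connectivity(axon_density, dendrite_density):
    """Peters'-rule-style null model: expected synapse counts between all
    neuron pairs from the voxel-wise overlap of their neurite densities.
    axon_density, dendrite_density: (n_neurons, n_voxels) nonnegative arrays."""
    total_ax = axon_density.sum(axis=0, keepdims=True) + 1e-12
    total_de = dendrite_density.sum(axis=0, keepdims=True) + 1e-12
    syn_per_voxel = np.minimum(total_ax, total_de)  # synapses available (toy)
    # Each neuron's share of the axonal / dendritic mass in every voxel.
    p_ax = axon_density / total_ax
    p_de = dendrite_density / total_de
    # E[synapses i -> j] = sum over voxels of i's axonal share * j's
    # dendritic share * synapses formed in that voxel.
    return (p_ax * syn_per_voxel) @ p_de.T

# Hypothetical voxelized neurite densities: 20 neurons, 100 voxels.
rng = np.random.default_rng(2)
ax = rng.gamma(1.0, size=(20, 100))
de = rng.gamma(1.0, size=(20, 100))
E = expected_connectivity(ax, de)
print(E.shape, E.mean())
```

    Even in such a model with no learning and no identity-based rules, heterogeneous densities already produce structured pairwise and higher-order connectivity statistics, which is the abstract's central observation.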

    The impact of neuron morphology on cortical network architecture

    The neurons in the cerebral cortex are not randomly interconnected. This specificity in wiring can result from synapse formation mechanisms that connect neurons depending on their electrical activity and genetically defined identity. Here, we report that the morphological properties of the neurons provide an additional prominent source by which wiring specificity emerges in cortical networks. This morphologically determined wiring specificity reflects similarities between the neurons' axo-dendritic projection patterns, the packing density, and the cellular diversity of the neuropil. The higher these three factors, the more recurrent the network's topology; conversely, the lower they are, the more feedforward the network's topology. These principles predict the empirically observed occurrences of clusters of synapses, cell type-specific connectivity patterns, and nonrandom network motifs. Thus, we demonstrate that wiring specificity emerges in the cerebral cortex at subcellular, cellular, and network scales from the specific morphological properties of its neuronal constituents.
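
    One simple way to quantify "more recurrent vs. more feedforward" topology (an illustrative choice, not necessarily the paper's statistic) is the reciprocity of the directed wiring diagram, i.e., the fraction of connections whose reverse connection also exists, compared against the chance level of a random graph:

```python
import numpy as np

def reciprocity(adj):
    """Fraction of directed edges whose reverse edge also exists.
    adj: (n, n) binary adjacency matrix with a zero diagonal."""
    edges = adj.sum()
    mutual = (adj * adj.T).sum()   # each reciprocal pair counted once per direction
    return mutual / edges if edges else 0.0

# Hypothetical wiring diagrams: 200 neurons, 10% connection probability.
rng = np.random.default_rng(3)
n, p = 200, 0.1
adj = (rng.random((n, n)) < p).astype(int)
np.fill_diagonal(adj, 0)
print("random graph:", reciprocity(adj))          # ~p, the chance level

# Bias toward reciprocal connections ("more recurrent" topology).
upper = np.triu(adj, 1)
adj_recurrent = upper + upper.T
print("recurrent graph:", reciprocity(adj_recurrent))  # ~1
```

    Reciprocity well above chance signals recurrent structure; values at or below chance are consistent with a more feedforward topology.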