Intrinsic dimension of data representations in deep neural networks
Deep neural networks progressively transform their inputs across multiple processing layers. What are the geometrical properties of the representations learned by these networks? Here we study the intrinsic dimensionality (ID) of data representations, i.e., the minimal number of parameters needed to describe a representation. We find that, in a trained network, the ID is orders of magnitude smaller than the number of units in each layer. Across layers, the ID first increases and then progressively decreases in the final layers. Remarkably, the ID of the last hidden layer predicts classification accuracy on the test set. These results cannot be reproduced by linear dimensionality estimates (e.g., principal component analysis) or in representations that have been artificially linearized, nor do they appear in untrained networks or in networks trained on randomized labels. This suggests that neural networks that generalize are those that transform the data onto low-dimensional, but not necessarily flat, manifolds.
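The nonlinear ID estimates described above can be illustrated with a nearest-neighbor estimator. Below is a minimal sketch of a TwoNN-style maximum-likelihood estimate (the data, variable names, and embedding are illustrative assumptions, not the authors' code):

```python
import numpy as np

def two_nn_id(X):
    """Estimate intrinsic dimension from the ratio of second- to
    first-nearest-neighbor distances (TwoNN-style ML estimate)."""
    # full pairwise distance matrix; fine for a few thousand points
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    D.sort(axis=1)
    r1, r2 = D[:, 0], D[:, 1]
    mu = r2 / r1
    # under a locally uniform density, mu follows a Pareto law with
    # exponent equal to the ID, giving the ML estimate N / sum(log mu)
    return len(mu) / np.log(mu).sum()

# a 2D plane linearly embedded in 50 ambient dimensions
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))
X = Z @ rng.normal(size=(2, 50))
est = two_nn_id(X)  # typically close to 2, far below the 50 ambient dims
```

A PCA-based estimate on the same data would also report 2 here because the embedding is linear; the nonlinear estimator matters once the manifold is curved.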
Retinal metric: a stimulus distance measure derived from population neural responses
The ability of the organism to distinguish between various stimuli is limited
by the structure and noise in the population code of its sensory neurons. Here
we infer a distance measure on the stimulus space directly from the recorded
activity of 100 neurons in the salamander retina. In contrast to previously
used measures of stimulus similarity, this "neural metric" tells us how
distinguishable a pair of stimulus clips is to the retina, given the noise in
the neural population response. We show that the retinal distance strongly
deviates from Euclidean, or any static metric, yet has a simple structure: we
identify the stimulus features that the neural population is jointly sensitive
to, and show the SVM-like kernel function relating the stimulus and neural
response spaces. We show that the non-Euclidean nature of the retinal distance
has important consequences for neural decoding.
Comment: 5 pages, 4 figures, to appear in Phys. Rev. Lett.
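The idea of a noise-aware stimulus distance can be sketched with a simple discriminability measure: the mean response difference scaled by trial-to-trial noise. This is a diagonal-covariance stand-in, not the authors' inferred metric, and all names and simulated responses below are illustrative:

```python
import numpy as np

def neural_distance(resp_a, resp_b):
    """Discriminability-style distance between two stimuli from repeated
    population responses of shape (trials, neurons): mean response
    difference scaled by the pooled trial-to-trial variance per neuron."""
    mu_a, mu_b = resp_a.mean(axis=0), resp_b.mean(axis=0)
    pooled_var = 0.5 * (resp_a.var(axis=0) + resp_b.var(axis=0)) + 1e-9
    return float(np.sqrt(((mu_a - mu_b) ** 2 / pooled_var).sum()))

rng = np.random.default_rng(1)
mu1, mu2 = np.zeros(100), np.full(100, 0.5)  # fixed Euclidean separation
low_noise = (rng.normal(mu1, 0.2, (50, 100)), rng.normal(mu2, 0.2, (50, 100)))
high_noise = (rng.normal(mu1, 2.0, (50, 100)), rng.normal(mu2, 2.0, (50, 100)))
# identical stimulus separation, very different neural distances:
# the metric depends on the noise, not just the stimulus difference
d_low, d_high = neural_distance(*low_noise), neural_distance(*high_noise)
```

The same Euclidean stimulus separation yields a much larger neural distance when the population noise is low, which is the sense in which such a metric deviates from any static stimulus-space measure.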
Local and Remote Controls on Arctic Mixed-Layer Evolution
In this study, Lagrangian large-eddy simulation of cloudy mixed layers in evolving warm air masses in the Arctic is constrained by in situ observations from the recent PASCAL field campaign. A key novelty is that time dependence is maintained in the large-scale forcings. An iterative procedure featuring large-eddy simulation on microgrids is explored to calibrate the case setup, inspired by and making use of the typically long memory of Arctic air masses for upstream conditions. The simulated mixed-phase clouds are part of a turbulent mixed layer that is weakly coupled to the surface and is occasionally capped by a shallow humidity layer. All eight simulated mixed layers exhibit a strong time evolution across a range of time scales, including diurnal but also synoptic fingerprints. A few cases experience rapid cloud collapse, coinciding with a rapid decrease in mixed-layer depth. To gain insight, composite budget analyses are performed. In the mixed-layer interior the heat and moisture budgets are dominated by turbulent transport, radiative cooling, and precipitation. However, near the thermal inversion the large-scale vertical advection also contributes significantly, showing a distinct difference between subsidence and upsidence conditions. A bulk mass budget analysis reveals that entrainment deepening remains almost constant in time as long as clouds are present. In contrast, large-scale subsidence fluctuates much more strongly and can both counteract and boost boundary-layer deepening resulting from entrainment. Strong and sudden subsidence events following prolonged deepening periods are found to cause the cloud collapses, associated with a substantial reduction in the surface downward longwave radiative flux. ©2019. The Authors
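The bulk mass budget picture (near-constant entrainment deepening versus strongly fluctuating subsidence) can be sketched with a toy Euler integration of dh/dt = w_e + w_s. The rates and timings below are illustrative assumptions, not PASCAL values:

```python
import numpy as np

def integrate_depth(h0, w_e, w_s, dt=3600.0):
    """Euler integration of the bulk mass budget dh/dt = w_e + w_s,
    with entrainment rate w_e > 0 and large-scale subsidence w_s < 0
    (both in m/s), stepped hourly from initial depth h0 (m)."""
    h = [h0]
    for we, ws in zip(w_e, w_s):
        h.append(max(h[-1] + (we + ws) * dt, 0.0))
    return np.array(h)

hours = 48
w_e = np.full(hours, 0.01)  # steady entrainment deepening
# weak subsidence, then a sudden strong subsidence event at hour 36
w_s = np.where(np.arange(hours) < 36, -0.002, -0.03)
h = integrate_depth(400.0, w_e, w_s)
```

With these numbers the layer deepens steadily for 36 hours and then collapses once subsidence overwhelms entrainment, mimicking the sudden mixed-layer depth decreases described above.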
Quantitative determination of bond order and lattice distortions in nickel oxide heterostructures by resonant x-ray scattering
We present a combined study of Ni K-edge resonant x-ray scattering and
density functional calculations to probe and distinguish electronically driven
ordering and lattice distortions in nickelate heterostructures. We demonstrate
that, due to the low crystal symmetry, structural distortions
can contribute significantly to the energy-dependent Bragg peak intensities of
a bond-ordered NdNiO3 reference film. For a LaNiO3-LaAlO3 superlattice
that exhibits magnetic order, we establish a rigorous upper bound on the
bond-order parameter. We thus conclusively confirm predictions of a dominant
spin density wave order parameter in metallic nickelates with a
quasi-two-dimensional electronic structure.
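The point that small lattice distortions alone can produce intensity at a superstructure reflection can be illustrated with the generic kinematic structure factor (a textbook expression, not the paper's analysis; the 1D toy lattice below is an assumption for illustration):

```python
import numpy as np

def structure_factor(positions, form_factors, q):
    """Kinematic structure factor F(q) = sum_j f_j * exp(i q r_j)
    for a 1D chain with positions in lattice units."""
    positions = np.asarray(positions, dtype=float)
    return (np.asarray(form_factors) * np.exp(1j * q * positions)).sum()

# two identical atoms at r = 0 and r = 0.5: the doubled-cell
# reflection q = 2*pi is extinct by symmetry
q = 2.0 * np.pi
F_undistorted = structure_factor([0.0, 0.5], [1.0, 1.0], q)
# a small displacement of one atom breaks the extinction, so a
# purely structural distortion mimics an electronic superstructure
F_distorted = structure_factor([0.0, 0.51], [1.0, 1.0], q)
```

With energy-dependent (resonant) form factors, both electronic order and such distortions modulate the Bragg intensity, which is why separating the two requires the kind of combined analysis described above.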
Neural population coding: combining insights from microscopic and mass signals
Panzeri S, Macke JH, Gross J, Kayser C. Neural population coding: combining insights from microscopic and mass signals. Trends Cogn Sci. 2015;19(3):162-72
former title: A theory for the emergence of neocortical network architecture
Developmental programs that guide neurons and their neurites into specific subvolumes of the mammalian neocortex give rise to lifelong constraints for the formation of synaptic connections. To what degree do these constraints affect cortical wiring diagrams? Here we introduce an inverse modeling approach to show how cortical networks would appear if they were solely due to the spatial distributions of neurons and neurites. We find that neurite packing density and morphological diversity will inevitably translate into non-random pairwise and higher-order connectivity statistics. More importantly, we show that these non-random wiring properties are not arbitrary, but instead reflect the specific structural organization of the underlying neuropil. Our predictions are consistent with the empirically observed wiring specificity from subcellular to network scales. Thus, independent from learning and genetically encoded wiring rules, many of the properties that define the neocortex’s characteristic network architecture may emerge as a result of neuron and neurite development.
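A structural null model of this kind can be sketched as follows: expected pairwise synapse counts from the voxel-wise overlap of axonal and dendritic densities, converted to a connection probability under a Poisson assumption. This is a generic Peters'-rule-style sketch with made-up densities, not the authors' model:

```python
import numpy as np

def connection_probability(axon_density, dendrite_density):
    """From axonal and dendritic path-length densities on a shared voxel
    grid (shape: neurons x voxels), compute expected synapse counts per
    pre/post pair and P(at least one synapse) under Poisson statistics."""
    expected_syn = axon_density @ dendrite_density.T
    return 1.0 - np.exp(-expected_syn)

rng = np.random.default_rng(2)
# skewed, spatially heterogeneous densities stand in for real neurites
axons = rng.gamma(0.5, 1.0, size=(20, 1000)) * 0.01   # 20 presynaptic cells
dends = rng.gamma(0.5, 1.0, size=(30, 1000)) * 0.01   # 30 postsynaptic cells
P = connection_probability(axons, dends)
# heterogeneous morphology alone yields a broad, non-uniform
# distribution of pairwise connection probabilities
```

Even with no activity-dependent or genetic wiring rule in the model, the spread of P across pairs is non-trivial, which is the sense in which non-random connectivity statistics can emerge from neurite geometry alone.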
The impact of neuron morphology on cortical network architecture
The neurons in the cerebral cortex are not randomly interconnected. This specificity in wiring can result from synapse formation mechanisms that connect neurons depending on their electrical activity and genetically defined identity. Here, we report that the morphological properties of the neurons provide an additional prominent source by which wiring specificity emerges in cortical networks. This morphologically determined wiring specificity reflects similarities between the neurons’ axo-dendritic projection patterns, the packing density, and the cellular diversity of the neuropil. The higher these three factors are, the more recurrent is the topology of the network. Conversely, the lower these factors are, the more feedforward is the network’s topology. These principles predict the empirically observed occurrences of clusters of synapses, cell type-specific connectivity patterns, and nonrandom network motifs. Thus, we demonstrate that wiring specificity emerges in the cerebral cortex at subcellular, cellular, and network scales from the specific morphological properties of its neuronal constituents.