13 research outputs found

    Functional and spatial rewiring jointly generate convergent-divergent units in self-organizing networks

    Full text link
    Self-organization through adaptive rewiring of random neural networks generates brain-like topologies comprising modular small-world structures with rich club effects, merely as the product of optimizing the network topology. In the nervous system, spatial organization is likewise optimized by rewiring, through minimizing wiring distance and maximizing spatially aligned wiring layouts. We show that such spatial organization principles interact constructively with adaptive rewiring, helping to establish the networks' connectedness and modular structures. We use an evolving neural network model with weighted and directed connections, in which neural traffic flow is based on consensus and advection dynamics, to show that wiring cost minimization supports adaptive rewiring in creating convergent-divergent unit structures. Convergent-divergent units consist of a convergent input hub, connected to a divergent output hub via subnetworks of intermediate nodes, which may function as the computational core of the unit. The prominence of wiring distance minimization in the dynamic evolution of the network determines the extent to which the core is encapsulated from the rest of the network, i.e., the context-sensitivity of its computations. This corresponds to the central role convergent-divergent units play in establishing context-sensitivity in neuronal information processing.
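
    The consensus and advection dynamics mentioned above are directed-network counterparts of diffusion. Below is a minimal numpy sketch of one common formulation; the edge convention (A[i, j] as the weight of the edge from node j to node i), the function names, and the time parameter t are illustrative assumptions, not the paper's implementation.

        import numpy as np
        from scipy.linalg import expm

        def consensus_flow(A, x0, t=1.0):
            # Consensus: each node relaxes toward the weighted average of its
            # in-neighbours, dx/dt = -(D_in - A) x. Drives activity toward agreement.
            D_in = np.diag(A.sum(axis=1))
            return expm(-t * (D_in - A)) @ x0

        def advection_flow(A, x0, t=1.0):
            # Advection: inflow minus outflow, dx/dt = (A - D_out) x.
            # Conserves the total activity sum(x) while transporting it along edges.
            D_out = np.diag(A.sum(axis=0))
            return expm(t * (A - D_out)) @ x0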

    Beyond ℓ1 sparse coding in V1

    Full text link
    Growing evidence indicates that only a sparse subset from a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have been using the ℓ1 norm as a penalty due to its convexity, which makes it amenable to fast and simple algorithmic solvers. In this work, we use biological vision as a test-bed and show that the soft thresholding operation associated with the use of the ℓ1 norm is highly suboptimal compared to other functions suited to approximating ℓq with 0 ≤ q < 1 (including recently proposed Continuous Exact relaxations), both in terms of performance and in the production of features that are akin to signatures of the primary visual cortex. We show that ℓ1 sparsity produces a denser code or employs a pool with more neurons, i.e. has a higher degree of overcompleteness, in order to maintain the same reconstruction error as the other methods considered. For all the penalty functions tested, a subset of the neurons develop orientation selectivity similar to that of V1 neurons. When their code is sparse enough, the methods also develop receptive fields with varying functionalities, another signature of V1. Compared to other methods, soft thresholding achieves this level of sparsity at the expense of much degraded reconstruction performance, which is more likely than not unacceptable in biological vision. Our results indicate that V1 uses a sparsity-inducing regularization that is closer to the ℓ0 pseudo-norm than to the ℓ1 norm.
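
    A minimal sketch of the two thresholding schemes the abstract contrasts is given below: iterative soft thresholding (the proximal operator of the ℓ1 norm) versus iterative hard thresholding (a stand-in for ℓ0-like penalties; the Continuous Exact relaxations tested in the paper use smoother non-convex functions). The dictionary D, the penalty weight lam, and the function names are illustrative assumptions, not the authors' code.

        import numpy as np

        def soft_threshold(z, t):
            # Proximal operator of t * ||a||_1 (soft thresholding).
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def hard_threshold(z, t):
            # Proximal operator of t * ||a||_0 (hard thresholding).
            return np.where(np.abs(z) > np.sqrt(2.0 * t), z, 0.0)

        def sparse_code(D, x, lam, threshold, n_iter=200):
            # Iterative thresholding for x ≈ D @ a with a sparse code a,
            # e.g. sparse_code(D, x, 0.1, soft_threshold) vs. sparse_code(D, x, 0.1, hard_threshold).
            step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
            a = np.zeros(D.shape[1])
            for _ in range(n_iter):
                a = threshold(a + step * (D.T @ (x - D @ a)), lam * step)
            return a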

    Adaptive rewiring evolves brain-like structure in weighted networks

    No full text
    Activity-dependent plasticity refers to a range of mechanisms for adaptively reshaping neuronal connections. We model their common principle in terms of adaptive rewiring of network connectivity, while representing neural activity by diffusion on the network: where diffusion is intensive, shortcut connections are established, while underused connections are pruned. In binary networks, this process is known to steer initially random networks robustly to high levels of structural complexity, reflecting the global characteristics of brain anatomy: modular or centralized small-world topologies. We investigate whether this result extends to more realistic, weighted networks. Both normally and lognormally distributed weighted networks evolve either modular or centralized topologies. Which of these prevails depends on a single control parameter, representing global homeostatic or normalizing regulation mechanisms. Intermediate control parameter values exhibit the greatest levels of network complexity, incorporating both modular and centralized tendencies. The simulation results allow us to propose diffusion-based adaptive rewiring as a parsimonious model for activity-dependent reshaping of brain connectivity structure.
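
    As a rough illustration of the rewiring principle described above, the sketch below performs a single adaptive-rewiring step driven by the heat kernel of the graph Laplacian: a shortcut is added where diffusion between unconnected nodes is intense, and the chosen node's least-used existing connection is pruned. It is a simplified, undirected variant with assumed names (rewire_step, diffusion time tau), not the model's exact update rule.

        import numpy as np
        from scipy.linalg import expm

        def rewire_step(A, tau=1.0):
            # A is a symmetric, non-negative weight matrix.
            L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
            H = expm(-tau * L)                        # heat kernel: pairwise diffusion intensity
            i = np.random.randint(A.shape[0])         # node to rewire
            non_nbrs = np.setdiff1d(np.where(A[i] == 0)[0], [i])
            nbrs = np.where(A[i] > 0)[0]
            if non_nbrs.size == 0 or nbrs.size == 0:
                return A
            j = non_nbrs[np.argmax(H[i, non_nbrs])]   # strongest unused diffusion path
            k = nbrs[np.argmin(H[i, nbrs])]           # weakest existing connection
            A = A.copy()
            A[i, j] = A[j, i] = A[i, k]               # reuse the pruned weight as a shortcut
            A[i, k] = A[k, i] = 0.0
            return A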

    Functional and spatial rewiring principles jointly regulate context-sensitive computation.

    No full text
    Adaptive rewiring provides a basic principle of self-organizing connectivity in evolving neural network topology. By selectively adding connections to regions with intense signal flow and deleting underutilized connections, adaptive rewiring generates optimized brain-like, i.e., modular, small-world, and rich club connectivity structures. Besides topology, neural self-organization also follows spatial optimization principles, such as minimizing the neural wiring distance and topographic alignment of neural pathways. We simulated the interplay of these spatial principles and adaptive rewiring in evolving neural networks with weighted and directed connections. The neural traffic flow within the network is represented by the equivalent of diffusion dynamics for directed edges: consensus and advection. We observe a constructive synergy between adaptive and spatial rewiring, which contributes to network connectedness. In particular, wiring distance minimization facilitates adaptive rewiring in creating convergent-divergent units. These units support the flow of neural information and enable context-sensitive information processing in the sensory cortex and elsewhere. Convergent-divergent units consist of convergent hub nodes, which collect inputs from pools of nodes and project these signals via a densely interconnected set of intermediate nodes onto divergent hub nodes, which broadcast their output back to the network. Convergent-divergent units vary in the degree to which their intermediate nodes are isolated from the rest of the network. This degree, and hence the context-sensitivity of the network's processing style, is parametrically determined in the evolving network model by the relative prominence of spatial versus adaptive rewiring.
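
    A crude way to locate candidate convergent and divergent hubs of the kind described above in a simulated network is to rank nodes by weighted in- and out-degree: convergent hubs collect many inputs, divergent hubs broadcast many outputs. The sketch below is a heuristic reading of that definition (the convention A[i, j] = weight of the edge from node j to node i and the helper name hub_candidates are assumptions), not the paper's detection procedure.

        import numpy as np

        def hub_candidates(A, top_k=5):
            # Weighted in-strength (inputs collected) and out-strength (outputs broadcast).
            in_strength = A.sum(axis=1)
            out_strength = A.sum(axis=0)
            convergent = np.argsort(in_strength)[::-1][:top_k]
            divergent = np.argsort(out_strength)[::-1][:top_k]
            return convergent, divergent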

    Beyond ℓ1 sparse coding in V1.

    No full text
    Growing evidence indicates that only a sparse subset from a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have been using the ℓ1 norm as a penalty due to its convexity, which makes it amenable to fast and simple algorithmic solvers. In this work, we use biological vision as a test-bed and show that the soft thresholding operation associated with the use of the ℓ1 norm is, in terms of performance, highly suboptimal compared to other functions suited to approximating ℓp with 0 ≤ p < 1 (including recently proposed continuous exact relaxations). We show that ℓ1 sparsity employs a pool with more neurons, i.e. has a higher degree of overcompleteness, in order to maintain the same reconstruction error as the other methods considered. More specifically, at the same sparsity level, the thresholding algorithm using the ℓ1 norm as a penalty requires a dictionary of ten times more units, compared to the proposed approach in which a non-convex continuous relaxation of the ℓ0 pseudo-norm is used, to reconstruct the external stimulus equally well. At a fixed sparsity level, both ℓ0- and ℓ1-based regularization develop units with receptive field (RF) shapes similar to those of biological neurons in V1 (and a subset of neurons in V2), but ℓ0-based regularization shows approximately five times better reconstruction of the stimulus. Our results, in conjunction with recent metabolic findings, indicate that for V1 to operate efficiently it should follow a coding regime which uses a regularization that is closer to the ℓ0 pseudo-norm than to the ℓ1 norm, and they suggest a similar mode of operation for the sensory cortex in general.
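
    The comparisons above hold the sparsity level fixed and then compare reconstruction quality across penalties and dictionary sizes. A small helper of the following kind, with assumed array shapes (D: pixels × atoms, A: atoms × samples, X: pixels × samples), is enough for that bookkeeping; it is illustrative, not the authors' evaluation code.

        import numpy as np

        def code_statistics(D, A, X):
            # Fraction of active units and relative squared reconstruction error.
            sparsity = np.mean(A != 0)
            rel_error = np.linalg.norm(X - D @ A) ** 2 / np.linalg.norm(X) ** 2
            return sparsity, rel_error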

    Beyond ℓ1 sparse coding in V1

    No full text
    21 pages, 8 figures. Growing evidence indicates that only a sparse subset from a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have been using the ℓ1 norm as a penalty due to its convexity, which makes it amenable to fast and simple algorithmic solvers. In this work, we use biological vision as a test-bed and show that the soft thresholding operation associated with the use of the ℓ1 norm is, in terms of performance, highly suboptimal compared to other functions suited to approximating ℓp with 0 ≤ p < 1 (including recently proposed continuous exact relaxations). We show that ℓ1 sparsity employs a pool with more neurons, i.e. has a higher degree of overcompleteness, in order to maintain the same reconstruction error as the other methods considered. More specifically, at the same sparsity level, the thresholding algorithm using the ℓ1 norm as a penalty requires a dictionary of ten times more units, compared to the proposed approach in which a non-convex continuous relaxation of the ℓ0 pseudo-norm is used, to reconstruct the external stimulus equally well. At a fixed sparsity level, both ℓ0- and ℓ1-based regularization develop units with receptive field (RF) shapes similar to those of biological neurons in V1 (and a subset of neurons in V2), but ℓ0-based regularization shows approximately five times better reconstruction of the stimulus. Our results, in conjunction with recent metabolic findings, indicate that for V1 to operate efficiently it should follow a coding regime which uses a regularization that is closer to the ℓ0 pseudo-norm than to the ℓ1 norm, and they suggest a similar mode of operation for the sensory cortex in general. DP and IR acknowledge the support received from the French National Research Agency (ANR) through the Young Investigator (JCJC) grant project ‘Redundancy-free neuro-biological design of visual and auditory sensing’ (RUBIN-VASE). LUP received funding from the ANR project ‘Bio-mimetic agile aerial robots flying in real-life conditions’ (AgileNeuRobot), grant number ANR-20-CE23-0021. LC acknowledges the support received from the French National Centre for Scientific Research (CNRS) to the research group Information, Signal, Image and ViSion (ISIS) for the project ‘Sparse and non-convex optimisation for learning of inverse image microscopy problems’ (SPLIN). LC also received support through the ANR JCJC project ‘Task-adapted bilevel learning of flexible statistical models for imaging and vision’ (TASKABILE), grant number ANR-22-CE48-0010. Peer reviewed.