10,874 research outputs found

    Optimal Neural Codes for Natural Stimuli

    The efficient coding hypothesis assumes that biological sensory systems use neural codes that are optimized to represent, as well as possible, the stimuli that occur in their environment. When formulating such an optimization problem, two key components must be considered. The first is the type of constraints the neural code must satisfy. The second is the objective function itself: what is the goal of the neural code? We seek to provide a systematic framework to address these types of problems. Previous work often assumes one specific set of constraints and solves the optimization problem analytically or numerically. Here we place these results in a unified framework and show that they can be understood from a more general perspective. In particular, we provide analytical solutions for a variety of neural noise models and two types of constraint: a range constraint, which specifies the maximum/minimum neural activity, and a metabolic constraint, which upper bounds the mean neural activity. In terms of objective functions, most common models rely on information-theoretic measures, whereas alternative formulations propose incorporating downstream decoding performance. We systematically evaluate different optimality criteria based upon the $L_p$ reconstruction error of the maximum likelihood decoder. This parametric family of optimality criteria includes special cases such as the information maximization criterion and the minimization of the mean squared decoding error. We analytically derive the optimal tuning curve of a single neuron in terms of the reconstruction error norm $p$ to encode natural stimuli with an arbitrary input distribution. Under our framework, we can try to answer questions such as: what objective function is the neural code actually using? Under what constraints do the predicted results better fit the actual data? Using different combinations of objective functions and constraints, we tested our analytical predictions against previously measured characteristics of some early visual systems found in biology. We find that solutions under the metabolic constraint with low values of $p$ provide a better fit to physiological data on early visual perception systems.
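    The following is a minimal sketch, not the paper's code, of how an $L_p$ reconstruction error of a maximum-likelihood decoder might be estimated numerically for a single neuron. The sigmoidal tuning curve, Poisson spiking, standard-normal stimulus prior, and grid decoder are all illustrative assumptions.

    ```python
    # Monte Carlo estimate of the L_p reconstruction error of an ML decoder
    # for one Poisson neuron with an assumed sigmoidal tuning curve.
    import numpy as np

    rng = np.random.default_rng(0)

    def tuning_curve(s, gain=20.0, slope=3.0, threshold=0.0):
        """Hypothetical sigmoidal tuning curve: expected spike count given stimulus s."""
        return gain / (1.0 + np.exp(-slope * (s - threshold)))

    def lp_error(p=2.0, n_trials=20000):
        """Estimate E[|s_hat - s|^p]^(1/p) under maximum-likelihood decoding."""
        s_grid = np.linspace(-3, 3, 601)             # candidate stimuli for the decoder
        rates = tuning_curve(s_grid)                 # expected counts on the grid
        s_true = rng.standard_normal(n_trials)       # stimuli from a standard-normal prior
        counts = rng.poisson(tuning_curve(s_true))   # Poisson spike counts
        # Poisson log-likelihood of each count under every candidate stimulus
        # (the count-dependent constant term does not affect the argmax)
        log_lik = counts[:, None] * np.log(rates[None, :] + 1e-12) - rates[None, :]
        s_hat = s_grid[np.argmax(log_lik, axis=1)]   # maximum-likelihood estimate
        return np.mean(np.abs(s_hat - s_true) ** p) ** (1.0 / p)

    for p in (0.5, 1.0, 2.0):
        print(f"L_{p} reconstruction error: {lp_error(p):.3f}")
    ```

    In this setup the tuning-curve parameters (gain, slope, threshold) would be the quantities to optimize, subject to either a range or a metabolic constraint on the firing rate.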

    From Caenorhabditis elegans to the Human Connectome: A Specific Modular Organisation Increases Metabolic, Functional, and Developmental Efficiency

    The connectome, the entire connectivity of a neural system represented as a network, spans scales ranging from synaptic connections between individual neurons to fibre-tract connections between brain regions. Although the modularity such networks commonly show has been extensively studied, it is unclear whether their connection specificity can be fully explained by modularity alone. To answer this question, we study two networks: the neuronal network of C. elegans and the fibre-tract network of human brains obtained through diffusion spectrum imaging (DSI). We compare them to their respective benchmark networks with varying modularities, generated by link swapping to reach the desired modularity values while remaining otherwise maximally random. We find several network properties that are specific to the neural networks and cannot be fully explained by modularity alone. First, the clustering coefficient and the characteristic path length of the C. elegans and human connectomes are both higher than those of the benchmark networks with similar modularity. A high clustering coefficient indicates efficient local information distribution, while a high characteristic path length suggests reduced global integration. Second, the total wiring length is smaller than for the alternative configurations with similar modularity. This is due to lower dispersion of connections, which means each neuron in the C. elegans connectome or each region of interest (ROI) in the human connectome reaches fewer ganglia or cortical areas, respectively. Third, both neural networks show lower algorithmic entropy compared to the alternative arrangements. This implies that fewer rules are needed to encode the organisation of neural systems.
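    Below is a minimal sketch, under stated assumptions, of comparing a connectome graph against link-swapped benchmarks with networkx. For simplicity the benchmarks here are plain degree-preserving double-edge swaps; the paper additionally constrains the swaps to reach target modularity values. The small-world placeholder graph stands in for a connectome adjacency matrix loaded from data.

    ```python
    # Compare clustering coefficient (C) and characteristic path length (L)
    # of an observed graph against a degree-preserving rewired ensemble.
    import networkx as nx
    import numpy as np

    def network_summary(G):
        """Clustering coefficient and characteristic path length of a connected graph."""
        return nx.average_clustering(G), nx.average_shortest_path_length(G)

    # Placeholder graph standing in for an empirical connectome.
    G = nx.connected_watts_strogatz_graph(n=279, k=8, p=0.1, seed=1)

    c_obs, l_obs = network_summary(G)
    print(f"observed:   C = {c_obs:.3f}, L = {l_obs:.3f}")

    # Benchmark ensemble: degree-preserving rewiring via double-edge swaps.
    cs, ls = [], []
    for seed in range(10):
        R = G.copy()
        nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                            max_tries=1_000_000, seed=seed)
        if nx.is_connected(R):
            c, l = network_summary(R)
            cs.append(c)
            ls.append(l)

    print(f"benchmarks: C = {np.mean(cs):.3f} ± {np.std(cs):.3f}, "
          f"L = {np.mean(ls):.3f} ± {np.std(ls):.3f}")
    ```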

    Action potential energy efficiency varies among neuron types in vertebrates and invertebrates.

    The initiation and propagation of action potentials (APs) places high demands on the energetic resources of neural tissue. Each AP forces ATP-driven ion pumps to work harder to restore the ionic concentration gradients, thus consuming more energy. Here, we ask whether the ionic currents underlying the AP can be predicted theoretically from the principle of minimum energy consumption. A long-held supposition that APs are energetically wasteful, based on theoretical analysis of the squid giant axon AP, has recently been overturned by studies that measured the currents contributing to the AP in several mammalian neurons. In the single-compartment models studied here, AP energy consumption varies greatly among vertebrate and invertebrate neurons, with several mammalian neuron models using close to the capacitive minimum of energy needed. Strikingly, energy consumption can increase by more than ten-fold simply by changing the overlap of the Na+ and K+ currents during the AP without changing the AP's shape. As a consequence, the height and width of the AP are poor predictors of energy consumption. In the Hodgkin–Huxley model of the squid axon, optimizing the kinetics or number of Na+ and K+ channels can whittle down the number of ATP molecules needed for each AP by a factor of four. In contrast to the squid AP, the temporal profile of the currents underlying the APs of some mammalian neurons is nearly perfectly matched to the optimized properties of ionic conductances so as to minimize the ATP cost.
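    The sketch below illustrates the kind of calculation involved: estimating the ATP cost of one AP from the total Na+ charge in a single-compartment Hodgkin–Huxley simulation, and comparing it with the capacitive minimum. It uses textbook squid-axon parameters, a forward-Euler integrator, and the common assumption that the Na+/K+-ATPase extrudes 3 Na+ per ATP; the stimulus amplitude and timing are illustrative, and none of this is taken from the paper's models.

    ```python
    # Estimate ATP cost per action potential from integrated Na+ charge
    # in a standard single-compartment Hodgkin-Huxley model.
    import numpy as np

    # Membrane parameters (per cm^2): capacitance, max conductances, reversal potentials
    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.387                 # mV

    def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
    def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

    dt, T = 0.01, 20.0                                    # ms
    V, m, h, n = -65.0, 0.05, 0.6, 0.32                   # resting state
    Q_Na = 0.0                                            # accumulated Na+ charge (uC/cm^2)

    for ti in np.arange(0.0, T, dt):
        I_ext = 20.0 if 1.0 <= ti < 2.0 else 0.0          # brief suprathreshold pulse (uA/cm^2)
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        Q_Na += abs(I_Na) * dt * 1e-3                     # uA/cm^2 * ms -> uC/cm^2
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)

    e = 1.602e-19                                         # elementary charge (C)
    atp = (Q_Na * 1e-6) / e / 3.0                         # assume 3 Na+ pumped out per ATP
    # Capacitive minimum: Na+ charge needed just to depolarise the membrane by ~100 mV
    atp_min = (C_m * 1e-6 * 0.1) / e / 3.0
    print(f"ATP per AP per cm^2: {atp:.2e} (capacitive minimum ~{atp_min:.2e})")
    ```

    The excess of the estimated cost over the capacitive minimum reflects the overlap of the Na+ and K+ currents: charge that enters through Na+ channels while K+ channels are already open is cancelled rather than used to charge the membrane, yet still has to be pumped back out.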