45 research outputs found

    Neuromatch Academy: Teaching Computational Neuroscience with Global Accessibility

    Neuromatch Academy (NMA) designed and ran a fully online 3-week Computational Neuroscience Summer School for 1757 students with 191 teaching assistants (TAs) working in virtual inverted (or flipped) classrooms and on small group projects. Fourteen languages, active community management, and low cost allowed for an unprecedented level of inclusivity and universal accessibility.

    Inhibition decorrelates visual feature representations in the inner retina

    The retina extracts visual features for transmission to the brain. Different types of bipolar cell split the photoreceptor input into parallel channels and provide the excitatory drive for downstream visual circuits. Mouse bipolar cell types have been described in great anatomical and genetic detail, but a similarly deep understanding of their functional diversity is lacking. Here, by imaging light-driven glutamate release from more than 13,000 bipolar cell axon terminals in the intact retina, we show that bipolar cell functional diversity is generated by the interplay of dendritic excitatory inputs and axonal inhibitory inputs. The resulting centre and surround components of bipolar cell receptive fields interact to decorrelate bipolar cell output in the spatial and temporal domains. Our findings highlight the importance of inhibitory circuits in generating functionally diverse excitatory pathways and suggest that decorrelation of parallel visual pathways begins as early as the second synapse of the mouse visual system.
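    The decorrelating effect of an inhibitory surround can be illustrated with a toy simulation (the stimulus, channel count, and the 0.4 surround weight below are arbitrary choices for the demo, not the paper's data or model): nearby channels driven by spatially smoothed noise are highly correlated, and subtracting a neighbour-based surround from each centre reduces that correlation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stimulus (illustrative, not retinal data): white noise smoothed
    # across neighbouring positions, so nearby "photoreceptors" are correlated.
    T, N = 5000, 52                      # time samples x spatial positions
    noise = rng.standard_normal((T, N + 4))
    stimulus = sum(noise[:, k:k + N] for k in range(5)) / 5.0  # 5-point average

    def mean_adjacent_corr(x):
        """Average Pearson correlation between neighbouring channels."""
        return np.mean([np.corrcoef(x[:, i], x[:, i + 1])[0, 1]
                        for i in range(x.shape[1] - 1)])

    # Centre-surround unit: excitatory centre minus an inhibitory surround
    # built from the two spatial neighbours (0.4 is an arbitrary weight).
    output = stimulus[:, 1:-1] - 0.4 * (stimulus[:, :-2] + stimulus[:, 2:])

    print(f"adjacent corr, input : {mean_adjacent_corr(stimulus):.2f}")
    print(f"adjacent corr, output: {mean_adjacent_corr(output):.2f}")
    ```

    The input channels are strongly correlated (about 0.8 for this smoothing width), while the centre-surround outputs are much closer to independent, which is the sense in which surround inhibition decorrelates parallel channels.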

    Generation of Induced Pluripotent Stem Cells from the Prairie Vole

    The vast majority of animals mate more or less promiscuously. A few mammals, including humans, utilize more restrained mating strategies that entail a longer-term affiliation with a single mating partner. Such pair bonding mating strategies have been resistant to genetic analysis because of a lack of suitable model organisms. Prairie voles are small mouse-like rodents that form enduring pair bonds in the wild as well as in the laboratory, and consequently they have been used widely to study social bonding behavior. The lack of targeted genetic approaches in this species, however, has restricted the study of the molecular and neural circuit basis of pair bonds. As a first step in rendering the prairie vole amenable to reverse genetics, we have generated induced pluripotent stem cell (IPSC) lines from prairie vole fibroblasts using retroviral transduction of reprogramming factors. These IPSC lines display the cellular and molecular hallmarks of IPSCs from other organisms, including mice and humans. Moreover, the prairie vole IPSC lines have pluripotent differentiation potential, since they can give rise to all three germ layers in tissue culture and in vivo. These IPSC lines can now be used to develop conditions that facilitate homologous recombination and, eventually, the generation of prairie voles bearing targeted genetic modifications to study the molecular and neural basis of pair bond formation.

    Understanding the retinal basis of vision across species

    The vertebrate retina first evolved some 500 million years ago in ancestral marine chordates. Since then, the eyes of different species have been tuned to best support their unique visuoecological lifestyles. Visual specializations in eye designs, large-scale inhomogeneities across the retinal surface and local circuit motifs mean that all species' retinas are unique. Computational theories, such as the efficient coding hypothesis, have come a long way towards explaining the basic features of retinal organization and function; however, they cannot explain the full extent of retinal diversity within and across species. To build a truly general understanding of vertebrate vision and the retina's computational purpose, it is therefore important to more quantitatively relate different species' retinal functions to their specific natural environments and behavioural requirements. Ultimately, the goal of such efforts should be to build up to a more general theory of vision.

    Exact feature probabilities in images with occlusion


    Learning unbelievable marginal probabilities

    Loopy belief propagation performs approximate inference on graphical models with loops. One might hope to compensate for the approximation by adjusting model parameters. Learning algorithms for this purpose have been explored previously, and the claim has been made that every set of locally consistent marginals can arise from belief propagation run on a graphical model. On the contrary, here we show that many probability distributions have marginals that cannot be reached by belief propagation using any set of model parameters or any learning algorithm. We call such marginals 'unbelievable'. This problem occurs whenever the Hessian of the Bethe free energy is not positive-definite at the target marginals. All learning algorithms for belief propagation necessarily fail in these cases, producing beliefs or sets of beliefs that may even be worse than the pre-learning approximation. We then show that averaging inaccurate beliefs, each obtained from belief propagation using model parameters perturbed about some learned mean values, can achieve the unbelievable marginals.
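    The starting point of the paper, sum-product belief propagation run on a graph with a loop, can be sketched on a toy model (a 3-variable binary cycle with made-up potentials, not the paper's setup): messages are passed around the cycle until they settle, and the resulting beliefs approximate, but generally do not equal, the exact marginals.

    ```python
    import numpy as np
    from itertools import product

    # Illustrative 3-variable binary cycle x0 - x1 - x2 - x0; all potential
    # values here are arbitrary choices for the demo.
    psi = np.array([[2.0, 1.0],     # symmetric pairwise potential, all edges
                    [1.0, 2.0]])
    phi = np.array([[2.0, 1.0],     # unary potentials, one row per variable
                    [1.0, 1.0],
                    [1.0, 3.0]])
    edges = [(0, 1), (1, 2), (2, 0)]

    # Exact marginals by brute-force enumeration of all 2^3 joint states.
    p = np.zeros((2, 2, 2))
    for x in product([0, 1], repeat=3):
        p[x] = np.prod([phi[i, x[i]] for i in range(3)]) * \
               np.prod([psi[x[i], x[j]] for i, j in edges])
    p /= p.sum()
    exact = [p.sum(axis=tuple(k for k in range(3) if k != i)) for i in range(3)]

    # Loopy BP: normalized messages on both directions of every edge.
    msgs = {(i, j): np.ones(2) / 2 for e in edges for (i, j) in (e, e[::-1])}
    for _ in range(100):
        new = {}
        for (i, j) in msgs:
            inc = phi[i].copy()
            for (k, l) in msgs:        # incoming messages to i, except from j
                if l == i and k != j:
                    inc *= msgs[(k, l)]
            m = psi @ inc              # psi is symmetric, so psi == psi.T
            new[(i, j)] = m / m.sum()
        msgs = new

    # Beliefs: unary potential times the product of incoming messages.
    beliefs = []
    for i in range(3):
        b = phi[i].copy()
        for (k, l) in msgs:
            if l == i:
                b *= msgs[(k, l)]
        beliefs.append(b / b.sum())

    print("exact:", [np.round(e, 3) for e in exact])
    print("BP   :", [np.round(b, 3) for b in beliefs])
    ```

    On trees this procedure is exact; on the cycle the beliefs are only an approximation, and the paper's point is that some target marginals can never be reached by any such fixed point, no matter how the potentials are learned.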

    Engineering a Less Artificial Intelligence

    Despite enormous progress in machine learning, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called “inductive bias,” determines how well any learning algorithm—or brain—generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. We highlight some shortcomings of state-of-the-art learning algorithms compared to biological brains and discuss several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture.
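    The role of inductive bias described above can be made concrete with a deliberately simple, made-up example (not from the paper): two learners see the same six noisy samples of a linear function, but a line (strong, appropriate bias) generalizes to held-out points while a degree-5 polynomial (weak bias) interpolates the noise and generalizes poorly.

    ```python
    import numpy as np

    # Made-up data: the true function is y = x, observed at six points with
    # a small alternating perturbation standing in for noise.
    x_train = np.linspace(0.0, 1.0, 6)
    y_train = x_train + 0.1 * np.array([1, -1, 1, -1, 1, -1])

    # Two learners that differ only in inductive bias:
    #   a line (strong, appropriate bias) vs. a degree-5 polynomial
    #   (weak bias: flexible enough to fit the noise exactly).
    c_line = np.polyfit(x_train, y_train, 1)
    c_poly = np.polyfit(x_train, y_train, 5)

    # Generalization: error at held-out midpoints against the true y = x.
    x_test = np.linspace(0.1, 0.9, 5)
    mse_line = np.mean((np.polyval(c_line, x_test) - x_test) ** 2)
    mse_poly = np.mean((np.polyval(c_poly, x_test) - x_test) ** 2)
    print(f"line MSE: {mse_line:.4f}  poly MSE: {mse_poly:.4f}")
    ```

    Both models fit the training data, but only the one whose bias matches the underlying structure generalizes; the abstract's argument is that brains owe their robustness to biases of this matching kind, built in by evolution and development.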