
    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states affect local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.

    Dynamic Adaptive Computation: Tuning network states to task requirements

    Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli in general requires decorrelated baseline neural activity. Such network dynamics are known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity, and integration time diverge. Being able to flexibly switch between, or even combine, these properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a "reverberating regime" because it is particularly favorable for ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state by small changes in effective synaptic strength or excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time, and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to processing requirements. In addition, we propose that neuromodulation enables a fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input depending on task. We argue that this task-dependent tuning, which we call "dynamic adaptive computation", presents a central organization principle of cortical networks and discuss first experimental evidence.
    Comment: 6 pages + references, 2 figures
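    The interpolation between the asynchronous-irregular and critical states can be illustrated with a minimal branching-process sketch (not the authors' model; all parameter values here are hypothetical): the effective synaptic strength m sets an intrinsic timescale tau = -dt / ln(m), which stays short for small m and diverges as m approaches 1.

```python
import numpy as np

def simulate_branching(m, h=10.0, steps=20000, seed=0):
    """Simulate a driven branching process A[t+1] ~ Poisson(m*A[t] + h).

    m is the effective synaptic strength: m near 0 gives decorrelated,
    asynchronous-irregular-like activity; m -> 1 approaches criticality,
    where correlations and the integration time diverge.
    """
    rng = np.random.default_rng(seed)
    a = np.empty(steps)
    a[0] = h / (1.0 - m)              # start at the stationary mean
    for t in range(steps - 1):
        a[t + 1] = rng.poisson(m * a[t] + h)
    return a

def lag1_autocorr(a):
    """Estimate the lag-1 autocorrelation; for this process it is ~ m."""
    x = a - a.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

# Intrinsic timescale tau = -dt / ln(m) grows sharply as m -> 1.
for m in (0.1, 0.9, 0.99):
    a = simulate_branching(m)
    print(f"m={m:.2f}  lag-1 autocorr ~ {lag1_autocorr(a):.2f}  "
          f"tau = {-1.0 / np.log(m):.1f} steps")
```

    A small change in m thus smoothly tunes the network's integration time, which is the adaptability argument the abstract makes.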

    Inferring neural circuit structure from datasets of heterogeneous tuning curves

    Tuning curves characterizing the response selectivities of biological neurons can exhibit large degrees of irregularity and diversity across neurons. Theoretical network models that feature heterogeneous cell populations or partially random connectivity also give rise to diverse tuning curves. Empirical tuning curve distributions can thus be utilized to make model-based inferences about the statistics of single-cell parameters and network connectivity. However, a general framework for such an inference or fitting procedure is lacking. We address this problem by proposing to view mechanistic network models as implicit generative models whose parameters can be optimized to fit the distribution of experimentally measured tuning curves. A major obstacle for fitting such models is that their likelihood function is not explicitly available or is highly intractable. Recent advances in machine learning provide ways for fitting implicit generative models without the need to evaluate the likelihood and its gradient. Generative Adversarial Networks (GANs) provide one such framework which has been successful in traditional machine learning tasks. We apply this approach in two separate experiments, showing how GANs can be used to fit commonly used mechanistic circuit models in theoretical neuroscience to datasets of tuning curves. This fitting procedure avoids the computationally expensive step of inferring latent variables, such as the biophysical parameters of, or synaptic connections between, particular recorded cells. Instead, it directly learns generalizable model parameters characterizing the network's statistical structure such as the statistics of strength and spatial range of connections between different cell types. Another strength of this approach is that it fits the joint high-dimensional distribution of tuning curves, instead of matching a few summary statistics picked a priori by the user, resulting in a more accurate inference of circuit properties. 
More generally, this framework opens the door to direct model-based inference of circuit structure from data beyond single-cell tuning curves, such as simultaneous population recordings.
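    The adversarial fitting idea can be sketched in miniature (this is an illustrative toy, not the paper's circuit model): an implicit generator with one free parameter is trained against a logistic discriminator so that its samples become indistinguishable from "recorded" data, here hypothetical tuning-curve peak amplitudes reduced to one number per neuron.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Hypothetical "recorded" data: tuning-curve peak amplitudes per neuron.
real_mu, real_sd = 5.0, 1.0

# Implicit generative model: x = mu + sd * z, z ~ N(0, 1).
# Only mu is fitted adversarially; sd is assumed known in this toy.
mu = 0.0
w, b = 0.0, 0.0            # logistic discriminator D(x) = sigmoid(w*x + b)
lr_d, lr_g, batch = 0.05, 0.05, 64

for step in range(3000):
    x_real = rng.normal(real_mu, real_sd, batch)
    x_fake = mu + real_sd * rng.normal(size=batch)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    w += lr_d * (((1 - d_real) * x_real).mean() - (d_fake * x_fake).mean())
    b += lr_d * ((1 - d_real).mean() - d_fake.mean())

    # Generator: ascent on the non-saturating loss log D(fake);
    # d/dmu log D(mu + sd*z) = (1 - D) * w, no likelihood needed.
    d_fake = sigmoid(w * x_fake + b)
    mu += lr_g * ((1 - d_fake) * w).mean()

print(f"fitted mu = {mu:.2f} (true {real_mu})")
```

    The key point carried over from the abstract: the generator's likelihood is never evaluated; only samples from it are compared against data, which is what lets mechanistic simulators stand in for the generator.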

    What does semantic tiling of the cortex tell us about semantics?

    Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, which is central for understanding the mechanisms that implement cognition, in general, and conceptual processing, in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) feature and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations of how the brain works, going beyond simply establishing the brain areas that respond to various task conditions.

    Characterization of response properties and connectivity in mouse visual thalamus and cortex

    How neuronal activity is shaped by circuit connectivity between neuronal populations is a central question in visual neuroscience. Combined with experimental data, computational models allow causal investigation and prediction of both how connectivity influences activity and how activity constrains connectivity. In order to develop and refine these computational models of the visual system, thorough characterization of neuronal response patterns is required. In this thesis, I first present an approach to infer connectivity from in vivo stimulus responses in mouse visual cortex, revealing underlying principles of connectivity between excitatory and inhibitory neurons. Second, I investigate suppressed-by-contrast neurons, which, while known since the 1960s, still remain to be included in standard models of visual function. I present a characterization of intrinsic firing properties and stimulus responses that expands the knowledge about this obscure neuron type. Inferring the neuronal connectome from neural activity is a major objective of computational connectomics. Complementary to direct experimental investigation of connectivity, inference approaches combine simultaneous activity data of individual neurons with methods ranging from statistical considerations of similarity to large-scale simulations of neuronal networks. However, due to the mathematically ill-defined nature of inferring connectivity from in vivo activity, most approaches have to constrain the inference procedure using experimental findings that are not part of the neural activity data set at hand. Combining the stabilized-supralinear network model with response data from the visual thalamus and cortex of mice, my collaborators and I have found a way to infer connectivity from in vivo data alone. 
    Leveraging a property of neural responses known as contrast-invariance of orientation tuning, our inference approach reveals a consistent order of connection strengths between cortical neuron populations, as well as tuning differences between thalamic inputs and cortex. Throughout the history of visual neuroscience, neurons that respond to a visual stimulus with an increase in firing have been at the center of attention. A different response type that decreases its activity in response to visual stimuli, however, has been only sparsely investigated. Consequently, these suppressed-by-contrast (SbC) neurons, while recently receiving renewed attention from researchers, have not been characterized in depth. Together with my collaborators, I have conducted a survey of SbC properties covering firing reliability, cortical location, and tuning to stimulus orientation. We find SbC neurons to fire less regularly than expected, to be located in the lower parts of cortex, and to show significant tuning to oriented gratings.
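    Contrast-invariance of orientation tuning, the property the inference leverages, can be stated compactly: the response factorizes as r(theta, c) = g(c) * f(theta), so contrast scales the amplitude of the tuning curve without changing its shape or width. A minimal sketch under that assumption (an illustrative model with hypothetical parameters, not the thesis's fitted curves):

```python
import numpy as np

def tuning_curve(theta_deg, c, pref=0.0, width=25.0):
    """Contrast-invariant tuning: r(theta, c) = g(c) * f(theta).
    g is a hyperbolic-ratio contrast response; f is a Gaussian bump."""
    g = c**2 / (c**2 + 0.15**2)                         # contrast response
    f = np.exp(-0.5 * ((theta_deg - pref) / width) ** 2)
    return 30.0 * g * f                                 # peak rate in Hz

theta = np.linspace(-90, 90, 181)
low = tuning_curve(theta, c=0.1)
high = tuning_curve(theta, c=0.8)

# Contrast changes the amplitude but, after normalization, the shape
# (and hence the tuning width) is identical: the invariance signature.
print("peak rates (low, high contrast):", low.max(), high.max())
print("max normalized-shape difference:",
      np.abs(low / low.max() - high / high.max()).max())
```

    Because the factorized form tightly constrains how responses must co-vary across contrasts, deviations from it are informative about the underlying connectivity, which is what makes the invariance usable as an inference constraint.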

    In vivo extracellular recordings of thalamic and cortical visual responses reveal V1 connectivity rules

    The brain’s connectome provides the scaffold for canonical neural computations. However, a comparison of connectivity studies in the mouse primary visual cortex (V1) reveals that the average number and strength of connections between specific neuron types can vary. Can variability in V1 connectivity measurements coexist with canonical neural computations? We developed a theory-driven approach to deduce V1 network connectivity from visual responses in mouse V1 and visual thalamus (dLGN). Our method revealed that the same recorded visual responses were captured by multiple connectivity configurations. Remarkably, the magnitude and selectivity of connectivity weights followed a specific order across most of the inferred connectivity configurations. We argue that this order stems from the specific shapes of the recorded contrast response functions and contrast invariance of orientation tuning. Notably, despite variability across connectivity studies, connectivity weights computed from individual published connectivity reports followed the order we identified with our method, suggesting that the relations between the weights, rather than their magnitudes, represent a connectivity motif supporting canonical V1 computations.