
    A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data

    Deducing the structure of neural circuits is one of the central problems of modern neuroscience. Recently introduced calcium fluorescent imaging methods permit experimentalists to observe network activity in large populations of neurons, but these techniques provide only indirect observations of neural spike trains, with limited time resolution and signal quality. In this work we present a Bayesian approach for inferring neural circuitry given this type of imaging data. We model the network activity in terms of a collection of coupled hidden Markov chains, with each chain corresponding to a single neuron in the network and the coupling between the chains reflecting the network's connectivity matrix. We derive a Monte Carlo Expectation-Maximization algorithm for fitting the model parameters; to obtain the sufficient statistics in a computationally efficient manner, we introduce a specialized blockwise-Gibbs algorithm for sampling from the joint activity of all observed neurons given the observed fluorescence data. We perform large-scale simulations of randomly connected neuronal networks with biophysically realistic parameters and find that the proposed methods can accurately infer the connectivity in these networks given reasonable experimental and computational constraints. In addition, the estimation accuracy may be improved significantly by incorporating prior knowledge about the sparseness of connectivity in the network, via standard L1 penalization methods. Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/09-AOAS303, http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
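    As a concrete illustration of the last point, the following Python sketch shows how a sparse connectivity estimate can be obtained from spike trains with an L1 penalty. It is not the paper's coupled hidden-Markov-chain algorithm: the spike trains here are a random stand-in for what the blockwise-Gibbs E-step would provide, and the function name estimate_connectivity, the lag-one regression, and the penalty weight alpha are illustrative assumptions.

        # Hypothetical M-step-style sketch: given spike trains (here random
        # placeholders for E-step samples), fit each neuron's incoming weights
        # by L1-penalised regression on the lagged activity of all neurons.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_neurons, n_bins = 20, 5000
        spikes = (rng.random((n_bins, n_neurons)) < 0.05).astype(float)

        def estimate_connectivity(spikes, alpha=0.01):
            """Return a sparse (n x n) estimate of the connectivity matrix."""
            X, Y = spikes[:-1], spikes[1:]          # lag-one predictors / targets
            n = spikes.shape[1]
            W = np.zeros((n, n))
            for i in range(n):
                model = Lasso(alpha=alpha, fit_intercept=True, max_iter=5000)
                model.fit(X, Y[:, i])
                W[i] = model.coef_                  # incoming weights of neuron i
            return W

        W_hat = estimate_connectivity(spikes)
        print("estimated nonzero connections:", int(np.count_nonzero(W_hat)))

    In the paper's setting, the analogous estimation step sits inside a Monte Carlo EM loop that alternates between sampling spike trains from the fluorescence model and re-estimating the weights.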

    A unifying functional approach towards synaptic long-term plasticity

    The brain is arguably the most complex structure on Earth that humans study. It consists of a vast network of nerve cells that is able to process incoming sensory information and build from it a meaningful representation of the environment. It also coordinates the actions of the organism in order to interact with that environment. The brain has the remarkable ability both to store information and to continuously adapt to changing conditions, and it does so over the entire lifetime. This is essential for humans and animals to develop and to learn. The basis of this lifelong learning process is the plasticity of the brain, which constantly adapts and rewires the vast network of neurons. The changes to the synaptic connections and to the intrinsic excitability of each neuron arise through self-organized mechanisms and optimize the behavior of the organism as a whole. The phenomenon of neuronal plasticity has occupied neuroscience and other disciplines for several decades. Intrinsic plasticity describes the constant adaptation of a neuron's excitability to maintain a balanced, homeostatic operating range. Synaptic plasticity in particular, which refers to changes in the strength of existing connections, has been studied under many different conditions and has proven more complex with every new study. It is induced by a complex interplay of biophysical mechanisms and depends on several factors such as the frequency of action potentials, their timing, and the membrane potential, and it additionally shows a metaplastic dependence on past events. Ultimately, synaptic plasticity shapes the signal processing and computation of individual neurons and of neuronal networks. The focus of this thesis is to advance the understanding of the biological mechanisms behind the observed plasticity phenomena, and of their consequences, by means of a more unified theory. To this end, I formulate two functional objectives for neuronal plasticity, derive learning rules from them, and analyze their consequences and predictions. Chapter 3 investigates the discriminability of population activity in networks as a functional objective for neuronal plasticity. The hypothesis is that, particularly in recurrent but also in feed-forward networks, the population activity can be optimized as a representation of the input signals when similar inputs receive representations that are as distinct as possible and are therefore easier to discriminate for subsequent processing stages. The functional objective is to maximize this discriminability through changes in the connection strengths and the excitability of the neurons, using local, self-organized learning rules. From this functional objective, a number of standard learning rules for artificial neural networks can be derived within a common framework. Chapter 4 applies a similar functional approach to a more complex, biophysical neuron model. Here the objective is to maximize, through local synaptic learning rules, a sparse, strongly skewed distribution of synaptic strengths, as has repeatedly been found experimentally. From this functional approach, all major phenomena of synaptic plasticity can be explained.
Simulations of the learning rule in a realistic neuron model with full morphology reproduce the data from timing-, rate-, and voltage-dependent plasticity protocols. The learning rule also has an intrinsic dependence on the position of the synapse, which agrees with the experimental results. Moreover, the learning rule can explain metaplastic phenomena without additional assumptions, and it predicts a new form of metaplasticity that affects timing-dependent plasticity. The formulated learning rule leads to two novel unifications for synaptic plasticity: first, it shows that the various phenomena of synaptic plasticity can be understood as consequences of a single functional objective; second, the approach bridges the gap between functional and mechanistic descriptions. The proposed functional objective leads to a learning rule with a biophysical formulation that can be related to established theories of the biological mechanisms. Furthermore, the objective of a sparse distribution of synaptic strengths can be interpreted as contributing to energy-efficient synaptic transmission and optimized coding.
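    As a generic illustration of the idea of deriving a local, self-organised learning rule from a functional objective (deliberately not one of the thesis's two objectives), Oja's rule follows from maximising a neuron's output variance under a weight-norm constraint; all variable names and constants in the Python sketch below are arbitrary.

        # Oja's rule: a local Hebbian update with implicit weight normalisation,
        # derivable from the objective "maximise output variance, keep |w| = 1".
        import numpy as np

        rng = np.random.default_rng(1)
        cov = np.array([[3.0, 1.0], [1.0, 1.0]])            # input covariance
        inputs = rng.multivariate_normal([0.0, 0.0], cov, size=10000)

        w, eta = rng.normal(size=2), 0.01
        for x in inputs:
            y = w @ x                                        # postsynaptic activity
            w += eta * y * (x - y * w)                       # local, self-organised update

        # the weights converge towards the leading eigenvector of the covariance
        print("learned weight direction:", w / np.linalg.norm(w))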

    Learning as filtering: Implications for spike-based plasticity.

    Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, learning as optimisation fails to account for a time-varying environment during the learning process, and the resulting point estimate in parameter space does not account for uncertainty. Here, we frame learning as filtering, i.e., a principled method for including time and parameter uncertainty. We derive the filtering-based learning rule for a spiking neuronal network, the Synaptic Filter, and show its computational and biological relevance. For the computational relevance, we show that filtering improves the weight estimation performance compared to a gradient learning rule with optimal learning rate. The dynamics of the mean of the Synaptic Filter are consistent with spike-timing-dependent plasticity (STDP), while the dynamics of the variance make novel predictions regarding spike-timing-dependent changes of EPSP variability. Moreover, the Synaptic Filter explains experimentally observed negative correlations between homo- and heterosynaptic plasticity.
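    A minimal, non-spiking sketch of the "learning as filtering" idea, assuming a linear-Gaussian toy model rather than the paper's spiking Synaptic Filter: a scalar Kalman filter tracks both a drifting synaptic weight and its uncertainty, while a fixed-learning-rate gradient (delta) rule tracks only a point estimate. The drift, noise levels, and learning rate below are illustrative.

        # Learning as filtering vs. learning as optimisation, in a toy setting:
        # track a slowly drifting weight w_true from noisy observations y = w*x + noise.
        import numpy as np

        rng = np.random.default_rng(2)
        T = 2000
        w_true = np.cumsum(rng.normal(0.0, 0.02, T))     # drifting "true" weight
        x = rng.normal(1.0, 1.0, T)                      # presynaptic input
        y = w_true * x + rng.normal(0.0, 0.5, T)         # noisy postsynaptic signal

        # Kalman filter over the weight: posterior mean mu and variance s2
        mu, s2 = 0.0, 1.0
        q, r = 0.02 ** 2, 0.5 ** 2                       # drift and observation noise
        mu_filt = np.empty(T)
        for t in range(T):
            s2 += q                                      # predict: uncertainty grows
            k = s2 * x[t] / (x[t] ** 2 * s2 + r)         # Kalman gain
            mu += k * (y[t] - mu * x[t])                 # update the mean
            s2 *= 1.0 - k * x[t]                         # update the variance
            mu_filt[t] = mu

        # fixed-learning-rate gradient (delta) rule for comparison
        w_grad, eta = 0.0, 0.05
        w_grad_trace = np.empty(T)
        for t in range(T):
            w_grad += eta * x[t] * (y[t] - w_grad * x[t])
            w_grad_trace[t] = w_grad

        print("filtering MSE:", np.mean((mu_filt - w_true) ** 2))
        print("gradient  MSE:", np.mean((w_grad_trace - w_true) ** 2))

    Comparing the two mean-squared errors gives a feel for why tracking uncertainty can help in a time-varying environment.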

    Uncovering hidden network architecture from spiking activities using an exact statistical input-output relation of neurons

    Identifying network architecture from observed neural activities is crucial in neuroscience studies. A key requirement is knowledge of the statistical input-output relation of single neurons in vivo. By utilizing an exact analytical solution of the spike timing for leaky integrate-and-fire neurons under noisy inputs balanced near the threshold, we construct a framework that links synaptic type, strength, and spiking nonlinearity with the statistics of neuronal population activity. The framework explains structured pairwise and higher-order interactions of neurons receiving common inputs under different architectures. We compared the theoretical predictions with the activity of monkey and mouse V1 neurons and found that excitatory inputs given to pairs explained the observed sparse activity characterized by strong negative triple-wise interactions, thereby ruling out the alternative explanation by shared inhibition. Moreover, we showed that the strong interactions are a signature of excitatory rather than inhibitory inputs whenever the spontaneous rate is low. We present a guide map of neural interactions that helps researchers to specify the hidden neuronal motifs underlying the interactions found in empirical data.
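    The role of shared input in shaping neuronal interactions can be illustrated with a small simulation, which is not the paper's analytical framework: two leaky integrate-and-fire neurons are driven near threshold by a common noisy input plus independent noise, and their spike-count correlation is measured. The parameter values and the Euler integration scheme are illustrative assumptions.

        # Two LIF neurons with partially shared noisy input, balanced near threshold.
        import numpy as np

        rng = np.random.default_rng(3)
        dt, T = 1e-3, 200.0                       # time step (s), duration (s)
        steps = int(T / dt)
        tau, v_th, v_reset = 0.02, 1.0, 0.0       # membrane time constant, threshold, reset
        mu, sigma_c, sigma_i = 0.9, 0.4, 0.4      # mean drive; common / private noise

        v = np.zeros(2)
        spikes = np.zeros((steps, 2), dtype=bool)
        for t in range(steps):
            common = sigma_c * rng.normal() * np.sqrt(dt)
            private = sigma_i * rng.normal(size=2) * np.sqrt(dt)
            v += (mu - v) * dt / tau + (common + private) / np.sqrt(tau)
            fired = v >= v_th
            spikes[t] = fired
            v[fired] = v_reset

        counts = spikes.reshape(-1, 100, 2).sum(axis=1)   # spike counts in 100 ms bins
        print("firing rates (Hz):", spikes.mean(axis=0) / dt)
        print("spike-count correlation:", np.corrcoef(counts.T)[0, 1])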

    Biologically plausible attractor networks

    Attractor networks have shown much promise as a neural network architecture that can describe many aspects of brain function. Much of the field of study around these networks has coalesced around pioneering work done by John Hopfield, and therefore many approaches have been strongly linked to the field of statistical physics. In this thesis I use existing theoretical and statistical notions of attractor networks, and introduce several biologically inspired extensions to an attractor network for which a mean-field solution has been previously derived. This attractor network is a computational neuroscience model that accounts for decision-making in the situation of two competing stimuli. By basing our simulation studies on such a network, we are able to study situations where mean-field solutions have been derived, and use these as the starting case, which we then extend with large-scale integrate-and-fire attractor network simulations. The simulations are large enough to provide evidence that the results apply to networks of the size found in the brain. One factor that previous research has highlighted as very important to brain function is noise. Spiking-related noise influences processes such as decision-making, signal detection, short-term memory, and memory recall even in the quite large networks found in the cerebral cortex, and this thesis aims to measure the effects of noise on biologically plausible attractor networks. Our results are obtained using a spiking neural network made up of integrate-and-fire neurons, and we focus on the stochastic transitions that this network undergoes. We examine two such processes that are biologically relevant, but for which no mean-field solutions yet exist: graded firing rates, and diluted connectivity. Representations in the cortex are often graded, and we find that noise in these networks may be larger than with binary representations. In further investigations it was shown that diluted connectivity reduces the effects of noise in the situation where the number of synapses onto each neuron is held constant. In this thesis we also use the same attractor network framework to investigate the Communication through Coherence hypothesis, which states that synchronous oscillations, especially in the gamma range, can facilitate communication between neural systems. It is shown that information transfer from one network to a second network occurs for a much lower strength of synaptic coupling between the networks than is required to produce coherence. Thus, information transmission can occur before any coherence is produced, indicating that coherence is not needed for information transmission between coupled networks and raising a major question about the Communication through Coherence hypothesis. Overall, the results provide substantial contributions towards understanding the operation of attractor neuronal networks in the brain.
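    A reduced, rate-based sketch of the decision setting studied in the thesis (the thesis itself uses large spiking integrate-and-fire networks): two mutually inhibiting populations receive nearly equal stimuli, the symmetric state is unstable, and input noise decides which attractor wins on a given trial. All weights, time constants, and noise levels below are illustrative.

        # Noise-driven winner-take-all decision between two nearly equal stimuli.
        import numpy as np

        rng = np.random.default_rng(4)
        dt, tau = 1e-3, 0.02
        steps = int(2.0 / dt)

        def simulate(noise_std=0.1):
            r = np.zeros(2)                       # firing rates of the two pools
            w_self, w_inh = 0.8, 2.5              # self-excitation, cross-inhibition
            stim = np.array([1.01, 1.00])         # two nearly equal stimuli
            for _ in range(steps):
                inp = stim + w_self * r - w_inh * r[::-1] + noise_std * rng.normal(size=2)
                r += dt * (-r + np.maximum(inp, 0.0)) / tau
            return int(np.argmax(r))              # which pool won the competition

        wins = np.array([simulate() for _ in range(200)])
        print("fraction of trials won by the slightly stronger stimulus:", np.mean(wins == 0))

    The printed fraction illustrates how noise turns a finely balanced decision into a stochastic outcome rather than a deterministic one.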

    A plastic multilayer network of the early visual system inspired by the neocortical circuit

    The ability of the visual system for object recognition is remarkable. A better understanding of its processing would lead to better computer vision systems and could improve our understanding of the underlying principles which produce intelligence. We propose a computational model of the visual areas V1 and V2, implementing a rich connectivity inspired by the neocortical circuit. We combined the three most important cortical plasticity mechanisms: 1) Hebbian synaptic plasticity to learn the synapse strengths of excitatory and inhibitory neurons, including trace learning to learn invariant representations; 2) intrinsic plasticity to regulate the neurons' responses and stabilize the learning in deeper layers; 3) structural plasticity to modify the connections and to overcome the bias that the initial wiring definitions impose on learning. Among other results, we show that our model neurons learn receptive fields comparable to cortical ones. We verify the invariant object recognition performance of the model. We further show that the developed weight strengths and connection probabilities are related to the response correlations of the neurons. We link the connection probabilities of the inhibitory connections to the underlying plasticity mechanisms and explain why inhibitory connections appear unspecific. The proposed model is more detailed than previous approaches. It can reproduce neuroscientific findings and fulfills the purpose of the visual system, invariant object recognition. Moreover, its level of detail and its self-organization principles enable further neuroscientific insights and the modeling of more complex models of processing in the brain.
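    A minimal sketch of how two of the three mechanisms listed above can interact in a single model neuron, assuming a rectified-linear unit rather than the model's spiking neurons and omitting structural plasticity entirely: a Hebbian rule driven by a slow postsynaptic trace (trace learning) adjusts the weights, while an intrinsic-plasticity threshold keeps the mean response near a target rate. All constants and the weight normalisation below are illustrative.

        # Hebbian trace learning combined with intrinsic plasticity in one unit.
        import numpy as np

        rng = np.random.default_rng(5)
        n_in, n_steps = 50, 20000
        w = np.abs(rng.normal(0.1, 0.02, n_in))   # excitatory weights
        theta, trace = 0.0, 0.0                   # adaptive threshold, activity trace
        eta_w, eta_theta, tau_trace, r_target = 0.005, 0.01, 0.9, 0.1

        x = np.zeros(n_in)
        for t in range(n_steps):
            if t % 20 == 0:                                     # new stimulus every 20 steps
                x = (rng.random(n_in) < 0.2).astype(float)
            y = max(w @ x - theta, 0.0)                         # rectified response
            trace = tau_trace * trace + (1.0 - tau_trace) * y   # slow postsynaptic trace
            w += eta_w * trace * x                              # Hebbian trace learning
            w = np.clip(w, 0.0, None)
            w *= (0.1 * n_in) / w.sum()                         # keep total weight bounded
            theta += eta_theta * (y - r_target)                 # intrinsic plasticity

        print("activity trace:", round(trace, 3), "adaptive threshold:", round(theta, 3))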

    Synaptic dynamics and learning: how do the biological mechanisms of plasticity realize efficient learning rules that enable neural information processing?

    Degree: doctorate (course-based). Examination committee: (chief examiner) Visiting Professor 深井 朋樹, The University of Tokyo; Professor 能瀬 聡直, The University of Tokyo; Professor 岡田 真人, The University of Tokyo; Associate Professor 久恒 辰博, The University of Tokyo; Lecturer 牧野 泰才, The University of Tokyo. University of Tokyo (東京大学).

    Memory capacity in the hippocampus

    Neural assemblies in the hippocampus encode positions. During rest, the hippocampus replays sequences of neural activity seen during awake behavior. This replay is linked to memory consolidation and mental exploration of the environment. Recurrent networks can be used to model the replay of sequential activity, and multiple sequences can be stored in the synaptic connections. To achieve a high memory capacity, recurrent networks require a pattern separation mechanism. One such mechanism is global remapping, observed in place cell populations. A place cell fires at a particular position of an environment and is silent elsewhere; multiple place cells usually cover an environment with their firing fields. Small changes in the environment or the context of a behavioral task can cause global remapping, i.e. profound changes in place cell firing fields: some cells cease firing, previously silent cells gain a place field, and other place cells move their firing field and change their peak firing rate. The effect is strong enough to make global remapping a viable pattern separation mechanism. We model two mechanisms that improve the memory capacity of recurrent networks. First, the effect of inhibition on replay in a recurrent network is modeled using binary neurons and binary synapses. A mean-field approximation is used to determine the optimal parameters for the inhibitory neuron population, and numerical simulations of the full model were carried out to verify the predictions of the mean-field model. A second model analyzes a hypothesized global remapping mechanism in which grid cell firing is used as feed-forward input to place cells. Grid cells have multiple firing fields in the same environment, arranged in a hexagonal grid, and can be used in a model as feed-forward inputs to place cells to produce place fields. In these grid-to-place cell models, shifts in the grid cell firing patterns cause remapping in the place cell population. We analyze the capacity of such a system to create sets of separated patterns, i.e. how many different spatial codes can be generated. The limiting factor is the set of synapses connecting grid cells to place cells. To assess their capacity, we produce different place codes in the place and grid cell populations by shuffling place field positions and shifting the grid fields of grid cells. We then use Hebbian learning to increase the synaptic weights between grid and place cells for each pair of grid and place codes. The capacity limit is reached when synaptic interference makes it impossible to produce a place code with sufficient spatial acuity from grid cell firing. Additionally, it is desirable to keep the place fields compact, or sparse from a coding standpoint; as more environments are stored, this sparseness is lost. Interestingly, place cells lose the sparseness of their firing fields much earlier than their spatial acuity.
For the sequence replay model we are able to increase capacity in a simulated recurrent network by including an inhibitory population, and we show that even in this more complicated case capacity is improved. We observe oscillations in the average activity of both the excitatory and inhibitory neuron populations, which get stronger at the capacity limit. In addition, at the capacity limit, rather than observing a sudden failure of replay, we find that sequences are replayed transiently for a couple of time steps before failing. Analyzing the remapping model, we find that, as we store more spatial codes in the synapses, the sparseness of place fields is lost first; only later do we observe a decay in the spatial acuity of the code. We found two ways to maintain sparse place fields while achieving a high capacity: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of it in each environment. We present scaling predictions that suggest that hundreds of thousands of spatial codes can be produced by this pattern separation mechanism. The effect inhibition has on the replay model is two-fold: capacity is increased, and the graceful transition from full replay to failure allows for higher capacities when using short sequences. Additional mechanisms not explored in this model could be at work to concatenate these short sequences, or could perform more complex operations on them. The interplay of excitatory and inhibitory populations gives rise to oscillations, which are strongest at the capacity limit; this draws a picture of how a memory mechanism can cause hippocampal oscillations as observed in experiments. In the remapping model we showed that the sparseness of place cell firing constrains the capacity of this pattern separation mechanism. Grid codes outperform place codes regarding spatial acuity, as shown in Mathis et al. (2012). Our model shows that the grid-to-place transformation does not harness the full spatial information of the grid code, in order to maintain sparse place fields. This suggests that the two codes are independent, and that communication between the areas might be mostly for synchronization. High spatial acuity seems to be a specialization of the grid code, while the place code is more suitable for memory tasks.
In summary, hippocampal replay of neuronal activity is linked to memory consolidation and mental exploration, and replay is a potential neural correlate of episodic memory. Recurrent neural networks are used to model hippocampal sequence replay, and the memory capacity of such networks is of great interest for determining their biological feasibility; in addition, any mechanism that improves capacity has explanatory power. We investigate two such mechanisms. The first is global, unspecific feedback inhibition in the recurrent network: in a detailed model of hippocampal replay we show that feedback inhibition can increase the number of sequences that can be replayed. The effect of inhibition on capacity is determined using a simplified mean-field model, and the results are verified with numerical simulations of the full network. Transient replay is found at the capacity limit, accompanied by oscillations that resemble sharp wave ripples in the hippocampus. The second mechanism is pattern separation. In the spatial context of hippocampal place cell firing, global remapping is one way to achieve pattern separation: changes in the environment or the context of a task cause global remapping, during which place cell firing changes in unpredictable ways, with cells shifting their place fields or ceasing to fire entirely while formerly silent cells acquire place fields. Global remapping can be triggered by subtle changes in the grid cells that give feed-forward input to hippocampal place cells. In this second model we investigate the capacity of the underlying synaptic connections, defined as the number of different environments that can be represented at a given spatial acuity. We find two essential conditions for achieving a high capacity with sparse place fields: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of it in each environment. We also find that the sparsity of place fields, rather than spatial acuity, is the constraining factor of the model. Since the hippocampal place code is sparse, we conclude that the hippocampus does not fully harness the spatial information available in the grid code; the two codes of space might thus serve different purposes.
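    The grid-to-place capacity argument can be illustrated with a toy calculation. The Python sketch below is not the thesis's model (which uses two-dimensional environments and more careful measures): it builds periodic one-dimensional "grid" tuning curves, accumulates one Hebbian grid-to-place weight increment per stored environment, and reads out the place responses in the first stored environment; the interference terms contributed by the other environments raise the baseline response of all cells, so a population-sparseness measure drops as more environments are stored. All sizes, tuning widths, and the choice of sparseness measure are illustrative.

        # Toy 1-D grid-to-place Hebbian storage and a sparseness read-out.
        import numpy as np

        rng = np.random.default_rng(6)
        n_pos, n_grid, n_place = 200, 120, 100
        positions = np.linspace(0.0, 1.0, n_pos)
        periods = rng.uniform(0.2, 0.6, n_grid)

        def grid_code(shifts):
            # periodic "grid" tuning curves: rows are positions, columns grid cells
            phase = (positions[:, None] / periods[None, :] + shifts[None, :]) % 1.0
            return np.exp(-((phase - 0.5) ** 2) / 0.02)

        def place_code():
            # sparse target place code: a fraction of cells get one narrow field
            centres = positions[rng.integers(0, n_pos, n_place)]
            fields = np.exp(-((positions[:, None] - centres[None, :]) ** 2) / 0.002)
            return fields * (rng.random(n_place) < 0.2)[None, :]

        def store(n_env):
            # one Hebbian grid-to-place weight increment per stored environment
            W, probe = np.zeros((n_place, n_grid)), None
            for e in range(n_env):
                G, P = grid_code(rng.random(n_grid)), place_code()
                W += P.T @ G
                if e == 0:
                    probe = G                     # grid code of the first environment
            return W, probe

        def sparseness(rates):
            # Treves-Rolls population sparseness; close to 1 means few cells active
            r = np.maximum(rates, 1e-12)
            return 1.0 - (r.mean(axis=1) ** 2 / (r ** 2).mean(axis=1)).mean()

        for n_env in (1, 5, 20, 80):
            W, probe = store(n_env)
            rates = probe @ W.T                   # place responses in the probe environment
            print(n_env, "environments stored -> sparseness", round(sparseness(rates), 2))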