
    High-Pass Filtering and Dynamic Gain Regulation Enhance Vertical Bursts Transmission along the Mossy Fiber Pathway of Cerebellum

    Signal elaboration in the cerebellar mossy fiber input pathway presents controversial aspects, especially concerning gain regulation and the spot-like (rather than beam-like) appearance of granular-to-molecular layer transmission. By using voltage-sensitive dye imaging in rat cerebellar slices (Mapelli et al., 2010), we found that mossy fiber bursts optimally excited the granular layer above ∼50 Hz and the overlying molecular layer above ∼100 Hz, thus generating a cascade of high-pass filters. NMDA receptors enhanced transmission in the granular layer, while GABA-A receptors depressed transmission in both the granular and molecular layers. Burst transmission gain was controlled through a dynamic, frequency-dependent involvement of these receptors. Moreover, while high-frequency transmission was enhanced along vertical lines connecting the granular to the molecular layer, no high-frequency enhancement was observed along the parallel fiber axis in the molecular layer. This was probably due to Purkinje cell GABA-A receptor-mediated inhibition acting more strongly along the parallel fibers than along the granule cell ascending axon. The consequent amplification of burst responses along vertical transmission lines could explain the spot-like activation of Purkinje cells observed following punctate stimulation in vivo.

    Inhibitory Plasticity: From Molecules to Computation and Beyond

    Synaptic plasticity is the cellular and molecular counterpart of learning and memory and, since its first discovery, the analysis of the mechanisms underlying long-term changes of synaptic strength has been focused almost exclusively on excitatory connections. Conversely, inhibition was considered a fixed controller of circuit excitability. Only recently have inhibitory networks been shown to be finely regulated by a wide range of mechanisms residing in their synaptic connections. Here, we review recent findings on the forms of inhibitory plasticity (IP) that have been discovered and characterized in different brain areas. In particular, we focus our attention on the molecular pathways involved in the induction and expression mechanisms leading to changes in synaptic efficacy, and we discuss, from the computational perspective, how IP can contribute to the emergence of functional properties of brain circuits.

    Biologically Plausible Information Propagation in a CMOS Integrate-and-Fire Artificial Neuron Circuit with Memristive Synapses

    Neuromorphic circuits based on spikes are currently envisioned as a viable option to achieve brain-like computation capabilities in specific electronic implementations while limiting power dissipation, given their ability to mimic energy-efficient bio-inspired mechanisms. While several network architectures have been developed to embed in hardware the bio-inspired learning rules found in the biological brain, such as spike-timing-dependent plasticity, it is still unclear whether hardware spiking neural network architectures can handle and transfer information akin to biological networks. In this work, we investigate from a theoretical perspective the analogies between an artificial neuron combining memristor synapses with a rate-based learning rule and the response of biological neurons, in terms of information propagation. Bio-inspired experiments have been reproduced by linking the biological probability of release with the conductance of the artificial synapses. Mutual information and surprise have been chosen as metrics to show how, for different values of synaptic weights, an artificial neuron supports a reliable, biologically plausible neural network in terms of information propagation and analysis.
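The mutual-information metric mentioned above can be illustrated with a minimal sketch (not the paper's actual analysis pipeline): given a joint probability table over presynaptic input patterns and postsynaptic output responses, mutual information quantifies how much observing the output reduces uncertainty about the input.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint probability table
    (rows: input patterns, columns: output responses)."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()                 # normalize to a distribution
    px = joint.sum(axis=1, keepdims=True)       # marginal over inputs
    py = joint.sum(axis=0, keepdims=True)       # marginal over outputs
    nz = joint > 0                              # avoid log(0) terms
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Perfectly reliable transmission: the output uniquely identifies the input
reliable = [[0.5, 0.0],
            [0.0, 0.5]]
# Unreliable transmission: the output is independent of the input
noisy = [[0.25, 0.25],
         [0.25, 0.25]]
print(mutual_information(reliable))  # 1.0 bit
print(mutual_information(noisy))     # 0.0 bits
```

In this toy setting, stronger synaptic weights would sharpen the joint table toward the diagonal, raising the transmitted information, which is the qualitative behavior the abstract reports.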

    A Hybrid CMOS-Memristor Spiking Neural Network Supporting Multiple Learning Rules

    Artificial intelligence (AI) is changing the way computing is performed to cope with real-world, ill-defined tasks for which traditional algorithms fail. AI requires significant memory access, thus running into the von Neumann bottleneck when implemented in standard computing platforms. In this respect, low-latency, energy-efficient in-memory computing can be achieved by exploiting emerging memristive devices, given their ability to emulate synaptic plasticity, which provides a path to designing large-scale brain-inspired spiking neural networks (SNNs). Several plasticity rules have been described in the brain, and their coexistence in the same network largely expands the computational capabilities of a given circuit. In this work, starting from the electrical characterization and modeling of the memristor device, we propose a neuro-synaptic architecture that co-integrates, on a single platform with one type of synaptic device, two distinct learning rules, namely spike-timing-dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM) rule. By exploiting these learning rules, this architecture successfully addressed two different unsupervised learning tasks.
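The two rules the abstract names have standard textbook forms, sketched below for illustration (the parameter values are assumptions, not taken from the paper's device characterization): pair-based STDP changes the weight according to the sign and size of the spike-timing difference, while BCM is a rate-based rule whose sign flips at a sliding threshold on the postsynaptic rate.

```python
import math

def stdp_update(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike (dt = t_post - t_pre > 0, in ms), depress otherwise."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def bcm_update(pre_rate, post_rate, theta, eta=1e-3):
    """BCM rule: the weight change flips sign where the postsynaptic
    rate crosses the sliding threshold theta."""
    return eta * pre_rate * post_rate * (post_rate - theta)

print(stdp_update(+10.0) > 0)           # causal pairing -> potentiation
print(stdp_update(-10.0) < 0)           # anti-causal pairing -> depression
print(bcm_update(5.0, 20.0, 10.0) > 0)  # post rate above threshold -> LTP
print(bcm_update(5.0, 4.0, 10.0) < 0)   # post rate below threshold -> LTD
```

Hosting both rules on one synaptic device amounts to mapping these weight changes onto conductance changes of the same memristor, driven by different programming waveforms.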

    Modeling Early Phases of COVID-19 Pandemic in Northern Italy and Its Implication for Outbreak Diffusion

    The COVID-19 pandemic has sparked an intense debate about the hidden factors underlying the dynamics of the outbreak. Several computational models have been proposed to inform effective social and healthcare strategies. Crucially, the predictive validity of these models often depends upon incorporating behavioral and social responses to infection. Among these tools, the analytic framework known as “dynamic causal modeling” (DCM) has been applied to the COVID-19 pandemic, shedding new light on the factors underlying the dynamics of the outbreak. We applied DCM to data from northern Italian regions, the first areas in Europe to contend with the outbreak, and assessed both the predictive validity of the model and its suitability for highlighting the hidden factors governing the pandemic diffusion. By taking into account data from the beginning of the pandemic, the model could faithfully predict the dynamics of outbreak diffusion, which varied from region to region. DCM appears to be a reliable tool for investigating the mechanisms governing the spread of SARS-CoV-2 and for identifying containment and control strategies that could efficiently counteract further waves of infection.
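DCM for epidemics builds on compartmental population dynamics, augmented with latent behavioral states and Bayesian inversion. A minimal sketch of the underlying compartmental idea, assuming a plain SEIR model with illustrative rate constants (this is not the DCM model used in the study):

```python
def seir_step(state, beta, sigma, gamma, dt=1.0):
    """One Euler step of a normalized SEIR model (fractions of population).
    beta: transmission rate, sigma: incubation rate, gamma: recovery rate."""
    s, e, i, r = state
    new_exposed    = beta * s * i    # susceptible -> exposed
    new_infectious = sigma * e       # exposed -> infectious
    new_recovered  = gamma * i       # infectious -> recovered
    return (s - dt * new_exposed,
            e + dt * (new_exposed - new_infectious),
            i + dt * (new_infectious - new_recovered),
            r + dt * new_recovered)

state = (0.99, 0.0, 0.01, 0.0)       # 1% initially infectious
for _ in range(200):                 # simulate 200 days
    state = seir_step(state, beta=0.3, sigma=0.2, gamma=0.1)
print(round(sum(state), 6))          # compartments stay normalized -> 1.0
```

DCM goes further by treating such rate parameters (and additional social-distancing states) as unknowns with priors, inverting the model against observed case and death counts to recover their posterior values region by region.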

    The effect of desflurane on neuronal communication at a central synapse

    Although general anesthetics are thought to modify critical neuronal functions, their impact on neuronal communication has been poorly examined. We investigated the effect induced by desflurane, a clinically used general anesthetic, on information transfer at the synapse between mossy fibers and granule cells of the cerebellum, where this analysis can be carried out extensively. Mutual information values were assessed by measuring the variability of the postsynaptic output in relation to the variability of a given set of presynaptic inputs. Desflurane synchronized granule cell firing and reduced mutual information in response to physiologically relevant mossy fiber patterns. The decrease in spike variability was due to an increased postsynaptic membrane excitability, which made granule cells more prone to elicit action potentials, and to a strengthened synaptic inhibition, which markedly hampered membrane depolarization. These concomitant actions on granule cell firing indicate that desflurane reshapes the transfer of information between neurons by providing a less informative neurotransmission rather than by completely silencing neuronal activity.

    A realistic morpho-anatomical connection strategy for modelling full-scale point-neuron microcircuits

    The modeling of extended microcircuits is emerging as an effective tool to simulate the neurophysiological correlates of brain activity and to investigate brain dysfunctions. However, for specific networks, a realistic modeling approach based on the combination of available physiological, morphological and anatomical data is still an open issue. One of the main problems in the generation of realistic networks lies in the strategy adopted to build network connectivity. Here we propose a method to implement a neuronal network at single-cell resolution by using the geometrical probability volumes associated with pre- and postsynaptic neurites. This allows us to build a network with plausible connectivity properties without the explicit use of computationally intensive touch-detection algorithms on full 3D neuron reconstructions. The method has been benchmarked for the mouse hippocampus CA1 area, and the results show that this approach can generate full-scale brain networks at single-cell resolution in good agreement with experimental findings. This geometric reconstruction of axonal and dendritic occupancy, by effectively reflecting morphological and anatomical constraints, could be integrated into structured simulators to generate realistic circuits spanning different brain regions.
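The probability-volume idea can be sketched as follows, under the simplifying assumption (mine, not the paper's) that each axonal and dendritic field is an isotropic Gaussian cloud around the soma, so the pairwise connection probability falls off with soma distance on a scale set by the combined spreads:

```python
import numpy as np

rng = np.random.default_rng(0)

def connection_probability(pre_pos, post_pos, axon_sigma, dend_sigma, p_max=0.5):
    """Connection probability from the overlap of two isotropic Gaussian
    neurite clouds; a cheap stand-in for explicit touch detection."""
    d2 = np.sum((np.asarray(pre_pos) - np.asarray(post_pos)) ** 2)
    sigma2 = axon_sigma ** 2 + dend_sigma ** 2   # combined spatial spread
    return p_max * np.exp(-d2 / (2.0 * sigma2))

def wire_network(pre_cells, post_cells, axon_sigma=50.0, dend_sigma=30.0):
    """Draw a Bernoulli connection for every pre/post pair (positions in um)."""
    edges = []
    for i, pre in enumerate(pre_cells):
        for j, post in enumerate(post_cells):
            if rng.random() < connection_probability(pre, post,
                                                     axon_sigma, dend_sigma):
                edges.append((i, j))
    return edges

pre = rng.uniform(0, 200, size=(20, 3))    # 20 presynaptic somata
post = rng.uniform(0, 200, size=(30, 3))   # 30 postsynaptic somata
edges = wire_network(pre, post)
print(len(edges))                          # number of connections drawn
```

The anisotropic, cell-type-specific probability volumes used in the actual method replace the symmetric Gaussians here, but the wiring loop is the same: sample each potential synapse from a geometry-derived probability instead of intersecting full morphologies.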

    Emergence of associative learning in a neuromorphic inference network

    OBJECTIVE: In the theoretical framework of predictive coding and active inference, the brain can be viewed as instantiating a rich generative model of the world that predicts incoming sensory data while continuously updating its parameters via minimization of prediction errors. While this theory has been successfully applied to cognitive processes - by modelling the activity of functional neural networks at a mesoscopic scale - the validity of the approach when modelling neurons as an ensemble of inferring agents, in a biologically plausible architecture, remained to be explored. APPROACH: We modelled a simplified cerebellar circuit with individual neurons acting as Bayesian agents to simulate the classical delayed eyeblink conditioning protocol. Neurons and synapses adjusted their activity to minimize their prediction error, which was used as the network cost function. This cerebellar network was then implemented in hardware by replicating digital neuronal elements via a low-power microcontroller. MAIN RESULTS: Persistent changes of synaptic strength - mirroring neurophysiological observations - emerged via local (neurocentric) prediction error minimization, leading to the expression of associative learning. The same paradigm was effectively emulated in low-power hardware, showing remarkably efficient performance compared to conventional neuromorphic architectures. SIGNIFICANCE: These findings show that: i) an ensemble of free-energy-minimizing neurons - organized in a biologically plausible architecture - can recapitulate functional self-organization observed in nature, such as associative plasticity, and ii) a neuromorphic network of inference units can learn unsupervised tasks without embedding predefined learning rules in the circuit, thus providing a potential avenue to a novel form of brain-inspired artificial intelligence.
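The core mechanism - each unit locally minimizing its own prediction error - reduces, in the simplest case, to gradient descent on squared error. A minimal single-unit sketch (the function name, learning rate, and toy stimulus train are illustrative assumptions, not the paper's circuit):

```python
def predictive_unit(inputs, lr=0.05, n_epochs=200):
    """A single unit whose weight is its prediction of the input; it updates
    by gradient descent on the squared (neurocentric) prediction error."""
    w = 0.0
    for _ in range(n_epochs):
        for x in inputs:
            error = x - w            # local prediction error
            w += lr * error          # descend the error -> w tracks E[x]
    return w

# The weight converges toward the mean of the conditioned stimulus train,
# a persistent change in "synaptic strength" driven only by local errors.
cs_train = [1.0, 0.8, 1.2, 1.0]
w = predictive_unit(cs_train)
print(round(w, 2))                   # ~1.0 (the empirical mean)
```

In the full network, many such units interact, so each one's prediction target is the activity of its neighbors rather than a fixed stimulus; associative learning then emerges without any explicitly wired-in plasticity rule.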