
    Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, this enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
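
    As a minimal illustration of the “explaining away” effect highlighted in this abstract, and not of the paper's spiking-network mechanism, the sketch below runs Gibbs sampling in a tiny Bayesian network with converging arrows A → C ← B. The priors and the noisy-OR likelihood are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's neural implementation): Gibbs sampling in a
# tiny Bayesian network with converging arrows A -> C <- B, to illustrate
# "explaining away". All probabilities below are illustrative assumptions.

rng = np.random.default_rng(0)
P_A, P_B = 0.1, 0.1                       # prior probabilities of the two causes

def p_c1(a, b):
    """Assumed noisy-OR likelihood P(C = 1 | A = a, B = b)."""
    return 1.0 - (1.0 - 0.8 * a) * (1.0 - 0.8 * b)

def posterior_b(a_observed=None, n_samples=20_000, burn_in=1_000):
    """Estimate P(B = 1 | C = 1[, A = a_observed]) by Gibbs sampling."""
    a = 1 if a_observed is None else a_observed
    b, hits, total = 0, 0, 0
    for t in range(n_samples + burn_in):
        if a_observed is None:            # resample A from its full conditional
            w1 = P_A * p_c1(1, b)
            w0 = (1.0 - P_A) * p_c1(0, b)
            a = int(rng.random() < w1 / (w1 + w0))
        # resample B from its full conditional given A and the evidence C = 1
        w1 = P_B * p_c1(a, 1)
        w0 = (1.0 - P_B) * p_c1(a, 0)
        b = int(rng.random() < w1 / (w1 + w0))
        if t >= burn_in:
            hits += b
            total += 1
    return hits / total

print("P(B=1 | C=1)       ~", round(posterior_b(), 3))      # ~0.53
print("P(B=1 | C=1, A=1)  ~", round(posterior_b(1), 3))     # ~0.12
```

    Observing the effect C raises the belief in cause B, while additionally observing the alternative cause A pushes that belief back down; this non-monotonic update is exactly what converging arrows demand of an inference mechanism.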

    Structure of Spontaneous UP and DOWN Transitions Self-Organizing in a Cortical Network Model

    Synaptic plasticity is considered to play a crucial role in the experience-dependent self-organization of local cortical networks. In the absence of sensory stimuli, the cerebral cortex exhibits spontaneous membrane potential transitions between an UP and a DOWN state. To reveal how cortical networks develop spontaneous activity, or conversely, how spontaneous activity structures cortical networks, we analyze the self-organization, under spike-timing-dependent plasticity (STDP), of a recurrent network model of excitatory and inhibitory neurons that is realistic enough to replicate UP–DOWN states. The individual neurons in the self-organized network exhibit a variety of temporal patterns in the two-state transitions. In addition, the model develops a feed-forward network-like structure that produces a diverse repertoire of precise sequences of the UP state. Our model shows that the self-organized activity closely resembles the spontaneous activity of cortical networks if STDP is accompanied by the pruning of weak synapses. These results suggest that the two-state membrane potential transitions play an active role in structuring local cortical circuits.
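
    As a rough sketch of the two plasticity ingredients named above, pair-based STDP followed by pruning of weak synapses, the toy below applies an exponential STDP window to individual weights and then drops those that fall below a cutoff. All parameter values are illustrative assumptions, not the model's published settings.

```python
import numpy as np

# Illustrative pair-based STDP rule plus pruning of weak synapses.
A_PLUS, A_MINUS = 0.01, 0.012        # potentiation / depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0     # STDP time constants in ms (assumed)
W_MAX, PRUNE_THRESHOLD = 1.0, 0.02   # weight bound and pruning cutoff (assumed)

def stdp_update(w, dt):
    """Update one synaptic weight given dt = t_post - t_pre in ms."""
    if dt > 0:                                   # pre before post -> potentiate
        w += A_PLUS * np.exp(-dt / TAU_PLUS)
    else:                                        # post before pre -> depress
        w -= A_MINUS * np.exp(dt / TAU_MINUS)
    return float(np.clip(w, 0.0, W_MAX))

def prune(weights):
    """Remove synapses whose weight fell below the pruning threshold."""
    return {syn: w for syn, w in weights.items() if w >= PRUNE_THRESHOLD}

# Example: a synapse that repeatedly fires post-before-pre is depressed and
# eventually pruned; the pre-before-post synapse is potentiated and survives.
weights = {("pre_0", "post_0"): 0.1, ("pre_1", "post_0"): 0.1}
for _ in range(200):
    weights[("pre_0", "post_0")] = stdp_update(weights[("pre_0", "post_0")], dt=-5.0)
    weights[("pre_1", "post_0")] = stdp_update(weights[("pre_1", "post_0")], dt=+5.0)
print(prune(weights))
```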

    Distributed Dynamical Computation in Neural Circuits with Propagating Coherent Activity Patterns

    Activity in neural circuits is spatiotemporally organized. Its spatial organization consists of multiple, localized coherent patterns, or patchy clusters, which propagate across the circuits over time. This type of collective behavior has been observed ubiquitously, both in spontaneous activity and in evoked responses; its function, however, has remained unclear. We construct a spatially extended, spiking neural circuit that generates emergent spatiotemporal activity patterns, thereby capturing some of the complexities of the patterns observed empirically. We elucidate what kind of fundamental function these patterns can serve by showing how they process information. As self-sustained objects, localized coherent patterns can signal information by propagating across the neural circuit. Computational operations occur when these emergent patterns interact, or collide, with each other. The ongoing behaviors of these patterns naturally embody both distributed, parallel computation and cascaded logical operations. Such distributed computation enables the system to work in an inherently flexible and efficient way. Our work leads us to propose that propagating coherent activity patterns are the underlying primitives with which neural circuits carry out distributed dynamical computation.
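
    A toy picture of the kind of pattern interaction described above (not the paper's spiking circuit) is a one-dimensional excitable medium in which two localized pulses propagate toward each other and annihilate on collision. The Greenberg-Hastings cellular automaton below is a standard minimal model of such dynamics; the lattice size and seeding are arbitrary choices.

```python
import numpy as np

# Greenberg-Hastings excitable medium: 0 = resting, 1 = excited, 2 = refractory.
# Two counter-propagating pulses collide and annihilate, a simple analogue of
# computation by interacting activity patterns.

def step(state):
    """One synchronous update of the excitable lattice."""
    new = np.zeros_like(state)
    excited_neighbor = np.zeros_like(state, dtype=bool)
    excited_neighbor[1:] |= state[:-1] == 1        # excited cell to the left
    excited_neighbor[:-1] |= state[1:] == 1        # excited cell to the right
    new[(state == 0) & excited_neighbor] = 1       # resting -> excited
    new[state == 1] = 2                            # excited -> refractory
    return new                                     # refractory -> resting (stays 0)

N = 60
state = np.zeros(N, dtype=int)
state[9], state[10] = 2, 1     # rightward pulse (refractory tail on its left)
state[50], state[49] = 2, 1    # leftward pulse (refractory tail on its right)

for t in range(40):
    print("".join(".*o"[s] for s in state))        # . resting, * excited, o refractory
    state = step(state)
    if not (state == 1).any():
        print("pulses collided and annihilated at step", t + 1)
        break
```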

    Airborne Signals from a Wounded Leaf Facilitate Viral Spreading and Induce Antibacterial Resistance in Neighboring Plants

    Many plants release airborne volatile compounds in response to wounding due to pathogenic assault. These compounds serve as plant defenses and are involved in plant signaling. Here, we study the effects of pectin methylesterase (PME)-generated methanol release from wounded plants ("emitters") on the defensive reactions of neighboring "receiver" plants. Plant leaf wounding resulted in the synthesis of PME and a spike in methanol released into the air. Gaseous methanol or vapors from wounded PME-transgenic plants induced resistance to the bacterial pathogen Ralstonia solanacearum in the leaves of non-wounded neighboring "receiver" plants. In experiments with different volatile organic compounds, gaseous methanol was the only airborne factor that could induce antibacterial resistance in neighboring plants. In an effort to understand the mechanisms by which methanol stimulates the antibacterial resistance of "receiver" plants, we constructed forward and reverse suppression subtractive hybridization cDNA libraries from Nicotiana benthamiana plants exposed to methanol. We identified multiple methanol-inducible genes (MIGs), most of which are involved in defense or cell-to-cell trafficking. We then isolated the most affected genes for further analysis: β-1,3-glucanase (BG), a previously unidentified gene (MIG-21), and non-cell-autonomous pathway protein (NCAPP). Experiments with Tobacco mosaic virus (TMV) and a vector encoding two tandem copies of green fluorescent protein as a tracer of cell-to-cell movement showed the increased gating capacity of plasmodesmata in the presence of BG, MIG-21, and NCAPP. The increased gating capacity is accompanied by enhanced TMV reproduction in the "receivers". Overall, our data indicate that methanol emitted by a wounded plant acts as a signal that enhances antibacterial resistance and facilitates viral spread in neighboring plants.

    Iron Behaving Badly: Inappropriate Iron Chelation as a Major Contributor to the Aetiology of Vascular and Other Progressive Inflammatory and Degenerative Diseases

    The production of peroxide and superoxide is an inevitable consequence of aerobic metabolism, and while these particular "reactive oxygen species" (ROSs) can exhibit a number of biological effects, they are not of themselves excessively reactive and thus they are not especially damaging at physiological concentrations. However, their reactions with poorly liganded iron species can lead to the catalytic production of the very reactive and dangerous hydroxyl radical, which is exceptionally damaging and a major cause of chronic inflammation. We review the considerable and wide-ranging evidence for the involvement of this combination of (su)peroxide and poorly liganded iron in a large number of physiological and indeed pathological processes and inflammatory disorders, especially those involving the progressive degradation of cellular and organismal performance. These diseases share a great many similarities and thus might be considered to have a common cause (i.e. iron-catalysed free radical and especially hydroxyl radical generation). The studies reviewed include those focused on a series of cardiovascular, metabolic and neurological diseases, where iron can be found at the sites of plaques and lesions, as well as studies showing the significance of iron to aging and longevity. The effective chelation of iron by natural or synthetic ligands is thus of major physiological (and potentially therapeutic) importance. We need to recognise that physiological observables are systems properties with multiple molecular causes, and that studying them in isolation leads to inconsistent patterns of apparent causality when it is the simultaneous combination of multiple factors that is responsible. This explains, for instance, the decidedly mixed effects of antioxidants that have been observed. Comment: 159 pages, including 9 figures and 2,184 references.

    Spike-Based Bayesian-Hebbian Learning of Temporal Sequences

    Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire (AdEx) model neurons. We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters, including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.
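
    For readers unfamiliar with the AdEx neuron mentioned above, the sketch below integrates a single adaptive exponential integrate-and-fire cell with forward Euler. The parameters follow the commonly cited Brette and Gerstner (2005) values, and the driving current is an arbitrary choice; neither is taken from this particular model.

```python
import numpy as np

# Single AdEx neuron, forward-Euler integration. Units: pF, nS, mV, ms, pA.
C, G_L, E_L = 281.0, 30.0, -70.6          # capacitance, leak conductance, rest
V_T, DELTA_T = -50.4, 2.0                 # threshold and slope factor
TAU_W, A, B = 144.0, 4.0, 80.5            # adaptation time constant, a, b
V_RESET, V_PEAK = -70.6, 0.0              # reset and spike-detection voltages
DT = 0.1                                  # integration step (ms)

def simulate(i_ext, t_max=500.0):
    """Integrate one AdEx neuron under constant input; return spike times in ms."""
    v, w, spikes = E_L, 0.0, []
    for step in range(int(t_max / DT)):
        dv = (-G_L * (v - E_L) + G_L * DELTA_T * np.exp((v - V_T) / DELTA_T)
              - w + i_ext) / C
        dw = (A * (v - E_L) - w) / TAU_W
        v += DT * dv
        w += DT * dw
        if v >= V_PEAK:                   # spike: reset voltage, bump adaptation
            v = V_RESET
            w += B
            spikes.append(step * DT)
    return spikes

# A constant 800 pA step drives adapting, regular spiking in this parameter set.
spikes = simulate(800.0)
print(len(spikes), "spikes; first few at", [round(t, 1) for t in spikes[:5]], "ms")
```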