7 research outputs found

    Dynamic image recognition in a spiking neuron network supplied by astrocytes

    Full text link
    A mathematical model of a spiking neuron network (SNN) supplied by astrocytes is investigated. Astrocytes are a specific type of brain cell that are not electrically excitable but induce chemical modulation of neuronal firing. We analyzed how astrocytes influence images encoded in the form of a dynamic spiking pattern of the SNN. Operating on a much slower time scale, the astrocytic network interacting with the spiking neurons can remarkably enhance the image recognition quality. The spiking dynamics were affected by noise distorting the informational image. We demonstrated that the activation of astrocytes can significantly suppress the influence of noise, improving the dynamic image representation by the SNN. (arXiv admin note: text overlap with arXiv:2210.0101)
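
    A minimal sketch of this kind of model, assuming leaky integrate-and-fire neurons and a single slow astrocytic gain variable; the function run_snn_with_astrocyte and all parameter values below are illustrative assumptions, not the paper's equations:

```python
# Sketch only: LIF neurons encode an image in spike rates; a slow astrocytic
# variable integrates firing and boosts input gain. All values are assumed.
import numpy as np

def run_snn_with_astrocyte(image, noise_std=0.3, T=200, dt=1.0):
    n = image.size
    v = np.zeros(n)               # membrane potentials (LIF neurons)
    ca = np.zeros(n)              # slow astrocytic calcium-like variable
    spikes = np.zeros((T, n))
    tau_v, tau_ca = 10.0, 100.0   # fast neuronal vs slow astrocytic time scales
    v_th, gain0, k_astro = 1.0, 1.0, 2.0
    rng = np.random.default_rng(0)
    x = image.ravel().astype(float)
    for t in range(T):
        gain = gain0 + k_astro * ca                   # astrocyte boosts input gain
        I = gain * x + rng.normal(0.0, noise_std, n)  # noisy encoded image
        v += dt * (-v / tau_v + I)
        fired = v >= v_th
        v[fired] = 0.0                                # reset after a spike
        spikes[t] = fired
        ca += dt * (-ca / tau_ca + 0.5 * fired)       # slow integration of firing
    return spikes.mean(axis=0).reshape(image.shape)   # rate-coded image readout

rates = run_snn_with_astrocyte(np.eye(8))             # toy 8x8 "image"
```

    The point of the sketch is the separation of time scales: tau_ca is an order of magnitude slower than tau_v, so the astrocytic gain tracks sustained, image-driven firing rather than fast noise.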

    Astrocyte control bursting mode of spiking neuron network with memristor-implemented plasticity

    Full text link
    A mathematical model of a spiking neuron network accompanied by astrocytes is considered. The network is composed of excitatory and inhibitory neurons with synaptic connections supplied by a memristor-based model of plasticity. Another mechanism for changing the synaptic connections involves astrocytic regulation, using the concept of the tripartite synapse. In the absence of memristor-based plasticity, the connections between these neurons drive the network dynamics into a burst mode, as observed in many experimental neurobiological studies investigating living networks in neuronal cultures. Memristive plasticity implemented in the inhibitory synapses results in a shift of the network dynamics towards an asynchronous mode. Next, it is found that accounting for astrocytic regulation in glutamatergic excitatory synapses enables the restoration of 'normal' burst dynamics. The conditions and parameters under which such astrocytic regulation impacts burst dynamics are established.
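
    As a rough illustration of how a memristor-style plasticity rule can be written, here is an assumed STDP-like update in which the synaptic weight is a bounded memristive conductance; memristive_update and its parameters are hypothetical, not the paper's memristor model:

```python
# Assumed STDP-like memristive rule: the weight is a conductance bounded in
# [w_min, w_max], driven by the pre/post spike-timing difference.
import numpy as np

def memristive_update(w, dt_spike, w_min=0.0, w_max=1.0,
                      a_plus=0.01, a_minus=0.012, tau=20.0):
    """Update conductance w given dt_spike = t_post - t_pre (in ms)."""
    if dt_spike > 0:   # pre before post: conductance (weight) increases
        dw = a_plus * np.exp(-dt_spike / tau) * (w_max - w)
    else:              # post before pre: conductance decreases
        dw = -a_minus * np.exp(dt_spike / tau) * (w - w_min)
    return float(np.clip(w + dw, w_min, w_max))

w = 0.5
for dt_spike in (5.0, -3.0, 12.0):   # a few pre/post timing differences
    w = memristive_update(w, dt_spike)
```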

    Dynamic Image Representation in a Spiking Neural Network Supplied by Astrocytes

    No full text
    A mathematical model of a spiking neural network (SNN) supplied by astrocytes is investigated. Astrocytes are a specific type of brain cell that are not electrically excitable but induce chemical modulation of neuronal firing. We analyze how astrocytes influence images encoded in the form of the dynamic spiking pattern of the SNN. Operating on a much slower time scale, the astrocytic network interacting with the spiking neurons can remarkably enhance the image representation quality. The spiking dynamics are affected by noise distorting the informational image. We demonstrate that the activation of astrocytes can significantly suppress the influence of noise, improving the dynamic image representation by the SNN.

    Artificial Neural Network Model with Astrocyte-Driven Short-Term Memory

    No full text
    In this study, we introduce an innovative hybrid artificial neural network model incorporating astrocyte-driven short-term memory. The model combines a convolutional neural network with dynamic models of short-term synaptic plasticity and astrocytic modulation of synaptic transmission. The model’s performance was evaluated using simulated data from visual change detection experiments conducted on mice. Comparisons were made between the proposed model, a recurrent neural network simulating short-term memory based on sustained neural activity, and a feedforward neural network with short-term synaptic depression (STPNet) trained to achieve the same performance level as the mice. The results revealed that incorporating astrocytic modulation of synaptic transmission enhanced the model’s performance.
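
    A minimal sketch of the two synaptic mechanisms such a model combines, assuming Tsodyks-Markram-style short-term depression plus a slow astrocytic boost of release probability; synaptic_trace and all constants are illustrative, not the published model:

```python
# Sketch: short-term synaptic depression (resource depletion + recovery)
# with a slow astrocytic variable that raises the release fraction.
import numpy as np

def synaptic_trace(spike_times, T=1000, dt=1.0, tau_rec=800.0,
                   U=0.5, tau_astro=5000.0, k_astro=0.4):
    x, a = 1.0, 0.0                 # synaptic resources, astrocytic activation
    efficacy = np.zeros(T)
    spikes = {int(t) for t in spike_times}
    for t in range(T):
        if t in spikes:
            u_eff = min(1.0, U * (1.0 + k_astro * a))  # astrocyte boosts release
            efficacy[t] = u_eff * x                    # effective synaptic weight
            x -= u_eff * x                             # depletion -> depression
            a += 0.1                                   # gliotransmitter buildup
        x += dt * (1.0 - x) / tau_rec                  # resource recovery
        a -= dt * a / tau_astro                        # slow astrocytic decay
    return efficacy

eff = synaptic_trace(spike_times=range(0, 1000, 50))   # 20 Hz presynaptic train
```

    The astrocytic variable accumulates over successive spikes and transiently raises the release fraction u_eff, mimicking astrocytic modulation of synaptic transmission on a seconds-long time scale.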

    Bi-directional astrocytic regulation of neuronal activity within a network

    No full text
    The concept of a tripartite synapse holds that astrocytes can affect both the pre- and postsynaptic compartments through the Ca2+-dependent release of gliotransmitters. Because astrocytic Ca2+ transients usually last for a few seconds, we assumed that astrocytic regulation of synaptic transmission may also occur on the scale of seconds. Here, we considered the basic physiological functions of tripartite synapses and investigated astrocytic regulation at the level of neural network activity. The firing dynamics of individual neurons in a spontaneously firing network were described by the Hodgkin-Huxley model. The neurons received excitatory synaptic input driven by a Poisson spike train with variable frequency. The mean-field concentration of the released neurotransmitter was used to describe the presynaptic dynamics. The amplitudes of the excitatory postsynaptic currents (PSCs) followed a gamma distribution. In our model, astrocytes depressed presynaptic release and enhanced the postsynaptic currents. As a result, low-frequency synaptic input was suppressed while high-frequency input was amplified. Analysis of the neuronal spiking frequency as an indicator of network activity revealed that tripartite synaptic transmission dramatically changed the local network operation compared to bipartite synapses. Specifically, the astrocytes supported homeostatic regulation of the network activity by increasing or decreasing the firing of the neurons. Thus, astrocyte activation may drive a transition of the neural network into a bistable regime of activity with two stable firing levels and spontaneous transitions between them.
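
    The net effect described here, presynaptic depression dominating at low input rates and postsynaptic enhancement at high rates, can be illustrated with a toy gain function; the functional form and constants in tripartite_gain are assumptions, not the paper's equations:

```python
# Toy frequency filter: astrocytic presynaptic depression dominates at low
# input rates, postsynaptic enhancement at high rates.
def tripartite_gain(rate_hz, f_half_pre=10.0, f_half_post=30.0,
                    depress=0.6, enhance=2.0):
    # presynaptic depression is strongest at low rates
    pre = 1.0 - depress / (1.0 + rate_hz / f_half_pre)
    # postsynaptic enhancement grows with sustained high rates
    post = 1.0 + enhance * rate_hz / (rate_hz + f_half_post)
    return pre * post

for f in (1, 5, 20, 50, 100):
    print(f, round(tripartite_gain(f), 2))
```

    With these illustrative constants, the net gain stays below 1 at a few hertz and exceeds 2 near 100 Hz, reproducing the suppress-low/amplify-high behavior described in the abstract.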

    Domain Adaptation Principal Component Analysis: Base Linear Method for Learning with Out-of-Distribution Data

    No full text
    Domain adaptation is a popular paradigm in modern machine learning which aims to tackle the problem of divergence (or shift) between the labeled training and validation datasets (the source domain) and a potentially large unlabeled dataset (the target domain). The task is to embed both datasets into a common space in which the source dataset is informative for training while the divergence between source and target is minimized. The most popular domain adaptation solutions are based on training neural networks that combine classification and adversarial learning modules, frequently making them both data-hungry and difficult to train. We present a method called Domain Adaptation Principal Component Analysis (DAPCA) that identifies a linear reduced data representation useful for solving the domain adaptation task. The DAPCA algorithm introduces positive and negative weights between pairs of data points and generalizes the supervised extension of principal component analysis. DAPCA is an iterative algorithm that solves a simple quadratic optimization problem at each iteration. The convergence of the algorithm is guaranteed, and the number of iterations is small in practice. We validate the suggested algorithm on previously proposed benchmarks for the domain adaptation task. We also show the benefit of using DAPCA in the analysis of single-cell omics datasets in biomedical applications. Overall, DAPCA can serve as a practical preprocessing step in many machine learning applications, leading to reduced dataset representations that take into account the possible divergence between the source and target domains.
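
    A rough sketch of a DAPCA-like iteration, simplified from this description; the specific weighting scheme, the nearest-neighbour matching of target points, and dapca_sketch itself are assumptions rather than the published algorithm:

```python
# Sketch: pairwise weights define a quadratic objective whose top eigenvectors
# give a linear projection; weights are re-estimated in the projected space.
import numpy as np

def dapca_sketch(Xs, ys, Xt, n_components=2, n_iter=5, alpha=1.0):
    X = np.vstack([Xs, Xt])
    X = X - X.mean(axis=0)
    ns = len(Xs)
    V = np.linalg.svd(X, full_matrices=False)[2][:n_components].T  # PCA init
    for _ in range(n_iter):
        # positive weights attract same-class source pairs,
        # negative weights repel different-class source pairs
        W = np.zeros((len(X), len(X)))
        same = ys[:, None] == ys[None, :]
        W[:ns, :ns] = np.where(same, 1.0, -alpha)
        # match each target point to its nearest source point in projection
        Z = X @ V
        d = ((Z[ns:, None, :] - Z[None, :ns, :]) ** 2).sum(-1)
        nn = d.argmin(axis=1)
        W[ns + np.arange(len(Xt)), nn] = 1.0
        W[nn, ns + np.arange(len(Xt))] = 1.0
        # quadratic step: top eigenvectors of the weighted scatter matrix
        L = np.diag(W.sum(axis=1)) - W
        eigval, eigvec = np.linalg.eigh(X.T @ L @ X)
        V = eigvec[:, -n_components:]
    return V

rng = np.random.default_rng(0)
Xs = rng.normal(size=(50, 5)); ys = rng.integers(0, 2, 50)
Xt = rng.normal(0.5, 1.0, size=(30, 5))   # shifted, unlabeled target data
V = dapca_sketch(Xs, ys, Xt)
```

    Each iteration solves a quadratic problem (an eigen-decomposition of a weighted scatter matrix) and then re-estimates the source-target pairing in the projected space, mirroring the iterative structure described in the abstract.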

    High-Dimensional Separability for One- and Few-Shot Learning

    No full text
    This work is driven by a practical question: the correction of Artificial Intelligence (AI) errors. These corrections should be quick and non-iterative. To solve this problem without modifying a legacy AI system, we propose special ‘external’ devices, correctors. Elementary correctors consist of two parts: a classifier that separates the situations with a high risk of error from the situations in which the legacy AI system works well, and a new decision that should be recommended for situations with potential errors. Input signals for the correctors can be the inputs of the legacy AI system, its internal signals, and its outputs. If the intrinsic dimensionality of the data is high enough, then the classifiers for correcting a small number of errors can be very simple. According to the blessing-of-dimensionality effects, even simple and robust Fisher’s discriminants can be used for one-shot learning of AI correctors. Stochastic separation theorems provide the mathematical basis for this one-shot learning. However, as the number of correctors needed grows, the cluster structure of the data becomes important and a new family of stochastic separation theorems is required. We reject the classical hypothesis of the regularity of the data distribution and assume that the data can have a rich fine-grained structure with many clusters and corresponding peaks in the probability density. New stochastic separation theorems for data with fine-grained structure are formulated and proved. On the basis of these theorems, multi-correctors for granular data are proposed. The advantages of the multi-corrector technology are demonstrated by examples of correcting errors and learning new classes of objects with a deep convolutional neural network on the CIFAR-10 dataset. The key problems of non-classical high-dimensional data analysis are reviewed together with the basic preprocessing steps, including the correlation transformation, supervised Principal Component Analysis (PCA), semi-supervised PCA, transfer component analysis, and the new domain adaptation PCA.
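
    A minimal sketch of an elementary corrector of the kind described here, assuming a regularized Fisher linear discriminant fitted on a handful of known errors; FisherCorrector and its interface are illustrative, not the authors' code:

```python
# Sketch: a Fisher linear discriminant separates "risky" inputs from inputs
# the legacy AI handles well; flagged inputs get routed to a new decision.
import numpy as np

class FisherCorrector:
    def fit(self, X_good, X_err, reg=1e-3):
        mu_g, mu_e = X_good.mean(0), X_err.mean(0)
        # within-class scatter, regularized for high-dimensional data
        Sw = np.cov(X_good.T) + np.cov(X_err.T) + reg * np.eye(X_good.shape[1])
        self.w = np.linalg.solve(Sw, mu_e - mu_g)   # Fisher direction
        self.b = self.w @ (mu_g + mu_e) / 2.0       # midpoint threshold
        return self

    def flags_error(self, x):
        return x @ self.w > self.b   # True -> use the corrected decision

rng = np.random.default_rng(1)
X_good = rng.normal(0.0, 1.0, size=(500, 50))   # features where legacy AI works
X_err = rng.normal(0.6, 1.0, size=(3, 50))      # a handful of known errors
corr = FisherCorrector().fit(X_good, X_err)
print(corr.flags_error(X_err[0]))
```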