10 research outputs found
Implementation of behavioral functions on spiking neural networks
The question of modeling the behavioral functions of animals, in particular the modeling and implementation of the conditioned reflex, is considered. An analysis of the current state of neural networks with the possibility of structural reconfiguration is carried out. The modeling is performed by means of neural networks built on the basis of a compartmental spiking model of a neuron capable of structural adaptation to the input pulse pattern. The compartmental spiking model of a neuron is able to change its structure (the size of the cell body, the number and length of dendrites, the number of synapses) depending on the pulse pattern arriving at its inputs. A brief description of the compartmental spiking model of a neuron is given, and its main features are noted in terms of the possibility of its structural reconfiguration. The method of structural adaptation of the compartmental spiking neuron model to the input pulse pattern is described. To study the behavior of the proposed neuron model in a network, the conditioned reflex is chosen and justified as an example, being a special case of the formation of associative connections. The structural scheme and the algorithm for forming a conditioned reflex with both positive and negative reinforcement are described. The article presents a step-by-step description of experiments on the formation of associative connections in general and of the conditioned reflex (with both positive and negative reinforcement) in particular. The conclusion is drawn that spiking compartmental models of neurons are promising for improving the efficiency of implementing behavioral functions in neuromorphic control systems. Further promising directions for the development of neuromorphic systems based on spiking compartmental neuron models are considered.
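For illustration only (this is not the authors' model), the following minimal Python sketch conveys the general idea of a spiking neuron whose structure adapts to its input pulse pattern: dendrites are added when the overall input drive is high and pruned when they stay silent. The class name, thresholds and adaptation rule are assumptions made for the example.

```python
import numpy as np

class CompartmentalNeuron:
    """Toy leaky integrate-and-fire neuron with a variable number of dendritic
    compartments; structure grows or shrinks with input activity (illustrative only)."""

    def __init__(self, n_dendrites=2, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak
        self.weights = [np.random.uniform(0.1, 0.3) for _ in range(n_dendrites)]
        self.potential = 0.0
        self.activity = [0.0] * n_dendrites   # running input activity per dendrite

    def step(self, spikes):
        """spikes: 0/1 input per dendrite for this time step; returns 1 on an output spike."""
        spikes = list(spikes)[: len(self.weights)]
        for i, s in enumerate(spikes):
            self.activity[i] = 0.95 * self.activity[i] + s
        self.potential = self.leak * self.potential + sum(
            w * s for w, s in zip(self.weights, spikes)
        )
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1
        return 0

    def adapt_structure(self, grow_at=5.0, prune_at=0.1):
        """Hypothetical structural adaptation: add a dendrite if overall drive is high,
        prune dendrites whose recent activity stays below prune_at."""
        if sum(self.activity) > grow_at:
            self.weights.append(0.1)
            self.activity.append(0.0)
        keep = [a >= prune_at for a in self.activity]
        if any(keep) and not all(keep):
            self.weights = [w for w, k in zip(self.weights, keep) if k]
            self.activity = [a for a, k in zip(self.activity, keep) if k]
```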
Learning stimulus-stimulus association in spatio-temporal neural networks
We propose stimulus-stimulus association learning by coupling firing-rate and precise-spike-timing encoding in spatio-temporal neural networks. We simulate a generic recurrent network with random, sparse connectivity consisting of Izhikevich spiking neurons. The magnitude of weight adjustment during learning depends on the pre- and postsynaptic spikes, based on their spike counts and time correlation. As a result of learning, synchronisation of activity among inter- and intra-subpopulation neurons demonstrates an association between the two stimuli. The association shows up as a spill-over of activity between the two stimuli involved.
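As a rough illustration of the ingredients the abstract mentions, the sketch below pairs the standard Izhikevich neuron update with a hypothetical weight rule whose magnitude depends on pre-/postsynaptic spike counts and their timing correlation; the constants and the exact form of `weight_update` are assumptions for the example, not the paper's rule.

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step of the standard Izhikevich neuron model; returns (v, u, spiked)."""
    v = v + dt * (0.04 * v ** 2 + 5 * v + 140 - u + I)
    u = u + dt * a * (b * v - u)
    spiked = v >= 30.0
    if spiked:
        v, u = c, u + d          # reset after a spike
    return v, u, spiked

def weight_update(w, pre_count, post_count, dt_spike, eta=0.01, tau=20.0):
    """Hypothetical rule in the spirit of the abstract: the change scales with the
    pre/post spike counts and decays with the pre-post spike-time difference."""
    timing = np.exp(-abs(dt_spike) / tau)      # time-correlation factor
    sign = 1.0 if dt_spike >= 0 else -0.5      # potentiate when pre precedes post
    return w + eta * sign * timing * pre_count * post_count
```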
Multi-objective evolutionary algorithms of spiking neural networks
The spiking neural network (SNN) is considered the third generation of artificial neural networks. Although there are many SNN models, the Evolving Spiking Neural Network (ESNN) is widely used in recent research. Among the important issues that need to be explored in ESNN is determining the optimal pre-synaptic neurons and parameter values for a given data set. Moreover, previous studies have not investigated the performance of a multi-objective approach with ESNN. In this study, the aim is to find the optimal pre-synaptic neurons and parameter values for ESNN simultaneously by proposing several integrations between ESNN and differential evolution (DE). The proposed algorithms include DE with evolving spiking neural network (DE-ESNN) and DE for parameter tuning with evolving spiking neural network (DEPT-ESNN). This study also utilized a multi-objective (MOO) approach with ESNN for a better learning structure and classification accuracy. Harmony Search (HS) and a memetic approach were used to improve the performance of MOO with ESNN. Consequently, Multi-Objective Differential Evolution with Evolving Spiking Neural Network (MODE-ESNN), Harmony Search Multi-Objective Differential Evolution with Evolving Spiking Neural Network (HSMODE-ESNN) and Memetic Harmony Search Multi-Objective Differential Evolution with Evolving Spiking Neural Network (MEHSMODE-ESNN) were applied to improve ESNN structure and accuracy rates. The hybrid methods were tested on seven benchmark data sets from the machine learning repository. Performance was evaluated using criteria such as accuracy (ACC), geometric mean (GM), sensitivity (SEN), specificity (SPE), positive predictive value (PPV), negative predictive value (NPV) and average site performance (ASP) under k-fold cross-validation. The evaluation shows that the proposed methods achieved better classification performance than the standard ESNN, especially on imbalanced data sets. The findings revealed that the MEHSMODE-ESNN method statistically outperformed all the other methods across the data sets and evaluation criteria, and the multi-objective variants proved to be the best of the proposed methods for most of the data sets used in this study. The findings show that the proposed algorithms attained the optimal pre-synaptic neurons and parameter values and that the MOO approach is applicable to ESNN.
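A minimal sketch of the DE component such an integration relies on is shown below: a DE/rand/1/bin loop that could tune ESNN parameters (e.g. modulation factor, firing-threshold fraction C, similarity threshold) against a user-supplied fitness function. The `evaluate_esnn` call in the usage comment is hypothetical; the paper's actual DE-ESNN/DEPT-ESNN procedures and their multi-objective variants are more involved.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, F=0.5, CR=0.9, gens=50):
    """Minimal DE/rand/1/bin loop for parameter tuning (sketch).
    bounds: list of (low, high) per parameter; fitness: higher is better."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = lo + np.random.rand(pop_size, dim) * (hi - lo)
    fit = np.array([fitness(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [k for k in range(pop_size) if k != i]
            a, b, c = pop[np.random.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # mutation
            cross = np.random.rand(dim) < CR                   # binomial crossover
            cross[np.random.randint(dim)] = True               # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial >= fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmax(fit)], fit.max()

# Hypothetical usage: tune ESNN's modulation factor, firing-threshold fraction C and
# similarity threshold against held-out classification accuracy.
# best_params, best_acc = differential_evolution(
#     lambda p: evaluate_esnn(*p),               # evaluate_esnn is assumed, not provided here
#     bounds=[(0.1, 0.99), (0.1, 0.9), (0.01, 0.5)])
```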
A review of learning in biologically plausible spiking neural networks
Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has progressed significantly in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), which draws more faithfully on biological properties to provide higher processing abilities. This paper presents a review of recent developments in learning for spiking neurons. First, the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm, such as the neuron model, synaptic plasticity, information encoding and SNN topologies, are then presented. A critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes follows. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
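As a concrete reference point for the synaptic-plasticity element discussed in such reviews, here is the textbook pair-based STDP window, for illustration only; the constants are typical values, not taken from the paper.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiation when the presynaptic spike precedes the
    postsynaptic spike (t_post >= t_pre), exponentially decaying depression otherwise."""
    dt = t_post - t_pre
    if dt >= 0:
        return A_plus * np.exp(-dt / tau)
    return -A_minus * np.exp(dt / tau)
```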
An online supervised learning method for spiking neural networks with adaptive structure
A novel online learning algorithm for Spiking Neural Networks (SNNs) with a dynamically adaptive structure is presented. The main contribution of this work lies in the fact that the proposed adaptive SNN is able to classify spike-based spatio-temporal inputs after just one presentation of the training set, i.e., in one pass only, and does not require the entire training set to be available at once. Both the structure and the weights of the SNN are learned dynamically through a combination of unsupervised and supervised learning paradigms. The proposed feed-forward SNN consists of three layers of spiking neurons: an input layer that temporally encodes real-valued features into spike-based spatio-temporal patterns, a hidden layer of dynamically grown and pruned neurons that performs spatio-temporal clustering, and an output layer for classification. An unsupervised spiking-based clustering algorithm is implemented by the hidden layer, whose spiking neurons are trained to compute a temporal Radial Basis Function (RBF) such that incoming inputs selectively activate hidden neurons based on how close the inputs are to the neurons' preferred inputs. The centre of each hidden RBF spiking neuron is represented by its time to first spike. In addition, a growing and pruning strategy is proposed to adjust the structure of the hidden layer on the fly as inputs are presented to the SNN. Both the weights and the centres of the hidden RBF neurons are learned in an unsupervised way, and classification at the output layer is achieved through supervised learning, where the learning windows proposed for STDP and anti-STDP are used to adjust the weights of the output neurons' afferent connections. Competition at both the hidden and the output layers is achieved through lateral inhibitory connections between the neurons of each layer. The proposed online learning algorithm is validated on several benchmark datasets. The evaluation results demonstrate that SNNs trained with the proposed approach require only one pass through the training set in order to classify the inputs with accuracies comparable to existing SNN-based approaches as well as traditional representative classifiers.
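To make the growing-and-pruning idea concrete, the following Python sketch mimics a hidden layer of temporal-RBF-like units whose centres are preferred spike-time vectors: a unit is added when an input is far from every centre, the nearest centre otherwise moves toward the input, and long-unused units are pruned. The class and parameter names are invented for the example, and the rule is far simpler than the paper's STDP/anti-STDP-based method.

```python
import numpy as np

class AdaptiveHiddenLayer:
    """Sketch of an on-the-fly grown and pruned hidden layer of temporal-RBF-like units.
    Each unit's centre is a vector of preferred spike times (illustrative only)."""

    def __init__(self, grow_thresh=5.0, prune_after=50, lr=0.1):
        self.centres, self.age = [], []
        self.grow_thresh, self.prune_after, self.lr = grow_thresh, prune_after, lr

    def present(self, spike_times):
        """Match one input pattern (vector of spike times); grow a unit if needed.
        Returns the index of the winning unit."""
        x = np.asarray(spike_times, dtype=float)
        if not self.centres:
            self.centres.append(x.copy())
            self.age.append(0)
            return 0
        dists = [np.linalg.norm(x - c) for c in self.centres]
        j = int(np.argmin(dists))
        if dists[j] > self.grow_thresh:            # far from every centre: grow a new unit
            self.centres.append(x.copy())
            self.age.append(0)
            j = len(self.centres) - 1
        else:                                      # nudge the nearest centre toward the input
            self.centres[j] += self.lr * (x - self.centres[j])
        self.age = [0 if k == j else a + 1 for k, a in enumerate(self.age)]
        return j

    def prune(self):
        """Drop units that have not won for prune_after consecutive patterns."""
        keep = [a < self.prune_after for a in self.age]
        self.centres = [c for c, k in zip(self.centres, keep) if k]
        self.age = [a for a, k in zip(self.age, keep) if k]
```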