Artificial Neural Network Methods in Quantum Mechanics
In a previous article we showed how Artificial Neural Networks (ANNs) can be
employed to solve non-homogeneous ordinary and partial differential
equations. In the present work we consider the solution of
eigenvalue problems for differential and integrodifferential operators, using
ANNs. We start by considering the Schrödinger equation for the Morse
potential, which has an analytically known solution, to test the accuracy of
the method. We then proceed with the Schrödinger and Dirac equations for a
muonic atom, as well as with a non-local Schrödinger integrodifferential
equation that models the system in the framework of the resonating
group method. In two dimensions we consider the well-studied Hénon-Heiles
Hamiltonian, and in three dimensions the model problem of three coupled
Hamiltonian and in three dimensions the model problem of three coupled
anharmonic oscillators. In all of the treated cases the method proved to be
highly accurate, robust, and efficient; hence it is a promising tool for
tackling problems of higher complexity and dimensionality.
Comment: LaTeX file, 29 pages, 11 PS figures, submitted to CP
Signal Propagation in Feedforward Neuronal Networks with Unreliable Synapses
In this paper, we systematically investigate both synfire propagation and
firing-rate propagation in feedforward neuronal networks coupled in an
all-to-all fashion. In contrast to most earlier work, which considered only
reliable synaptic connections, here we mainly examine the effects of
unreliable synapses on both types of neural activity propagation.
We first study networks composed of purely excitatory neurons. Our results
show that both the successful transmission probability and the excitatory
synaptic strength strongly influence the propagation of these two types of
neural activity, and that suitable tuning of these synaptic parameters
enables the network to support stable signal propagation. It is also found
that noise has significant but different impacts on the two types of
propagation: additive Gaussian white noise tends to reduce the precision of
synfire activity, whereas noise of appropriate intensity can enhance the
performance of firing-rate propagation. Further simulations indicate that the
propagation dynamics of the network is not determined simply by the average
amount of neurotransmitter received by each neuron at a given instant, but is
also strongly influenced by the stochasticity of neurotransmitter release.
Second, we compare our results with those obtained in corresponding
feedforward neuronal networks connected with reliable synapses but coupled in
a random fashion, and confirm that observable differences arise between these
two network models. Finally,
we study the signal propagation in feedforward neuronal networks consisting of
both excitatory and inhibitory neurons, and demonstrate that inhibition also
plays an important role in signal propagation in the considered networks.
Comment: 33 pages, 16 figures; Journal of Computational Neuroscience
(published)
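As a concrete illustration of the model class (a minimal sketch; the LIF
parameters, layer sizes, release probability, and noise amplitude below are
illustrative assumptions, not values from the paper), synaptic unreliability
can be modeled by drawing an independent Bernoulli release event for every
synapse whenever its presynaptic neuron spikes:

import numpy as np

rng = np.random.default_rng(0)
n_layers, n = 10, 100                 # layers and neurons per layer
p_release, w = 0.5, 0.012             # transmission probability, synaptic strength
tau, v_th, dt, T = 10.0, 1.0, 0.1, 50.0   # LIF time constant, threshold (ms units)

v = np.zeros((n_layers, n))           # membrane potentials
spikes = np.zeros((n_layers, n), dtype=bool)
counts = np.zeros(n_layers)

for t in range(int(T / dt)):
    new = np.zeros_like(spikes)
    new[0] = rng.random(n) < 0.02     # stochastic drive to the input layer
    for l in range(1, n_layers):
        # every synapse transmits independently with probability p_release
        release = rng.random((n, n)) < p_release
        transmitted = release & spikes[l - 1][:, None]
        inp = w * transmitted.sum(axis=0)          # summed input per postsyn neuron
        noise = 0.05 * np.sqrt(dt) * rng.standard_normal(n)  # additive white noise
        v[l] += dt * (-v[l] / tau) + inp + noise
        new[l] = v[l] >= v_th
        v[l][new[l]] = 0.0            # reset after a spike
    spikes = new
    counts += new.sum(axis=1)

print("spikes per neuron in each layer:", counts / n)

Sweeping p_release and w in such a sketch lets one probe the qualitative
point of the abstract: propagation depends not only on the mean delivered
input but also on the trial-to-trial stochasticity of release.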
Decorrelation of neural-network activity by inhibitory feedback
Correlations in spike-train ensembles can seriously impair the encoding of
information by their spatio-temporal structure. An inevitable source of
correlation in finite neural networks is common presynaptic input to pairs of
neurons. Recent theoretical and experimental studies demonstrate that spike
correlations in recurrent neural networks are considerably smaller than
expected based on the amount of shared presynaptic input. By means of a linear
network model and simulations of networks of leaky integrate-and-fire neurons,
we show that shared-input correlations are efficiently suppressed by inhibitory
feedback. To elucidate the effect of feedback, we compare the responses of the
intact recurrent network and systems where the statistics of the feedback
channel is perturbed. The suppression of spike-train correlations and
population-rate fluctuations by inhibitory feedback can be observed both in
purely inhibitory and in excitatory-inhibitory networks. The effect is fully
captured by a linear theory and is already apparent at the macroscopic level
of the population-averaged activity. At the microscopic level,
shared-input correlations are suppressed by spike-train correlations: In purely
inhibitory networks, they are canceled by negative spike-train correlations. In
excitatory-inhibitory networks, spike-train correlations are typically
positive. Here, the suppression of input correlations is not a result of the
mere existence of correlations between excitatory (E) and inhibitory (I)
neurons, but a consequence of a particular structure of correlations among the
three possible pairings (EE, EI, II).
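The role of the feedback can be illustrated with a toy linear rate model (a
deliberately simplified sketch; the gain, time constant, and noise sources
are illustrative assumptions, far simpler than the spiking networks studied
here). With instantaneous inhibitory feedback of the population mean, solving
x = u - g * mean(x) gives mean(x) = mean(u) / (1 + g), so shared-input
fluctuations are suppressed by the loop gain:

import numpy as np

rng = np.random.default_rng(1)
n, steps, g = 200, 20000, 5.0          # neurons, time steps, feedback gain

shared = rng.standard_normal(steps)            # common input to all neurons
private = rng.standard_normal((steps, n))      # independent noise per neuron

def mean_pairwise_corr(feedback):
    x = np.zeros((steps, n))
    for t in range(1, steps):
        u = 0.9 * x[t - 1] + shared[t] + private[t]   # leaky linear dynamics
        if feedback:
            # inhibitory feedback of the population mean, solved instantaneously
            x[t] = u - g * u.mean() / (1.0 + g)
        else:
            x[t] = u                                  # opened feedback loop
    c = np.corrcoef(x[1000:].T)                       # discard the transient
    return c[np.triu_indices(n, 1)].mean()

print("open loop  :", mean_pairwise_corr(False))   # large: shared input dominates
print("closed loop:", mean_pairwise_corr(True))    # strongly suppressed

Despite its simplicity, the sketch mirrors the macroscopic statement of the
abstract: negative feedback damps population-rate fluctuations, and with them
the correlations inherited from shared input.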
Convolutional Recurrent Neural Networks for Polyphonic Sound Event Detection
Sound events often occur in unstructured environments where they exhibit wide
variations in their frequency content and temporal structure. Convolutional
neural networks (CNNs) are able to extract higher-level features that are
invariant to local spectral and temporal variations. Recurrent neural networks
(RNNs) are powerful in learning the longer-term temporal context in audio
signals. Used as classifiers, CNNs and RNNs have recently shown improved
performance over established methods in various sound recognition tasks. We combine these
two approaches in a Convolutional Recurrent Neural Network (CRNN) and apply it
to a polyphonic sound event detection task. We compare the performance of the
proposed CRNN method with CNN, RNN, and other established methods, and observe
a considerable improvement for four different datasets consisting of everyday
sound events.
Comment: Accepted for IEEE Transactions on Audio, Speech and Language
Processing, Special Issue on Sound Scene and Event Analysis
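For orientation, a CRNN of this general shape can be sketched as follows (a
minimal illustration, not the paper's architecture; the filter counts,
pooling sizes, recurrent width, and class count are assumptions). The
convolutions pool only along frequency so the time resolution of the
frame-wise labels is preserved, a recurrent layer adds longer-term temporal
context, and sigmoid outputs allow overlapping (polyphonic) events:

import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=40, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 4)),            # pool frequency only, keep all frames
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 5)),
        )
        freq_out = n_mels // 4 // 5          # 40 -> 10 -> 2 frequency bins
        self.rnn = nn.GRU(64 * freq_out, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                    # x: (batch, 1, frames, n_mels)
        h = self.conv(x)                     # (batch, 64, frames, freq_out)
        h = h.permute(0, 2, 1, 3).flatten(2) # (batch, frames, 64 * freq_out)
        h, _ = self.rnn(h)                   # temporal context across frames
        return torch.sigmoid(self.head(h))   # frame-wise class activities in [0, 1]

model = CRNN()
x = torch.randn(8, 1, 256, 40)               # dummy batch: 8 clips of 256 frames
y = (torch.rand(8, 256, 6) > 0.9).float()    # dummy multi-label frame targets
loss = nn.functional.binary_cross_entropy(model(x), y)

Because several events can be active in the same frame, the task is
multi-label, which is why the output layer uses independent sigmoids trained
with binary cross-entropy rather than a softmax.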