Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed
The motor theory of speech perception holds that we perceive the speech of
another in terms of a motor representation of that speech. However, when we
have learned to recognize a foreign accent, it seems plausible that recognition
of a word rarely involves reconstruction of the speech gestures of the speaker
rather than the listener. To better assess the motor theory and this
observation, we proceed in three stages. Part 1 places the motor theory of
speech perception in a larger framework based on our earlier models of the
adaptive formation of mirror neurons for grasping, and for viewing extensions
of that mirror system as part of a larger system for neuro-linguistic
processing, augmented by the present consideration of recognizing speech in a
novel accent. Part 2 then offers a novel computational model of how a listener
comes to understand the speech of someone speaking the listener's native
language with a foreign accent. The core tenet of the model is that the
listener uses hypotheses about the word the speaker is currently uttering to
update probabilities linking the sound produced by the speaker to phonemes in
the native language repertoire of the listener. This, on average, improves the
recognition of later words. This model is neutral regarding the nature of the
representations it uses (motor vs. auditory). It serves as a reference point for
the discussion in Part 3, which proposes a dual-stream neuro-linguistic
architecture to revisit claims for and against the motor theory of speech
perception and the relevance of mirror neurons, and extracts some implications
for the reframing of the motor theory.
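
The core tenet of Part 2 lends itself to a simple probabilistic sketch: treat the accented sound-to-phoneme mapping as a confusion matrix and re-estimate it from word hypotheses, so that later words are scored against the updated mapping. The snippet below is a minimal illustration under assumptions of our own (a fixed phoneme inventory, smoothed counts, one-to-one sound/phoneme alignment); it is not the paper's model and, like it, is neutral about motor vs. auditory representations.

```python
class AccentAdapter:
    """Toy word-hypothesis update of a sound-to-phoneme confusion model.
    Illustrative sketch only; not the model from the paper."""

    def __init__(self, phonemes, smoothing=1.0):
        self.phonemes = list(phonemes)
        self.smoothing = smoothing
        self.counts = {}  # heard sound -> {native phoneme: pseudo-count}

    def _row(self, sound):
        if sound not in self.counts:
            self.counts[sound] = {p: self.smoothing for p in self.phonemes}
        return self.counts[sound]

    def update(self, heard_sounds, hypothesized_phonemes):
        # The hypothesis about the word being uttered aligns each heard sound
        # with an intended native phoneme; accumulate the co-occurrences.
        for sound, phoneme in zip(heard_sounds, hypothesized_phonemes):
            self._row(sound)[phoneme] += 1.0

    def prob(self, phoneme, sound):
        # P(native phoneme | heard sound), estimated from smoothed counts.
        row = self._row(sound)
        return row[phoneme] / sum(row.values())

    def score_word(self, heard_sounds, candidate_phonemes):
        # Likelihood that the heard sounds realize the candidate word.
        p = 1.0
        for sound, phoneme in zip(heard_sounds, candidate_phonemes):
            p *= self.prob(phoneme, sound)
        return p


if __name__ == "__main__":
    adapter = AccentAdapter(phonemes=["s", "z", "i", "o", "t"])
    # Hypothetical accent feature: the speaker produces [z] where the
    # listener expects /s/. Each recognized word updates the mapping ...
    adapter.update(heard_sounds=["z", "i", "t"], hypothesized_phonemes=["s", "i", "t"])
    adapter.update(heard_sounds=["z", "o"], hypothesized_phonemes=["s", "o"])
    # ... so later words containing [z] are more readily mapped to /s/.
    print(f"P(/s/ | [z]) = {adapter.prob('s', 'z'):.2f}")
    print(f"score for 'sit': {adapter.score_word(['z', 'i', 't'], ['s', 'i', 't']):.3f}")
```
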
Temporal Correlations of Local Network Losses
We introduce a continuum model describing data losses in a single node of a
packet-switched network (like the Internet) which preserves the discrete nature
of the data loss process. By construction, the model has critical
behavior with a sharp transition from exponentially small to finite losses with
increasing data arrival rate. We show that such a model exhibits strong
fluctuations in the loss rate at the critical point and non-Markovian power-law
correlations in time, in spite of the Markovian character of the data arrival
process. The continuum model allows for rather general incoming data packet
distributions and can be naturally generalized to consider the buffer server
idleness statistics.
Random Walks in Local Dynamics of Network Losses
We suggest a model for data losses in a single node of a packet-switched
network (like the Internet) which reduces to one-dimensional discrete random
walks with unusual boundary conditions. The model shows critical behavior with
an abrupt transition from exponentially small to finite losses as the data
arrival rate increases. The critical point is characterized by strong
fluctuations of the loss rate. Although we consider the packet arrival to be a
Markovian process, the loss rate exhibits non-Markovian power-law correlations
in time at the critical point.
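
As a rough illustration of the transition described in these two abstracts, one can let the buffer occupancy of a single node perform a biased random walk (up on arrivals, down on service) with packets lost at the full-buffer boundary. The discretized arrival process, buffer size, and other parameters below are assumptions for illustration, not the papers' exact boundary conditions.

```python
import random

def loss_fraction(arrival_rate, buffer_size=100, steps=200_000, seed=0):
    """Single-node toy model: buffer occupancy walks up on arrivals and down by
    one served packet per time step; packets arriving at a full buffer are lost.
    Illustrative only; not the exact model of the papers above."""
    rng = random.Random(seed)
    occupancy, arrived, lost = 0, 0, 0
    for _ in range(steps):
        # Roughly Poisson arrivals with mean `arrival_rate` per service slot.
        arrivals = sum(rng.random() < arrival_rate / 10 for _ in range(10))
        arrived += arrivals
        for _ in range(arrivals):
            if occupancy < buffer_size:
                occupancy += 1
            else:
                lost += 1              # dropped at the full-buffer boundary
        if occupancy > 0:
            occupancy -= 1             # one packet served per time step
    return lost / max(arrived, 1)


if __name__ == "__main__":
    # Losses stay exponentially small below the critical arrival rate
    # (one packet per service slot) and become finite above it.
    for lam in (0.8, 0.95, 1.0, 1.05, 1.2):
        print(f"arrival rate {lam:.2f}: loss fraction {loss_fraction(lam):.4f}")
```
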
A new class of symbolic abstract neural nets
Starting from the way the inter-cellular communication takes place by means of protein channels and also from the standard knowledge about neuron functioning, we propose a computing model called a tissue P system, which processes symbols in a multiset rewriting sense, in a net of cells similar to a neural net. Each cell has a finite state memory, processes multisets of symbol-impulses, and can send impulses ("excitations") to the neighboring cells. Such cell nets are shown to be rather powerful: they can simulate a Turing machine even when using a small number of cells, each of them having a small number of states. Moreover, in the case when each cell works in the maximal manner and can excite all the cells to which it can send impulses, one can easily solve the Hamiltonian Path Problem in linear time. A new characterization of the Parikh images of ET0L languages is also obtained in this framework.
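
For intuition, one computation step of such a cell net can be sketched as below. The rule format (state, consumed multiset, new state, kept multiset, sent multiset) and the sequential mode are simplifying assumptions for illustration, not the formal definition of tissue P systems.

```python
from collections import Counter

def step(cells, rules, links):
    """One computation step of a toy cell net: each cell applies at most one
    applicable rule (a sequential-mode simplification), then emitted impulses
    are delivered along the links."""
    outbox = {name: Counter() for name in cells}
    for name, cell in cells.items():
        for state, consumed, new_state, kept, sent in rules.get(name, []):
            applicable = (cell["state"] == state and
                          all(cell["contents"][s] >= n for s, n in consumed.items()))
            if applicable:
                cell["contents"] -= Counter(consumed)   # consume symbol-impulses
                cell["contents"] += Counter(kept)       # keep some symbols inside
                cell["state"] = new_state               # finite-state memory update
                outbox[name] += Counter(sent)           # impulses for the neighbours
                break
    for src, impulses in outbox.items():
        for dst in links.get(src, []):
            cells[dst]["contents"] += impulses          # excite the neighbouring cells
    return cells


if __name__ == "__main__":
    cells = {"c1": {"state": "s0", "contents": Counter("aa")},
             "c2": {"state": "s0", "contents": Counter()}}
    rules = {"c1": [("s0", {"a": 1}, "s0", {}, {"b": 1})],   # consume a, emit b
             "c2": [("s0", {"b": 1}, "s1", {"b": 1}, {})]}   # absorb b, change state
    links = {"c1": ["c2"], "c2": []}
    for _ in range(3):
        step(cells, rules, links)
    print(cells["c2"])
```
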
Spring School on Language, Music, and Cognition: Organizing Events in Time
The interdisciplinary spring school "Language, music, and cognition: Organizing events in time" was held from February 26 to March 2, 2018 at the Institute of Musicology of the University of Cologne. Language, speech, and music as events in time were explored from different perspectives including evolutionary biology, social cognition, developmental psychology, cognitive neuroscience of speech, language, and communication, as well as computational and biological approaches to language and music. There were 10 lectures, 4 workshops, and 1 student poster session.
Overall, the spring school investigated language and music as neurocognitive systems and focused on a mechanistic approach exploring the neural substrates underlying musical, linguistic, social, and emotional processes and behaviors. In particular, researchers approached questions concerning cognitive processes, computational procedures, and neural mechanisms underlying the temporal organization of language and music, mainly from two perspectives: one was concerned with syntax or structural representations of language and music as neurocognitive systems (i.e., an intrapersonal perspective), while the other emphasized social interaction and emotions in their communicative function (i.e., an interpersonal perspective). The spring school not only acted as a platform for knowledge transfer and exchange but also generated a number of important research questions as challenges for future investigations.
Analysis of Oscillator Neural Networks for Sparsely Coded Phase Patterns
We study a simple extended model of oscillator neural networks capable of
storing sparsely coded phase patterns, in which information is encoded both in
the mean firing rate and in the timing of spikes. Applying the methods of
statistical neurodynamics to our model, we theoretically investigate the
model's associative memory capability by evaluating its maximum storage
capacities and deriving its basins of attraction. It is shown that, as in the
Hopfield model, the storage capacity diverges as the activity level decreases.
We consider various practically and theoretically important cases. For example,
it is revealed that a dynamically adjusted threshold mechanism enhances the
retrieval ability of the associative memory. It is also found that, under
suitable conditions, the network can recall patterns even in the case that
patterns with different activity levels are stored at the same time. In
addition, we examine the robustness with respect to damage of the synaptic
connections. The validity of these theoretical results is confirmed by
reasonable agreement with numerical simulations.
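
For orientation, the simplest (densely coded) version of such an oscillator associative memory can be sketched with complex Hebbian couplings and a phase-update rule. The sparse coding, dynamic threshold, and analytical capacity results of the paper are not reproduced here; the parameter choices are assumptions for illustration.

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian-style complex couplings J_ij = (1/N) sum_mu z_i^mu conj(z_j^mu),
    with z_i^mu = exp(i * xi_i^mu). Densely coded sketch; the paper's model
    additionally encodes an activity level (firing vs. non-firing)."""
    z = np.exp(1j * patterns)                       # shape (P, N)
    n = patterns.shape[1]
    J = (z.T @ z.conj()) / n
    np.fill_diagonal(J, 0.0)
    return J

def recall(J, phases, steps=50):
    """Relax each phase toward the angle of its local field h_i = sum_j J_ij z_j."""
    for _ in range(steps):
        phases = np.angle(J @ np.exp(1j * phases))
    return phases

def overlap(phases, pattern):
    """|m| = |(1/N) sum_i exp(i (phi_i - xi_i))|: 1 means perfect phase recall."""
    return abs(np.mean(np.exp(1j * (phases - pattern))))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 500, 10
    patterns = rng.uniform(0.0, 2.0 * np.pi, size=(p, n))
    J = store_patterns(patterns)
    cue = patterns[0] + rng.normal(0.0, 0.5, n)     # corrupted cue for pattern 0
    print(f"overlap after recall: {overlap(recall(J, cue), patterns[0]):.3f}")
```
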
A Markovian event-based framework for stochastic spiking neural networks
In spiking neural networks, the information is conveyed by the spike times,
which depend on the intrinsic dynamics of each neuron, the input it receives
and the connections between neurons. In this article we study the Markovian
nature of the sequence of spike times in stochastic neural networks, and in
particular the ability to deduce from a spike train the next spike time, and
therefore produce a description of the network activity based only on the spike
times, regardless of the membrane potential process.
To study this question in a rigorous manner, we introduce and study an
event-based description of networks of noisy integrate-and-fire neurons, i.e.,
one that is based on the computation of the spike times. We show that the firing
times of the neurons in the network constitute a Markov chain, whose
transition probability is related to the probability distribution of the
interspike interval of the neurons in the network. In the cases where the
Markovian model can be developed, the transition probability is explicitly
derived for such classical neural network models as the linear
integrate-and-fire neuron with excitatory and inhibitory interactions,
for different types of synapses, possibly featuring noisy synaptic integration,
transmission delays, and absolute and relative refractory periods. This covers
most of the cases that have been investigated in the event-based description of
spiking deterministic neural networks.
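
A numerical counterpart of this event-based description, for a small network of noisy linear integrate-and-fire neurons with instantaneous synapses, might look like the sketch below. All parameter values are assumptions, and the spike-time sequence is obtained by brute-force simulation rather than from the analytically derived transition kernel.

```python
import numpy as np

def event_sequence(weights, steps=200_000, dt=1e-4, tau=0.02, v_th=1.0,
                   v_reset=0.0, drive=0.9, sigma=1.0, seed=0):
    """Euler-Maruyama simulation of noisy leaky integrate-and-fire neurons,
    returning only the sequence of (spike time, neuron index) events, i.e.
    the event-based view of the network activity."""
    rng = np.random.default_rng(seed)
    n = weights.shape[0]
    v = np.zeros(n)
    events = []
    for k in range(steps):
        # Leaky integration toward `drive` plus additive white noise.
        v += dt * (drive - v) / tau + sigma * np.sqrt(dt) * rng.standard_normal(n)
        fired = np.flatnonzero(v >= v_th)
        if fired.size:
            events.extend((k * dt, int(i)) for i in fired)
            v[fired] = v_reset                      # reset the neurons that spiked
            v += weights[:, fired].sum(axis=1)      # instantaneous synaptic kicks
    return events


if __name__ == "__main__":
    w = 0.05 * (np.ones((3, 3)) - np.eye(3))        # three weakly coupled excitatory neurons
    events = event_sequence(w)
    print(f"{len(events)} spikes; first five events: {events[:5]}")
```
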
An excitable electronic circuit as a sensory neuron model
An electronic circuit device, inspired by the FitzHugh-Nagumo model of
neuronal excitability, was constructed and shown to operate with
characteristics compatible with those of biological sensory neurons. The
nonlinear dynamical model of the electronics quantitatively reproduces the
experimental observations on the circuit, including the Hopf bifurcation at the
onset of tonic spiking. Moreover, we have implemented an analog noise generator
as a source to study the variability of the spike trains. When the circuit is
in the excitable regime, coherence resonance is observed. At sufficiently low
noise intensity the spike trains have Poisson statistics, as in many biological
neurons. The transfer function of the stochastic spike trains has a dynamic
range of 6 dB, close to experimental values for real olfactory receptor
neurons.
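
The qualitative behavior described above can be explored in simulation with the standard FitzHugh-Nagumo equations plus additive noise. The parameter values below are generic textbook choices placing the model in the excitable regime, not those of the circuit in the paper.

```python
import numpy as np

def fhn_spike_times(noise_std, a=0.7, b=0.8, eps=0.08, I=0.3, dt=0.01,
                    steps=300_000, seed=0):
    """Stochastic FitzHugh-Nagumo model integrated with Euler-Maruyama.
    Returns spike times detected as upward crossings of the fast variable.
    Illustrative parameters; not fitted to the electronic circuit."""
    rng = np.random.default_rng(seed)
    v, w = -1.2, -0.6                      # start near the resting state
    spikes, above = [], False
    for k in range(steps):
        dv = (v - v**3 / 3 - w + I) * dt + noise_std * np.sqrt(dt) * rng.standard_normal()
        dw = eps * (v + a - b * w) * dt
        v, w = v + dv, w + dw
        if v > 1.0 and not above:          # upward threshold crossing = one spike
            spikes.append(k * dt)
            above = True
        elif v < 0.0:
            above = False
    return np.array(spikes)


if __name__ == "__main__":
    # Coherence resonance: the interspike-interval variability (CV) is smallest
    # at an intermediate noise intensity.
    for sigma in (0.05, 0.2, 0.8):
        isi = np.diff(fhn_spike_times(sigma))
        if isi.size > 1:
            print(f"noise {sigma}: {isi.size + 1} spikes, CV = {isi.std() / isi.mean():.2f}")
        else:
            print(f"noise {sigma}: too few spikes to estimate CV")
```
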
Dynamical principles in neuroscience
Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

This work was supported by NSF Grant No. NSF/EIA-0130708, and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276, and Fundación BBVA.
- …