Modified Spatio-Temporal Matched Filtering for Brain Responses Classification
In this article, we apply spatio-temporal matched filtering (STF) to electroencephalographic (EEG) data processing for the classification of brain responses. The method operates similarly to linear discriminant analysis (LDA), but unlike most commonly applied classifiers it uses the whole recorded EEG signal as its source of information rather than only precisely selected brain responses. In this way it avoids the limitations of LDA and improves classification accuracy. We emphasize the significance of the STF learning phase. To prevent super-Gaussian artifacts from degrading this phase, we reject them with a method based on the discrete cosine transform (DCT). We then estimate the noise covariance matrix from all available data and improve the construction of the STF template. Further modifications concern the operation of the constructed filters and consist of changes to the STF interpretation rules. The result is a new tool for evoked potential (EP) classification. Applied to signals from a publicly available database prepared for the assessment of modern EP detection algorithms (within the 2019 IFMBE Scientific Challenge), it achieved the second-best result, very close to the best one and significantly better than those of the other contestants in the challenge.
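As a rough illustration of the matched-filtering idea described above, the following sketch shows a generic noise-whitened spatio-temporal matched filter in Python (NumPy). The variable names, the regularisation term, and the flattened-template formulation are illustrative assumptions, not the authors' exact STF construction or interpretation rules.

```python
import numpy as np

# Minimal sketch of a noise-whitened spatio-temporal matched filter for EP
# detection. The regularisation constant and the flattened (channels x samples)
# template representation are illustrative assumptions.

def build_matched_filter(template, noise_cov):
    """Build a matched filter from an EP template (channels x samples) and a
    noise covariance estimated over the flattened spatio-temporal dimension."""
    s = template.reshape(-1)                       # flatten the template
    R = noise_cov + 1e-6 * np.eye(noise_cov.shape[0])  # keep the inverse well-conditioned
    w = np.linalg.solve(R, s)                      # w = R^{-1} s
    return w / (s @ w)                             # normalise so the template itself scores ~1

def filter_output(w, segment):
    """Score one EEG segment (same channels x samples shape as the template)."""
    return float(w @ segment.reshape(-1))
```

A segment is then scored by this whitened correlation with the template, and the score is compared against a threshold or against the scores of competing class templates.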
Towards a Unified Theory of Neocortex: Laminar Cortical Circuits for Vision and Cognition
A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
General-purpose and special-purpose visual systems
The information that eyes supply supports a wide variety of functions, from the guidance systems that enable an animal to navigate successfully around the environment, to the detection and identification of predators, prey, and conspecifics. The eyes with which we are most familiar (the single-chambered eyes of vertebrates and cephalopod molluscs, and the compound eyes of insects and higher crustaceans) allow these animals to perform the full range of visual tasks. These eyes have evidently evolved in conjunction with brains that are capable of subjecting the raw visual information to many different kinds of analysis, depending on the nature of the task that the animal is engaged in. However, not all eyes evolved to provide such comprehensive information. For example, in bivalve molluscs we find eyes of very varied design (pinholes, concave mirrors, and apposition compound eyes) whose only function is to detect approaching predators and thereby allow the animal to protect itself by closing its shell. Thus, there are special-purpose eyes as well as eyes with multiple functions.
Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding
Precise spike timing as a means of encoding information in neural networks is
biologically supported, and has the advantage over frequency-based codes of
processing input features on a much shorter time-scale. For these reasons, much
recent attention has been focused on the development of supervised learning
rules for spiking neural networks that utilise a temporal coding scheme.
However, despite significant progress in this area, there is still a lack of
rules that have a theoretical basis and yet can be considered biologically relevant. Here
we examine the general conditions under which synaptic plasticity most
effectively takes place to support the supervised learning of a precise
temporal code. As part of our analysis we examine two spike-based learning
methods: one relying on an instantaneous error signal to modify synaptic
weights in a network (the INST rule), and the other on a filtered error
signal for smoother synaptic weight modifications (the FILT rule). We test
the accuracy of the solutions provided by each rule with respect to their
temporal encoding precision, and then measure the maximum number of input
patterns they can learn to memorise using the precise timings of individual
spikes as an indication of their storage capacity. Our results demonstrate the
high performance of FILT in most cases, underpinned by the rule's
error-filtering mechanism, which is predicted to provide smooth convergence
towards a desired solution during learning. We also find FILT to be the most
efficient at memorising input patterns, most noticeably when patterns are
identified using spikes with sub-millisecond temporal precision.
In comparison with existing work, we determine the performance of FILT to be
consistent with that of the highly efficient E-learning Chronotron, but with
the distinct advantage that FILT is also implementable as an online method for
increased biological realism.
Comment: 26 pages, 10 figures; this version is published in PLoS ONE and incorporates reviewer comments.
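To make the distinction between the two error signals concrete, the sketch below contrasts a generic instantaneous weight update with an exponentially filtered one in Python (NumPy). The presynaptic trace, exponential kernel, time constant, and learning rate are illustrative assumptions; the paper's INST and FILT rules are derived from specific neuron and synapse models not reproduced here.

```python
import numpy as np

# Illustrative contrast between an instantaneous and a filtered spike-based
# error signal. All constants below are assumptions chosen for illustration.

def spike_error(target_spikes, actual_spikes):
    """Signed error per time step: +1 where a target spike is missing, -1 where spurious."""
    return target_spikes.astype(float) - actual_spikes.astype(float)

def inst_update(weights, presyn_trace, error, lr=1e-3):
    """Instantaneous rule: weights change only at time steps with non-zero error.
    presyn_trace has shape (n_steps, n_inputs); error has shape (n_steps,)."""
    return weights + lr * presyn_trace.T @ error

def filt_update(weights, presyn_trace, error, lr=1e-3, tau=10.0, dt=1.0):
    """Filtered rule: the error is first smoothed by an exponential kernel,
    spreading each correction over neighbouring time steps."""
    filtered = np.zeros_like(error)
    acc = 0.0
    for t, e in enumerate(error):
        acc = acc * np.exp(-dt / tau) + e   # leaky accumulation of the error
        filtered[t] = acc
    return weights + lr * presyn_trace.T @ filtered
```

The only difference between the two updates is whether the raw or the low-pass-filtered error multiplies the presynaptic trace, which is what produces the smoother convergence attributed to FILT above.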
Idealized computational models for auditory receptive fields
This paper presents a theory by which idealized models of auditory receptive
fields can be derived in a principled, axiomatic manner from a set of
structural properties that enable invariance of receptive field responses under
natural sound transformations and ensure internal consistency between
spectro-temporal receptive fields at different temporal and spectral scales.
For defining a time-frequency transformation of a purely temporal sound
signal, it is shown that the framework allows for a new way of deriving the
Gabor and Gammatone filters as well as a novel family of generalized Gammatone
filters, with additional degrees of freedom to obtain different trade-offs
between the spectral selectivity and the temporal delay of time-causal temporal
window functions.
When applied to the definition of a second layer of receptive fields from a
spectrogram, it is shown that the framework leads to two canonical families of
spectro-temporal receptive fields, in terms of spectro-temporal derivatives of
either spectro-temporal Gaussian kernels for non-causal time or the combination
of a time-causal generalized Gammatone filter over the temporal domain and a
Gaussian filter over the logspectral domain. For each filter family, the
spectro-temporal receptive fields can be either separable over the
time-frequency domain or be adapted to local glissando transformations that
represent variations in logarithmic frequencies over time. Within each domain
of either non-causal or time-causal time, these receptive field families are
derived by uniqueness from the assumptions.
It is demonstrated how the presented framework allows for computation of
basic auditory features for audio processing and that it leads to predictions
about auditory receptive fields with good qualitative similarity to biological
receptive fields measured in the inferior colliculus (ICC) and primary auditory
cortex (A1) of mammals.
Comment: 55 pages, 22 figures, 3 tables.
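For reference, the sketch below constructs the impulse response of a classical Gammatone filter in Python (NumPy), one of the filter families the framework derives. The filter order, ERB-based bandwidth scaling, sampling rate, and normalisation are illustrative assumptions and do not reproduce the paper's generalized Gammatone family or its axiomatic derivation.

```python
import numpy as np

# Classical Gammatone impulse response: t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t).
# Order, bandwidth convention, and sampling rate are illustrative assumptions.

def gammatone_ir(center_hz, fs=16000, order=4, duration=0.05):
    """Return a peak-normalised Gammatone impulse response at one centre frequency."""
    t = np.arange(0.0, duration, 1.0 / fs)
    erb = 24.7 + 0.108 * center_hz          # equivalent rectangular bandwidth (Glasberg-Moore form)
    b = 1.019 * erb                         # common bandwidth scaling factor
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * center_hz * t)
    return g / np.max(np.abs(g))

# A filterbank follows by convolving the input signal with such impulse responses
# at a set of centre frequencies, e.g. np.convolve(signal, gammatone_ir(1000.0)).
```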
