
    A spike-based head-movement and echolocation model of the bat superior colliculus

    Echolocating bats use sonar to sense their environment and hunt for food in darkness. To understand this unusual sensory system from a computational perspective, with aspirations towards developing high-performance electronic implementations, we study the bat brain. The midbrain superior colliculus (SC) has been shown (in many species) to support multisensory integration and orientation behaviors, namely eye saccades and head turns. Previous computational models of the SC have emphasized the behavior typical of monkeys, barn owls, and cats. Using unique neurobiological data for the bat and incorporating knowledge from other species, we developed a computational spiking model that produces both head movements and sonar vocalizations. The model accomplishes this with simple neuron equations and synapses, which is promising for implementation on a VLSI chip. This model can serve as a foundation for further development using new data from bat experiments, and it can readily be connected to spiking motor and vocalization systems.
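
    The point about "simple neuron equations and synapses" being hardware-friendly can be made concrete with a minimal sketch of a leaky integrate-and-fire population with exponential synapses (Python/NumPy). The parameter values, population size, and the population-vector read-out mentioned in the comments are illustrative assumptions, not the bat SC model itself.

        import numpy as np

        # Minimal leaky integrate-and-fire population with exponential synapses.
        # Illustrative only: parameters are placeholders, not values from the
        # bat SC model described in the abstract.
        def simulate_lif(input_spikes, dt=1e-4, tau_m=0.02, tau_syn=0.005,
                         v_rest=-65e-3, v_th=-50e-3, v_reset=-65e-3,
                         r_m=1e8, w_syn=5e-9):
            # input_spikes: array of shape (n_steps, n_cells) with 0/1 entries
            n_steps, n_cells = input_spikes.shape
            v = np.full(n_cells, v_rest)
            i_syn = np.zeros(n_cells)
            out_spikes = np.zeros((n_steps, n_cells))
            for t in range(n_steps):
                i_syn += dt * (-i_syn / tau_syn) + w_syn * input_spikes[t]  # exponential synapse
                v += dt * (-(v - v_rest) + r_m * i_syn) / tau_m             # leaky membrane integration
                fired = v >= v_th
                out_spikes[t, fired] = 1.0
                v[fired] = v_reset                                           # reset after each spike
            return out_spikes

        # Example: drive 64 cells with sparse random input spikes.
        rng = np.random.default_rng(0)
        spikes_in = (rng.random((5000, 64)) < 0.002).astype(float)
        spikes_out = simulate_lif(spikes_in)
        # A head-movement command could be read out as a weighted sum of population
        # activity (an assumed population-vector scheme, not the published read-out).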

    Information processing in a midbrain visual pathway

    Visual information is processed in the brain via intricate interactions between neurons. We investigated a midbrain visual pathway (the optic tectum and its isthmic nucleus) that is motion sensitive and is thought to be part of the attentional system. We determined the physiological properties of individual neurons, as well as their synaptic connections, with intracellular recordings. We reproduced the center-surround receptive field structure of tectal neurons in a dynamical recurrent feedback loop. We showed in a computational model that the anti-topographic inhibitory feedback could mediate competitive stimulus selection in a complex visual scene. We also investigated the dynamics of this competitive selection in a rate model. The isthmotectal feedback loop gates the information transfer from the tectum to the thalamic nucleus rotundus. We discussed the role of a localized feedback projection in contributing to the gating mechanisms with both experimental and numerical approaches. We further discussed the dynamics of the isthmotectal system by considering the propagation delays between its components. We conclude that the isthmotectal system is involved in attention-like competitive stimulus selection and controls the information coding in the motion-sensitive SGC-I neurons by modulating retino-tectal synaptic transmission.
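
    As a rough illustration of how broad, anti-topographic inhibitory feedback can mediate competitive stimulus selection, here is a small rate-model sketch in Python/NumPy. The connectivity (local self-excitation plus global inhibition), gains, and time step are assumptions chosen so that the stronger of two stimuli suppresses the weaker one; this is not the dissertation's model.

        import numpy as np

        # Rate-model sketch: local excitation plus broad (anti-topographic) inhibitory
        # feedback yields winner-take-all selection between two stimuli.
        # Gains and time constants are illustrative assumptions.
        def competitive_selection(stimulus, n_iter=600, dt=0.05, w_exc=0.5, w_inh=3.0):
            r = np.zeros_like(stimulus)
            for _ in range(n_iter):
                global_inh = w_inh * r.sum()                # broad inhibitory feedback
                drive = stimulus + w_exc * r - global_inh   # net input to each location
                r += dt * (-r + np.maximum(drive, 0.0))     # rectified rate dynamics
            return r

        stim = np.zeros(100)
        stim[20], stim[70] = 1.0, 0.8      # two competing stimuli of unequal strength
        rates = competitive_selection(stim)
        print(rates.argmax())               # 20: the stronger stimulus wins; the weaker is suppressed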

    The Mechanisms And Roles Of Feedback Loops For Visual Processing

    Signal flow in the brain is not unidirectional; feedback represents a key element in neural signal processing. To address the question of how neural feedback loops work in terms of synapses, microcircuitry, and systems dynamics, we developed a chick midbrain slice preparation to study and characterize one important feedback loop within the avian visual system: the isthmotectal feedback loop. The isthmotectal feedback loop consists of the optic tectum (OT) and three isthmic nuclei (Imc, Ipc, and SLu). Tectal layer 10 neurons project to the ipsilateral Imc, Ipc, and SLu in a topographic way. In turn, Ipc and SLu send topographic (local) cholinergic terminals back to the OT, whereas Imc sends non-topographic (global) GABAergic projections to the OT, and also to the Ipc and the SLu. We first studied the cellular properties of Ipc neurons and found that almost all Ipc cells exhibited spontaneous activity characterized by a barrage of EPSPs and occasional spikes. Further experiments revealed the involvement of GABA in mediating the spontaneous synaptic inputs to Ipc neurons. Next, we investigated the mechanisms of the oscillatory bursting observed in Ipc in vivo by building a model network based on the in vitro experimental results. Our simulations showed that strong feedforward excitation and spike-rate adaptation can generate oscillatory bursting in Ipc neurons in response to a constant input. We then considered the effect of the distributed synaptic delays measured within the isthmotectal feedback loop and showed that distributed delays can stabilize the system, leading to an increased range of parameters for which the system converges to a stable fixed point. Next, we explored the functional features of the GABAergic projection from Imc to Ipc and found that Imc has a regulatory role on the activity of Ipc neurons: stimulating Imc can evoke action potentials in Ipc neurons, while it can also suppress firing generated in Ipc neurons by somatic current injection. The mechanism of this regulatory action was further studied with a two-compartment neuron model. Last, we lay out several open questions in this area that may be worth further investigation.
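
    For the bursting mechanism (strong excitatory drive plus spike-rate adaptation), a compact way to see how a constant input can yield rhythmic bursts is a two-variable spiking neuron with a slow, spike-triggered adaptation variable. The sketch below uses the Izhikevich formulation with its standard "chattering" parameter set purely as an illustration; it is not the Ipc network model built in the thesis.

        import numpy as np

        # Bursting from constant drive plus slow adaptation (illustrative stand-in).
        def izhikevich_bursting(i_drive=10.0, t_max=500.0, dt=0.25,
                                a=0.02, b=0.2, c=-50.0, d=2.0):
            n = int(t_max / dt)
            v, u = -65.0, b * -65.0
            spike_times = []
            for step in range(n):
                v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_drive)  # fast membrane variable
                u += dt * a * (b * v - u)                                  # slow adaptation variable
                if v >= 30.0:                                              # spike: reset, increment adaptation
                    spike_times.append(step * dt)
                    v, u = c, u + d
            return np.array(spike_times)

        print(izhikevich_bursting()[:10])   # spike times (ms) cluster into periodic bursts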

    Orienting behaviours and attentional processes in the mouse and macaque: neuroanatomy, electrophysiology and optogenetics

    The neuronal basis of orienting and attentional behaviours has been widely researched in higher animals such as non-human primates (NHPs). However, the organisation of these behaviours and processes in rodent models has been less well characterised. This thesis aims to delineate the key neuroanatomical pathways and neuronal mechanisms that account for orienting behaviours in the mouse model and to compare them, in part, to those seen in the macaque. A better understanding of the processes and networks involved in attention and orienting is necessary in order to relate findings in the mouse model to those seen in humans and NHPs. Furthermore, the availability of highly targeted manipulations in the mouse, such as optogenetics, requires a more detailed picture of the neurophysiology underpinning these behaviours in order to interpret findings effectively and to design experiments that exploit these techniques and animal models for maximum benefit.

    Study one focuses on the neuroanatomical pathways that terminate in subregions of the midbrain superior colliculus (SC) in the mouse (Mus musculus), using iontophoretic injection of the retrograde tracer fluorogold. This region has been implicated in various forms of orienting behaviour in both macaques and mice (Albano et al., 1982, Dean et al., 1988b, Felsen and Mainen, 2008). Study one also examines the prefrontal connectivity that links to these SC subsections and that may govern approach and avoidance behaviours, namely motor cortex area 2 (M2) and the cingulate area (Cg), via pressure injection of the anterograde tracer biotinylated dextran amine into these regions. It was found that the medial and lateral SC receive differential prefrontal input from the Cg and M2, respectively, and that these areas project to brain networks related to avoidance or approach. This section furthers our understanding of the partially segregated networks in the prefrontal cortex and midbrain of the mouse, which are important in mediating different orienting behaviours.

    Study two focuses on the effects of one type of orienting, namely bottom-up (BU) attention, in visual areas. This exogenous (automatic) form of visual attention has been studied extensively in human psychophysics (Posner, 1980, Nakayama and Mackeben, 1989), and the areas involved in the human brain have been delineated using brain imaging (Corbetta and Shulman, 2002, Liu et al., 2005). To understand the neurophysiology involved, some invasive electrophysiological studies have been performed in macaque monkeys (Luck et al., 1997, Buschman and Miller, 2007), but our understanding of the mechanisms involved is relatively sparse compared to top-down (endogenous) attentional processing. To understand the similarities in this mechanism between macaques and mice, it is therefore important to study both model systems using similar approaches. The research in this chapter makes direct comparisons between these two model species via electrophysiological recordings in a bottom-up attentional paradigm. It was found that in the macaque, BU cues increased responses to visual stimuli in both V1 and V4, but no obvious pattern was seen in mouse V1 and SC. This study goes some way towards describing the similarities and differences in neural responses in the visual areas of different species that are utilised for attention-based paradigms.

    Finally, study three links the previous two studies. Study two investigated bottom-up attentional processes, which are thought to involve early, fast visuomotor pathways, while study one found that the SC and V1, areas known for their involvement in and ability to coordinate rapid visuomotor responses, also receive clear and structured input from higher-level prefrontal areas. We therefore hypothesised that stimulating these prefrontal areas could modulate bottom-up attention. This was tested using optogenetic stimulation of the prefrontal control regions identified in this research, such as Cg, whilst performing electrophysiological recordings in a bottom-up attentional paradigm. In V1, optogenetic stimulation had no effect on neuronal activation. In the SC, however, optogenetic activation increased the sustained stimulus response, regardless of cuing condition. Taken together, this research further investigates brain regions involved in orienting and attention in both mice and macaques and partially bridges the gap in understanding between these two animal models.

    Cortical And Subcortical Mechanisms For Sound Processing

    The auditory cortex is essential for encoding complex and behaviorally relevant sounds. Many questions remain concerning whether and how distinct cortical neuronal subtypes shape and encode both simple and complex sound properties. In chapter 2, we tested how neurons in the auditory cortex encode water-like sounds that are perceived as natural by human listeners but that we could precisely parametrize. The stimuli exhibit scale-invariant statistics: specifically, the temporal modulation within each spectral band scales with the center frequency of the band. We used chronically implanted tetrodes to record neuronal spiking in rat primary auditory cortex during exposure to our custom stimuli at different rates and cycle-decay constants. We found that, although individual neurons exhibited selectivity for subsets of stimuli with specific statistics, responses across the population were stable. These results contribute to our understanding of how the auditory cortex processes natural sound statistics. In chapter 3, we review studies examining the role of different cortical inhibitory interneurons in shaping sound responses in the auditory cortex. We identify the findings that support each other and the mechanisms that remain unexplored. In chapter 4, we tested how direct feedback from the auditory cortex to the inferior colliculus modulates sound responses in the inferior colliculus. We optogenetically activated or suppressed cortico-collicular feedback while recording neuronal spiking in the mouse inferior colliculus in response to pure tones and dynamic random chords. We found that feedback reduced sound selectivity by decreasing responsiveness to preferred frequencies and increasing responsiveness to less preferred frequencies. Furthermore, we tested the effects of perturbing intra-cortical inhibitory-excitatory networks on sound responses in the inferior colliculus. We optogenetically activated or suppressed parvalbumin-positive (PV) and somatostatin-positive (SOM) interneurons while recording neuronal spiking in the mouse auditory cortex and inferior colliculus. We found that neither PV- nor SOM-interneuron modulation affected sound-evoked responses in the inferior colliculus, despite significant modulation of cortical responses. Our findings imply that cortico-collicular feedback can modulate responses to simple and complex auditory stimuli independently of cortical inhibitory interneurons. These experiments elucidate the role of descending auditory feedback in shaping sound responses. Together, these results underscore the importance of the auditory cortex in sound processing.
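
    A sketch of what "temporal modulation scaling with the center frequency of the band" can mean in practice: below, each band carries exponentially decaying events whose decay constant is a fixed number of cycles, so higher-frequency bands fluctuate proportionally faster. The band spacing, event statistics, and all constants are assumptions for illustration, not the chapter's actual stimulus parametrization.

        import numpy as np

        # Scale-invariant "water-like" stimulus sketch (illustrative assumptions only).
        def water_like_sound(fs=44100, dur=2.0, f_lo=200.0, f_hi=8000.0, n_bands=20,
                             events_per_sec=4.0, cycles_per_decay=10.0, seed=0):
            rng = np.random.default_rng(seed)
            t = np.arange(int(fs * dur)) / fs
            centers = np.geomspace(f_lo, f_hi, n_bands)
            sound = np.zeros_like(t)
            for fc in centers:
                tau = cycles_per_decay / fc                    # decay time scales as 1/fc
                env = np.zeros_like(t)
                onsets = rng.uniform(0.0, dur, size=rng.poisson(events_per_sec * dur))
                for t0 in onsets:                              # exponentially decaying events
                    mask = t >= t0
                    env[mask] += np.exp(-(t[mask] - t0) / tau)
                sound += env * np.sin(2.0 * np.pi * fc * t + rng.uniform(0, 2 * np.pi))
            return sound / np.max(np.abs(sound))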

    A physiologically inspired model for solving the cocktail party problem.

    At a cocktail party, we can broadly monitor the entire acoustic scene to detect important cues (e.g., our names being called, or the fire alarm going off), or selectively listen to a target sound source (e.g., a conversation partner). It has recently been observed that individual neurons in the avian field L (analogous to the mammalian auditory cortex) can display broad spatial tuning to single targets and selective tuning to a target embedded in spatially distributed sound mixtures. Here, we describe a model inspired by these experimental observations and apply it to process mixtures of human speech sentences. This processing is realized in the neural spiking domain. It converts binaural acoustic inputs into cortical spike trains using a multi-stage model composed of a cochlear filter-bank, a midbrain spatial-localization network, and a cortical network. The output spike trains of the cortical network are then converted back into an acoustic waveform using a stimulus reconstruction technique. The intelligibility of the reconstructed output is quantified using an objective measure of speech intelligibility. We apply the algorithm to single- and multi-talker speech to demonstrate that the physiologically inspired algorithm is able to achieve intelligible reconstruction of an "attended" target sentence embedded in two other non-attended masker sentences. The algorithm is also robust to masker level and displays performance trends comparable to humans. The ideas from this work may help improve the performance of hearing assistive devices (e.g., hearing aids and cochlear implants), speech-recognition technology, and computational algorithms for processing natural scenes cluttered with spatially distributed acoustic objects.
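
    To make the pipeline concrete, here is a sketch of just the spatial piece: estimating the interaural time difference (ITD) of a frequency band by cross-correlating the two ears, then gating the band toward an "attended" direction. The cross-correlation read-out and the hard gating rule are illustrative assumptions; the published model instead uses a cochlear filter-bank, a spiking midbrain localization network, a cortical network, and stimulus reconstruction.

        import numpy as np

        # ITD estimate by interaural cross-correlation, plus a hard gate that keeps
        # only bands whose ITD matches the attended direction. Illustrative only.
        def estimate_itd(left, right, fs, max_itd_s=8e-4):
            max_lag = int(max_itd_s * fs)
            lags = np.arange(-max_lag, max_lag + 1)
            core = slice(max_lag, len(left) - max_lag)       # avoid wrap-around edges
            corr = [np.dot(left[core], np.roll(right, lag)[core]) for lag in lags]
            return lags[int(np.argmax(corr))] / fs            # best-aligning lag, in seconds

        def gate_to_target(band_left, band_right, fs, target_itd_s, tol_s=1e-4):
            itd = estimate_itd(band_left, band_right, fs)
            keep = abs(itd - target_itd_s) < tol_s            # does this band come from the target?
            return 0.5 * (band_left + band_right) if keep else np.zeros_like(band_left)

        # Example: a 500 Hz tone delayed by 0.5 ms in the right ear.
        fs = 16000
        t = np.arange(fs) / fs
        left = np.sin(2 * np.pi * 500 * t)
        right = np.sin(2 * np.pi * 500 * (t - 5e-4))
        print(estimate_itd(left, right, fs))   # close to 5e-4 s in magnitude (sign depends on convention)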

    Decoding neural responses to temporal cues for sound localization

    The activity of sensory neural populations carries information about the environment, which may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations, one in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction reliably and in a way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. DOI: http://dx.doi.org/10.7554/eLife.01312.001
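
    The contrast between the two read-out schemes can be sketched on a synthetic population of ITD-tuned cells: a two-channel decoder based on the summed activity of each hemisphere's pool, versus a pattern decoder that exploits each cell's heterogeneous tuning. The Gaussian tuning curves, noise level, and nearest-template decoder below are assumptions for illustration, not the paper's fitted models.

        import numpy as np

        rng = np.random.default_rng(1)
        itds = np.linspace(-3e-4, 3e-4, 61)                # candidate ITDs (s)
        best_itds = rng.uniform(-6e-4, 6e-4, size=200)     # heterogeneous best delays
        sigma = 2e-4

        def population_rates(itd):
            return np.exp(-0.5 * ((itd - best_itds) / sigma) ** 2)   # Gaussian tuning

        templates = np.array([population_rates(itd) for itd in itds])

        def decode_hemispheric(rates):
            # summed-activity (two-channel) read-out: difference of left vs right pools
            left, right = rates[best_itds < 0].sum(), rates[best_itds >= 0].sum()
            diff = (right - left) / (right + left)
            template_diff = np.array([(t[best_itds >= 0].sum() - t[best_itds < 0].sum())
                                      / t.sum() for t in templates])
            return itds[np.argmin(np.abs(template_diff - diff))]

        def decode_pattern(rates):
            # heterogeneous-tuning read-out: nearest template over the full pattern
            return itds[np.argmin(((templates - rates) ** 2).sum(axis=1))]

        true_itd = 2e-4
        noisy = population_rates(true_itd) + 0.3 * rng.standard_normal(200)
        print(decode_hemispheric(noisy), decode_pattern(noisy))   # compare estimates of the 0.2 ms ITD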

    Noise processing in the auditory system with applications in speech enhancement

    The auditory system is extremely efficient at extracting auditory information in the presence of background noise. However, speech enhancement algorithms, aimed at removing the background noise from a degraded speech signal, do not achieve results that come near the efficacy of the auditory system. The purpose of this study is thus first to investigate how noise affects the spiking activity of neurons in the auditory system, and then to use brain activity recorded in the presence of noise to design better speech enhancement algorithms. To investigate how noise affects the spiking activity of neurons, we first design a generalized linear model that relates the spiking activity of neurons to intrinsic and extrinsic covariates that can affect their activity, such as noise. From this model, we extract two metrics: one that shows the effects of noise on the spiking activity and another that shows the relative effects of vocalization compared to noise. We use these metrics to analyze neural data, recorded from a structure of the auditory system named the inferior colliculus (IC), while presenting noisy vocalizations. We studied the effect of different kinds of noise (non-stationary, white, and natural stationary), different vocalizations, different input sound levels, and different signal-to-noise ratios (SNRs). We found that the presence of non-stationary noise increases the spiking activity of neurons, regardless of the SNR, input level, or vocalization type. The presence of white or natural stationary noise, however, causes a great diversity of responses, in which the activity of recording sites could increase, decrease, or remain unchanged. This shows that the noise invariance previously reported in the IC depends on the noise conditions, which had not been observed before. We then address the problem of speech enhancement using information from the brain's processing in the presence of noise. It has been shown that the brain waves of a listener correlate strongly with the speaker to whom the listener attends. Given this, we design two speech enhancement algorithms with a denoising autoencoder structure, namely the Brain Enhanced Speech Denoiser (BESD) and the U-shaped Brain Enhanced Speech Denoiser (U-BESD). These algorithms take advantage of the attended auditory information present in the brain activity of the listener to denoise multi-talker speech. The U-BESD is built upon the BESD with the addition of skip connections and dilated convolutions. Compared to previously proposed approaches, BESD and U-BESD are trained as a single neural architecture, lowering the complexity of the algorithm. We investigate two experimental settings. In the first, the attended speaker is known (the speaker-specific setting); in the second, no prior information is available about the attended speaker (the speaker-independent setting). In the speaker-specific setting, we show that both the BESD and U-BESD algorithms surpass a similar denoising autoencoder. Moreover, we show that in the speaker-independent setting, U-BESD surpasses the performance of the only known approach that also uses the brain's activity.
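
    A minimal sketch of the kind of generalized linear model described above: binned spike counts regressed (Poisson link) on noise, vocalization, and sound-level covariates, with the two reported metrics read off the fitted coefficients. The covariate design, the synthetic data, and the exact form of the two contrasts are assumptions for illustration, not the thesis's parametrization.

        import numpy as np
        import statsmodels.api as sm

        # Poisson GLM on synthetic binned spike counts (illustrative covariates).
        rng = np.random.default_rng(0)
        n_bins = 5000
        noise_on = rng.integers(0, 2, n_bins)      # extrinsic covariate: background noise present
        voc_on = rng.integers(0, 2, n_bins)        # extrinsic covariate: vocalization present
        level = rng.uniform(30, 80, n_bins)        # input sound level (dB SPL)

        true_rate = np.exp(-2.0 + 0.6 * noise_on + 0.9 * voc_on + 0.01 * level)
        counts = rng.poisson(true_rate)            # synthetic spike counts per bin

        X = sm.add_constant(np.column_stack([noise_on, voc_on, level]))
        fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

        beta_noise, beta_voc = fit.params[1], fit.params[2]
        print("noise effect (rate gain):", np.exp(beta_noise))          # metric 1: effect of noise
        print("vocalization vs noise:", np.exp(beta_voc - beta_noise))  # metric 2: relative effect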