191 research outputs found

    Bayesian Modeling of the Dynamics of Phase Modulations and their Application to Auditory Event Related Potentials at Different Loudness Scales

    We study long-term habituation signatures of auditory selective attention reflected in the instantaneous phase information of auditory event-related potentials (ERPs) at four distinct stimulus levels: 60, 70, 80, and 90 dB SPL. The analysis is carried out at the single-trial level. The effect of habituation can be observed as changes (jitter) in the instantaneous phase of the ERPs; in particular, the absence of habituation is correlated with consistently high phase synchronization across ERP trials. We estimate the changes in phase concentration over trials using a Bayesian approach in which the phase is modeled as being drawn from a von Mises distribution whose concentration parameter varies smoothly over trials. The smoothness assumption reflects the fact that habituation is a gradual process. Using the proposed Bayesian model, we differentiate between stimuli based on the relative changes and absolute values of the estimated concentration parameter.
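    To make the modeling idea concrete, the following is a minimal sketch of one way a smoothly varying von Mises concentration could be tracked Bayesianly: a grid filter with a Gaussian random-walk prior on the log-concentration. It is an illustration under assumed settings (zero preferred phase, synthetic data, grid bounds, drift scale), not the authors' implementation.

```python
# Minimal sketch (not the authors' code): grid-based Bayesian filtering of a
# von Mises concentration parameter that drifts smoothly over trials via a
# Gaussian random walk on log-kappa.
import numpy as np
from scipy.stats import vonmises, norm

rng = np.random.default_rng(0)

# Synthetic single-trial phases: concentration decays over trials (habituation).
n_trials = 200
true_kappa = np.linspace(8.0, 1.0, n_trials)
phases = np.array([vonmises.rvs(k, loc=0.0, random_state=rng) for k in true_kappa])

# Grid over log-kappa and a random-walk transition kernel (the smoothness prior).
log_kappa_grid = np.linspace(np.log(0.1), np.log(30.0), 200)
kappa_grid = np.exp(log_kappa_grid)
sigma_rw = 0.05  # assumed drift scale per trial
transition = norm.pdf(log_kappa_grid[:, None], loc=log_kappa_grid[None, :], scale=sigma_rw)
transition /= transition.sum(axis=0, keepdims=True)

posterior = np.full(kappa_grid.size, 1.0 / kappa_grid.size)  # flat prior
kappa_hat = np.empty(n_trials)
for t, phi in enumerate(phases):
    prior_t = transition @ posterior               # predict: diffuse along the grid
    like = vonmises.pdf(phi, kappa_grid, loc=0.0)  # von Mises likelihood of this trial's phase
    posterior = prior_t * like
    posterior /= posterior.sum()
    kappa_hat[t] = np.sum(kappa_grid * posterior)  # posterior-mean concentration

print(kappa_hat[:5], kappa_hat[-5:])
```

    In this toy setup, a decaying estimate of the concentration over trials would correspond to the habituation signature described in the abstract.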

    Decoding Electrophysiological Correlates of Selective Attention by Means of Circular Data

    Sustaining attention to a relevant sensory input in a complex listening environment is of great importance for successful auditory communication. To avoid overloading the auditory system, the importance of stimuli is estimated at higher levels of the auditory system, and based on this information attention is directed away from irrelevant and unimportant stimuli. Long-term habituation, a gradual process independent of sensory adaptation, plays a major role in shifting our attention away from irrelevant stimuli. A better understanding of attention-modulated neural activity is important for shedding light on the encoding of auditory streams. For instance, this information can have a direct impact on the development of smarter hearing aids, in which more accurate objective measures can be used to reflect the hearing capabilities of patients with hearing pathologies. As an example, an objective measure of long-term habituation with respect to different sound stimulus levels can be used to adjust hearing aids more accurately than verbal reports. The main goal of this thesis is to analyze the neural decoding signatures of long-term habituation and the neural modulations of selective attention by exploiting circular regularities in electrophysiological (EEG) data, from which we can objectively measure the level of attentional binding to different stimuli. We study, in particular, the modulations of the instantaneous phase (IP) of event-related potentials (ERPs) over trials for different experimental settings. This is in contrast to the common approach, in which the ERP component of interest is computed by averaging a sufficiently large number of ERP trials. It is hypothesized that high attentional binding to a stimulus is related to a high level of IP clustering; as attentional binding decreases, the IP spreads more uniformly over the unit circle. This work is divided into three main parts. In the first part, we investigate the dynamics of long-term habituation for different acoustic stimuli (soft vs. loud) over ERP trials. The underlying temporal dynamics of the IP and the level of phase clustering of the ERPs are assessed by fitting circular probability density functions (pdfs) to data segments. To increase the temporal resolution for detecting the times at which a significant change in IP occurs, an abrupt change-point model is used for different pure-tone stimulations. In a second study, we improve upon the results and methodology by relaxing some of the constraints in order to integrate the gradual process of long-term habituation into the model. To this end, a Bayesian state-space model is proposed. In all of the aforementioned studies, we successfully classified different stimulation levels using solely the IP of ERPs over trials. In the second part of the thesis, the experimental setting is expanded to longer and more complex auditory stimuli, as in real-world scenarios. Here we study the neural correlates of attention in spontaneous modulations of the EEG (ongoing activity), which exploits the complete temporal resolution of the signal. We show a mapping between the ERP results and the ongoing EEG activity based on the IP. A Markov-based model is developed for removing spurious variations that can occur in ongoing signals. We believe the proposed method can be incorporated as an important preprocessing step for a more reliable estimation of objective measures of the level of selective attention. The proposed model is used to preprocess the data and to classify attending versus non-attending states in a seminal dichotic tone-detection experiment. In the last part of this thesis, we investigate the possibility of measuring a mapping between the neural activities of the cortical laminae and auditory evoked potentials (AEPs) in vitro. Using mutual information, we show a strong correlation between the IP of the AEPs and the neural activity in the granular layer.
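    As a concrete illustration of the phase-clustering quantity this work builds on, the sketch below extracts single-trial instantaneous phase via band-pass filtering and the Hilbert transform and tracks inter-trial phase clustering over a sliding window of trials using the mean resultant length. Sampling rate, frequency band, component latency, and the synthetic data are assumptions for illustration, not the thesis pipeline.

```python
# Minimal sketch (assumed generic pipeline): single-trial instantaneous phase (IP)
# of ERPs and its inter-trial clustering over a sliding window of trials, measured
# by the mean resultant length R (R -> 1: tight clustering; R -> 0: uniform phases,
# i.e. reduced attentional binding under the hypothesis described above).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                      # assumed sampling rate in Hz
n_trials, n_samples = 300, 400  # synthetic ERP trials
rng = np.random.default_rng(1)
t = np.arange(n_samples) / fs

# Phase jitter grows over trials to mimic habituation.
jitter = np.linspace(0.1, 2.0, n_trials)
trials = np.array([np.cos(2 * np.pi * 6.0 * t + rng.normal(0, j))
                   + 0.5 * rng.standard_normal(n_samples) for j in jitter])

# Band-pass around the band assumed to carry the ERP component (here 4-8 Hz).
b, a = butter(4, [4.0, 8.0], btype="bandpass", fs=fs)
analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)
ip = np.angle(analytic)

latency = int(0.3 * fs)         # assumed component latency (~300 ms post-stimulus)
phase_at_latency = ip[:, latency]

def mean_resultant_length(phases):
    """Circular concentration of a sample of phases."""
    return np.abs(np.mean(np.exp(1j * phases)))

window = 30
r_over_trials = np.array([mean_resultant_length(phase_at_latency[i:i + window])
                          for i in range(n_trials - window)])
print(r_over_trials[:3], r_over_trials[-3:])
```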

    Neural oscillatory signatures of auditory and audiovisual illusions

    Questions of the relationship between human perception and brain activity can be approached from different perspectives: in the first, the brain is mainly regarded as a recipient and processor of sensory data. The corresponding research objective is to establish mappings of neural activity patterns and external stimuli. Alternatively, the brain can be regarded as a self-organized dynamical system, whose constantly changing state affects how incoming sensory signals are processed and perceived. The research reported in this thesis can chiefly be located in the second framework, and investigates the relationship between oscillatory brain activity and the perception of ambiguous stimuli. Oscillations are here considered as a mechanism for the formation of transient neural assemblies, which allows efficient information transfer. While the relevance of activity in distinct frequency bands for auditory and audiovisual perception is well established, different functional architectures of sensory integration can be derived from the literature. This dissertation therefore aims to further clarify the role of oscillatory activity in the integration of sensory signals towards unified perceptual objects, using illusion paradigms as tools of study. In study 1, we investigate the role of low-frequency power modulations and phase alignment in auditory object formation. We provide evidence that auditory restoration is associated with a power reduction, while the registration of an additional object is reflected by an increase in phase locking. In study 2, we analyze oscillatory power as a predictor of auditory influence on visual perception in the sound-induced flash illusion. We find that increased beta-/gamma-band power over occipitotemporal electrodes shortly before stimulus onset predicts the illusion, suggesting a facilitation of processing in polymodal circuits. In study 3, we address the question of whether visual influence on auditory perception in the ventriloquist illusion is reflected in primary sensory or higher-order areas. We establish an association between reduced theta-band power in mediofrontal areas and the occurrence of the illusion, which indicates a top-down influence on sensory decision-making. These findings broaden our understanding of the functional relevance of neural oscillations by showing that different processing modes, which are reflected in specific spatiotemporal activity patterns, operate in different instances of sensory integration.
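    As an illustration of the kind of single-trial prestimulus power analysis described for the sound-induced flash illusion study, the sketch below compares band power between illusion and no-illusion trials. The channel choice, band limits (20-40 Hz), window length, and synthetic data are assumptions, not the study's actual pipeline.

```python
# Minimal sketch (assumed generic analysis): single-trial prestimulus beta/gamma
# power from one occipitotemporal channel, compared between illusion and
# no-illusion trials.
import numpy as np
from scipy.signal import welch
from scipy.stats import mannwhitneyu

fs = 250.0
n_trials, n_samples = 200, int(0.5 * fs)   # 500 ms prestimulus window (assumed)
rng = np.random.default_rng(2)

# Synthetic prestimulus EEG: illusion trials get slightly more 20-40 Hz power.
illusion = rng.integers(0, 2, n_trials).astype(bool)
t = np.arange(n_samples) / fs
eeg = rng.standard_normal((n_trials, n_samples))
eeg[illusion] += 0.4 * np.cos(2 * np.pi * 30.0 * t)

# Band power in an assumed 20-40 Hz beta/gamma range via Welch's method.
freqs, psd = welch(eeg, fs=fs, nperseg=64, axis=1)
band = (freqs >= 20.0) & (freqs <= 40.0)
band_power = psd[:, band].mean(axis=1)

stat, p = mannwhitneyu(band_power[illusion], band_power[~illusion], alternative="greater")
print(f"median power illusion={np.median(band_power[illusion]):.3f} "
      f"no-illusion={np.median(band_power[~illusion]):.3f}, p={p:.4f}")
```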

    Reducing the Effect of Spurious Phase Variations in Neural Oscillatory Signals

    The phase-reset model of oscillatory EEG activity has received considerable attention in recent decades for decoding different cognitive processes. According to this model, ERPs are generated as a result of phase reorganization in the ongoing EEG. Alignment of the phase of neuronal activity can be observed within or between different assemblies of neurons across the brain. Phase synchronization has been used to explore and understand perception and attentional binding, and has been considered in the context of the neuronal correlates of consciousness. The importance of the topic and its wide exploration across different domains of neuroscience underscores the need for appropriate tools and methods for measuring the level of phase synchronization of neuronal activity. Measures of instantaneous phase (IP) synchronization have been used extensively in numerous studies of ERPs as well as oscillatory activity to better understand the underlying cognitive binding with respect to different sets of stimuli, such as auditory and visual. However, the reliability of the results can be compromised by noise artifacts in the IP. Phase distortion due to environmental noise, as well as various pre-processing steps applied to the signals, can lead to artificial phase jumps. One such effect, described recently, is the effect of a low envelope on the IP of the signal: it has been shown that as the instantaneous envelope of the analytic signal approaches zero, the variations in the phase increase, effectively leading to abrupt transitions in the phase. These abrupt transitions can distort phase-synchronization results, as they are not related to any neurophysiological effect; they are called spurious phase variations. In this study, we present a model to remove artificial phase variations generated by the low-envelope effect. The proposed method is based on a simplified form of a Kalman smoother that is able to model the IP behavior of narrow-band-passed oscillatory signals. We first explain the details of the proposed Kalman smoother for modeling the dynamics of phase variations in narrow-band-passed signals, then evaluate it on a set of synthetic signals, and finally apply the model to ongoing EEG signals to assess the removal of spurious phase variations.
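    The sketch below illustrates the general idea with a simplified, assumed state-space formulation: a near-constant-frequency model on the unwrapped phase, measurement noise inflated wherever the analytic envelope is low, and a Kalman filter followed by a Rauch-Tung-Striebel smoother. It is not the paper's exact model, and all parameter values are placeholders.

```python
# Minimal sketch of the idea (my own simplified version, not the paper's smoother):
# track the unwrapped instantaneous phase with a Kalman filter/RTS smoother whose
# measurement noise grows where the analytic envelope is low, so low-envelope phase
# jumps are treated as unreliable observations and smoothed out.
import numpy as np
from scipy.signal import hilbert

fs = 250.0
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(3)

# Synthetic narrow-band 10 Hz oscillation whose envelope dips toward zero mid-way.
envelope = 1.0 - 0.98 * np.exp(-((t - 2.0) ** 2) / 0.01)
signal = envelope * np.cos(2 * np.pi * 10.0 * t) + 0.02 * rng.standard_normal(t.size)

analytic = hilbert(signal)
env = np.abs(analytic)
phase_meas = np.unwrap(np.angle(analytic))

# State: [phase, phase increment per sample]; near-constant-frequency model.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = np.diag([1e-6, 1e-8])                 # assumed process noise
H = np.array([[1.0, 0.0]])
r_base, eps = 1e-3, 1e-3
R_t = r_base * (1.0 + (np.median(env) / (env + eps)) ** 2)  # low envelope -> large R

n = t.size
x = np.array([phase_meas[0], 2 * np.pi * 10.0 / fs])        # assumed initial frequency
P = np.eye(2)
xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
for k in range(n):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    xs_p.append(x.copy())
    Ps_p.append(P.copy())
    # Update, with measurement noise inflated at low envelope.
    S = H @ P @ H.T + R_t[k]
    K = P @ H.T / S
    x = x + (K * (phase_meas[k] - H @ x)).ravel()
    P = P - K @ H @ P
    xs_f.append(x.copy())
    Ps_f.append(P.copy())

# Rauch-Tung-Striebel backward pass (state only; covariances omitted for brevity).
x_smooth = [None] * n
x_smooth[-1] = xs_f[-1]
for k in range(n - 2, -1, -1):
    G = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
    x_smooth[k] = xs_f[k] + G @ (x_smooth[k + 1] - xs_p[k + 1])

phase_smoothed = np.array([s[0] for s in x_smooth])
# The smoothed phase should show much smaller sample-to-sample jumps around the dip.
print(np.max(np.abs(np.diff(phase_smoothed))), np.max(np.abs(np.diff(phase_meas))))
```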

    Computational and Perceptual Characterization of Auditory Attention

    Humans are remarkably capable of making sense of a busy acoustic environment in real time, despite the constant cacophony of sounds reaching our ears. Attention is a key component of the system that parses sensory input, allocating limited neural resources to the elements with the highest informational value to drive cognition and behavior. The focus of this thesis is the perceptual, neural, and computational characterization of auditory attention. Pioneering studies exploring human attention to natural scenes came from the visual domain, spawning a number of hypotheses on how attention operates along the visual pathway, as well as a considerable amount of computational work that attempts to model human perception. Comparatively, our understanding of auditory attention is still very elementary, particularly with respect to attention automatically drawn to salient sounds in the environment, such as a loud explosion. In this work, we explore how human perception is affected by the saliency of sound, characterized across a variety of acoustic features such as pitch, loudness, and timbre. Insight from psychoacoustical data is complemented with neural measures of attention recorded directly from the brain using electroencephalography (EEG). A computational model of attention is presented that tracks the statistical regularities of incoming sound in a high-dimensional feature space to build predictions of future feature values. The model determines the salient time points that will attract attention by comparing its predictions to the observed sound features. The high degree of agreement between the model and human experimental data suggests predictive coding as a potential mechanism of attention in the auditory pathway. We investigate different modes of volitional attention to natural acoustic scenes with a "cocktail-party" simulation. We argue that the auditory system can direct attention in at least three distinct ways (globally, based on features, and based on objects) and that perception can be altered depending on how attention is deployed. Further, we illustrate how the saliency of sound affects the various modes of attention. The results of this work improve our understanding of auditory attention, highlighting the temporally evolving nature of sound as a significant distinction between audition and vision, with a focus on using natural scenes that engage the full capability of human attention.
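    A toy version of the predictive-tracking idea is sketched below: running per-feature statistics predict the next feature values, and saliency is the variance-normalized prediction error pooled across features. The feature set, forgetting factor, and data are assumptions for illustration, not the thesis model.

```python
# Toy sketch of predictive tracking for saliency (assumed exponential-moving-average
# statistics per feature, not the thesis model): each feature's recent mean and
# variance are tracked, the next value is predicted from the running mean, and
# saliency is the pooled, variance-normalised prediction error.
import numpy as np

rng = np.random.default_rng(4)
n_frames, n_features = 500, 8          # e.g. loudness, pitch, timbre dimensions (assumed)
features = rng.standard_normal((n_frames, n_features)).cumsum(axis=0) * 0.05
features[300] += 4.0                   # a sudden change that should be flagged as salient

alpha = 0.05                           # assumed forgetting factor
mean = features[0].copy()
var = np.ones(n_features)
saliency = np.zeros(n_frames)
for k in range(1, n_frames):
    err = features[k] - mean                        # prediction error per feature
    saliency[k] = np.sqrt(np.mean(err ** 2 / var))  # surprise in units of running std
    mean = (1 - alpha) * mean + alpha * features[k]
    var = (1 - alpha) * var + alpha * err ** 2

print("most salient frame:", int(np.argmax(saliency)))  # expected near frame 300
```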

    Cortico-muscular coherence in sensorimotor synchronisation

    This thesis sets out to investigate the neuro-muscular control mechanisms underlying the ubiquitous phenomenon of sensorimotor synchronisation (SMS). SMS is the coordination of movement to external rhythms and is commonly observed in everyday life. A large body of research addresses the processes underlying SMS at the levels of behaviour and brain; comparatively little is known about the coupling between neural and behavioural processes, i.e. neuro-muscular processes. Here, the neuro-muscular processes underlying SMS were investigated in the form of cortico-muscular coherence (CMC), measured from electroencephalography (EEG) and electromyography (EMG) recordings in healthy human participants. These neuro-muscular processes were investigated at three levels of engagement: passive listening to and observation of rhythms in the environment, imagined SMS, and executed SMS, which led to the testing of three hypotheses: (i) rhythms in the environment, such as music, spontaneously modulate cortico-muscular coupling; (ii) movement intention modulates cortico-muscular coupling; and (iii) cortico-muscular coupling is dynamically modulated during SMS, time-locked to the stimulus rhythm. These three hypotheses were tested in two studies that used EEG and EMG recordings to measure CMC. First, CMC was tested during passive music listening, to examine whether temporal and spectral properties of music stimuli known to induce groove, i.e. the subjective experience of wanting to move, can spontaneously modulate the overall strength of the communication between the brain and the muscles. Second, imagined and executed movement synchronisation was used to investigate the role of movement intention and dynamics in CMC. The two studies indicate that both top-down and somatosensory and/or proprioceptive processes modulate CMC during SMS tasks. Although CMC dynamics might be linked to movement dynamics, no direct correlation between movement performance and CMC was found. Furthermore, purely passive auditory or visual rhythmic stimulation did not affect CMC. Together, these findings indicate that movement intention and active engagement with rhythms in the environment might be critical in modulating CMC. Further investigations of the mechanisms and function of CMC are necessary, as they could have important implications for clinical and elderly populations, as well as athletes, where optimisation of motor control is necessary to compensate for impaired movement or to achieve elite performance.
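    For reference, the sketch below shows a standard way cortico-muscular coherence is typically estimated from one EEG channel and one rectified EMG channel (Welch-based magnitude-squared coherence). The synthetic data, channel choice, and beta-band limits are assumptions, not the thesis pipeline.

```python
# Minimal sketch (assumed standard CMC estimate): magnitude-squared coherence
# between an EEG channel over motor cortex and the rectified EMG, the usual way
# beta-band cortico-muscular coherence is quantified.
import numpy as np
from scipy.signal import coherence

fs = 1000.0
t = np.arange(0, 30.0, 1 / fs)
rng = np.random.default_rng(5)

# Synthetic data: a shared 20 Hz (beta) drive appears in the EEG and modulates
# the amplitude of broadband muscle activity.
drive = np.sin(2 * np.pi * 20.0 * t)
eeg = 0.5 * drive + rng.standard_normal(t.size)
muscle_noise = rng.standard_normal(t.size)            # stands in for motor-unit activity
emg = np.abs((1.0 + 0.3 * drive) * muscle_noise)       # full-wave rectified EMG

f, cxy = coherence(eeg, emg, fs=fs, nperseg=1024)
beta = (f >= 15) & (f <= 30)
print(f"peak beta-band CMC: {cxy[beta].max():.3f} at {f[beta][np.argmax(cxy[beta])]:.1f} Hz")
```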

    Modulating consciousness with acoustic-electric stimulation

    Sound Object Recognition

    Humans are constantly exposed to a variety of acoustic stimuli, ranging from music and speech to more complex acoustic scenes like a noisy marketplace. The human auditory perception mechanism is able to analyze these different kinds of sounds and extract meaningful information, suggesting that the same processing mechanism is capable of representing different sound classes. In this thesis, we test this hypothesis by proposing a high-dimensional sound object representation framework that captures the various modulations of sound by performing a multi-resolution mapping. We then show that this model is able to capture a wide variety of sound classes (speech, music, soundscapes) by applying it to the tasks of speech recognition, speaker verification, musical instrument recognition, and acoustic soundscape recognition. We propose a multi-resolution analysis approach that captures the detailed variations in spectral characteristics as a basis for recognizing sound objects. We then show how such a system can be fine-tuned to capture both the message information (speech content) and the messenger information (speaker identity). This system is shown to outperform state-of-the-art systems in noise robustness on both automatic speech recognition and speaker verification tasks. The proposed analysis scheme, with its ability to analyze temporal modulations, was used to capture musical sound objects. We showed that, using a model of cortical processing, we were able to accurately replicate human perceptual similarity judgments and to obtain good classification performance on a large set of musical instruments. We also show that neither the spectral features alone nor the marginals of the proposed model are sufficient to capture human perception. Moreover, we were able to extend this model to continuous musical recordings by proposing a new method to extract notes from the recordings. Complex acoustic scenes, like a sports stadium, have multiple sources producing sounds at the same time. We show that the proposed representation scheme not only captures these complex acoustic scenes but also provides a flexible mechanism to adapt to target sources of interest. The human auditory perception system is known to be a complex system in which there are both bottom-up analysis pathways and top-down feedback mechanisms; the top-down feedback enhances the output of the bottom-up system to better realize the target sounds. In this thesis, we propose an implementation of a top-down attention module that is complementary to the high-dimensional acoustic feature extraction mechanism. This attention module is a distributed system operating at multiple stages of representation, effectively acting as a retuning mechanism that adapts the same system to different tasks. We showed that such an adaptation mechanism is able to substantially improve the performance of the system at detecting the target source in the presence of various distracting background sources.
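    A crude stand-in for the multi-resolution modulation analysis described above is sketched below: a log spectrogram followed by a 2D Fourier transform over time and frequency, yielding a joint temporal/spectral modulation representation. This is an assumed simplification for illustration, not the thesis model or its cortical filter bank.

```python
# Crude sketch (assumed simplification, not the thesis model): a rate-scale-style
# representation from a log spectrogram and a 2D FFT over (frequency, time), which
# captures joint spectral and temporal modulations of a sound.
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(6)
# Synthetic "sound object": a 4 Hz amplitude-modulated harmonic complex in noise.
sound = (1 + 0.8 * np.sin(2 * np.pi * 4.0 * t)) * (
    np.sin(2 * np.pi * 440.0 * t) + 0.5 * np.sin(2 * np.pi * 880.0 * t)
) + 0.1 * rng.standard_normal(t.size)

freqs, frames, sxx = spectrogram(sound, fs=fs, nperseg=512, noverlap=384)
log_spec = np.log(sxx + 1e-10)

# 2D modulation spectrum: spectral modulation (axis 0) x temporal modulation (axis 1).
mod = np.abs(np.fft.fft2(log_spec - log_spec.mean()))
frame_rate = fs / (512 - 384)                     # spectrogram frames per second
rates = np.fft.fftfreq(log_spec.shape[1], d=1.0 / frame_rate)
print("dominant temporal modulation rate (Hz):",
      abs(rates[np.argmax(mod[0, 1:log_spec.shape[1] // 2]) + 1]))
```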

    The sound of actions: a mismatch negativity (MMN) study

    The ability to derive the intentions of others from the sounds produced by their actions is quintessential to effective social behaviour. Many neuroscientists believe that this ability depends on the brain’s mirror-neuron system, which provides a direct link between action and perception. Precisely how intentions can be inferred through action perception, however, has provoked much debate. One challenge in inferring the cause of a perceived action is the fact that the problem is ill-posed, because identical movements can be made to perform different actions with different goals. Here, we show how, in the auditory modality, identification of the most likely cause of a human action-related sound is highly subject to inference. Using multi-channel event-related potentials (ERPs), we determined the temporal dynamics of the ability to decipher action sounds by recording the mismatch negativity (MMN) generated in response to multi-deviant stimuli consisting of three different human action-related sounds (click of the tongue, hand clapping, and footsteps) and a non-human action-related sound (water drop). Subjects listened to the original sound stimulus and to sounds obtained by altering one (low degree of disguise) or more complex (high degree of disguise) acoustic parameters of the original sound.
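    As a reminder of how the MMN itself is usually quantified, the sketch below computes a deviant-minus-standard difference wave on synthetic epochs and reads off its minimum in an assumed 100-250 ms window. Trial counts, latencies, and data are illustrative, not this study's parameters.

```python
# Minimal sketch of standard MMN arithmetic (assumed generic, not this study's
# pipeline): average standards and deviants separately, subtract, and take the
# difference-wave minimum in the expected MMN latency window.
import numpy as np

fs = 500.0
t = np.arange(-0.1, 0.5, 1 / fs)             # epoch from -100 ms to 500 ms
rng = np.random.default_rng(7)

def synth_erp(n_trials, mmn_amp):
    """Synthetic fronto-central epochs: a shared deflection around 200 ms plus an
    optional extra negativity around 150 ms (the MMN)."""
    shared = 1.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.04 ** 2))
    mmn = -mmn_amp * np.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))
    return shared + mmn + 2.0 * rng.standard_normal((n_trials, t.size))

standards = synth_erp(400, mmn_amp=0.0)
deviants = synth_erp(80, mmn_amp=1.5)        # deviants evoke the extra negativity

difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)
window = (t >= 0.1) & (t <= 0.25)
mmn_latency = t[window][np.argmin(difference_wave[window])]
mmn_amplitude = difference_wave[window].min()
print(f"MMN: {mmn_amplitude:.2f} uV at {mmn_latency * 1000:.0f} ms")
```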