26 research outputs found

    Electrophysiological signatures of conscious perception: The influence of cognitive, cortical and pathological states on multisensory integration

    At any given moment, information reaches us via our different sensory systems. To navigate this multitude of information, associated inputs must be integrated into a coherent percept. In recent years, the hypothesis that synchronous neural oscillations play a prominent role in unisensory and multisensory processing has received substantial support. Current findings further convey the idea that local oscillations and functional connectivity reflect both bottom-up and top-down processes during multisensory integration and perception. In the current work, I review recent findings on the role of neural oscillations in conscious multisensory perception. Subsequently, I present an integrative network model of multisensory integration that describes the cortical correlates of conscious multisensory perception, the influence of fluctuations in oscillatory neural activity on subsequent perception, and the influence of cognitive processes on neural oscillations and perception. I propose that neural oscillations in distinct, coexisting frequency bands reflect the various processing steps underlying multisensory perception.
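The synchrony between oscillatory signals that this abstract invokes is often quantified with the phase-locking value (PLV). The sketch below is a minimal, self-contained illustration with synthetic signals; the sampling rate, frequencies, and signal names are invented for this example and are not taken from the thesis.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two signals: 0 means no consistent
    phase relation, 1 means perfect phase locking across time."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 500)                     # 2 s sampled at 500 Hz
sig_a = np.sin(2 * np.pi * 10 * t)               # 10 Hz reference signal
sig_locked = np.sin(2 * np.pi * 10 * t + 0.5)    # constant phase lag
sig_jittered = np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi, t.size))

plv_locked = phase_locking_value(sig_a, sig_locked)      # near 1
plv_jittered = phase_locking_value(sig_a, sig_jittered)  # much lower
```

A constant phase lag still yields a PLV near 1, which is why PLV indexes the consistency of a phase relation rather than zero-lag identity.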

    Evaluating Acupuncture Point and Nonacupuncture Point Stimulation with EEG: A High-Frequency Power Spectrum Analysis

    To identify physical and sensory responses to acupuncture point stimulation (APS), nonacupuncture point stimulation (NAPS), and no stimulation (NS), changes in the high-frequency power spectrum before and after stimulation were evaluated with electroencephalography (EEG). A total of 37 healthy subjects received APS at the LI4 point, NAPS, or NS with their eyes closed. Background brain waves were measured before, during, and after stimulation using 8 channels. Changes in the power spectra of gamma waves and high beta waves before, during, and after stimulation were comparatively analyzed. After NAPS, absolute high beta power (AHBP), relative high beta power (RHBP), absolute gamma power (AGP), and relative gamma power (RGP) tended to increase in all channels, but no consistent notable changes were found for APS or NS. NAPS is believed to cause temporary stress, tension, and sensory reactions in the human body, while the response to APS remains stable compared with stimulation of other parts of the body.
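Absolute and relative band-power measures of the kind reported here (AHBP, RHBP, AGP, RGP) can be computed from a power spectral density estimate. The sketch below uses synthetic data; the band edges (20-30 Hz for high beta, 30-45 Hz for gamma) and sampling rate are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import welch

def band_powers(signal, fs, bands):
    """Absolute and relative power per frequency band, from a Welch PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    df = freqs[1] - freqs[0]
    total = psd.sum() * df                 # total power across the spectrum
    out = {}
    for name, (lo, hi) in bands.items():
        absolute = psd[(freqs >= lo) & (freqs < hi)].sum() * df
        out[name] = (absolute, absolute / total)   # (absolute, relative)
    return out

fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic "EEG": a strong 25 Hz (high-beta) component plus broadband noise.
eeg = np.sin(2 * np.pi * 25 * t) + 0.5 * rng.standard_normal(t.size)
bands = {"high_beta": (20, 30), "gamma": (30, 45)}   # assumed band edges
powers = band_powers(eeg, fs, bands)
# high-beta power dominates gamma power for this synthetic signal
```

Comparing such per-band values before, during, and after stimulation, per channel, is the kind of analysis the abstract describes.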

    The neurophysiological correlates of illusory hand ownership

    The rubber hand illusion (RHI) has been established as one of the most important tools in the quest for understanding body ownership. Such understanding may be vital to neuro-rehabilitative and neurosurgical therapies that aim to modulate this phenomenon. Numerous brain imaging and TMS studies indicate that a wide-ranging network of brain areas is associated with illusory hand ownership in the RHI. However, while we have a good idea of where neural activity related to the RHI occurs, the question of how these networks interact on a temporal basis is still rather unexplored, as the few EEG studies that have investigated this question have relied on problematic stimulation methods or have failed to induce a strong sense of illusion in participants. Avoiding these limitations, the experiments in this thesis provide insights into the temporal dynamics of body ownership in the brain. Experiment One (presented in Chapter Three) focussed on establishing that the purpose-built, automated setup induced the RHI reliably, as measured by proprioceptive drift measurements and questionnaire ratings. The evoked visual and tactile responses elicited by the setup were identified, and the timing and intensity of illusory hand ownership were found to be comparable to the existing literature. The results of this experiment provided guidance regarding necessary adjustments to the RHI setup for the following experiments in order to avoid confounds induced by avoidable differences between conditions. Experiment Two (presented in Chapter Four) used a setup adjusted according to the findings of Experiment One and recorded evoked responses and oscillatory responses in participants who felt the rubber hand illusion. A combination of experimental conditions was applied to rule out confounds of attention and body-stimulus position. In addition, two control conditions were applied to reveal the neural correlates of illusory hand ownership.
The experiment revealed a reduction of alpha and beta power as well as an attenuation of evoked responses around 330 ms over central electrodes associated with illusory hand ownership. The results also indicate that body-stimulus processing and illusion processing, as measured by evoked potentials, might emanate from the same cortical network. Experiment Three (presented in Chapter Four) tested whether the findings of the second experiment with regard to illusion effects were robust against changes in stimulus duration. The reduction in alpha and beta power and the attenuation of evoked responses at 330 ms were found to be robust against changes in stimulus duration. Together with the results from Experiment Two, these findings provide the first EEG marker of illusion-related activity in the RHI induced by an automated setup with varying stimulus lengths. Experiment Four (presented in Chapter Five) investigated whether the neural correlates identified in Experiments Two and Three were indeed related to the feeling of illusory hand ownership in the RHI and not to a mere remapping of visual receptive fields. To test this, evoked and oscillatory responses were recorded during the somatic rubber hand illusion, a non-visual variant of the RHI. The somatic rubber hand illusion was found to be associated with an attenuation around 330 ms post-stimulus on central electrodes, similar to the classic RHI in Experiments Two and Three. This indicated that this illusion effect in evoked responses was not related to a remapping of visual receptive fields as a result of the RHI but to the neurophysiological processes of the RHI itself. To summarise, the results of the experiments presented in this thesis indicate that an attenuation at 330 ms in evoked potentials is associated with illusory hand ownership in both the classic RHI and the somatic RHI. Further, attenuation in alpha and beta band power is associated with illusory hand ownership in the classic RHI.
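The evoked-response attenuation described in this abstract rests on averaging event-locked epochs into an ERP and measuring amplitude in a window around 330 ms. Below is a minimal sketch with synthetic trials; the trial counts, noise levels, and the size of the attenuation are invented for illustration and do not reproduce the thesis data.

```python
import numpy as np

def erp(epochs):
    """Average event-locked epochs (trials x samples) into an ERP."""
    return epochs.mean(axis=0)

def window_amplitude(erp_wave, times, center=0.330, half_width=0.020):
    """Mean ERP amplitude in a window around `center` seconds."""
    mask = (times >= center - half_width) & (times <= center + half_width)
    return erp_wave[mask].mean()

fs = 500
times = np.arange(0, 0.6, 1 / fs)
rng = np.random.default_rng(2)
# Synthetic trials: a Gaussian deflection at 330 ms, attenuated in the
# "illusion" condition, plus independent trial noise.
component = np.exp(-((times - 0.330) ** 2) / (2 * 0.01 ** 2))
control = 1.0 * component + rng.standard_normal((60, times.size)) * 0.3
illusion = 0.6 * component + rng.standard_normal((60, times.size)) * 0.3

amp_control = window_amplitude(erp(control), times)
amp_illusion = window_amplitude(erp(illusion), times)
# amp_illusion < amp_control: the attenuation effect at ~330 ms
```

Averaging over trials suppresses the noise by roughly the square root of the trial count, which is what makes the windowed amplitude difference detectable.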

    Basic prediction mechanisms as a precursor for schizophrenia studies

    Traditionally, early visual cortex (V1-3) was thought of as merely a relay centre for feedforward retinal input, providing entry to the cortical visual processing stream. However, in addition to feedforward retinal input, V1 receives a large amount of intracortical information through feedback and lateral connections. Human visual perception is constructed by combining feedforward inputs with these feedback and lateral contributions. Feedback connections allow the visual cortical response to feedforward information to be affected by expectation, knowledge, and context, even at the level of early visual cortex. In Chapter 1 we discuss the feedforward and feedback visual processing streams. We consider historical philosophical and scientific propositions about constructive vision. We introduce modern theories of constructive vision, which suggest that vision is an active process that aims to infer or predict the cause of sensory inputs. We discuss how V1 therefore represents not only retinal input but also high-level effects related to constructive predictive perception. Visual illusions are a ‘side effect’ of constructive and inferential visual perception. For the vast majority of stimulus inputs, integration with context and knowledge facilitates clearer, more veridical perception. In illusions, these constructive mechanisms produce incorrect percepts. Illusory effects can be observed in early visual cortex, even when there is no change in the feedforward visual input. We suggest that illusions therefore provide us with a tool to probe feedforward and feedback integration, as they exploit the difference between retinal stimulation and resulting perception. Thus, illusions allow us to see the changes in activation and perception induced only by feedback, without changes in feedforward input. We discuss a few specific examples of illusion generation through feedback and the accompanying effects on V1 processing.
In schizophrenia, the integration of feedback and feedforward information is thought to be dysfunctional, with unbalanced contributions of the two sources. This is evidenced by disrupted contextual binding in visual perception and corresponding deficits in contextual illusion perception. We propose that illusions can provide a window into constructive and inferential visual perception in schizophrenia. Use of illusion paradigms could help elucidate the deficits existing within feedback and feedforward integration. If we can establish clear effects of illusory feedback to V1 in a typical population, we can apply this knowledge to clinical subjects to observe the differences in feedback and feedforward information. Chapter 2 describes a behavioural study of the rubber hand illusion. We probe how multimodal illusory experience arises under varying reliabilities of visuotactile feedforward input. We recorded Likert ratings of illusion experience from subjects, after their hidden hand was stimulated either synchronously or asynchronously with a visible rubber hand (200, 300, 400, or 600 ms visuotactile asynchrony). We used two groups, assessed by a questionnaire measuring a subject’s risk of developing schizophrenia: moderate/high scorers and a control group of zero-scorers. We therefore consider how schizotypal symptoms contribute to rubber hand illusory experience and interact with visuotactile reliability. Our results reveal that the impact of feedforward information on the higher-level illusory body schema is modulated by its reliability. Less reliable feedforward inputs (increasing asynchrony) reduce illusion perception. Our data suggest that some illusions may not be affected along a spectrum of schizotypal traits but only in the full schizophrenic disorder, as we found no effect of group on illusion perception. In Chapter 3 we present an fMRI investigation of the rubber hand illusion in typical participants.
Cortical feedback allows information about other modalities and about cognitive states to be represented at the level of V1. Using a multimodal illusion, we investigated whether crossmodal and illusory states could be represented in early visual cortex in the absence of differential visual input. We found increased BOLD activity in motion area V5 and global V1 when the feedforward tactile information and the illusory outcome were incoherent (for example when the subject was experiencing the illusion during asynchronous stimulation). This is suggestive of increased predictive error, supporting predictive coding models of cognitive function. Additionally, we reveal that early visual cortex contains pattern representations specific to the illusory state, irrespective of tactile stimulation and under identical feedforward visual input. In Chapter 4 we use the motion-induced blindness illusion to demonstrate that feedback modulates stimulus representations in V1 during illusory disappearance. We recorded fMRI data from subjects viewing a 2D cross array rotating around a central axis, passing over an oriented Gabor patch target (45°/ 135°). We attempted to decode the target orientation from V1 when the target was either visible or invisible to subjects. Target information could be decoded during target visibility but not during motion-induced blindness. This demonstrates that the target representation in V1 is distorted or destroyed when the target is perceptually invisible. This illusion therefore has effects not only at higher cortical levels, as previously shown, but also in early sensory areas. The representation of the stimulus in V1 is related to perceptual awareness. Importantly, Chapter 4 demonstrated that intracortical processing can disturb constant feedforward information and overwrite feedforward representations. 
We suggest that the distortion observed occurs through feedback from V5 about the cross array in motion, overwriting feedforward orientation information. The flashed face distortion (FFD) illusion is a relatively newly discovered illusion in which quickly presented faces become monstrously distorted. The neural underpinnings of the illusion remain unclear; however, it has been hypothesised to be a face-specific effect. In Chapter 5 we challenged this account by exploiting two hallmarks of face-specific processing: the other-race effect and left visual field superiority. In two experiments, two ethnic groups of subjects viewed faces presented bilaterally in the visual periphery. We varied the race of the faces presented (same or different than the subject), the visual field that the faces were presented in, and the duration of successive presentations (250, 500, 750, or 1000 ms per face before replacement). We found that perceived distortion was not affected by stimulus race, visual field, or duration of successive presentations (measured by forced choice in Experiment 1 and Likert scale in Experiment 2). We therefore provide convincing evidence that FFD is not face-specific and instead suggest that it is an object-general effect created by comparisons between successive stimuli. These comparisons are underpinned by a fed-back higher-level model which dictates that objects cannot immediately replace one another in the same retinotopic space without movement. In Chapter 6 we unify these findings. We discuss how our data show fed-back effects on perception that produce visual illusions; effects which cannot be explained through purely feedforward activity processing. We deliberate how lateral connections and attention effects may contribute to our results. We describe known neural mechanisms which allow for the integration of feedback and feedforward information.
We discuss how this integration allows V1 to represent the content of visual awareness, including during some of the illusions presented in this thesis. We suggest that a unifying theory of brain computation, Predictive Coding, may explain why feedback exerts top-down effects on feedforward processing. Lastly, we discuss how our findings, and others that demonstrate feedback and prediction effects, could help develop the study and understanding of schizophrenia, including our understanding of the underlying neurological pathologies.
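The Chapter 4 analysis described above decodes target orientation (45° vs 135°) from V1 voxel patterns. The sketch below illustrates that kind of decoding with a simple correlation-based nearest-centroid classifier on synthetic patterns; it is not the thesis's actual classifier, data, or preprocessing, and all voxel counts and noise levels are invented.

```python
import numpy as np

def nearest_centroid_decode(train_x, train_y, test_x):
    """Classify test patterns by correlation with class-mean training patterns."""
    classes = np.unique(train_y)
    centroids = np.array([train_x[train_y == c].mean(axis=0) for c in classes])
    preds = []
    for pattern in test_x:
        # pick the class whose mean pattern correlates best with this trial
        r = [np.corrcoef(pattern, c)[0, 1] for c in centroids]
        preds.append(classes[int(np.argmax(r))])
    return np.array(preds)

rng = np.random.default_rng(3)
n_voxels = 100
# Hypothetical V1 voxel biases for 45° vs 135° targets, plus scan noise.
bias_45, bias_135 = rng.standard_normal((2, n_voxels))

def make_trials(bias, n):
    return bias + rng.standard_normal((n, n_voxels)) * 2.0

x = np.vstack([make_trials(bias_45, 40), make_trials(bias_135, 40)])
y = np.array([45] * 40 + [135] * 40)
train, test = np.r_[0:30, 40:70], np.r_[30:40, 70:80]
preds = nearest_centroid_decode(x[train], y[train], x[test])
accuracy = (preds == y[test]).mean()
# accuracy sits well above the 0.5 chance level when a pattern signal exists
```

When the target representation is distorted or destroyed, as during motion-induced blindness, accuracy in such an analysis falls back to chance.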

    The Neural Correlates of Bodily Self-Consciousness in Virtual Worlds

    Bodily Self-Consciousness (BSC) is the cumulative integration of multiple sensory modalities that contribute to our sense of self. Sensory modalities, which include proprioception, the vestibular sense, vision, and touch, are updated dynamically to map the specific, local representation of ourselves in space. BSC is closely associated with bottom-up and top-down aspects of consciousness. Recently, virtual- and augmented-reality technologies have been used to explore perceptions of BSC. These recent achievements are partly attributed to advances in modern technology, and partly due to the rise of virtual and augmented reality markets. Virtual reality head-mounted displays can alter aspects of perception and consciousness unlike ever before. Consequently, many strides have been made regarding BSC research. Previous research suggests that BSC results from the perceptions of embodiment (i.e., the feeling of ownership towards a real or virtual extremity) and presence (i.e., feeling physically located in a real or virtual space). Though physiological mechanisms serving embodiment and presence in the real world have been proposed by others, how these perceptual experiences interact and whether they can be dissociated is still poorly understood. Additionally, less is known about the physiological mechanisms underlying the perception of presence and embodiment in virtual environments. Therefore, five experiments were conducted to examine the perceptions of embodiment and presence in virtual environments to determine which physiological mechanisms support these perceptions. These studies compared performance between normal and altered embodiment/presence conditions. Results from a novel experimental paradigm using virtual reality (Experiment 4) are consistent with studies in the literature that reported synchronous sensorimotor feedback corresponded with greater strength of the embodiment illusion.
In Experiment 4, participants showed significantly faster reaction times and better accuracy in correlated feedback conditions compared to asynchronous feedback conditions. Reaction times were also significantly faster, and accuracy was higher, for conditions where participants experienced the game from a first- versus third-person perspective. Functional magnetic resonance imaging (fMRI) data from Experiment 5 revealed that many frontoparietal networks contribute to the perception of embodiment, including premotor cortex (PMC) and intraparietal sulcus (IPS). fMRI data also revealed that activity in temporoparietal networks, including the temporoparietal junction and right precuneus, corresponded with manipulations thought to affect the perception of presence. Furthermore, the data suggest that networks associated with embodiment and presence overlap, and that brain areas that support perception may be predicated upon those that support embodiment. The results of these experiments offer further clues into the psychophysiological mechanisms underlying BSC.

    Sensory integration through the scope of body ownership

    Sensory integration is the process by which the brain combines distinct sensory modalities, such that the merged information can be efficiently used to interact with the environment. Body ownership is an example of a subjective experience that emerges through sensory integration. The mechanisms of sensory integration are not yet fully understood. By employing illusions such as the body ownership illusion, where a person falsely perceives an artificial limb as part of their body, brain processes governing sensory integration can be investigated. In this PhD project, a virtual reality platform capable of eliciting a body ownership illusion via accurately timed visuo-tactile stimulation was developed and used as a tool for studying sensory integration. A threat perception experiment and an experiment inducing visuo-tactile stimulation with temporal delay were conducted using this platform. Biophysical and behavioural results from this study showed that threat perception and body ownership are not necessarily correlated, but can be viewed as parallel processes within the context of embodiment, observable in distinct neural correlates of brain activity. Based on the results from these studies, it is proposed that the experience of body ownership is not an all-or-nothing, binary experience, but instead can be considered a graded experience with multiple levels.

    Visual-somatosensory interactions in mental representations of the body and the face

    The body is represented in the brain at levels that incorporate multisensory information. This thesis focused on interactions between vision and cutaneous sensations (i.e., touch and pain). Experiment 1 revealed that there are partially dissociable pathways for visual enhancement of touch (VET) depending upon whether one sees one’s own body or the body of another person. This indicates that VET, a seemingly low-level effect on spatial tactile acuity, is actually sensitive to body identity. Experiments 2-4 explored the effect of viewing one’s own body on pain perception. They demonstrated that viewing the body biases pain intensity judgments irrespective of actual stimulus intensity, and, more importantly, reduces the discriminative capacities of the nociceptive pathway encoding noxious stimulus intensity. The latter effect only occurs if the pain-inducing event itself is not visible, suggesting that viewing the body alone and viewing a stimulus event on the body have distinct effects on cutaneous sensations. Experiment 5 replicated an enhancement of visual remapping of touch (VRT) when viewing fearful human faces being touched, and further demonstrated that VRT does not occur for observed touch on non-human faces, even fearful ones. This suggests that the facial expressions of non-human animals may not be simulated within the somatosensory system of the human observer in the same way that the facial expressions of other humans are. Finally, Experiment 6 examined the enfacement illusion, in which synchronous visuo-tactile inputs cause another’s face to be assimilated into the mental self-face representation. The strength of enfacement was not affected by the other’s facial expression, supporting an asymmetric relationship between processing of facial identity and facial expressions.
Together, these studies indicate that multisensory representations of the body in the brain link low-level perceptual processes with the perception of emotional cues and body/face identity, and interact in complex ways depending upon contextual factors.

    Peripersonal space representation in the first year of life: a behavioural and electroencephalographic investigation of the perception of unimodal and multimodal events taking place in the space surrounding the body

    In my PhD research project, I wanted to investigate infants’ representation of the peripersonal space, the portion of environment between the self and the others. In the last three decades, research has provided evidence on newborns’ and infants’ perception of their own bodies and of other individuals, whereas few studies have investigated infants’ perception of the portion of space where they can interact with both others and objects, namely the peripersonal space. Considering the importance of the peripersonal space, especially in light of its defensive and interactive functions, I decided to investigate the development of its representation, focusing on two aspects. On the one hand, I wanted to study how newborns and infants processed the space around them: whether they differentiated between near and far space, possibly perceiving and integrating depth cues across sensory modalities, and when and how they started to respond to different movements occurring in the space surrounding their bodies. On the other hand, I was interested in understanding whether, already at birth, the peripersonal space could be considered a delimited portion of space with special characteristics and, relatedly, whether its boundaries could be determined. To answer my first question, I investigated newborns’ and infants’ looking behaviour in response to visual and audio-visual stimuli depicting different trajectories taking place in the space immediately surrounding their body. Taken together, the results of these studies demonstrated that humans show, from the earliest stages of their development, a rudimentary processing of the space surrounding them. Newborns seemed, in fact, to already differentiate the space around them, through an efficient discrimination of different moving trajectories and a visual preference for those directed towards their own body, possibly due to their higher adaptive relevance.
They also seemed to integrate multimodal, audio-visual information about stimuli moving in the near space, showing a facilitated processing of congruent audio-visual approaching stimuli. Furthermore, the results of these studies could help understand the development of the integration of multimodal stimuli with an adaptive valence during infancy. When newborns and infants were presented with unimodal, visual stimuli, they all directed their visual preferences to the stimuli moving towards their bodies. Conversely, their pattern of looking times was more complex when they were presented with congruent and incongruent audio-visual stimuli. Right after birth, infants showed a spontaneous visual preference for congruent audio-visual stimuli, which was challenged by a similarly strong visual preference for adaptively important visual stimuli moving towards their bodies. The looking behaviours of 5-month-old infants, instead, seemed to be driven only by a spontaneous preference for multimodal congruent stimuli, i.e. those depicting motion along the same trajectory, irrespective of the adaptive value of the information conveyed by either of the two sensory components of the stimulus. Nine-month-old infants, finally, seemed to flexibly integrate multisensory integration principles with the necessity of directing their attention to ethologically salient stimuli, as shown by the fact that their visual preference for unexpected, incongruent audio-visual stimuli was challenged by the simultaneous presence of adaptively relevant stimuli. Similarly to what happened with newborns, presenting 9-month-old infants with the two categories of preferred stimuli simultaneously led to the absence of a visual preference. Within my project, I also investigated the electroencephalographic correlates of the processing of unimodal (visual and auditory) stimuli depicting different trajectories in a sample of 5-month-old infants.
The results seemed to provide evidence in support of the role of the primary sensory cortices in the processing of crossmodal stimuli. Furthermore, they seemed to support the possibility that infants’ brains could allocate, already during the earliest stages of processing, different amounts of attention to stimuli with different adaptive valence. Two further studies addressed my second question, namely whether already at birth the peripersonal space could be considered a delimited portion of space with special characteristics and whether its boundaries could be determined. In these studies I measured newborns’ saccadic reaction times (RTs) to tactile stimuli presented simultaneously with a sound perceived at different distances from their body. The results showed that newborns’ RTs were modulated by the perceived position of the sound, and that their modulation was very similar to that shown by adults, suggesting that the boundary of newborns’ peripersonal space could be identified as the perceived sound position at which the drop in RTs occurred. This suggested that at birth the space immediately surrounding the body is already invested with a special salience and characterised by a more efficient integration of multimodal stimuli. As a consequence, it might be considered a rudimentary representation of the peripersonal space, possibly serving, as a working space representation, the early interactions between newly born humans and their environment. Overall, these findings provide a first understanding of how humans start to process the space surrounding them, which, importantly, is the space linking them with others and the space where their first interactions will take place.
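The boundary estimate described here rests on locating the sound distance at which saccadic RTs change abruptly. A minimal sketch of that idea follows; the distances and RT values are invented for illustration and are not the study's data, and the steepest-change rule is one simple choice among several possible boundary estimators.

```python
import numpy as np

def peripersonal_boundary(distances, mean_rts):
    """Estimate the peripersonal-space boundary as the midpoint of the
    adjacent distance pair with the steepest RT change (RTs are fast
    near the body and slow beyond the boundary)."""
    steps = np.diff(mean_rts)            # RT change between adjacent distances
    i = int(np.argmax(steps))            # steepest rise with distance
    return (distances[i] + distances[i + 1]) / 2

# Hypothetical mean saccadic RTs (ms) to tactile stimuli paired with sounds
# at increasing perceived distances (cm); values are assumptions.
distances = np.array([20, 40, 60, 80, 100, 120])
mean_rts = np.array([310, 315, 318, 365, 370, 372])
boundary = peripersonal_boundary(distances, mean_rts)
# the estimate lands between 60 and 80 cm, where the RT jump occurs
```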

    Investigating the Cognitive and Neural Mechanisms underlying Multisensory Perceptual Decision-Making in Humans

    On a day-to-day basis, we encounter situations that require the formation of decisions based on ambiguous and often incomplete sensory information. Perceptual decision-making defines the process by which sensory information is consolidated and accumulated towards one of multiple possible choice alternatives, which inform our behavioural responses. Perceptual decision-making can be understood both theoretically and neurologically as a process of stochastic sensory evidence accumulation towards some choice threshold. Once this threshold is exceeded, a response is facilitated, informing the overt actions undertaken. Considerable progress has been made towards understanding the cognitive and neural mechanisms underlying perceptual decision-making. Analyses of reaction times (RTs, typically on the order of milliseconds) and choice accuracy, which reflect decision-making behaviour, can be coupled with neuroimaging methodologies, notably electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), to identify spatiotemporal components representative of the neural signatures corresponding to such accumulation-to-bound decision formation on a single-trial basis. Taken together, these provide us with an experimental framework conceptualising the key computations underlying perceptual decision-making. Despite this, relatively little remains known about the enhancements or alterations to the process of perceptual decision-making arising from the integration of information across multiple sensory modalities. Consolidating the available sensory evidence requires processing information presented in more than one sensory modality, often near-simultaneously, to exploit the salient percepts for what we term multisensory (perceptual) decision-making.
Specifically, multisensory integration must be considered within the perceptual decision-making framework in order to understand how information becomes stochastically accumulated to inform overt sensory-motor choice behaviours. Recently, substantial progress in research has been made through the application of behaviourally-informed, and/or neurally-informed, modelling approaches to benefit our understanding of multisensory decision-making. In particular, these approaches fit a number of model parameters to behavioural and/or neuroimaging datasets, in order to (a) dissect the constituent internal cognitive and neural processes underlying perceptual decision-making with both multisensory and unisensory information, and (b) mechanistically infer how multisensory enhancements arise from the integration of information across multiple sensory modalities to benefit perceptual decision formation. Despite this, the spatiotemporal locus of the neural and cognitive underpinnings of enhancements from multisensory integration remains subject to debate. In particular, our understanding of which brain regions are predictive of such enhancements, where they arise, and how they influence decision-making behaviours requires further exploration. The current thesis outlines empirical findings from three studies aimed at providing a more complete characterisation of multisensory perceptual decision-making, utilising EEG and accumulation-to-bound modelling methodologies to incorporate both behaviourally-informed and neurally-informed modelling approaches, investigating where, when, and how perceptual improvements arise during multisensory perceptual decision-making. 
In particular, these modelling approaches probed the modulatory influence of three factors on the core cognitive and neural mechanisms underlying observable benefits to multisensory decision formation: unisensory-formulated cross-modal associations (Chapter 2), natural ageing (Chapter 3), and perceptual learning (Chapter 4). Chapter 2 outlines secondary analyses, utilising a neurally-informed modelling approach, that characterise the spatiotemporal dynamics of neural activity underlying auditory pitch-visual size cross-modal associations; in particular, how unisensory auditory pitch-driven associations benefit perceptual decision formation was functionally probed. EEG measurements were recorded from participants performing an Implicit Association Test (IAT), a two-alternative forced-choice (2AFC) paradigm that presents one unisensory stimulus feature per trial for participants to categorise, while manipulating the stimulus feature-response key mappings of auditory pitch-visual size cross-modal associations. Because associations are probed with unisensory stimuli alone, this paradigm overcomes the issue of mixed selectivity in recorded neural activity prevalent in previous cross-modal associative research, in which multisensory stimuli were presented near-simultaneously. Categorisations were faster (i.e., lower RTs) when stimulus feature-response key mappings were associatively congruent, compared to associatively incongruent, between the two associative counterparts, demonstrating a behavioural benefit to perceptual decision formation. Multivariate Linear Discriminant Analysis (LDA) was used to characterise the spatiotemporal dynamics of the EEG activity underpinning IAT performance, identifying two EEG components that discriminated neural activity underlying the benefits of associative congruency of stimulus feature-response key mappings. 
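The multivariate discriminant step can be sketched as follows: learn a weighting over EEG channels that best separates the two trial classes, then read out a single-trial discriminant score per trial. The data below are simulated (random labels, a made-up class-dependent scalp topography, Gaussian noise), not the thesis recordings, and the Fisher discriminant shown here is a minimal stand-in for the full windowed LDA analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 64
labels = rng.integers(0, 2, n_trials)        # 0 = incongruent, 1 = congruent (simulated)
topography = rng.normal(0, 1, n_channels)    # hypothetical class-dependent scalp pattern
eeg = np.outer(labels, topography) + rng.normal(0, 2.0, (n_trials, n_channels))

# Fisher linear discriminant: w = pooled_covariance^{-1} (mu_1 - mu_0)
x0, x1 = eeg[labels == 0], eeg[labels == 1]
pooled = (np.cov(x0.T) * (len(x0) - 1) + np.cov(x1.T) * (len(x1) - 1)) / (n_trials - 2)
w = np.linalg.solve(pooled, x1.mean(axis=0) - x0.mean(axis=0))

scores = eeg @ w                             # single-trial discriminant scores
threshold = (scores[labels == 1].mean() + scores[labels == 0].mean()) / 2
accuracy = float(np.mean((scores > threshold) == (labels == 1)))
```

The per-trial `scores` are the quantity that a neurally-informed model can use as a trial-by-trial regressor, linking EEG component amplitude to decision parameters.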
Application of a neurally-informed Hierarchical Drift Diffusion Model (HDDM) demonstrated both early sensory processing effects, with incongruent stimulus feature-response key mappings increasing the duration of non-decisional processes, and late post-sensory alterations to decision dynamics, with congruent mappings decreasing the quantity of evidence required to commit to a decision. Hence, trial-by-trial variability in perceptual decision formation arising from unisensory-formulated cross-modal associations could be predicted by neural activity within our neurally-informed modelling approach. Next, Chapter 3 outlines cognitive research investigating age-related impacts on the behavioural indices of multisensory perceptual decision-making (i.e., RTs and choice accuracy). Natural ageing has been shown to affect multisensory perceptual decision-making dynamics in diverse ways, yet the constituent cognitive processes affected remain unclear. Specifically, a mechanistic insight reconciling why older adults may exhibit preserved multisensory integrative benefits yet display generalised perceptual deficits, relative to younger adults, remains elusive. To address this limitation, 212 participants performed an online variant of a well-established audiovisual object categorisation paradigm, in which age-related differences in RTs and choice accuracy (binary responses) between audiovisual (AV), visual (V), and auditory (A) trial types could be assessed between Younger Adults (YAs; Mean ± Standard Deviation = 27.95 ± 5.82 years) and Older Adults (OAs; Mean ± Standard Deviation = 60.96 ± 10.35 years). HDDM was fitted to participants’ RTs and binary responses in order to probe age-related impacts on the latent processes underlying multisensory decision formation. 
Behaviourally, although OAs were typically slower (i.e., ↑ RTs) and less accurate (i.e., ↓ choice accuracy) than YAs across all sensory trial types, they exhibited greater RT differences between AV and V trials (i.e., ↑ AV-V RT difference), with no corresponding effects on choice accuracy, implicating preserved benefits of multisensory integration towards perceptual decision formation. HDDM provided parsimonious fits characterising these behavioural discrepancies between YAs and OAs. Notably, we found slower rates of sensory evidence accumulation (i.e., ↓ drift rates) for OAs across all sensory trial types, coupled with (1) higher rates of sensory evidence accumulation (i.e., ↑ drift rates) for AV versus V trial types irrespective of stimulus difficulty, (2) increased response caution (i.e., ↑ decision boundaries) for AV versus V trial types, and (3) decreased non-decisional processing duration (i.e., ↓ non-decision times) for AV versus V trial types for stimuli of increased difficulty. Our findings suggest that older adults trade off multisensory decision-making speed for accuracy to preserve enhancements towards perceptual decision formation relative to younger adults. Hence, they display an increased reliance on integrating multimodal information, consistent with the principle of inverse effectiveness, as a compensatory mechanism for generalised cognitive slowing when processing unisensory information. Overall, our findings demonstrate how computational modelling can reconcile contrasting hypotheses of age-related changes in the processes underlying multisensory perceptual decision-making behaviour. Finally, Chapter 4 outlines research probing the influence of perceptual learning on multisensory perceptual decision-making. 
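The inferred speed-accuracy trade-off can be illustrated with the standard analytic predictions of a drift-diffusion process with symmetric boundaries at ±a, drift v, and noise σ: accuracy is 1 / (1 + exp(-2av/σ²)) and mean RT is t₀ + (a/v)·tanh(av/σ²). Widening the boundary (greater response caution) raises accuracy but slows responses. The parameter values below are invented for illustration and are not fitted estimates from the thesis.

```python
import math

def ddm_predictions(v, a, sigma=1.0, t0=0.3):
    """Analytic accuracy and mean RT for a symmetric-boundary diffusion model."""
    k = a * v / sigma ** 2
    p_correct = 1.0 / (1.0 + math.exp(-2.0 * k))
    mean_rt = t0 + (a / v) * math.tanh(k)
    return p_correct, mean_rt

# Raising the decision boundary (more caution) at fixed drift:
p_low, rt_low = ddm_predictions(v=0.8, a=0.8)    # narrow boundary: fast, less accurate
p_high, rt_high = ddm_predictions(v=0.8, a=1.4)  # wide boundary: slow, more accurate
```

Here `p_high > p_low` while `rt_high > rt_low`, mirroring the pattern attributed to OAs: increased response caution preserves accuracy at the cost of speed.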
Views of unisensory perceptual learning imply that improvements in perceptual sensitivity may be due to enhancements in early sensory representations and/or modulations of post-sensory decision dynamics. We sought to assess whether these views could account for improvements in perceptual sensitivity for multisensory stimuli, or even for exacerbations of multisensory enhancements towards decision formation, by consolidating the spatiotemporal locus of where and when in the brain they may be observed. We recorded EEG activity from participants who completed the same audiovisual object categorisation paradigm (as outlined in Chapter 3) over three consecutive days. We used single-trial multivariate LDA to characterise the spatiotemporal trajectory of the decision dynamics underlying any observed multisensory benefits, both (a) within and (b) between visual, auditory, and audiovisual trial types. While significant decreases in RTs and increases in choice accuracy were found over testing days, we did not find any significant effects of perceptual learning on multisensory or unisensory perceptual decision formation. Similarly, EEG analyses did not reveal any neural components indicative of early or late modulatory effects of perceptual learning on brain activity, which we attribute to (1) the long duration of stimulus presentations (300 ms), and (2) a lack of sufficient statistical power for our LDA classifier to discriminate face-versus-car trial types. We end this chapter with considerations for discerning multisensory benefits towards perceptual decision formation, and recommendations for altering our experimental design to observe the effects of perceptual learning as a decision neuromodulator. These findings contribute to a literature justifying the increasing relevance of behaviourally-informed and/or neurally-informed modelling approaches for investigating multisensory perceptual decision-making. 
In particular, a discussion of the cognitive and/or neural mechanisms that can be attributed to the benefits of multisensory integration towards perceptual decision formation, as well as the modulatory impact of the decision modulators in question, contributes to a theoretical reconciliation that multisensory integrative benefits are not tied to specific spatiotemporal neural dynamics or cognitive processes.

    Exploring the electrophysiological responses to sudden sensory events

    Get PDF
    Living in rapidly changing and potentially dangerous environments has shaped animal nervous systems toward high sensitivity to sudden and intense sensory events - often signalling threats or affordances requiring swift motor reactions. Unsurprisingly, such events can elicit both rapid behavioural responses (e.g. the defensive eye-blink) and one of the largest electrocortical responses recordable from the scalp of several animals: the widespread Vertex Potential (VP). While generally assumed to reflect sensory-specific processing, growing evidence suggests that the VP instead largely reflects supramodal neural activity, sensitive to the behavioural relevance of the eliciting stimulus. In this thesis, I investigate the relationship between sudden events and the brain responses and behaviours they elicit. In Chapters 1-3, I give a general introduction to the topic. In Chapter 4, I dissect the sensitivity of the VP to stimulus intensity - showing that its amplitude is sensitive only to the relative increase of intensity, and not the absolute intensity. In Chapter 5, I show that both increases and decreases of auditory and somatosensory stimulus intensity elicit the same supramodal VP, demonstrating that the VP is sensitive to any sufficiently abrupt sensory change, regardless of its direction or sensory modality. In Chapter 6, I observe strong correlations between the magnitudes of the VP and the eye-blink elicited by somatosensory stimuli (hand-blink reflex; HBR), demonstrating a tight relationship between cortical activity and behaviour elicited by sudden stimuli. In Chapter 7, I explore this relationship further, showing that the HBR is sensitive to high-level environmental dynamics. In Chapter 8, I propose an account of the underlying neural substrate of the VP, consistent with my results and the literature, which elucidates the relationship between the VP and behaviour. 
I also detail future experiments that would use fMRI and intracranial recordings to test this hypothesis, building on the knowledge gained from this thesis.