
    Cortical Dynamics of Auditory-Visual Speech: A Forward Model of Multisensory Integration

    In noisy settings, seeing the interlocutor's face helps to disambiguate what is being said. For this to happen, the brain must integrate auditory and visual information. Three major problems are (1) bringing together separate sensory streams of information, (2) extracting auditory and visual speech information, and (3) identifying this information as a unified auditory-visual percept. In this dissertation, a new representational framework for auditory-visual (AV) speech integration is offered. The experimental work (psychophysics and electrophysiology (EEG)) suggests specific neural mechanisms for solving problems (1), (2), and (3) that are consistent with a (forward) 'analysis-by-synthesis' view of AV speech integration. In Chapter I, multisensory perception and integration are reviewed, and a unified conceptual framework is laid out as background for the study of AV speech integration. In Chapter II, psychophysical experiments testing the perception of desynchronized AV speech inputs demonstrate the existence of a ~250ms temporal window of integration for AV speech. In Chapter III, an EEG study shows that visual speech modulates the neural processing of auditory speech at early stages. Two functionally independent modulations are observed: (i) a ~250ms amplitude reduction of auditory evoked potentials (AEPs) and (ii) a systematic temporal facilitation of the same AEPs as a function of the saliency of the visual speech. In Chapter IV, an EEG study of desynchronized AV speech inputs shows that (i) fine-grained (gamma, ~25ms) and (ii) coarse-grained (theta, ~250ms) neural mechanisms simultaneously mediate the processing of AV speech. In Chapter V, a new illusory effect is reported, in which non-speech visual signals modify the perceptual quality of auditory objects. EEG results show very different patterns of activation from those observed in AV speech integration, and an MEG experiment is subsequently proposed to test hypotheses on the origins of these differences.
In Chapter VI, the 'analysis-by-synthesis' model of AV speech integration is contrasted with major speech theories. From a Cognitive Neuroscience perspective, the 'analysis-by-synthesis' model is argued to offer the most sensible representational system for AV speech integration. This thesis shows that AV speech integration results from both the statistical nature of stimulation and the inherent predictive capabilities of the nervous system.
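The ~250ms temporal window of integration reported in Chapter II can be illustrated with a minimal gating rule. This is a toy sketch, not the authors' model: only the window width comes from the abstract, and the function name and all-or-none gating are illustrative assumptions.

```python
def perceived_as_fused(audio_onset_ms: float, visual_onset_ms: float,
                       window_ms: float = 250.0) -> bool:
    """Toy gate: report AV fusion when the audio-visual asynchrony
    falls inside the ~250 ms temporal window of integration."""
    return abs(audio_onset_ms - visual_onset_ms) <= window_ms

# Desynchronized AV speech inputs at increasing asynchronies:
for asynchrony_ms in (0, 100, 200, 300, 400):
    print(asynchrony_ms, perceived_as_fused(0.0, asynchrony_ms))
```

In practice the window's edges are graded rather than sharp, but the hard threshold captures the basic idea that fusion survives moderate desynchronization.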

    The cognitive neuroscience of visual working memory

    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner’s (1957) seminal discoveries with amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally been fairly disjointed: human neuroimaging, electrophysiological, behavioural and animal lesion studies, investigating both the developing and the adult brain.

    Investigating the Cognitive and Neural Mechanisms underlying Multisensory Perceptual Decision-Making in Humans

    On a day-to-day basis, we encounter situations that require the formation of decisions based on ambiguous and often incomplete sensory information. Perceptual decision-making describes the process by which sensory information is consolidated and accumulated towards one of multiple possible choice alternatives, which inform our behavioural responses. Perceptual decision-making can be understood both theoretically and neurologically as a process of stochastic sensory evidence accumulation towards some choice threshold. Once this threshold is exceeded, a response is facilitated, informing the overt actions undertaken. Substantial progress has been made towards understanding the cognitive and neural mechanisms underlying perceptual decision-making. Analyses of reaction times (RTs, typically on the order of milliseconds) and choice accuracy, which together reflect decision-making behaviour, can be coupled with neuroimaging methodologies, notably electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI), to identify spatiotemporal components representative of the neural signatures corresponding to such accumulation-to-bound decision formation on a single-trial basis. Taken together, these provide us with an experimental framework conceptualising the key computations underlying perceptual decision-making. Despite this, relatively little is known about the enhancements or alterations to the process of perceptual decision-making that arise from the integration of information across multiple sensory modalities. Consolidating the available sensory evidence requires processing information presented in more than one sensory modality, often near-simultaneously, to exploit the salient percepts for what we term multisensory (perceptual) decision-making.
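The accumulation-to-bound account described above can be made concrete with a minimal drift diffusion simulation. This is a generic illustrative sketch of the mechanism, not the models fitted in the thesis; the function name and all parameter values are assumptions.

```python
import random

def simulate_ddm_trial(drift, boundary, non_decision_ms,
                       dt_ms=1.0, noise=1.0, rng=None):
    """Accumulate noisy sensory evidence until it crosses +/-boundary.

    drift is the mean evidence gained per second; noise scales within-trial
    variability. Returns (choice, reaction_time_ms), where choice is +1
    (upper bound) or -1 (lower bound), and the RT includes a fixed
    non-decision time for stimulus encoding and motor execution.
    """
    rng = rng or random.Random()
    evidence, elapsed_ms = 0.0, 0.0
    dt_s = dt_ms / 1000.0
    while abs(evidence) < boundary:
        evidence += drift * dt_s + noise * dt_s ** 0.5 * rng.gauss(0.0, 1.0)
        elapsed_ms += dt_ms
    return (1 if evidence > 0 else -1), elapsed_ms + non_decision_ms

choice, rt = simulate_ddm_trial(drift=2.0, boundary=1.0,
                                non_decision_ms=300, rng=random.Random(1))
print(choice, rt)
```

With a positive drift, most simulated trials terminate at the upper boundary, and RT distributions show the characteristic right skew of empirical choice RTs.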
Specifically, multisensory integration must be considered within the perceptual decision-making framework in order to understand how information becomes stochastically accumulated to inform overt sensory-motor choice behaviours. Recently, substantial progress has been made through the application of behaviourally-informed and/or neurally-informed modelling approaches to benefit our understanding of multisensory decision-making. In particular, these approaches fit a number of model parameters to behavioural and/or neuroimaging datasets, in order to (a) dissect the constituent internal cognitive and neural processes underlying perceptual decision-making with both multisensory and unisensory information, and (b) mechanistically infer how multisensory enhancements arise from the integration of information across multiple sensory modalities to benefit perceptual decision formation. Despite this, the spatiotemporal locus of the neural and cognitive underpinnings of enhancements from multisensory integration remains subject to debate. In particular, our understanding of which brain regions are predictive of such enhancements, where they arise, and how they influence decision-making behaviours requires further exploration. The current thesis outlines empirical findings from three studies aimed at providing a more complete characterisation of multisensory perceptual decision-making, utilising EEG and accumulation-to-bound modelling methodologies to incorporate both behaviourally-informed and neurally-informed modelling approaches, investigating where, when, and how perceptual improvements arise during multisensory perceptual decision-making.
Specifically, these modelling approaches sought to probe the modulatory influences exerted by three factors, namely unisensory formulated cross-modal associations (Chapter 2), natural ageing (Chapter 3), and perceptual learning (Chapter 4), on the cognitive and neural mechanisms underlying observable benefits towards multisensory decision formation. Chapter 2 outlines secondary analyses, utilising a neurally-informed modelling approach, characterising the spatiotemporal dynamics of neural activity underlying auditory pitch-visual size cross-modal associations. In particular, we functionally probed how unisensory auditory pitch-driven associations benefit perceptual decision formation. EEG measurements were recorded from participants during performance of an Implicit Association Test (IAT), a two-alternative forced-choice (2AFC) paradigm which presents one unisensory stimulus feature per trial for participants to categorise, but manipulates the stimulus feature-response key mappings of auditory pitch-visual size cross-modal associations. Because the associations are probed with unisensory stimuli alone, this design overcomes the issue of mixed selectivity in recorded neural activity prevalent in previous cross-modal associative research, in which multisensory stimuli were presented near-simultaneously. Categorisations were faster (i.e., lower RTs) when stimulus feature-response key mappings were associatively congruent, compared to associatively incongruent, between the two associative counterparts, demonstrating a behavioural benefit to perceptual decision formation. Multivariate Linear Discriminant Analysis (LDA) was used to characterise the spatiotemporal dynamics of the EEG activity underpinning IAT performance, identifying two EEG components that discriminated the neural activity underlying the benefits of associative congruency of stimulus feature-response key mappings.
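The multivariate LDA step can be illustrated with a minimal Fisher discriminant applied to synthetic "trials". This is a generic sketch of the technique named in the abstract, not the thesis pipeline; the feature dimensionality, class separation, and variable names are invented for illustration.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fisher linear discriminant: project onto w = S_pooled^-1 (mu1 - mu0).

    X0, X1 are (trials x features) arrays for two conditions (e.g.
    associatively congruent vs incongruent trials). Returns the projection
    weights and the midpoint decision threshold.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    pooled = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # A small ridge term keeps the solve stable for noisy covariance estimates.
    w = np.linalg.solve(pooled + 1e-6 * np.eye(pooled.shape[0]), mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0
    return w, threshold

# Synthetic two-condition data: condition 1 shifted by 1.5 on every feature.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(100, 4))
X1 = rng.normal(1.5, 1.0, size=(100, 4))
w, threshold = fisher_lda(X0, X1)
accuracy = (np.mean(X0 @ w <= threshold) + np.mean(X1 @ w > threshold)) / 2
print(round(accuracy, 2))
```

In single-trial EEG analyses, the same projection is typically learned within short time windows, so that discriminating components can be localised in time as well as across sensors.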
Application of a neurally-informed Hierarchical Drift Diffusion Model (HDDM) demonstrated early sensory processing benefits, with increases in the duration of non-decisional processes under incongruent stimulus feature-response key mappings, and late post-sensory alterations to decision dynamics, with congruent stimulus feature-response key mappings decreasing the quantity of evidence required to facilitate a decision. Hence, we found that the trial-by-trial variability in perceptual decision formation arising from unisensory facilitated cross-modal associations could be predicted by neural activity within our neurally-informed modelling approach. Next, Chapter 3 outlines cognitive research investigating age-related impacts on the behavioural indices of multisensory perceptual decision-making (i.e., RTs and choice accuracy). Natural ageing has been demonstrated to affect multisensory perceptual decision-making dynamics in diverse ways, but the constituent cognitive processes affected remain unclear. Specifically, a mechanistic insight reconciling why older adults may exhibit preserved multisensory integrative benefits, yet display generalised perceptual deficits, relative to younger adults, remains lacking. To address this limitation, 212 participants performed an online variant of a well-established audiovisual object categorisation paradigm, whereby age-related differences in RTs and choice accuracy (binary responses) between audiovisual (AV), visual (V), and auditory (A) trial types could be assessed between Younger Adults (YAs; Mean ± Standard Deviation = 27.95 ± 5.82 years) and Older Adults (OAs; Mean ± Standard Deviation = 60.96 ± 10.35 years). An HDDM was fitted to participants’ RTs and binary responses in order to probe age-related impacts on the latent processes underlying multisensory decision formation.
Behavioural results showed that whereas OAs were typically slower (i.e., ↑ RTs) and less accurate (i.e., ↓ choice accuracy) than YAs across all sensory trial types, they exhibited greater differences in RTs between AV and V trials (i.e., ↑ AV-V RT difference), with no significant effects on choice accuracy, implicating preserved benefits of multisensory integration towards perceptual decision formation. The HDDM provided parsimonious fits characterising these behavioural discrepancies between YAs and OAs. Notably, we found slower rates of sensory evidence accumulation (i.e., ↓ drift rates) for OAs across all sensory trial types, coupled with (1) relatively higher rates of sensory evidence accumulation (i.e., ↑ drift rates) for OAs on AV versus V trial types irrespective of stimulus difficulty, (2) increased response caution (i.e., ↑ decision boundaries) on AV versus V trial types, and (3) decreased non-decisional processing duration (i.e., ↓ non-decision times) on AV versus V trial types for stimuli of increased difficulty. Our findings suggest that older adults trade off multisensory decision-making speed for accuracy to preserve enhancements towards perceptual decision formation relative to younger adults. Hence, they display an increased reliance on integrating multimodal information, consistent with the principle of inverse effectiveness, as a compensatory mechanism for generalised cognitive slowing when processing unisensory information. Overall, our findings demonstrate how computational modelling can reconcile contrasting hypotheses of age-related changes in the processes underlying multisensory perceptual decision-making behaviour. Finally, Chapter 4 outlines research probing the influence exerted by perceptual learning on multisensory perceptual decision-making.
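The speed-accuracy trade-off attributed above to the boundary parameter can be illustrated by simulating evidence accumulation at two boundary settings. This is an illustrative sketch with invented parameter values, not the fitted HDDM from the thesis.

```python
import random

def ddm_batch(drift, boundary, n_trials, non_decision_ms=300.0,
              dt_ms=1.0, seed=0):
    """Simulate n_trials of noisy accumulation to +/-boundary and return
    (mean reaction time in ms, proportion of correct choices)."""
    rng = random.Random(seed)
    dt_s = dt_ms / 1000.0
    total_rt, n_correct = 0.0, 0
    for _ in range(n_trials):
        evidence, elapsed_ms = 0.0, 0.0
        while abs(evidence) < boundary:
            evidence += drift * dt_s + dt_s ** 0.5 * rng.gauss(0.0, 1.0)
            elapsed_ms += dt_ms
        total_rt += elapsed_ms + non_decision_ms
        n_correct += evidence > 0  # positive drift: upper bound is 'correct'
    return total_rt / n_trials, n_correct / n_trials

# A wider boundary (more response caution) costs time but buys accuracy.
rt_narrow, acc_narrow = ddm_batch(drift=1.5, boundary=0.6, n_trials=300)
rt_wide, acc_wide = ddm_batch(drift=1.5, boundary=1.2, n_trials=300)
print(rt_narrow < rt_wide, acc_narrow < acc_wide)
```

Raising the boundary while holding drift fixed lengthens mean RTs and raises accuracy, which is the signature pattern used to interpret increased response caution in older adults.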
Views of unisensory perceptual learning imply that improvements in perceptual sensitivity may be due to enhancements in early sensory representations and/or modulations to post-sensory decision dynamics. We sought to assess whether these views could account for improvements in perceptual sensitivity for multisensory stimuli, or even exacerbations of multisensory enhancements towards decision formation, by consolidating the spatiotemporal locus of where and when in the brain they may be observed. We recorded EEG activity from participants who completed the same audiovisual object categorisation paradigm (as outlined in Chapter 3) over three consecutive days. We used single-trial multivariate LDA to characterise the spatiotemporal trajectory of the decision dynamics underlying any observed multisensory benefits both (a) within and (b) between visual, auditory, and audiovisual trial types. While significant decreases in RTs and increases in choice accuracy were found over testing days, we did not find any significant effects of perceptual learning on multisensory or unisensory perceptual decision formation. Similarly, EEG analysis did not reveal any neural components indicative of early or late modulatory effects of perceptual learning on brain activity, which we attribute to (1) the long duration of stimulus presentations (300ms), and (2) a lack of sufficient statistical power for our LDA classifier to discriminate face-versus-car trial types. We end this chapter with considerations for discerning multisensory benefits towards perceptual decision formation, and recommendations for altering our experimental design to observe the effects of perceptual learning as a decision neuromodulator. These findings contribute to the literature justifying the increasing relevance of behaviourally-informed and/or neurally-informed modelling approaches for investigating multisensory perceptual decision-making.
In particular, a discussion of the underlying cognitive and/or neural mechanisms that can be attributed to the benefits of multisensory integration towards perceptual decision formation, as well as the modulatory impact of the decision modulators in question, can contribute to a theoretical reconciliation that multisensory integrative benefits are not tied to a single set of spatiotemporal neural dynamics or cognitive processes.

    State-Dependent Cortical Network Dynamics

    Neuropsychiatric illness represents a major health burden in the United States with a paucity of effective treatments. Many neuropsychiatric illnesses are network disorders, exhibiting aberrant organization of coordinated activity within and between brain areas. Cortical oscillations, arising from the synchronized activity of groups of neurons, are important in mediating both local and long-range communication in the brain and are particularly affected in neuropsychiatric diseases. A promising treatment approach for such network disorders entails ‘correcting’ abnormal oscillatory activity through non-invasive brain stimulation. However, we lack a clear understanding of the functional role of oscillatory activity in both health and disease. Thus, basic science and translational work is needed to elucidate the role of oscillatory activity and other network dynamics in neuronal processing and behavior. Organized activity in the brain occurs at many spatial and temporal scales, ranging from the millisecond duration of individual action potentials to the daily circadian rhythm. The studies comprising this dissertation focused on organization in cortex at the time scale of milliseconds, assessing local field potential and spiking activity, and contribute to understanding (1) the effects of non-invasive brain stimulation on behavioral responses, (2) network dynamics within and across cortical areas during different states, and (3) how oscillatory activity organizes spiking activity locally and long-range during sustained attention. Taken together, this work provides insight into the physiological organization of network dynamics and can provide the basis for future rational design of non-invasive brain stimulation treatments.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.