
    Multisensory Congruency as a Mechanism for Attentional Control over Perceptual Selection

    The neural mechanisms underlying attentional selection of competing neural signals for awareness remain an unresolved issue. We studied attentional selection using perceptually ambiguous stimuli in a novel multisensory paradigm that combined competing auditory and competing visual stimuli. We demonstrate that the ability to select, and attentively hold, one of the competing alternatives in either sensory modality is greatly enhanced when there is a matching cross-modal stimulus. Intriguingly, this multimodal enhancement of attentional selection seems to require a conscious act of attention, as passively experiencing the multisensory stimuli did not enhance control over the stimulus. We also demonstrate that congruent auditory or tactile information, and combined auditory–tactile information, aids attentional control over competing visual stimuli and vice versa. Our data suggest a functional role for recently found neurons that combine voluntarily initiated attentional functions across sensory modalities. We argue that these units provide a mechanism for structuring multisensory inputs that are then used to selectively modulate early (unimodal) cortical processing, boosting the gain of task-relevant features for willful control over perceptual awareness.

    The Malleability of Cognitive Control and its Effects on Language Skills

    Cognitive control, or executive function (EF), refers to the mental ability to regulate and adjust behavior across domains in the face of interference, conflict, or new rules. Evidence from psycholinguistics suggests a role for cognitive control in a range of language processing tasks including syntactic ambiguity resolution and verbal fluency. Separate work demonstrates that EF abilities are malleable with extensive practice, such that training improvements transfer across domains to novel tasks that rely on the same underlying EF mechanisms (an effect dubbed 'process-specificity'). In uniting these two growing literatures, this dissertation investigated the (causal) role of cognitive control for language processing through two longitudinal training interventions. In one study, I demonstrated that practicing a battery of cognitive tasks conferred selective benefits on untrained reading tasks requiring syntactic ambiguity resolution. Compared to controls, individuals who responded most to an EF training task exhibited (1) higher accuracy on comprehension questions indexing offline reinterpretation, and (2) faster real-time recovery efforts to resolve conflicting interpretations. A second experiment extended these findings by addressing the degree to which training on a single EF task was necessary and sufficient to confer transfer to untrained, related language measures. Participants were assigned to practice a single training task that was minimally different from other training groups' tasks in terms of EF demands. By and large, participants who practiced a high-EF training task were alone in demonstrating a cross-assessment improvement profile consistent with a process-specific account: Pre/post benefits across a range of ostensibly different linguistic (verbal fluency, syntactic ambiguity resolution) and non-linguistic (Stroop, recognition memory) tasks were observed selectively for conditions with high-EF demands; no benefits were seen in cases where the need for cognitive control was minimized. Together, these findings provide support for the malleability of EF skills and suggest a critical (and perhaps causal) role for domain-general cognitive control in language processing. Further, the present studies indicate that within the right framework, and with appropriate linking hypotheses, cognitive training may be a viable way to improve language use.

    Overt Visual Attention as a Causal Factor of Perceptual Awareness

    Our everyday conscious experience of the visual world is fundamentally shaped by the interaction of overt visual attention and object awareness. Although the principal impact of both components is undisputed, it is still unclear how they interact. Here we recorded eye movements preceding and following conscious object recognition, collected during the free inspection of ambiguous and corresponding unambiguous stimuli. Using this paradigm, we demonstrate that fixations recorded prior to object awareness predict the later recognized object identity, and that subjects accumulate more evidence consistent with their later percept than with the alternative. The time at which awareness was reached was verified with a reaction-time-based correction method and, independently, from changes in pupil dilation. Control experiments, in which we manipulated the initial locus of visual attention, confirm a causal influence of overt attention on the subsequent result of object perception. The current study thus demonstrates that distinct patterns of overt attentional selection precede object awareness and thereby directly builds on recent electrophysiological findings suggesting two distinct neuronal mechanisms underlying the two phenomena. Our results emphasize the crucial importance of overt visual attention in the formation of our conscious experience of the visual world.

    Bistable perception in normal aging: perceptual reversibility and its relation to cognition

    The effects of age on the ability to resolve perceptual ambiguity are unknown, though this ability depends on fronto-parietal attentional networks known to change with age. We presented the bistable Necker cube to 24 middle-aged and older adults (OA; 56–78 years) and 20 younger adults (YA; 18–24 years) under passive-viewing and volitional control conditions: Hold one cube percept and Switch between cube percepts. During passive viewing, OA had longer dominance durations (time spent on each percept) than YA. In the Hold condition, OA were less able than YA to increase dominance durations. In the Switch condition, OA and YA did not differ in performance. Dominance durations did not correlate with performance on tests of executive function mediated by the frontal lobes in either condition. Eye movements (fixation deviations) did not differ between groups. These results suggest that OA’s reduced ability to hold a percept may arise from reduced selective attention. The lack of correlation between Hold performance and executive-function measures suggests at least a partial segregation of underlying mechanisms.

    Shades of meaning: Uncovering the geometry of ambiguous word representations through contextualised language models

    Lexical ambiguity presents a profound and enduring challenge to the language sciences. Researchers for decades have grappled with the problem of how language users learn, represent and process words with more than one meaning. Our work offers new insight into the psychological understanding of lexical ambiguity through a series of simulations that capitalise on recent advances in contextual language models. These models have no grounded understanding of the meanings of words at all; they simply learn to predict words based on the surrounding context provided by other words. Yet, our analyses show that their representations capture fine-grained meaningful distinctions between unambiguous, homonymous, and polysemous words that align with lexicographic classifications and psychological theorising. These findings provide quantitative support for modern psychological conceptualisations of lexical ambiguity and raise new challenges for our understanding of the way that contextual information shapes the meanings of words across different timescales.
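    The kind of representational-geometry analysis described above can be approximated with off-the-shelf tools. The sketch below is illustrative only, not the authors' pipeline: it uses a generic BERT model via the Hugging Face transformers library to extract contextual embeddings of the ambiguous word "bank" and compares same-sense against different-sense uses with cosine similarity. The model name, example sentences, and similarity measure are all assumptions chosen for illustration.

```python
# Illustrative sketch: contextual embeddings separate senses of an ambiguous word.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Contextual embedding of the first occurrence of `word` in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]            # (n_tokens, dim)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    return hidden[tokens.index(word)]                          # assumes word maps to one wordpiece

same_sense = word_embedding("She sat on the river bank.", "bank")
other_ctx  = word_embedding("The river bank was muddy after the rain.", "bank")
diff_sense = word_embedding("He deposited the money at the bank.", "bank")

cos = torch.nn.functional.cosine_similarity
print("same sense, different context:", cos(same_sense, other_ctx, dim=0).item())
print("different senses:             ", cos(same_sense, diff_sense, dim=0).item())
```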

    Reconciling Predictive Coding and Biased Competition Models of Cortical Function

    A simple variation of the standard biased competition model is shown, via some trivial mathematical manipulations, to be identical to predictive coding. Specifically, it is shown that a particular implementation of the biased competition model, in which nodes compete via inhibition that targets the inputs to a cortical region, is mathematically equivalent to the linear predictive coding model. This observation demonstrates that these two important and influential rival theories of cortical function are minor variations on the same underlying mathematical model.
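    The equivalence claimed above can be made concrete with the linear predictive coding update itself. The sketch below is a minimal illustration, not the paper's exact formulation or notation: error units compute the input minus the top-down reconstruction, and prediction units integrate the fed-back error. The paper's point is that routing the same subtraction as inhibition targeting a region's inputs yields the biased-competition reading of these same dynamics. Dimensions, weights, and the step size here are arbitrary assumptions.

```python
# Illustrative sketch of a linear predictive coding loop (assumed parameters).
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_nodes = 16, 4
W = rng.normal(size=(n_inputs, n_nodes))      # feedback (generative) weights
x = rng.normal(size=n_inputs)                 # bottom-up input to the region
y = np.zeros(n_nodes)                         # prediction-node activations

eta = 0.5 / np.linalg.norm(W, ord=2) ** 2     # step size chosen small enough to converge

for _ in range(500):
    e = x - W @ y             # error units: input minus top-down prediction
                              # (equivalently: input inhibited by fed-back predictions)
    y = y + eta * (W.T @ e)   # prediction units accumulate the residual error

print("residual error norm:", np.linalg.norm(x - W @ y))
```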

    Multiscale sampling model for motion integration

    Biologically plausible strategies for visual scene integration across spatial and temporal domains continue to be a challenging topic. The fundamental question we address is whether classical problems in motion integration, such as the aperture problem, can be solved in a model that samples the visual scene at multiple spatial and temporal scales in parallel. We hypothesize that fast interareal connections that allow feedback of information between cortical layers are the key processes that disambiguate motion direction. We developed a neural model showing how the aperture problem can be solved using different spatial sampling scales between LGN, V1 layer 4, V1 layer 6, and area MT. Our results suggest that multiscale sampling, rather than feedback explicitly, is the key process that gives rise to end-stopped cells in V1 and enables area MT to solve the aperture problem without the need for calculating intersecting constraints or crafting intricate patterns of spatiotemporal receptive fields. Furthermore, the model explains why end-stopped cells no longer emerge in the absence of V1 layer 6 activity (Bolz & Gilbert, 1986), why V1 layer 4 cells are significantly more end-stopped than V1 layer 6 cells (Pack, Livingstone, Duffy, & Born, 2003), and how it is possible to have a solution to the aperture problem in area MT with no solution in V1 in the presence of driving feedback. In summary, while much research in the field focuses on how a laminar architecture can give rise to complicated spatiotemporal receptive fields to solve problems in the motion domain, we show that one can reframe motion integration as an emergent property of multiscale sampling achieved concurrently within lamina and across multiple visual areas. This work was supported in part by CELEST, a National Science Foundation Science of Learning Center; NSF SBE-0354378 and OMA-0835976; ONR (N00014-11-1-0535); and AFOSR (FA9550-12-1-0436).
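    For readers unfamiliar with the aperture problem the model targets, the sketch below illustrates the ambiguity itself rather than the model: within a small aperture over a straight contour, only the velocity component normal to the contour is measurable, so many true velocities produce the same local signal. The numbers and contour orientation are arbitrary assumptions; resolving this ambiguity across scales is what the multiscale sampling scheme described above addresses.

```python
# Illustrative sketch of the aperture problem (assumed, arbitrary values).
import numpy as np

true_v = np.array([1.0, 0.5])                        # actual image velocity
theta = np.deg2rad(30.0)                             # contour orientation inside the aperture
normal = np.array([np.sin(theta), -np.cos(theta)])   # unit normal to the contour
tangent = np.array([np.cos(theta), np.sin(theta)])   # unit vector along the contour

measured = true_v @ normal                           # all a small aperture can report
print("normal-component speed:", measured)

# Every velocity of the form measured*normal + s*tangent yields the same local signal,
# which is why fine-scale measurements alone cannot recover true_v.
for s in (-1.0, 0.0, 1.0):
    candidate = measured * normal + s * tangent
    print(candidate, "-> same aperture measurement:", np.isclose(candidate @ normal, measured))
```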

    Visual world studies of conversational perspective taking: similar findings, diverging interpretations

    Visual-world eyetracking has greatly expanded the potential for insight into how listeners access and use common ground during situated language comprehension. Past reviews of visual world studies on perspective taking have largely taken the diverging findings of the various studies at face value, and attributed these apparently different findings to differences in the extent to which the paradigms used by different labs afford collaborative interaction. Researchers are asking questions about perspective taking of an increasingly nuanced and sophisticated nature, a clear indicator of progress. But this research has the potential to do more than improve our understanding of conversational perspective taking: grappling with problems of data interpretation in such a complex domain also has the potential to drive visual world researchers to a deeper understanding of how best to map visual world data onto psycholinguistic theory. I will argue against this interactional affordances explanation, on two counts. First, it implies that interactivity affects the overall ability to form common ground, and thus provides no straightforward explanation of why, within a single noninteractive study, common ground can have very large effects on some aspects of processing (referential anticipation) while having negligible effects on others (lexical processing). Second, and more importantly, the explanation accepts the divergence in published findings at face value. However, a closer look at several key studies shows that the divergences are more likely to reflect inconsistent practices of analysis and interpretation that have been applied to an underlying body of data that is, in fact, surprisingly consistent. The diverging interpretations, I will argue, are the result of differences in the handling of anticipatory baseline effects (ABEs) in the analysis of visual world data. ABEs arise in perspective-taking studies because listeners have earlier access to constraining information about who knows what than they have to referential speech, and thus can already show biases in visual attention even before the processing of any referential speech has begun. To be sure, these ABEs clearly indicate early access to common ground; however, access does not imply integration, since it is possible that this information is not used later to modulate the processing of incoming speech. Failing to account for these biases using statistical or experimental controls leads to over-optimistic assessments of listeners’ ability to integrate this information with incoming speech. I will show that several key studies with varying degrees of interactional affordances all show similar temporal profiles of common ground use during the interpretive process: early anticipatory effects, followed by bottom-up effects of lexical processing that are not modulated by common ground, followed (optionally) by further late effects that are likely to be post-lexical. Furthermore, this temporal profile for common ground radically differs from the profile of contextual effects related to verb semantics. Together, these findings are consistent with the proposal that lexical processes are encapsulated from common ground, but cannot be straightforwardly accounted for by probabilistic constraint-based approaches.
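    The baseline issue described above can be made concrete with a toy calculation. The sketch below is illustrative only, not any particular study's analysis pipeline: anticipatory bias is estimated from target-fixation proportions in a pre-speech-onset window and subtracted from the post-onset window before conditions are compared, so that early anticipation is not mistaken for modulation of lexical processing. The column names, values, and simple subtraction are assumptions for illustration; entering the baseline as a statistical covariate would serve the same purpose.

```python
# Illustrative sketch of a baseline correction for anticipatory effects (assumed data).
import pandas as pd

trials = pd.DataFrame({
    "subject":       [1, 1, 2, 2],
    "condition":     ["privileged", "shared", "privileged", "shared"],
    "p_target_pre":  [0.55, 0.40, 0.60, 0.42],   # target-fixation proportion before word onset
    "p_target_post": [0.70, 0.58, 0.74, 0.60],   # target-fixation proportion after word onset
})

# Baseline-corrected effect: post-onset preference minus the anticipatory baseline.
trials["p_target_corrected"] = trials["p_target_post"] - trials["p_target_pre"]

print(trials.groupby("condition")["p_target_corrected"].mean())
```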