
    Similar mechanisms of temporary bindings for identity and location of objects in healthy ageing: An eye-tracking study with naturalistic scenes

    The ability to maintain visual working memory (VWM) associations about the identity and location of objects has at times been found to decrease with age. To date, however, this age-related difficulty has mostly been observed in artificial visual contexts (e.g., object arrays), so it is unclear whether, and in which ways, it manifests in naturalistic contexts. In this eye-tracking study, 26 younger and 24 healthy older adults were asked to detect changes to a critical object situated in a photographic scene (192 scenes in total): a change in identity (the object becomes a different object but maintains the same position), in location (the object only changes position), or in both (the object changes in location and identity). Ageing was associated with lower change detection performance. A change in identity was harder to detect than a change in location, and performance was best when both features changed, especially in younger adults. Eye movements showed minor differences between age groups (e.g., shorter saccades in older adults) but were similarly modulated by the type of change. Latencies to the first fixation were longer, and the amplitude of incoming saccades was larger, when the critical object changed location. Once fixated, the target object was inspected for longer when it changed only in identity compared to location. Visually salient objects were fixated earlier, but saliency affected no other eye movement measure considered, nor did it interact with the type of change. Our findings suggest that although ageing lowers overall performance, it does not selectively disrupt temporary bindings of object identity, location, or their association in VWM, and they highlight the importance of using naturalistic contexts to discriminate the cognitive processes that decline with ageing from those that are spared.
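    For readers unfamiliar with the gaze measures named in this abstract, the sketch below shows one way to derive latency to first fixation and first-pass duration from a fixation log. It is a minimal illustration: the data frame and its columns are invented, not the authors' pipeline.

```r
# A minimal sketch (R) of deriving two gaze measures from a fixation log.
# The data frame and its columns are hypothetical.
fixations <- data.frame(
  trial     = c(1, 1, 1, 1, 2, 2, 2),
  onset_ms  = c(0, 220, 480, 760, 0, 300, 650),  # onset relative to scene onset
  dur_ms    = c(200, 240, 260, 180, 280, 330, 210),
  on_target = c(FALSE, FALSE, TRUE, TRUE, FALSE, TRUE, FALSE)
)

per_trial <- function(d) {
  hit <- which(d$on_target)
  if (length(hit) == 0)
    return(data.frame(trial = d$trial[1], latency_first_fix = NA, first_pass = NA))
  first <- hit[1]; run_end <- first
  # first pass = unbroken run of target fixations starting at the first hit
  while (run_end < nrow(d) && d$on_target[run_end + 1]) run_end <- run_end + 1
  data.frame(trial             = d$trial[1],
             latency_first_fix = d$onset_ms[first],             # ms to first fixation
             first_pass        = sum(d$dur_ms[first:run_end]))  # summed dwell, ms
}
do.call(rbind, lapply(split(fixations, fixations$trial), per_trial))
```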

    Coordination of vision and language in cross-modal referential processing

    This thesis investigates the mechanisms underlying the formation, maintenance, and sharing of reference in tasks in which language and vision interact. Previous research in psycholinguistics and visual cognition has provided insights into the formation of reference in cross-modal tasks. The conclusions reached are largely independent, with the focus on mechanisms pertaining to either linguistic or visual processing. In this thesis, we present a series of eye-tracking experiments that aim to unify these distinct strands of research by identifying and quantifying factors that underlie the cross-modal interaction between scene understanding and sentence processing. Our results show that both low-level (image-based) and high-level (object-based) visual information interacts actively with linguistic information during situated language processing tasks. In particular, during language understanding (Chapter 3), image-based information, i.e., saliency, is used to predict the upcoming arguments of the sentence when the linguistic material alone is not sufficient to make such predictions. During language production (Chapter 4), visual attention has the active role of sourcing referential information for sentence encoding. We show that two important factors influencing this process are the visual density of the scene, i.e., clutter, and the animacy of the objects described. Both factors influence the type of linguistic encoding observed and the associated visual responses. We uncover a close relationship between linguistic descriptions and visual responses, triggered by the cross-modal interaction of scene and object properties, which implies a general mechanism of cross-modal referential coordination. Further investigation (Chapter 5) shows that visual attention and sentence processing are closely coordinated during sentence production: similar sentences are associated with similar scan patterns. This finding holds across different scenes, which suggests that coordination goes beyond the well-known scene-based effects guiding visual attention, again supporting the existence of a general mechanism for the cross-modal coordination of referential information. The extent to which cross-modal mechanisms are activated depends on the nature of the task performed. We compare the three tasks of visual search, object naming, and scene description (Chapter 6) and explore how the modulation of cross-modal reference is reflected in the visual responses of participants. Our results show that the cross-modal coordination required in naming and description triggers longer visual processing and higher scan pattern similarity than in search. This difference is due to the coordination required to integrate and organize visual and linguistic referential processing. Overall, this thesis unifies explanations of distinct cognitive processes (visual and linguistic) based on the principle of cross-modal referentiality, and provides a new framework for unraveling the mechanisms that allow scene understanding and sentence processing to share and integrate information during cross-modal processing.
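    Scan-pattern similarity of the kind described here is often quantified with cross-recurrence between the sequences of objects fixated; the hedged sketch below illustrates the idea with the crqa package (described later in this list). The two sequences, the label coding, and all parameter settings are assumptions invented for the example.

```r
# Hedged illustration: similarity of two scan patterns as categorical
# cross-recurrence between sequences of fixated objects (crqa package).
library(crqa)

scan_a <- c("man", "dog", "ball", "dog", "man", "bench")
scan_b <- c("dog", "man", "ball", "ball", "man", "bench")

# code object labels as integers over a shared vocabulary
vocab <- union(scan_a, scan_b)
ts1 <- match(scan_a, vocab)
ts2 <- match(scan_b, vocab)

res <- crqa(ts1, ts2,
            delay = 1, embed = 1, rescale = 0,
            radius = 0.001,            # near-zero radius: exact label matches only
            normalize = 0, mindiagline = 2, minvertline = 2,
            tw = 0, method = "crqa", datatype = "categorical")

res$RR   # recurrence rate: overall overlap of the two scan patterns
res$DET  # determinism: how much of the overlap forms shared subsequences
```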

    Classification of Visual and Linguistic Tasks using Eye-movement Features

    The role of task has received special attention in visual cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al., as well as additional features, from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrate that eye-movement responses make it possible to characterize the goals of these tasks. Then, we train three different types of classifiers and predict the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigate. Overall, the best task classification performance is obtained with a set of seven features that include both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention, and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
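    As a hedged illustration of the classification setup (not the study's code), one could train a classifier on per-trial eye-movement features like those named above. The feature names, simulated values, and the choice of a random forest are all assumptions for the sketch.

```r
# Sketch: classifying the task (search / naming / description) from
# simulated per-trial eye-movement features with a random forest.
library(randomForest)

set.seed(1)
n_per <- 100
task <- factor(rep(c("search", "naming", "description"), each = n_per))
trials <- data.frame(
  task = task,
  initiation_time   = rnorm(3 * n_per, mean = rep(c(250, 400, 450), each = n_per), sd = 60),
  attention_entropy = rnorm(3 * n_per, mean = rep(c(2.0, 2.8, 3.2), each = n_per), sd = 0.5),
  total_fix_on_objects = rnorm(3 * n_per, mean = rep(c(1.5, 2.5, 3.0), each = n_per), sd = 0.4)
)

fit <- randomForest(task ~ ., data = trials, ntree = 500)
mean(predict(fit) == trials$task)  # out-of-bag accuracy vs. 1/3 chance
importance(fit)                    # which features drive classification
```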

    Locations of objects are better remembered than their identities in naturalistic scenes: An eye-tracking experiment in mild cognitive impairment

    Objective: Retaining the identity or location of decontextualized objects in visual working memory (VWM) is impaired by healthy and pathological ageing, but research remains inconclusive on whether these two features are equally affected. Moreover, it is unclear whether similar impairments would manifest in naturalistic visual contexts. Method: 30 people with mild cognitive impairment (MCI) and 32 age-matched control participants (CPs) were eye-tracked within a change detection paradigm. They viewed 120 naturalistic scenes and, after a retention interval (1 s), were asked whether a critical object in the scene had changed (or not) in either its identity (it became a different object), its location (the same object changed location), or both (it changed in location and identity). Results: The MCI group performed worse than CPs, but there was no interaction with the type of change. Changes in both features were the easiest to detect, while changes in identity alone were the hardest. The latency to first fixation and the first-pass duration on the critical object during successful recognition did not differ between the MCI group and CPs. Objects that changed in both features took longer to be fixated for the first time but required a shorter first pass than objects that changed in identity alone, which displayed the opposite pattern. Conclusions: Locations of objects are better remembered than their identities, and memory for changes is best when both features are involved. These mechanisms are spared by pathological ageing, as indicated by the similarity between the groups aside from differences in overall performance. These findings demonstrate that VWM mechanisms in the context of naturalistic scene information are preserved in people with MCI.
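    To picture the shape of the group-by-change-type comparison reported here, below is a hedged sketch using a logistic mixed model. The data are simulated and the model specification is an assumption, not the paper's reported analysis.

```r
# Hedged sketch: testing a group x change-type interaction on change
# detection accuracy with a logistic mixed model (lme4). Simulated data.
library(lme4)

set.seed(2)
d <- expand.grid(subject = factor(1:62),
                 change  = factor(c("identity", "location", "both")),
                 rep     = 1:20)
d$group <- ifelse(as.integer(d$subject) <= 30, "MCI", "CP")
# generate accuracy from main effects only (no interaction)
p <- plogis(0.8 + 0.4 * (d$change == "both") - 0.3 * (d$change == "identity") -
            0.5 * (d$group == "MCI"))
d$correct <- rbinom(nrow(d), 1, p)

m <- glmer(correct ~ group * change + (1 | subject), data = d, family = binomial)
summary(m)  # interaction terms near zero mirror the reported null interaction
```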

    Unidimensional and Multidimensional Methods for Recurrence Quantification Analysis with crqa

    Recurrence quantification analysis is a widely used method for characterizing patterns in time series. This article presents a comprehensive survey of how to conduct a wide range of recurrence-based analyses to quantify the dynamical structure of single and multivariate time series, and to capture coupling properties underlying leader-follower relationships. The basics of recurrence quantification analysis (RQA) and all its variants are formally introduced step by step, from the simplest auto-recurrence to the most advanced multivariate case. Importantly, we show how such RQA methods can be deployed under a single computational framework in R using a substantially renewed version of our crqa package, crqa 2.0. This package includes implementations of several recent advances in recurrence-based analysis, among them applications to multivariate data and improved entropy calculations for categorical data. We show concrete applications of our package to example data, together with a detailed description of its functions and some guidelines on their usage.
    Comment: Describes R package crqa v. 2.0: https://cran.r-project.org/web/packages/crqa
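    Below is a minimal sketch of the simplest case the abstract mentions, unidimensional auto-recurrence, on a toy signal. The delay, embedding dimension, and radius are illustrative assumptions; in practice such parameters are estimated for the data at hand (the package provides helpers such as optimizeParam() for this).

```r
# Minimal sketch: unidimensional (auto-)recurrence with crqa on a toy signal.
library(crqa)

ts <- sin(seq(0, 8 * pi, length.out = 200)) + rnorm(200, sd = 0.05)

res <- crqa(ts1 = ts, ts2 = ts,        # same series twice = auto-recurrence
            delay = 1, embed = 2,       # phase-space reconstruction settings
            rescale = 0, radius = 0.2,  # distance threshold for "recurrent"
            normalize = 0, mindiagline = 2, minvertline = 2,
            tw = 1,                     # exclude the trivial main diagonal
            method = "rqa")

res$RR    # recurrence rate
res$DET   # determinism: share of recurrences falling on diagonal lines
res$ENTR  # entropy of diagonal line lengths
```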

    The Impact of Attentional, Linguistic and Visual Features during Object Naming

    Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently.
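    One way to picture an analysis of this kind is a logistic regression of naming probability on cross-modal predictors and their interactions. The sketch below uses invented variables and simulated data; it is an illustration of the idea, not the study's actual model.

```r
# Hedged sketch: modelling whether an object gets named from attentional,
# visual, and linguistic predictors and their interactions. Simulated data.
set.seed(3)
n <- 500
objs <- data.frame(
  fixation_dur = rexp(n, rate = 1 / 400),  # attention on the object (ms)
  saliency     = runif(n),                 # visual saliency score
  centrality   = runif(n),                 # position: closeness to scene centre
  log_freq     = rnorm(n, 4, 1),           # word frequency of the object label
  animate      = rbinom(n, 1, 0.3)
)
eta <- -2 + 0.002 * objs$fixation_dur + 1.2 * objs$saliency * objs$centrality +
       0.3 * objs$log_freq + 0.8 * objs$animate
objs$named <- rbinom(n, 1, plogis(eta))

m <- glm(named ~ fixation_dur * centrality + saliency * centrality +
           saliency * log_freq + animate,
         data = objs, family = binomial)
summary(m)  # interaction terms express the cross-modal combination of cues
```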