
    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes. CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011).
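    The core computational idea here, a spatial prior from scene gist that is incrementally refined by object evidence across fixations, can be caricatured in a few lines. This is a toy sketch only, not the published model's neural equations; the multiplicative update, the `gain` parameter, and the toy data are all assumptions for illustration.

```python
# Toy evidence accumulator (illustrative only, not the ARTSCENE Search
# equations): a spatial prior derived from scene gist is refined by object
# evidence gathered at each fixation, then renormalized to a probability map.

def accumulate(prior, evidence_per_fixation, gain=0.5):
    """Return a probability map over candidate target locations."""
    belief = list(prior)
    for evidence in evidence_per_fixation:
        # Boost locations in proportion to the object evidence found there.
        belief = [b * (1.0 + gain * e) for b, e in zip(belief, evidence)]
        total = sum(belief)
        belief = [b / total for b in belief]  # keep it a probability map
    return belief

# Scene gist initially favors location 2; two fixations then deliver object
# evidence at location 1, pulling the belief map toward it.
prior = [0.2, 0.3, 0.5]
fixations = [[0.0, 1.0, 0.1], [0.0, 1.0, 0.0]]
posterior = accumulate(prior, fixations)
print(posterior)
```

    After the two fixations, the most probable location has shifted from the gist-favored location to the one supported by accumulated object evidence, which is the qualitative behavior the abstract describes.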

    Does visual letter similarity modulate masked form priming in young readers of Arabic?

    Available online 19 January 2018. Supplementary data associated with this article can be found, in the online version, at https://doi.org/10.1016/j.jecp.2017.12.004. We carried out a masked priming lexical decision experiment to study whether visual letter similarity plays a role during the initial phases of word processing in young readers of Arabic (fifth graders). Arabic is ideally suited to test these effects because most Arabic letters share their basic shape with at least one other letter and differ only in the number/position of diacritical points (e.g., ض - ص ;ظ - ط ;غ - ع ;ث - ت - ن ب ;ذ - د ;خ - ح - ج ;ق - ف ;ش - س ;ز - ر). We created two one-letter-different priming conditions for each target word, in which a letter from the consonantal root was substituted by another letter that did or did not keep the same shape (e.g., خدمة - حدمة vs. خدمة - فدمة). Another goal of the current experiment was to test the presence of masked orthographic priming effects, which are thought to be unreliable in Semitic languages. To that end, we included an unrelated priming condition. We found a sizable masked orthographic priming effect relative to the unrelated condition regardless of visual letter similarity, thereby revealing that young readers are able to quickly process the diacritical points of Arabic letters. Furthermore, the presence of masked orthographic priming effects in Arabic suggests that the word identification stream in Indo-European and Semitic languages is more similar than previously thought. This article was made possible by a National Priorities Research Program (NPRP) award (Grant No. 6-378-5-035z) from the Qatar National Research Fund (a member of the Qatar Foundation).

    Neural differentiation is moderated by age in scene- but not face-selective cortical regions

    The aging brain is characterized by neural dedifferentiation, an apparent decrease in the functional selectivity of category-selective cortical regions. Age-related reductions in neural differentiation have been proposed to play a causal role in cognitive aging. Recent findings suggest, however, that age-related dedifferentiation is not equally evident for all stimulus categories and, additionally, that the relationship between neural differentiation and cognitive performance is not moderated by age. In light of these findings, in the present experiment, younger and older human adults (males and females) underwent fMRI as they studied words paired with images of scenes or faces before a subsequent memory task. Neural selectivity was measured in two scene-selective [parahippocampal place area (PPA) and retrosplenial cortex (RSC)] and two face-selective [fusiform face area (FFA) and occipital face area (OFA)] regions using both a univariate differentiation index and multivoxel pattern similarity analysis. Both methods provided highly convergent results, which revealed evidence of age-related reductions in neural differentiation in scene-selective but not face-selective cortical regions. Additionally, neural differentiation in the PPA demonstrated a positive, age-invariant relationship with subsequent source memory performance (recall of the image category paired with each recognized test word). These findings extend prior evidence suggesting that age-related neural dedifferentiation is not a ubiquitous phenomenon, and that the specificity of neural responses to scenes predicts subsequent memory performance independently of age.
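    The abstract names two selectivity measures without giving formulas. Both have standard forms, and the sketch below illustrates those common formulations under stated assumptions (the function names and toy data are invented, not from the study): a univariate differentiation index as the mean response difference between preferred and non-preferred categories scaled by the pooled standard deviation, and a multivoxel measure as within-category minus between-category pattern correlation.

```python
import statistics as st

def differentiation_index(preferred, nonpreferred):
    # One common univariate formulation: difference of mean responses to the
    # preferred vs. non-preferred category, scaled by the pooled SD.
    pooled_sd = st.pstdev(list(preferred) + list(nonpreferred))
    return (st.mean(preferred) - st.mean(nonpreferred)) / pooled_sd

def pearson(x, y):
    # Plain Pearson correlation between two multivoxel patterns.
    mx, my = st.mean(x), st.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def pattern_selectivity(within_pairs, between_pairs):
    # Multivoxel analogue: mean within-category pattern correlation minus
    # mean between-category correlation.
    within = st.mean(pearson(p, q) for p, q in within_pairs)
    between = st.mean(pearson(p, q) for p, q in between_pairs)
    return within - between

# Toy example: responses of one region to its preferred vs. other category,
# and two pattern pairs (one within-category, one between-category).
di = differentiation_index([2.0, 2.5, 3.0], [1.0, 1.2, 0.8])
sel = pattern_selectivity(
    within_pairs=[([1.0, 2.0, 3.0], [1.1, 2.1, 2.9])],
    between_pairs=[([1.0, 2.0, 3.0], [3.0, 1.0, 2.0])],
)
print(di, sel)
```

    On these toy inputs both measures come out positive, i.e., the region responds more, and with more similar patterns, to its preferred category, which is the sense in which "neural differentiation" is used above.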

    Phonological representations and repetition priming

    A ubiquitous phenomenon in psychology is the 'repetition effect': a repeated stimulus is processed better on its second occurrence than on its first. Yet what counts as a repetition? When a spoken word is repeated, is it the acoustic shape or the linguistic type that matters? In the present study, we contrasted the contributions of acoustic and phonological features by using participants with different linguistic backgrounds: they came from two populations sharing a common vocabulary (Catalan) yet possessing different phonemic systems. They performed a lexical decision task with lists containing words that were repeated verbatim, as well as words that were repeated with one phonetic feature changed. The feature changes were phonemic, i.e., linguistically relevant, for one population but not for the other. The results revealed that the repetition effect was modulated by linguistic, not acoustic, similarity: it depended on the subjects' phonemic system.

    Integrative priming occurs rapidly and uncontrollably during lexical processing

    Lexical priming, whereby a prime word facilitates recognition of a related target word (e.g., nurse → doctor), is typically attributed to association strength, semantic similarity, or compound familiarity. Here, the authors demonstrate a novel type of lexical priming that occurs among unassociated, dissimilar, and unfamiliar concepts (e.g., horse → doctor). Specifically, integrative priming occurs when a prime word can be easily integrated with a target word to create a unitary representation. Across several manipulations of timing (stimulus onset asynchrony) and list context (relatedness proportion), lexical decisions for the target word were facilitated when it could be integrated with the prime word. Moreover, integrative priming was dissociated from both associative priming and semantic priming but was comparable in terms of both prevalence (across participants) and magnitude (within participants). This observation of integrative priming challenges present models of lexical priming, such as spreading activation, distributed representation, expectancy, episodic retrieval, and compound cue models. The authors suggest that integrative priming may be explained by a role activation model of relational integration.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) the results give further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
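    The reported statistic can be sanity-checked with nothing but the standard library: an F statistic with one numerator degree of freedom is the square of a t statistic, and the t CDF with 4 degrees of freedom has a closed form, so P(F(1,4) > 2.565) can be computed directly. The function name below is invented for illustration; the math is the standard F-test tail probability.

```python
import math

# With T ~ t(4), T**2 ~ F(1, 4), so P(F > f) = P(|T| > sqrt(f)).
# The t(4) CDF has the closed form 1/2 + (3/4)*(s - s**3/3),
# where s = t / sqrt(t**2 + 4) (from the substitution x = 2*tan(theta)).
def p_from_F_1_4(f):
    t = math.sqrt(f)
    s = t / math.sqrt(t * t + 4.0)
    cdf = 0.5 + 0.75 * (s - s ** 3 / 3.0)   # Student-t CDF at t, 4 df
    return 2.0 * (1.0 - cdf)                # two-tailed == P(F > f)

p = p_from_F_1_4(2.565)
print(round(p, 3))  # → 0.185, matching the reported p-value
```

    The computed value (~0.1845) rounds to the reported p = 0.185, so the quoted statistics are internally consistent.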