    The time-course of ultra-rapid object and scene categorization

    Classic studies (Rosch, Mervis, Gray, Johnson & Boyes-Braem, 1976) on rapid decisions of category membership identified the basic level as the entry point for categorization in a free-response task (e.g., 'dog' rather than 'animal' or 'golden retriever'). More recent studies using a predefined go/no-go ultra-rapid categorization task with briefly flashed (20 ms) visual scenes contradicted these earlier findings, indicating that participants are faster at detecting an animal/vehicle (superordinate object level) than a dog/bus (basic object level) in a complex visual image (e.g., Thorpe, Fize & Marlot, 1996). One way to reconcile these seemingly contradictory findings is in terms of a recent parallel distributed processing theory (O'Reilly, Wyatte, Herd, Mingus & Jilk, 2013). This theory states that inhibitory dynamics in prefrontal cortical networks support selection between alternatives, and that the strength of this inhibition during lexical processing depends on a learning process. In an open-ended task, the superordinate-level choice is suppressed in favor of the basic level. But when participants receive a clearly predefined task goal (as in ultra-rapid categorization), this inhibitory process is cancelled, and responses at the superordinate level improve in speed and accuracy. This study aimed to verify some behavioral predictions of this theory with respect to ultra-rapid categorization by providing a perceptual mask after image presentation and manipulating three defining variables: (1) presentation time (16 to 83 ms), (2) level of categorization (basic versus superordinate) and (3) goal (object versus scene detection). Results indicated a clear improvement in performance at longer presentation times (PT) and a replication of the superordinate advantage effect (e.g., Macé, Joubert, Nespoulous, & Fabre-Thorpe, 2009) at shorter image presentations. Furthermore, we directly compared object versus scene perception and observed significantly more accurate detection of objects. These accuracy differences decreased as PT increased, indicating that time is needed to retrieve the object in the scene.

    The Time-Course of Ultrarapid Categorization: The Influence of Scene Congruency and Top-Down Processing

    Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as the entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictory findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed at verifying this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). In contrast to previous ultrarapid categorization research, which focused on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance at longer PT and significantly more accurate detection of objects in isolation, compared with objects in context, at shorter stimulus PT. This contextual disadvantage disappeared as PT increased, indicating that figure-ground separation through recurrent processing is vital for meaningful contextual processing to occur.