27 research outputs found

    The emergence of explicit knowledge from implicit learning

    Abstract Substantial evidence has highlighted the ability of observers to incidentally extract statistical contingencies present in visual environments. This study examined whether the knowledge extracted about statistical contingencies is initially unconscious, even when it becomes fully accessible to conscious awareness after extensive training. Using a "typical" contextual cueing procedure adapted to real-world scenes, we first observed that, after extensive training in searching for a target within repeated scenes, knowledge about regularities was associated with conscious awareness (Experiment 1). However, both subjective and objective measures of consciousness revealed that in the early phase of training, learning of regular structures first takes place at an unconscious level (Experiments 2 and 3). These results are discussed in light of the causal relationships between learning and consciousness.

    The Emergence of Explicit Knowledge from Implicit Learning

    No full text
    Abstract Substantial evidence has highlighted the ability of observers to incidentally extract statistical contingencies present in visual environments. This study examined whether the knowledge extracted about statistical contingencies is initially unconscious, even when it becomes fully accessible to conscious awareness after extensive training. Using a "typical" contextual cueing procedure adapted to real-world scenes, we first observed that, after extensive training in searching for a target within repeated scenes, knowledge about regularities was associated with conscious awareness (Experiment 1). However, both subjective and objective measures of consciousness revealed that in the early phase of training, learning of regular structures first takes place at an unconscious level (Experiments 2 and 3).

    Over the past two decades, a substantial amount of evidence has highlighted the remarkable ability of observers to extract and use statistical contingencies present in structured stimulus environments (e.g., …). In the classic task of the contextual cueing paradigm, participants are instructed to search for a T (target) among Ls (distractors) and report whether the top of the T points left or right. Half of the configurations are systematically repeated across many blocks of trials, while the other half are novel. A progressive benefit in search time, named contextual cueing, is typically observed for the repeated contexts, which are predictive of the target's location, compared to the novel contexts. Yet, at the end of the search task, participants rarely report having noticed that some displays were repeated, and their performance in a final direct memory task (i.e., recognition and/or target generation) is typically at or near chance levels. This finding is taken to suggest that the contextual cueing effect results from implicit learning.
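    The task structure described above lends itself to a compact illustration. The following Python sketch is not taken from the paper; all function names, display parameters, and numeric values are illustrative assumptions. It shows how repeated and novel displays are typically constructed and how the contextual cueing effect is measured as the difference in mean response time between novel and repeated contexts.

```python
# Illustrative sketch (not from the paper): typical structure of a contextual
# cueing design. All names and numbers are hypothetical.
import random
from statistics import mean

def make_configurations(n_repeated=12, n_novel=12, n_items=12, grid=8):
    """Generate the repeated displays (reused every block) and the count of novel ones."""
    def one_display():
        cells = random.sample(range(grid * grid), n_items)
        return {"target": cells[0], "distractors": cells[1:]}
    repeated = [one_display() for _ in range(n_repeated)]
    return repeated, n_novel

def simulate_block(repeated, n_novel, block, learning_gain=5.0, base_rt=900.0):
    """Toy response times: repeated displays get faster with training, novel ones do not."""
    rts = [("repeated", base_rt - learning_gain * block + random.gauss(0, 30))
           for _ in repeated]
    rts += [("novel", base_rt + random.gauss(0, 30)) for _ in range(n_novel)]
    return rts

repeated, n_novel = make_configurations()
all_rts = [rt for block in range(1, 21) for rt in simulate_block(repeated, n_novel, block)]
cc_effect = (mean(rt for cond, rt in all_rts if cond == "novel")
             - mean(rt for cond, rt in all_rts if cond == "repeated"))
print(f"Contextual cueing effect (novel - repeated RT): {cc_effect:.0f} ms")
```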

    Coding of Images in Long Term Memory: The Fate of Visual Memories across Weeks in Adults and Children

    No full text
    What is the content and the format of visual memories in long-term memory (LTM)? Is it similar in adults and children? To address these issues, we investigated, in both adults and 9-year-old children, how visual LTM is affected over time and whether visual vs. semantic features are affected differentially. In a learning phase, participants were exposed to hundreds of meaningless and meaningful images presented once or twice for either 120 ms or 1,920 ms. Memory was assessed using a recognition task either immediately after learning or after a delay of three or six weeks. The results suggest that multiple and extended exposures are crucial for retaining an image for several weeks. Although a benefit was observed in the meaningful condition when memory was assessed immediately after learning, this benefit tended to disappear over weeks, especially when the images were presented twice for 1,920 ms. This pattern was observed for both adults and children. Together, the results call into question the dominant models of LTM for images: although semantic information enhances the encoding and maintenance of images in LTM when memory is assessed immediately, it seems not to be critical for LTM over weeks.
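    As a rough illustration of how recognition performance in such a design might be summarized, the sketch below computes corrected recognition (hit rate minus false-alarm rate) for each cell of a meaningfulness × exposure-duration × delay design. The file name and column labels are assumptions for illustration, not taken from the study.

```python
# Hypothetical summary of a recognition-memory design: corrected recognition
# (hits minus false alarms) per condition. Data layout is assumed.
import csv
from collections import defaultdict

def corrected_recognition(rows):
    """Group trials by condition and compute hit rate minus false-alarm rate."""
    hits = defaultdict(lambda: [0, 0])   # condition -> [hits, old-item trials]
    fas = defaultdict(lambda: [0, 0])    # condition -> [false alarms, new-item trials]
    for r in rows:
        key = (r["meaningful"], r["duration_ms"], r["delay_weeks"])
        bucket = hits if r["status"] == "old" else fas
        bucket[key][0] += int(r["said_old"])
        bucket[key][1] += 1
    return {k: hits[k][0] / hits[k][1] - fas[k][0] / max(fas[k][1], 1)
            for k in hits}

with open("recognition_trials.csv", newline="") as f:   # assumed file and columns
    scores = corrected_recognition(list(csv.DictReader(f)))
for condition, score in sorted(scores.items()):
    print(condition, round(score, 2))
```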

    Learning of spatial statistics in nonhuman primates: Contextual cueing in baboons (Papio papio)

    No full text
    A growing number of theories of cognition suggest that many of our behaviors result from the ability to implicitly extract and use statistical redundancies present in complex environments. In an attempt to develop an animal model of statistical learning mechanisms in humans, the current study investigated spatial contextual cueing (CC) in nonhuman primates. Twenty-five baboons (Papio papio) were trained to search for a target (T) embedded within configurations of distractors (L) that were either predictive or non-predictive of the target location. Baboons exhibited an early CC effect, which remained intact after a 6-week delay and stable across extensive training of 20,000 trials. These results demonstrate the baboons' ability to learn spatial contingencies, as well as the robustness of CC as a cognitive phenomenon across species. Nevertheless, in both the youngest and oldest baboons, CC required many more trials to emerge than in baboons of intermediate age. As a whole, these results reveal strong similarities between CC in humans and baboons, suggesting similar statistical learning mechanisms in these two species. Therefore, baboons provide a valid model to investigate how statistical learning mechanisms develop and/or age during the life span, how these mechanisms are implemented in neural networks, and how they have evolved throughout phylogeny.
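    One way to quantify the observation that the cueing effect emerged later in the youngest and oldest baboons is to estimate, per animal, the first training block at which a stable CC effect appears. The sketch below is a hypothetical analysis along those lines; the data layout, threshold, and run length are assumptions, not the authors' method.

```python
# Hypothetical per-animal estimate of when contextual cueing emerges.
from collections import defaultdict
from statistics import mean

def cc_per_block(trials):
    """trials: dicts with 'block', 'condition' ('predictive'/'non-predictive'), 'rt' (ms)."""
    by_block = defaultdict(lambda: {"predictive": [], "non-predictive": []})
    for t in trials:
        by_block[t["block"]][t["condition"]].append(t["rt"])
    return {b: mean(v["non-predictive"]) - mean(v["predictive"])
            for b, v in sorted(by_block.items())}

def emergence_block(cc_by_block, threshold_ms=30, run=3):
    """First block starting `run` consecutive blocks with a CC effect above threshold."""
    blocks = sorted(cc_by_block)
    for i in range(len(blocks) - run + 1):
        if all(cc_by_block[blocks[i + j]] > threshold_ms for j in range(run)):
            return blocks[i]
    return None  # cueing never emerged under these criteria
```

    Grouping the resulting emergence estimates by age band would then make the age-related difference in trials-to-emergence directly comparable.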

    Statistical learning guides visual attention within iconic memory

    No full text

    The emergence of explicit knowledge from implicit learning

    No full text

    A bias to detail: how hand position modulates visual learning and visual memory

    No full text
    In this report, we examine whether and how altered aspects of perception and attention near the hands affect one's learning of to-be-remembered visual material. We employed the contextual cueing paradigm of visual learning in two experiments. Participants searched for a target embedded within images of fractals and other complex geometrical patterns while holding their hands either near to or far from the stimuli. When visual features and structural patterns remained constant across to-be-learned images (Exp. 1), no difference emerged between hand postures in the observed rates of learning. However, when to-be-learned scenes maintained structural pattern information but changed in color (Exp. 2), participants exhibited substantially slower rates of learning when holding their hands near the material. This finding shows that learning near the hands is impaired in situations in which common information must be abstracted from visually unique images, suggesting a bias toward detail-oriented processing near the hands.
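    The reported difference in rates of learning between hand postures can be pictured as a difference in how quickly the cueing benefit grows across epochs. The following sketch uses made-up numbers and is not the authors' analysis: it fits a straight line to the benefit over epochs for each posture and compares the slopes.

```python
# Hypothetical learning-rate comparison between hand-position conditions.
import numpy as np

def learning_slope(epochs, cc_benefit_ms):
    """Least-squares slope of the cueing benefit over epochs (ms per epoch)."""
    slope, _intercept = np.polyfit(epochs, cc_benefit_ms, 1)
    return slope

epochs = np.arange(1, 7)
near_hands = np.array([5, 8, 10, 14, 15, 18])   # made-up benefit values (ms)
far_hands = np.array([5, 15, 28, 40, 52, 60])   # made-up benefit values (ms)

print("hands-near slope:", round(learning_slope(epochs, near_hands), 1), "ms/epoch")
print("hands-far slope: ", round(learning_slope(epochs, far_hands), 1), "ms/epoch")
```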