
    How Many Mechanisms Are Needed to Analyze Speech? A Connectionist Simulation of Structural Rule Learning in Artificial Language Acquisition

    Some empirical evidence in the artificial language acquisition literature has been taken to suggest that statistical learning mechanisms are insufficient for extracting structural information from an artificial language. According to the more than one mechanism (MOM) hypothesis, at least two mechanisms are required to acquire language from speech: (a) a statistical mechanism for speech segmentation, and (b) an additional rule-following mechanism to induce grammatical regularities. In this article, we present a set of neural network studies demonstrating that a single statistical mechanism can mimic the apparent discovery of structural regularities, beyond the segmentation of speech. We argue that our results undermine one argument for the MOM hypothesis.

    Across space and time: infants learn from backward and forward visual statistics

    This study examined infants' statistical learning within temporal and spatial visual streams. Two groups of 8-month-old infants were familiarized with an artificial grammar of shapes, comprised of backward and forward base pairs (i.e., two shapes linked by strong backward or forward transitional probability) and part-pairs (i.e., two shapes with weak transitional probabilities in both directions). One group viewed the continuous visual stream as a temporal sequence, while the other group viewed the same stream as a spatial array. Following familiarization, infants looked longer at test trials containing part-pairs than base pairs, though they had appeared with equal frequency during familiarization. This pattern of looking time was evident for both forward and backward pairs, in both the temporal and spatial conditions. Further, differences in looking time to part-pairs that were consistent or inconsistent with the predictive direction of the base pairs (forward or backward) indicated that infants were indeed sensitive to direction when presented with temporal sequences, but not when presented with spatial arrays. These results suggest that visual statistical learning is flexible in infancy and depends on the nature of visual input.

    A role for backward transitional probabilities in word segmentation?


    Self-organizing consciousness: an alternative to the dominant model of cognitive psychology

    It is generally assumed that the cognitive operations to which we have no introspective access rely on representations and forms of reasoning identical to those that make up conscious thought. This article presents an alternative view, in which there is no longer any place for a "cognitive unconscious." Adaptation rests on the formation of conscious representations, whose remarkable organization emerges naturally through elementary processes of self-organization, via the dynamic interactions between the subject and their environment. Some avenues of reflection are suggested regarding the implications of this perspective for education.

    Editors' introduction: Aligning implicit learning and statistical learning: Two approaches, one phenomenon

    In their editors' introduction, Rebuschat and Monaghan provide the background to the special issue. They outline the rationale for bringing together, in a single volume, leading researchers from two distinct yet related research strands, implicit learning and statistical learning. The editors then introduce the new contributions solicited for this special issue and provide their perspective on the agenda setting that results from combining these two approaches.
