
    A role for backward transitional probabilities in word segmentation?


    AGL StimSelect: Software for automated selection of stimuli for artificial grammar learning

    Artificial Grammar Learning (AGL) is an experimental paradigm that has been used extensively in cognitive research for many years to study implicit learning, associative learning, and generalization based on either similarity or rules. Without computer assistance it is virtually impossible to generate appropriate grammatical training stimuli along with grammatical or non-grammatical test stimuli that control relevant psychological variables. We present the first flexible, fully automated software for selecting AGL stimuli. The software allows users to specify a grammar of interest and to manipulate characteristics of training and test sequences, and their relationship to each other. The user thus has direct control over stimulus features that may influence learning and generalization in AGL tasks. The software enables researchers to develop AGL designs that would not be feasible without automatic stimulus selection. It is implemented in Matlab.
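    The grammar-driven generation described above can be sketched in a few lines. This is an illustrative toy, not the AGL StimSelect implementation (which is in Matlab); the finite-state grammar, state numbering, and function names below are all assumptions:

```python
import random

# Hypothetical finite-state grammar: each state maps to a list of
# (emitted symbol, next state) transitions; None marks an accepting exit.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 3)],
    3: [("X", 2), ("S", None)],
}

def generate(max_len=8):
    """Random walk through the grammar; returns a grammatical string,
    or None if the walk exceeds max_len (caller retries)."""
    state, out = 0, []
    while state is not None:
        if len(out) >= max_len:
            return None
        sym, state = random.choice(GRAMMAR[state])
        out.append(sym)
    return "".join(out)

def sample_stimuli(n, seed=0):
    """Collect n distinct grammatical training strings."""
    random.seed(seed)
    stimuli = set()
    while len(stimuli) < n:
        s = generate()
        if s is not None:
            stimuli.add(s)
    return sorted(stimuli)
```

    A real stimulus-selection tool would additionally balance psychological variables across training and test sets (e.g., length, chunk frequencies, similarity to training items), which this sketch deliberately omits.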

    Encoding temporal regularities and information copying in hippocampal circuits

    Discriminating, extracting and encoding temporal regularities is a critical requirement in the brain, relevant to sensory-motor processing and learning. However, the cellular mechanisms responsible remain enigmatic; for example, whether such abilities require specific, elaborately organized neural networks or arise from more fundamental, inherent properties of neurons. Here, using multi-electrode array technology, and focusing on interval learning, we demonstrate that sparse reconstituted rat hippocampal neural circuits are intrinsically capable of encoding and storing sub-second time intervals for over an hour, represented in changes in the spatial-temporal architecture of firing relationships among populations of neurons. This learning is accompanied by increases in mutual information and transfer entropy, formal measures related to information storage and flow. Moreover, temporal relationships derived from previously trained circuits can act as templates for copying intervals into untrained networks, suggesting the possibility of circuit-to-circuit information transfer. Our findings illustrate that dynamic encoding and stable copying of temporal relationships are fundamental properties of simple in vitro networks, with general significance for understanding elemental principles of information processing, storage and replication.
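    The mutual-information measure mentioned above can be illustrated with a minimal plug-in estimator over two discrete sequences (e.g., binned spike counts of two neurons). This is a generic sketch, not the paper's analysis pipeline, and the binning scheme is assumed:

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits between two equal-length
    discrete sequences. Note: this estimator is biased upward for
    small samples; real analyses typically apply a bias correction."""
    assert len(x) == len(y)
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) ), with counts folded in
        mi += (c / n) * math.log2(c * n / (px[a] * py[b]))
    return mi
```

    Transfer entropy extends the same idea to directed, time-lagged dependencies, which is what makes it a measure of information flow rather than mere co-variation.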

    Pere Alberch's developmental morphospaces and the evolution of cognition

    In this article we argue for an extension of Pere Alberch's notion of developmental morphospace into the realm of cognition and introduce the notion of cognitive phenotype as a new tool for the evolutionary and developmental study of cognitive abilities.

    Primitive computations in speech processing

    Previous research suggests that artificial-language learners exposed to quasi-continuous speech can learn that the first and the last syllables of words have to belong to distinct classes (e.g., Endress & Bonatti, 2007; Peña, Bonatti, Nespor, & Mehler, 2002). The mechanisms of these generalizations, however, are debated. Here we show that participants learn such generalizations only when the crucial syllables are in edge positions (i.e., the first and the last), but not when they are in medial positions (i.e., the second and the fourth in pentasyllabic items). In contrast to the generalizations, participants readily perform statistical analyses in word middles as well. In analogy to sequential memory, we suggest that participants extract the generalizations using a simple but specific mechanism that encodes the positions of syllables that occur at edges. Simultaneously, they use another mechanism to track the syllable distribution in the speech streams. In contrast to previous accounts, this model explains why the generalizations are faster than the statistical computations, require additional cues, and break down under different conditions, and why they can be performed at all. We also show that similar edge-based mechanisms may explain many results in artificial-grammar learning as well as various linguistic observations.
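    The statistical computation referred to above, tracking dependencies between adjacent syllables, is commonly operationalized as transitional probabilities: forward P(next | current) and backward P(current | next), the latter being the quantity asked about in the first title in this listing. A minimal sketch, with an illustrative stream and assumed function name:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP P(B|A) and backward TP P(A|B) for every adjacent
    syllable pair (A, B) in a stream. In segmentation studies, troughs
    in TP are taken as candidate word boundaries."""
    pairs = list(zip(syllables, syllables[1:]))
    pair_n = Counter(pairs)
    first_n = Counter(a for a, _ in pairs)    # times A precedes anything
    second_n = Counter(b for _, b in pairs)   # times B follows anything
    fwd = {p: c / first_n[p[0]] for p, c in pair_n.items()}
    bwd = {p: c / second_n[p[1]] for p, c in pair_n.items()}
    return fwd, bwd
```

    Running this over a stream of concatenated artificial words would show forward TPs near 1.0 within words and lower TPs across word boundaries, which is the statistical cue the abstract contrasts with the edge-based generalization mechanism.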

    Syntactic learning by mere exposure - An ERP study in adult learners

    Background: Artificial language studies have revealed the remarkable ability of humans to extract syntactic structures from a continuous sound stream by mere exposure. However, it remains unclear whether the processes acquired in such tasks are comparable to those applied during normal language processing. The present study compares the ERPs to auditory processing of simple Italian sentences in native and non-native speakers after brief exposure to Italian sentences of a similar structure. The sentences contained a non-adjacent dependency between an auxiliary and the morphologically marked suffix of the verb. Participants were presented with four alternating learning and testing phases. During learning phases only correct sentences were presented, while during testing phases 50 percent of the sentences contained a grammatical violation. Results: The non-native speakers successfully learned the dependency and displayed an N400-like negativity and a subsequent anteriorly distributed positivity in response to rule violations. The native Italian group showed an N400 followed by a P600 effect. Conclusion: The presence of the P600 suggests that native speakers applied a grammatical rule. In contrast, non-native speakers appeared to use a lexical form-based processing strategy. Thus, the processing mechanisms acquired in the language learning task were only partly comparable to those applied by competent native speakers.