15 research outputs found

    Neural Underpinnings of Phonotactic Rule Learning

    This study used behavioral measures and ERP difference waves to measure the brain processes underlying the categorization of grammatical vs. ungrammatical stimuli according to a lab-learned phonotactic rule. The results show that participants learned the simple rule at the behavioral level (as measured with d-prime, a sensitivity measure for rule violations). This rule learning is also reflected in the brain response to violations of the rule, indexed by the P3 rare-minus-frequent difference waveform. The neural results indicate that this learning took the form of a neural commitment: participants learned the rule and used it to make active predictions, categorizing words as ungrammatical at the exact point of violation. This ability must be instantiated at the neural level, meaning that rapid neural tuning occurred in this laboratory setting.
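    As a rough illustration of the behavioral measure mentioned above, the sketch below computes d-prime from hit and false-alarm counts; the response counts and the log-linear correction are illustrative assumptions, not the study's actual data or analysis pipeline.

```python
# Minimal sketch of a d-prime (sensitivity) calculation of the kind used to
# index rule-violation detection. The counts below are invented for illustration.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical response counts for ungrammatical (rule-violating) test items.
print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```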

    A tale of two lexica: Investigating computational pressures on word representation with neural networks

    Introduction: The notion of a single localized store of word representations has become increasingly less plausible as evidence has accumulated for the widely distributed neural representation of wordform, grounded in motor, perceptual, and conceptual processes. Here, we attempt to combine machine learning methods and neurobiological frameworks to propose a computational model of the brain systems potentially responsible for wordform representation. We tested the hypothesis that the functional specialization of word representation in the brain is driven partly by computational optimization. This hypothesis directly addresses the distinct problems of mapping sound to articulation vs. mapping sound to meaning.
    Results: We found that artificial neural networks trained on the mapping between sound and articulation performed poorly in recognizing the mapping between sound and meaning, and vice versa. Moreover, a network trained on both tasks simultaneously could not discover the features required for efficient mapping between sound and higher-level cognitive states compared with the other two models. Furthermore, these networks developed internal representations reflecting specialized, task-optimized functions without explicit training.
    Discussion: Together, these findings demonstrate that different task-directed representations lead to more focused responses and better performance of a machine or algorithm and, hypothetically, the brain. Thus, we suggest that the functional specialization of word representation mirrors a computational optimization strategy, given the nature of the tasks the human brain faces.
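    In rough outline, the kind of modeling setup described here could look like the toy sketch below: a small encoder is trained on a synthetic sound-to-articulation mapping, and its frozen hidden layer is then probed with a fresh readout for a sound-to-meaning mapping. All data, dimensions, and architectures are invented placeholders, not the authors' actual networks or stimuli.

```python
# Toy sketch of task specialization with two mappings from the same "sound" input.
# Everything here (sizes, data, architecture) is a synthetic placeholder.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, sound_dim, hid, artic_dim, meaning_dim = 512, 40, 64, 20, 30
sound = torch.randn(n, sound_dim)                                        # pseudo acoustic input
articulation = sound @ torch.randn(sound_dim, artic_dim) / sound_dim**0.5  # lawful sound->gesture map
meaning = torch.randn(n, meaning_dim)                                    # arbitrary sound->meaning map

def fit(model, params, targets, epochs=300):
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(sound), targets)
        loss.backward()
        opt.step()
    return loss.item()

encoder = nn.Sequential(nn.Linear(sound_dim, hid), nn.Tanh())
artic_head = nn.Linear(hid, artic_dim)
artic_model = nn.Sequential(encoder, artic_head)
print("trained on sound->articulation:", fit(artic_model, artic_model.parameters(), articulation))

# Freeze the articulation-trained encoder and train only a new readout for meaning,
# a crude analogue of asking one task-specialized system to serve the other task.
for p in encoder.parameters():
    p.requires_grad_(False)
meaning_head = nn.Linear(hid, meaning_dim)
probe = nn.Sequential(encoder, meaning_head)
print("probed for sound->meaning:", fit(probe, meaning_head.parameters(), meaning))
```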

    Is it nouns, verbs, or what is computationally simple that is acquired first? A preliminary study for analyzing the order of acquisition.

    The primary accounts of early lexical differences can be broken down into two distinct theoretical positions that either defend early noun acquisition or provide evidence that challenges this account. This work brings a computational perspective to the problem of early lexical acquisition. It is a preliminary investigation of whether the underlying mechanism relates to computational complexity, by which short, frequent, and unambiguous words are predicted to be acquired first, while long, ambiguous, or infrequent words (including nouns) are predicted not to be acquired early. Our database consists of longitudinal data from three Turkish children between 8 and 36 months of age. We conducted three analyses to test this: (1) in the frequency analysis, we compared the type-token ratios and the numbers of types and tokens of nouns and verbs in both child-directed speech and child speech; (2) in the ambiguity analysis, we examined the role of social and attentional cues in word learning; and (3) in the phonological analysis, we measured the effect of word length on word learning. Results revealed that the most frequent words did not show any noun predominance; instead, the usage rate of verbs was close to that of nouns and sometimes much higher. Furthermore, caretakers did not show any bias toward nouns or object names; on the contrary, there was a preference for verbs in some contexts. The high rate of verbs in child speech also challenged the noun-first view. The ambiguity analysis showed that social and attentional cues in the natural language context were important factors for word learning; accordingly, words disambiguated in context were the most frequent words in child speech. The phonological complexity analysis indicated that word length affected children's word learning: short words were more advantageous than long words.
    M.S. - Master of Science
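    As a rough illustration of the frequency analysis described above, the snippet below computes type and token counts and type-token ratios per part of speech; the tagged tokens are invented stand-ins, not the study's Turkish corpus or its actual annotation scheme.

```python
# Minimal sketch of a noun/verb frequency analysis: count types and tokens per
# part of speech and compute a type-token ratio (TTR). The tagged tokens below
# are invented stand-ins for a real child or child-directed speech corpus.
from collections import Counter

tagged_tokens = [
    ("topu", "NOUN"), ("at", "VERB"), ("topu", "NOUN"), ("gel", "VERB"),
    ("gel", "VERB"), ("anne", "NOUN"), ("bak", "VERB"), ("bak", "VERB"),
]

def ttr_by_pos(tokens):
    counts = {}
    for word, pos in tokens:
        counts.setdefault(pos, Counter())[word] += 1
    return {
        pos: {
            "tokens": sum(c.values()),
            "types": len(c),
            "ttr": len(c) / sum(c.values()),
        }
        for pos, c in counts.items()
    }

for pos, stats in ttr_by_pos(tagged_tokens).items():
    print(pos, stats)
```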

    Unlearnable phonotactics

    The Subregular Hypothesis (Heinz 2010) states that only patterns with specific subregular computational properties are phonologically learnable. Lai (2015) provided the initial laboratory support for this hypothesis. The current study aimed to replicate and extend the earlier findings by using a different experimental paradigm (an oddball task) and a different measure of learning (the sensitivity index d′). Specifically, we compared the learnability of two phonotactic patterns that differ computationally and typologically: a simple rule (“First-Last Assimilation”) that requires agreement between the first and last segment of a word (predicted to be unlearnable), and a harmony rule (“Sibilant Harmony”) that requires the agreement of features throughout the word (predicted to be learnable). The First-Last Assimilation rule was tested under two experimental conditions: one where the training data were also consistent with the Sibilant Harmony rule, and one where the training data were consistent only with the First-Last rule. As in Lai (2015), we found that participants were significantly more sensitive to violations of the Sibilant Harmony (SH) rule than to violations of the First-Last Assimilation (FL) rule. However, unlike Lai (2015), we also found that participants showed some residual sensitivity to the First-Last rule, although this sensitivity interacted with rule type such that participants remained significantly more sensitive to SH rule violations. We conclude that participants in Artificial Grammar Learning experiments exhibit evidence of Universal Grammar constraining their learning, but that patterns predicted to be unlearnable as a linguistic system can still be learned to some degree through non-linguistic learning mechanisms.
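    To make the two patterns concrete, the sketch below checks toy "words" (segment strings) against a first-last agreement constraint and a sibilant harmony constraint; the segment inventory and the simplification of "agreement" to segment identity are assumptions for illustration, not the study's actual stimuli or feature system.

```python
# Toy checkers for the two phonotactic patterns compared in the study.
# The sibilant classes and example words are invented placeholders.
SIB_ANTERIOR = {"s", "z"}      # hypothetical [+anterior] sibilants
SIB_POSTERIOR = {"S", "Z"}     # hypothetical [-anterior] sibilants (esh, ezh)

def first_last_assimilation(word: str) -> bool:
    """First and last segments must agree (simplified here to identity)."""
    return word[0] == word[-1]

def sibilant_harmony(word: str) -> bool:
    """All sibilants in the word must belong to the same class."""
    sibs = [c for c in word if c in SIB_ANTERIOR | SIB_POSTERIOR]
    return all(c in SIB_ANTERIOR for c in sibs) or all(c in SIB_POSTERIOR for c in sibs)

for w in ["sokos", "sokoS", "Sotos", "sotok"]:
    print(w, "FL:", first_last_assimilation(w), "SH:", sibilant_harmony(w))
```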

    A snapshot of pediatric inpatients and outpatients with COVID-19: a point prevalence study from Turkey

    This multi-center point prevalence study evaluated children who were diagnosed as having coronavirus disease 2019 (COVID-19). On February 2nd, 2022, inpatients and outpatients infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) were included in the study from 12 cities and 24 centers in Turkey. Of the 8605 patients seen on February 2nd, 2022, in the participating centers, 706 (8.2%) had COVID-19. The median age of the 706 patients was 92.50 months, 53.4% were female, and 76.7% were inpatients. The three most common symptoms of the patients with COVID-19 were fever (56.6%), cough (41.3%), and fatigue (27.5%). The three most common underlying chronic diseases (UCDs) were asthma (3.4%), neurologic disorders (3.3%), and obesity (2.6%). The rate of SARS-CoV-2-related pneumonia was 10.7%. The COVID-19 vaccination rate was 12.5% among all patients. Among patients aged over 12 years with access to the vaccine provided by the Republic of Turkey Ministry of Health, the vaccination rate was 38.7%. Patients with UCDs presented with dyspnea and pneumonia more frequently than those without UCDs (p < 0.001 for both). The rates of fever, diarrhea, and pneumonia were higher in patients without COVID-19 vaccination (p = 0.001, p = 0.012, and p = 0.027, respectively). Conclusion: To lessen the effects of the disease, all eligible children should receive the COVID-19 vaccine. The illness may particularly endanger children with UCDs.