
    Addressing acquisition from language change: A modeling perspective


    Corpus evidence for the role of world knowledge in ambiguity reduction: Using high positive expectations to inform quantifier scope

    Every-negation utterances (e.g., Every vote doesn’t count) are ambiguous between a surface scope interpretation (e.g., No vote counts) and an inverse scope interpretation (e.g., Not all votes count). Investigations into the interpretation of these utterances have found variation: child and adult interpretations diverge (e.g., Musolino 1999), and adult interpretations of specific constructions show considerable disagreement (Carden 1973, Heringer 1970, Attali et al. 2021). Can we concretely identify factors that explain some of this variation and predict tendencies in individual interpretations? Here we show that a type of expectation about the world (which we call a high positive expectation), which can surface in the linguistic contexts of every-negation utterances, predicts experimental preferences for the inverse scope interpretation of different every-negation utterances. These findings suggest that (1) world knowledge, as set up in a linguistic context, helps to effectively reduce the ambiguity of potentially ambiguous utterances for listeners, and (2) given that high positive expectations are a kind of affirmative context, negation use can be felicitous in affirmative contexts (e.g., Wason 1961).

    Necessary Bias in Natural Language Learning

    This dissertation investigates the mechanism of language acquisition given the boundary conditions provided by linguistic representation and the time course of acquisition. Exploration of the mechanism is vital once we consider the complexity of the system to be learned and the non-transparent relationship between the observable data and the underlying system. It is not enough to restrict the potential systems the learner could acquire, which can be done by defining a finite set of parameters the learner must set. Even supposing that the system is defined by n binary parameters, we must still explain how the learner converges on the correct system(s) out of the possible 2^n systems, using data that is often highly ambiguous and exception-filled. The main discovery from the case studies presented here is that learners can in fact succeed provided they are biased to use only a subset of the available input that is perceived as a cleaner representation of the underlying system. The case studies are embedded in a framework that conceptualizes language learning as three separable components, assuming that learning is the process of selecting the best-fit option given the available data. These components are (1) a defined hypothesis space, (2) a definition of the data used for learning (data intake), and (3) an algorithm that updates the learner's belief in the available hypotheses, based on data intake. One benefit of this framework is that components can be investigated individually. Moreover, defining the learning components in this somewhat abstract manner allows us to apply the framework to a range of language learning problems and linguistic domains. In addition, we can combine discrete linguistic representations with probabilistic methods and so account for the gradualness and variation in learning that human children display.
The tool of exploration for these case studies is computational modeling, which proves very useful for addressing the feasibility, sufficiency, and necessity of data intake filtering, since these questions would be very difficult to address with traditional experimental techniques. In addition, the results of computational modeling can generate predictions that can then be tested experimentally.
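The three-component framework described above can be illustrated with a minimal sketch. All names, numbers, and the specific update rule below are invented for illustration (the dissertation's actual case studies use different representations and models): a one-binary-parameter hypothesis space, an intake filter that keeps only data perceived as unambiguous, and a gradual probabilistic belief update.

```python
# Minimal sketch of the three learning components (all names and
# numbers are illustrative, not taken from the case studies):
# (1) hypothesis space: one binary parameter with values "A" and "B";
# (2) data intake: a filter keeping only unambiguous data points;
# (3) update algorithm: a gradual probabilistic shift of belief
#     toward the hypothesis each intake data point supports.

def intake_filter(point):
    """Keep only data perceived as an unambiguous signal."""
    return not point["ambiguous"]

def update(belief_in_A, point, rate=0.05):
    """Nudge belief toward the value this data point supports."""
    if point["supports"] == "A":
        return belief_in_A + rate * (1.0 - belief_in_A)
    return belief_in_A * (1.0 - rate)

belief = 0.5  # initial belief that value "A" is correct
stream = ([{"ambiguous": False, "supports": "A"}] * 8
          + [{"ambiguous": True, "supports": "B"}] * 12)

for point in stream:
    if intake_filter(point):          # ambiguous data is ignored
        belief = update(belief, point)

print(round(belief, 3))  # belief moves toward "A" despite the noisier "B" data
```

Because the filter discards the twelve ambiguous points, only the cleaner "A" data drives the update, mirroring the dissertation's claim that restricting intake to a cleaner subset of the input can let the learner converge.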

    The utility of cognitive plausibility in language acquisition modeling: Evidence from word segmentation. (Manuscript)

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints in general terms, and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We create a more cognitively plausible model of this learning strategy which uses an age-appropriate unit of perceptual representation, evaluates the model output in terms of its utility, and incorporates cognitive constraints into the inference process. Our more cognitively plausible model of the Bayesian word segmentation strategy not only yields better performance than previous implementations but also shows more strongly the beneficial effect of cognitive constraints on segmentation. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition.
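The segmentation step at the heart of such a strategy can be sketched as dynamic programming over word probabilities. The toy lexicon, log-probabilities, and function names below are invented for illustration; the Bayesian strategy discussed above additionally infers word probabilities from unsegmented input rather than being handed a lexicon.

```python
import math

# Toy illustration of the decoding step underlying Bayesian word
# segmentation: given word log-probabilities (a hand-picked toy
# lexicon here, not learned), recover the most probable segmentation
# of an unsegmented phoneme string via dynamic programming.

def best_segmentation(utterance, word_logprob, max_word_len=6):
    n = len(utterance)
    # best[i] = (log-prob of the best segmentation of utterance[:i],
    #            start index of the last word in that segmentation)
    best = [(-math.inf, None)] * (n + 1)
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_word_len), end):
            word = utterance[start:end]
            if word in word_logprob and best[start][0] > -math.inf:
                score = best[start][0] + word_logprob[word]
                if score > best[end][0]:
                    best[end] = (score, start)
    # Trace back the highest-scoring segmentation.
    words, end = [], n
    while end > 0:
        start = best[end][1]
        words.append(utterance[start:end])
        end = start
    return list(reversed(words))

lexicon = {"yu": math.log(0.3), "want": math.log(0.2),
           "the": math.log(0.25), "doggy": math.log(0.1)}
print(best_segmentation("yuwantthedoggy", lexicon))
# → ['yu', 'want', 'the', 'doggy']
```

The cognitive constraints the abstract describes would limit how much of this search the learner can actually perform (e.g., restricting memory for competing hypotheses), which is why less exact inference can interact with the model's generative assumptions.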