11 research outputs found

    Towards a Robuster Interpretive Parsing

    The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this problem. We generalize RIP and suggest replacing the winner candidate with a weighted mean violation of the potential winner candidates. A Boltzmann distribution is introduced on the winner set, and the distribution’s parameter T is gradually decreased. Finally, we show that GRIP, the Generalized Robust Interpretive Parsing Algorithm, significantly improves the learning success rate in a model with standard constraints for metrical stress assignment.
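
    A minimal Python sketch of the Boltzmann-weighted "mean winner" idea described in this abstract. The energy values, number of constraints, and cooling schedule are illustrative assumptions, not the paper's actual grammar or parameter settings.

        import math

        def boltzmann_weights(energies, T):
            """Weights proportional to exp(-E/T) over the potential winner set."""
            scaled = [math.exp(-(e - min(energies)) / T) for e in energies]
            z = sum(scaled)
            return [s / z for s in scaled]

        def mean_violation_profile(candidates, T):
            """candidates: list of (energy, violation_vector) for the potential
            winners of one overt form. Returns the weighted mean violation vector
            used in place of a single winner candidate."""
            energies = [e for e, _ in candidates]
            weights = boltzmann_weights(energies, T)
            n_constraints = len(candidates[0][1])
            return [sum(w * v[c] for w, (_, v) in zip(weights, candidates))
                    for c in range(n_constraints)]

        # Toy example: three potential winners, two constraints. As T is lowered,
        # the profile concentrates on the lowest-energy parse.
        cands = [(1.0, [0, 2]), (2.0, [1, 0]), (4.0, [3, 1])]
        for T in (2.0, 1.0, 0.25):
            print(T, mean_violation_profile(cands, T))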

    Uncovering structure hand in hand: Joint Robust Interpretive Parsing in Optimality Theory

    Most linguistic theories postulate structures with covert information, not directly recoverable from utterances. Hence, learners have to interpret their data before drawing conclusions. Within the framework of Optimality Theory (OT), Tesar & Smolensky (1998) proposed Robust Interpretive Parsing (RIP), suggesting that learners rely on their still imperfect grammars to interpret the learning data. I introduce an alternative, more cautious approach, Joint Robust Interpretive Parsing (JRIP). The learner entertains a population of several grammars, which join forces to interpret the learning data. A standard metrical phonology grammar is employed to demonstrate that JRIP performs significantly better than RIP.
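
    A hypothetical illustration of the "population of grammars" idea mentioned above: each grammar picks its preferred full parse of an overt form, and the population votes on which parse to learn from. The voting rule, cost functions, and parse labels are illustrative assumptions, not the paper's definition of JRIP.

        from collections import Counter

        def best_parse(grammar, parses):
            """Pick the parse a single grammar prefers (lowest cost under that grammar)."""
            return min(parses, key=lambda p: grammar(p))

        def joint_interpretation(grammars, parses):
            """Let every grammar in the population interpret the overt form,
            then return the parse chosen by the most grammars."""
            votes = Counter(best_parse(g, parses) for g in grammars)
            return votes.most_common(1)[0][0]

        # Toy example: two full parses of one overt stress pattern,
        # three grammars encoded as cost tables over those parses.
        parses = ("(s w) s", "s (w s)")
        grammars = [
            {"(s w) s": 0, "s (w s)": 1},
            {"(s w) s": 2, "s (w s)": 0},
            {"(s w) s": 1, "s (w s)": 0},
        ]
        print(joint_interpretation([g.get for g in grammars], parses))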

    Efficient Evaluation and Learning in Multilevel Parallel Constraint Grammars

    In multilevel parallel Optimality Theory grammars, the number of candidates (possible paths from the input to the output level) increases exponentially with the number of levels of representation. The problem with this is that with the customary strategy of listing all candidates in a tableau, the computation time for evaluation (i.e., choosing the winning candidate) and learning (i.e., reranking the constraints on the basis of language data) increases exponentially with the number of levels as well. This article proposes instead to collect the candidates in a graph in which the number of nodes and the number of connections increase only linearly with the number of levels of representation. As a result, there exist procedures for evaluation and learning that increase only linearly with the number of levels. These efficient procedures help to make multilevel parallel constraint grammars more feasible as models of human language processing. We illustrate visualization, evaluation, and learning with a toy grammar for a traditional case that has previously been analyzed in terms of parallel evaluation, namely, French liaison.
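
    A minimal Python sketch of the layered-graph idea described in this abstract: instead of enumerating every input-to-output path (exponential in the number of levels), candidates are stored as a trellis and the best path is found by dynamic programming, which grows only linearly with the number of levels. The levels, node labels, and edge costs below are invented for illustration (a toy nasal assimilation case), not the article's French liaison grammar.

        def best_path(levels, edge_cost):
            """levels: list of lists of node labels, one list per representation level.
            edge_cost(a, b): summed (weighted) constraint violations of connecting a to b.
            Returns (total_cost, path) for the optimal candidate."""
            # best[node] = (cost of the best partial path ending at node, that path)
            best = {n: (0.0, [n]) for n in levels[0]}
            for lower, upper in zip(levels, levels[1:]):
                best = {
                    b: min(((best[a][0] + edge_cost(a, b), best[a][1] + [b]) for a in lower),
                           key=lambda t: t[0])
                    for b in upper
                }
            return min(best.values(), key=lambda t: t[0])

        # Toy three-level grammar: underlying form -> surface form -> phonetic form.
        levels = [["|an+pa|"], ["anpa", "ampa"], ["[anpa]", "[ampa]"]]
        cost = {("|an+pa|", "anpa"): 1, ("|an+pa|", "ampa"): 0,
                ("anpa", "[anpa]"): 0, ("anpa", "[ampa]"): 2,
                ("ampa", "[anpa]"): 2, ("ampa", "[ampa]"): 0}
        print(best_path(levels, lambda a, b: cost[(a, b)]))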

    OT grammars don’t count, but make errors: The consequences of strict domination for simulated annealing


    K + K = 120: Papers dedicated to László Kálmán and András Kornai on the occasion of their 60th birthdays
