    Off-line Constraint Propagation for Efficient HPSG Processing

    We investigate the use of a technique developed in the constraint programming community, called constraint propagation, to automatically make an HPSG theory more specific at those places where linguistically motivated underspecification would lead to inefficient processing. We discuss two concrete HPSG examples showing how off-line constraint propagation helps improve processing efficiency.
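    The paper's propagation operates off-line on typed feature structures of an HPSG theory; as a rough, generic illustration of the underlying idea (not the paper's algorithm), the Python sketch below runs arc-consistency (AC-3) style propagation over finite feature domains, narrowing underspecified values before any runtime processing. All names and the agreement example are our own assumptions.

        from collections import deque

        def propagate(domains, constraints):
            """Generic arc-consistency (AC-3) propagation: repeatedly prune
            values from each variable's domain that no value of a related
            variable supports, until a fixpoint is reached.

            domains:     {var: set of possible values}
            constraints: {(x, y): predicate(vx, vy) -> bool}  (binary relations)
            """
            queue = deque(constraints.keys())   # directed arcs to (re)check
            while queue:
                x, y = queue.popleft()
                pred = constraints[(x, y)]
                # Keep only values of x that some value of y supports.
                supported = {vx for vx in domains[x]
                             if any(pred(vx, vy) for vy in domains[y])}
                if supported != domains[x]:
                    domains[x] = supported
                    # x's domain shrank: revisit every arc pointing at x.
                    queue.extend(arc for arc in constraints if arc[1] == x)
            return domains

        # Hypothetical agreement example: an underspecified subject number
        # is narrowed off-line because the verb is already singular.
        doms = {"subj_num": {"sg", "pl"}, "verb_num": {"sg"}}
        cons = {("subj_num", "verb_num"): lambda a, b: a == b,
                ("verb_num", "subj_num"): lambda a, b: a == b}
        print(propagate(doms, cons))   # subj_num narrowed to {'sg'}

    Because this narrowing happens once, at compile time, the runtime system never explores the value the propagation ruled out.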

    Hebbian learning in recurrent neural networks for natural language processing

    This research project examines Hebbian learning in recurrent neural networks for natural language processing and attempts to interpret language at the level of a two-and-one-half-year-old child. In this project five neural networks were built to interpret natural language: a Simple Recurrent Network with Hebbian learning, a Jordan network with Hebbian learning and one hidden layer, a Jordan network with Hebbian learning and no hidden layers, a Simple Recurrent Network with backpropagation learning, and a nonrecurrent neural network with backpropagation learning. It is known that Hebbian learning works well when the input vectors are orthogonal, but, as this project shows, it does not perform well in recurrent neural networks for natural language processing when the input vectors for the individual words are approximately orthogonal. The project shows that, given approximately orthogonal vectors to represent each word in the vocabulary, the input vectors for a given command are not approximately orthogonal, and the internal representations that the networks build are similar for different commands. As the data show, the Hebbian learning networks were unable to perform the natural language interpretation task, while the backpropagation networks were much more successful. Therefore, Hebbian learning does not work well in recurrent neural networks for natural language processing even when the input vectors for the individual words are approximately orthogonal.
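    To see why orthogonality matters here, the following numpy sketch (our own illustration, not the project's code) implements the plain Hebbian outer-product rule. With exactly orthonormal inputs, recall is perfect; with merely approximately orthogonal inputs, crosstalk already corrupts the output, which is consistent with the project's finding that command-level inputs lose orthogonality.

        import numpy as np

        rng = np.random.default_rng(0)

        def hebbian_train(pairs, dim_out, dim_in):
            """Plain Hebbian rule: W is the sum of outer products y x^T."""
            W = np.zeros((dim_out, dim_in))
            for x, y in pairs:
                W += np.outer(y, x)   # dW ~ post * pre (Hebb's rule)
            return W

        # Orthonormal inputs (one-hot vectors) -> perfect recall.
        xs = np.eye(4)
        ys = rng.standard_normal((4, 3))
        W = hebbian_train(zip(xs, ys), 3, 4)
        print(np.allclose(W @ xs[0], ys[0]))             # True: no interference

        # Approximately orthogonal inputs -> crosstalk corrupts recall.
        xs_noisy = xs + 0.2 * rng.standard_normal(xs.shape)
        W2 = hebbian_train(zip(xs_noisy, ys), 3, 4)
        print(np.linalg.norm(W2 @ xs_noisy[0] - ys[0]))  # noticeably nonzero

    Recall of pattern i computes W x_i = sum_j y_j (x_j . x_i); only when the dot products of distinct inputs are exactly zero do the unwanted terms vanish.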

    Modularizing Contexted Constraints

    This paper describes a method for compiling a constraint-based grammar into a potentially more efficient form for processing. The method takes dependent disjunctions within a constraint formula and factors them into non-interacting groups whenever possible by determining their independence. When a group of dependent disjunctions is split into smaller groups, an exponential amount of redundant information is removed; at runtime, this means that an exponential amount of processing can be saved as well. Since the performance of an algorithm for processing constraints with dependent disjunctions is largely determined by its input, the transformation presented in this paper should prove beneficial for all such algorithms. There are two facts that conspire to make the treatment of disjunction an important consideration when building a natural language processing (NLP) system. The first is that natural languages are full of ambiguities, and in a grammar many of these ambiguities…
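    A rough way to picture the factoring step (our own sketch; the paper itself works on contexted constraints, and interaction is only approximated here as variable sharing): treat each dependent disjunction as a node, connect two nodes when their disjunctions interact, and take connected components. Disjunctions in different components cannot influence one another and can be solved separately, so a combined search space of size |d1|·|d2|·|d3| factors into independent smaller ones.

        from itertools import combinations

        def factor_disjunctions(disjunctions):
            """Split disjunctions into non-interacting groups.

            disjunctions: {name: set of variables mentioned by that disjunction}
            Two disjunctions interact if they share a variable; the groups are
            the connected components of that graph, found via union-find.
            """
            parent = {d: d for d in disjunctions}

            def find(d):
                while parent[d] != d:
                    parent[d] = parent[parent[d]]  # path halving
                    d = parent[d]
                return d

            for a, b in combinations(disjunctions, 2):
                if disjunctions[a] & disjunctions[b]:   # shared variable
                    parent[find(a)] = find(b)           # merge their groups

            groups = {}
            for d in disjunctions:
                groups.setdefault(find(d), set()).add(d)
            return list(groups.values())

        # d1 and d2 share CASE, so they stay together; d3 is independent.
        # The space factors from |d1|*|d2|*|d3| into |d1|*|d2| plus |d3|.
        print(factor_disjunctions({
            "d1": {"CASE", "NUM"},
            "d2": {"CASE"},
            "d3": {"PERS"},
        }))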
