3 research outputs found

    Computer Modelling of English Grammar

    Recent work in artificial intelligence has developed a number of techniques which are particularly appropriate for constructing a model of the process of understanding English sentences. These methods are used here in the definition of a framework for linguistic description, called "computational grammar". This framework is employed to explore the details of the operations involved in transforming an English sentence into a general semantic representation. Computational grammar includes both "syntactic" and "semantic" constructs, in order to clarify the interactions between all the various kinds of information, and treats the sentence-analysis process as having a semantic goal which may require syntactic means to achieve it. The sentence-analyser is based on the concept of an "augmented transition network grammar", modified to minimise unwanted top-down processing and unnecessary embedding. The analyser does not build a purely syntactic structure for a sentence, but the semantic rules operate hierarchically in a way which reflects the traditional tree structure. The processing operations are simplified by using temporary storage to postpone premature decisions or to conflate different options. The computational grammar framework has been applied to a few areas of English, including relative clauses, referring expressions, verb phrases and tense. A computer program ("MCHINE") has been written which implements the constructs of computational grammar and some of the linguistic descriptions of English. A number of sentences have been successfully processed by the program, which can carry on a simple dialogue as well as building semantic representations for isolated sentences.
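
    The sketch below illustrates, in Python, only the basic augmented-transition-network mechanism the abstract refers to, not the thesis's modifications or the MCHINE program itself. The network states, arc labels, toy lexicon and register names are invented for illustration; the registers loosely stand in for the "temporary storage" mentioned above.

    # Minimal sketch of an augmented-transition-network (ATN) style analyser.
    # States, arcs, lexicon and register names are illustrative only.

    LEXICON = {
        "the": "DET", "a": "DET",
        "dog": "N", "cat": "N",
        "chased": "V", "saw": "V",
    }

    # Each arc: (test label, target state, register to fill)
    NETWORK = {
        "S/":     [("NP", "S/NP", "subject")],
        "S/NP":   [("V",  "S/V",  "verb")],
        "S/V":    [("NP", "S/END", "object")],
        "NP/":    [("DET", "NP/DET", "det"), ("N", "NP/END", "head")],
        "NP/DET": [("N",  "NP/END", "head")],
    }
    FINAL = {"S/END", "NP/END"}


    def parse(state, words, registers):
        """Try to traverse the network from `state`, consuming all of `words`."""
        if state in FINAL and not words:
            return registers
        if not words:
            return None
        for label, target, reg in NETWORK.get(state, []):
            if label + "/" in NETWORK:                # PUSH arc: recurse into a sub-network
                for split in range(1, len(words) + 1):
                    sub = parse(label + "/", words[:split], {})
                    if sub is not None:
                        result = parse(target, words[split:], {**registers, reg: sub})
                        if result is not None:
                            return result
            elif LEXICON.get(words[0]) == label:      # CAT arc: consume one word
                result = parse(target, words[1:], {**registers, reg: words[0]})
                if result is not None:
                    return result
        return None


    print(parse("S/", "the dog chased a cat".split(), {}))
    # {'subject': {'det': 'the', 'head': 'dog'}, 'verb': 'chased',
    #  'object': {'det': 'a', 'head': 'cat'}}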

    Parsing natural language

    People have long been intrigued by the possibility of using a computer to understand natural language. Most researchers attempting to solve this problem have begun their efforts by trying to have the computer recognize the underlying syntactic form (the parse tree) of the sentence. This thesis presents an overview of the history of syntactic parsing of natural language, and it compares the major methods that have been used. Linguistically, two recent grammars are described: transformational grammar and systemic grammar. Computationally, three parsing strategies are described and compared: top-down parsing, bottom-up parsing, and a combination of both of these methods. Several important natural language systems are described, including Woods' LUNAR program, Winograd's SHRDLU, and Marcus' PARSIFAL.
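
    As a rough illustration of the two strategies being compared, the Python sketch below runs a top-down (recursive-descent) recognizer and a naive bottom-up (shift-reduce) recognizer over the same toy grammar. The grammar, lexicon and sentence are invented for the example and do not reproduce any of the systems surveyed in the thesis.

    # Toy contrast of top-down and bottom-up recognition over one grammar.

    GRAMMAR = {                      # right-hand sides for each non-terminal
        "S":  [["NP", "VP"]],
        "NP": [["DET", "N"]],
        "VP": [["V", "NP"], ["V"]],
    }
    LEXICON = {"the": "DET", "dog": "N", "cat": "N", "chased": "V"}


    def top_down(symbols, words):
        """Recursive-descent recognizer: expand goals left to right."""
        if not symbols:
            return not words
        head, rest = symbols[0], symbols[1:]
        if head in GRAMMAR:                          # non-terminal: try each expansion
            return any(top_down(rhs + rest, words) for rhs in GRAMMAR[head])
        return bool(words) and LEXICON.get(words[0]) == head and top_down(rest, words[1:])


    def bottom_up(words):
        """Naive shift-reduce recognizer: reduce whenever a rule's RHS appears."""
        stack = [LEXICON[w] for w in words]          # shift everything, then reduce
        changed = True
        while changed:
            changed = False
            for lhs, rhss in GRAMMAR.items():
                for rhs in rhss:
                    for i in range(len(stack) - len(rhs) + 1):
                        if stack[i:i + len(rhs)] == rhs:
                            stack[i:i + len(rhs)] = [lhs]
                            changed = True
        return stack == ["S"]


    sentence = "the dog chased the cat".split()
    print(top_down(["S"], sentence), bottom_up(sentence))   # True True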

    Coping with Uncertainty: Noun Phrase Interpretation and Early Semantic Analysis

    A computer program which can "understand" natural language texts must have both syntactic knowledge about the language concerned and semantic knowledge of how what is written relates to its internal representation of the world. It has been a matter of some controversy how these sources of information can best be integrated to translate from an input text to a formal meaning representation. The controversy has concerned largely the question of what degree of syntactic analysis must be performed before any semantic analysis can take place. An extreme position in this debate is that a syntactic parse tree for a complete sentence must be produced before any investigation of that sentence's meaning is appropriate. This position has been criticised by those who see understanding as a process that takes place gradually as the text is read, rather than in sudden bursts of activity at the ends of sentences. These people advocate a model where semantic analysis can operate on fragments of text before the global syntactic structure is determined - a strategy which we will call early semantic analysis.

    In this thesis, we investigate the implications of early semantic analysis in the interpretation of noun phrases. One possible approach is to say that a noun phrase is a self-contained unit and can be fully interpreted by the time it has been read. Thus it can always be determined what objects a noun phrase refers to without consulting much more than the structure of the phrase itself. This approach was taken in part by Winograd [Winograd 72], who saw the constraint that a noun phrase have a referent as a valuable aid in resolving local syntactic ambiguity. Unfortunately, Winograd's work has been criticised by Ritchie, because it is not always possible to determine what a noun phrase refers to purely on the basis of local information. In this thesis, we will go further than this and claim that, because the meaning of a noun phrase can be affected by so many factors outside the phrase itself, it makes no sense to talk about "the referent" as a function of a noun phrase. Instead, the notion of "referent" is something defined by global issues of structure and consistency.

    Having rejected one approach to the early semantic analysis of noun phrases, we go on to develop an alternative, which we call incremental evaluation. The basic idea is that a noun phrase does provide some information about what it refers to. It should be possible to represent this partial information and gradually refine it as relevant implications of the context are followed up. Moreover, the partial information should be available to an inference system, which, amongst other things, can detect the absence of a referent and provide the advantages of Winograd's system. In our system, noun phrase interpretation does take place locally, but the point is that it does not finish there. Instead, the determination of the meaning of a noun phrase is spread over the subsequent analysis of how it contributes to the meaning of the text as a whole.
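
    The following is a small Python sketch of the incremental-evaluation idea described above, under the assumption of a toy world model: a noun phrase contributes a partial description whose candidate referents are narrowed as further context arrives, and an empty candidate set signals the absence of a referent. The class name, world model and property names are hypothetical and not taken from the thesis.

    # Rough sketch of "incremental evaluation": a noun phrase yields a partial
    # description of its referent, which later context gradually narrows down.
    # The tiny world model and property names are invented for illustration.

    WORLD = {
        "b1": {"type": "block", "colour": "red",   "size": "big"},
        "b2": {"type": "block", "colour": "green", "size": "small"},
        "p1": {"type": "pyramid", "colour": "red", "size": "small"},
    }


    class PartialReferent:
        """Holds the candidates still consistent with everything read so far."""

        def __init__(self):
            self.candidates = set(WORLD)

        def constrain(self, feature, value):
            """Refine the description; later constraints may come from wider context."""
            self.candidates = {c for c in self.candidates
                               if WORLD[c].get(feature) == value}
            if not self.candidates:                 # an inference system can flag this
                raise ValueError(f"no referent satisfies {feature}={value}")
            return self

        def referent(self):
            """Only commit once exactly one candidate remains."""
            return next(iter(self.candidates)) if len(self.candidates) == 1 else None


    np = PartialReferent().constrain("colour", "red")   # "the red one": still ambiguous
    print(np.referent())                                # None - two red objects remain
    np.constrain("type", "block")                       # later context: it is a block
    print(np.referent())                                # 'b1' - now unique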