64,820 research outputs found

    An approach for the automated synthesis of technical processes

    This paper considers the implications of introducing a computational method for technical process synthesis, founded on the Theory of Technical Systems, as an addition to current Computational Design Synthesis (CDS) methods and tools. The paper presents a computational method comprising a formal model of the technical process based on a labelled multidigraph, and a formal model of technical process synthesis based on graph-grammar transformations over labelled multidigraphs. Applying these transformations to the multidigraph generates variants showing how the technical process could be accomplished.
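
    The multidigraph formalism lends itself to a brief illustration. The sketch below is not the paper's actual formalism: it assumes the networkx library and invents its own node and edge labels ("raw", "cut", "polish" and so on). It shows a labelled multidigraph with one abstract "transform" edge and a single graph-grammar rewrite that refines that edge into a chain of concrete operations, i.e. one process variant.

        import networkx as nx

        def make_process_graph():
            """A labelled multidigraph with one abstract 'transform' edge."""
            g = nx.MultiDiGraph()
            g.add_node("raw", kind="operand", state="initial")
            g.add_node("part", kind="operand", state="final")
            g.add_edge("raw", "part", key=0, label="transform")
            return g

        def refine_transform(g, ops=("cut", "polish")):
            """One rewrite rule: replace each abstract 'transform' edge with
            a chain of concrete operations, yielding one process variant."""
            h = g.copy()
            for u, v, k, data in list(g.edges(keys=True, data=True)):
                if data.get("label") != "transform":
                    continue
                h.remove_edge(u, v, key=k)
                prev = u
                for i, op in enumerate(ops):
                    nxt = v if i == len(ops) - 1 else f"{u}_state{i}"
                    if nxt != v:
                        h.add_node(nxt, kind="operand", state="intermediate")
                    h.add_edge(prev, nxt, label=op)
                    prev = nxt
            return h

        variant = refine_transform(make_process_graph())
        print(sorted(variant.edges(data="label")))
        # [('raw', 'raw_state0', 'cut'), ('raw_state0', 'part', 'polish')]

    Applying different rule sets in place of the assumed `ops` tuple yields different variants, which mirrors the paper's idea that grammar transformations enumerate alternative realisations of the same technical process.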

    "The Predication Semantics Model: The Role of Predicate: Class in Text Comprehension and Recall"

    This paper presents and tests the predication semantics model, a computational model of text comprehension. It goes beyond previous case grammar approaches to text comprehension by employing a propositional notion rather than a rigid hierarchical tree, attempting to maintain a coherent set of propositions in working memory. The authors' assertion is that predicate class contains semantic information that readers use to make generally accurate predictions about a given proposition. Thus, the main purpose of the model, which works as a series of input and reduction cycles, is to explore the extent to which predicate categories play a role in reading comprehension and recall. In the reduction phase, the propositions entered into memory during the input phase are pruned while coherence is maintained among them. In an examination of working memory at the end of each cycle, the computational model maintained coherence for 70% of cycles. The model appeared prone to serial dependence in errors: the coherence problem appears to occur because, unlike real readers, the simulation does not reread when necessary. Overall, the experiment suggested that the predication semantics model is robust. The results suggested that the model emulates a primary process in text comprehension: predicate categories provide semantic information that helps to initiate and control automatic processes in reading, and allows people to grasp the gist of a text even when they have only minimal background knowledge. The model still needs refinement in several areas presenting minor problems, for example a sufficiently complex memory to ensure that when the simulation goes wrong it does not, as at present, stay wrong for successive intervals. Even so, its success at the current restrictive level of detail demonstrates the importance of the semantic information in predicate categories.
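
    As a rough illustration of the input/reduction cycle, the sketch below represents propositions as predicate-argument tuples and treats coherence as simple argument overlap. The capacity limit, the toy propositions and the retention policy are invented for the example; this is not the authors' implementation.

        CAPACITY = 3   # assumed working-memory limit, in propositions

        def overlaps(p, q):
            """Two propositions cohere if they share at least one argument."""
            return bool(set(p[1:]) & set(q[1:]))

        def reduce_memory(memory, capacity=CAPACITY):
            """Reduction phase: keep the newest proposition, preferring
            earlier propositions that cohere with it, most recent first."""
            if len(memory) <= capacity:
                return list(memory)
            newest, earlier = memory[-1], memory[:-1]
            ranked = sorted(range(len(earlier)),
                            key=lambda i: (not overlaps(earlier[i], newest), -i))
            kept = [earlier[i] for i in sorted(ranked[:capacity - 1])]
            return kept + [newest]

        def comprehend(propositions):
            """Run input and reduction cycles; return the memory trace."""
            memory, trace = [], []
            for prop in propositions:            # input phase: one proposition per cycle
                memory.append(prop)
                memory = reduce_memory(memory)   # reduction phase
                trace.append(list(memory))
            return trace

        props = [("LIKE", "mary", "dog"), ("OWN", "john", "dog"),
                 ("BARK", "dog"), ("SLEEP", "cat")]
        for cycle, mem in enumerate(comprehend(props), 1):
            print(cycle, mem)

    Checking whether each cycle's retained set stays connected by argument overlap is roughly the kind of end-of-cycle coherence test the 70% figure above refers to.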

    Computer Modelling of English Grammar

    Recent work in artificial intelligence has developed a number of techniques which are particularly appropriate for constructing a model of the process of understanding English sentences. These methods are used here in the definition of a framework for linguistic description, called "computational grammar". This framework is employed to explore the details of the operations involved in transforming an English sentence into a general semantic representation. Computational grammar includes both "syntactic" and "semantic" constructs, in order to clarify the interactions between the various kinds of information, and treats the sentence-analysis process as having a semantic goal which may require syntactic means to achieve it. The sentence-analyser is based on the concept of an "augmented transition network grammar", modified to minimise unwanted top-down processing and unnecessary embedding. The analyser does not build a purely syntactic structure for a sentence, but the semantic rules operate hierarchically in a way which reflects the traditional tree structure. The processing operations are simplified by using temporary storage to postpone premature decisions or to conflate different options. The computational grammar framework has been applied to a few areas of English, including relative clauses, referring expressions, verb phrases and tense. A computer program ("MCHINE") has been written which implements the constructs of computational grammar and some of the linguistic descriptions of English. A number of sentences have been successfully processed by the program, which can carry on a simple dialogue as well as building semantic representations for isolated sentences.
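
    To make the "augmented transition network" idea concrete, here is a toy ATN in the same general spirit, with registers filled as arcs are traversed. The lexicon, the networks and the register names are invented for illustration and are not taken from MCHINE.

        # Each network maps a state to arcs of the form (test, register, next state).
        # 'CAT x' consumes one word of category x; 'PUSH N' recurses into network N.
        LEXICON = {"the": "DET", "dog": "N", "cat": "N", "chased": "V"}

        NETWORKS = {
            "S":  {"q0": [("PUSH NP", "subj", "q1")],
                   "q1": [("CAT V",   "verb", "q2")],
                   "q2": [("PUSH NP", "obj",  "END")]},
            "NP": {"q0": [("CAT DET", "det",  "q1")],
                   "q1": [("CAT N",   "head", "END")]},
        }

        def parse(net, words, i=0):
            """Depth-first ATN traversal: returns (registers, next position) or None."""
            def walk(state, pos, regs):
                if state == "END":
                    return regs, pos
                for test, reg, nxt in NETWORKS[net].get(state, []):
                    kind, arg = test.split()
                    if kind == "CAT" and pos < len(words) and LEXICON.get(words[pos]) == arg:
                        found = walk(nxt, pos + 1, {**regs, reg: words[pos]})
                        if found:
                            return found
                    elif kind == "PUSH":
                        sub = parse(arg, words, pos)   # recurse into subnetwork
                        if sub:
                            found = walk(nxt, sub[1], {**regs, reg: sub[0]})
                            if found:
                                return found
                return None
            return walk("q0", i, {})

        print(parse("S", "the dog chased the cat".split()))

    The PUSH arcs give the recursive, tree-like behaviour, and the register dictionary stands in for the "augmentation". Note that this toy is purely top-down; the thesis's modifications specifically reduce unwanted top-down prediction, which the sketch does not attempt.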

    Concurrent Lexicalized Dependency Parsing: The ParseTalk Model

    A grammar model for concurrent, object-oriented natural language parsing is introduced. Complete lexical distribution of grammatical knowledge is achieved by building upon the head-oriented notions of valency and dependency, while inheritance mechanisms are used to capture lexical generalizations. The underlying concurrent computation model relies upon the actor paradigm. We consider message passing protocols for establishing dependency relations and for ambiguity handling. Comment: 90kB, 7 pages, PostScript
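
    The actor idea can be sketched as follows: each word is an object holding a valency frame, and an arriving word exchanges "head search" messages with the words already read, establishing a dependency when a frame slot matches. The frames and the message protocol below are illustrative assumptions, simulated sequentially rather than with real concurrency; this is not the ParseTalk protocol itself.

        from collections import deque

        class WordActor:
            def __init__(self, word, cat, valency):
                self.word, self.cat = word, cat
                self.valency = dict(valency)   # role -> category of wanted dependent
                self.dependents = {}           # role -> dependent actor
                self.head = None

            def receive(self, msg, sender):
                """'headSearch': try to adopt `sender` as a dependent of self."""
                if msg == "headSearch" and sender.head is None:
                    for role, cat in self.valency.items():
                        if cat == sender.cat and role not in self.dependents:
                            self.dependents[role] = sender
                            sender.head = self
                            return True
                return False

        def parse(tokens):
            actors, mailbox = [], deque()
            for word, cat, valency in tokens:
                actor = WordActor(word, cat, valency)
                # Exchange messages with every actor already in the parse space.
                for other in reversed(actors):
                    mailbox.append((actor, "headSearch", other))  # actor governs other?
                    mailbox.append((other, "headSearch", actor))  # other governs actor?
                actors.append(actor)
                while mailbox:
                    receiver, msg, sender = mailbox.popleft()
                    receiver.receive(msg, sender)
            return actors

        tokens = [("the", "DET", {}),
                  ("parser", "N", {"spec": "DET"}),
                  ("runs", "V", {"subj": "N"})]
        for a in parse(tokens):
            for role, dep in a.dependents.items():
                print(f"{a.word} --{role}--> {dep.word}")

    In a genuinely concurrent implementation each actor would run its own message loop, which is where ambiguity handling (competing attachments) becomes a protocol question rather than a control-flow one.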

    Structured Prediction of Sequences and Trees using Infinite Contexts

    Linguistic structures exhibit a rich array of global phenomena; however, commonly used Markov models are unable to adequately describe these phenomena due to their strong locality assumptions. We propose a novel hierarchical model for structured prediction over sequences and trees which exploits global context by conditioning each generation decision on an unbounded context of prior decisions. This builds on the success of Markov models but imposes no fixed context bound, in order to better represent global phenomena. To facilitate learning of this large and unbounded model, we use a hierarchical Pitman-Yor process prior, which provides a recursive form of smoothing. We propose prediction algorithms based on A* search and Markov chain Monte Carlo sampling. Empirical results demonstrate the potential of our model compared to baseline finite-context Markov models on part-of-speech tagging and syntactic parsing.
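
    The recursive smoothing can be sketched compactly. Under the simplification of one "table" per observed type, the hierarchical Pitman-Yor predictive distribution reduces to interpolated Kneser-Ney, and that deterministic special case is what the following computes. The discount value, the truncated context length (the paper's context is unbounded) and the toy corpus are illustrative assumptions.

        from collections import Counter, defaultdict

        class HPYSketchLM:
            """Interpolated Kneser-Ney as a deterministic special case of a
            hierarchical Pitman-Yor language model (one table per type)."""

            def __init__(self, discount=0.75, vocab_size=1000):
                self.d, self.v = discount, vocab_size
                self.counts = defaultdict(Counter)  # context tuple -> next-word counts

            def observe(self, seq, order=3):
                """Collect counts for all context lengths 0 .. order-1."""
                for i in range(len(seq)):
                    for n in range(order):
                        if i - n >= 0:
                            self.counts[tuple(seq[i - n:i])][seq[i]] += 1

            def prob(self, word, context):
                """P(word | context), recursively backing off to shorter contexts;
                the empty context backs off to a uniform base distribution."""
                context = tuple(context)
                if context and context not in self.counts:
                    return self.prob(word, context[1:])   # unseen context
                c = self.counts[context]
                total, types = sum(c.values()), len(c)
                backoff = self.prob(word, context[1:]) if context else 1.0 / self.v
                return (max(c[word] - self.d, 0) / total
                        + (self.d * types / total) * backoff)

        lm = HPYSketchLM()
        lm.observe("the cat sat on the mat".split())
        print(lm.prob("mat", ("on", "the")))   # seen in this context
        print(lm.prob("dog", ("on", "the")))   # unseen word gets smoothed mass

    Each level discounts its counts and passes the reclaimed mass to the next-shorter context, which is the "recursive form of smoothing" above; the full model instead samples seating arrangements, with prediction by A* search or MCMC as described.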

    Evaluation of the NLP Components of the OVIS2 Spoken Dialogue System

    The NWO Priority Programme Language and Speech Technology is a 5-year research programme aiming at the development of spoken language information systems. In the Programme, two alternative natural language processing (NLP) modules are developed in parallel: a grammar-based (conventional, rule-based) module and a data-oriented (memory-based, stochastic, DOP) module. In order to compare the NLP modules, a formal evaluation has been carried out three years after the start of the Programme. This paper describes the evaluation procedure and the evaluation results. The grammar-based component performs much better than the data-oriented one in this comparison. Comment: Proceedings of CLIN 9