15 research outputs found

    Gathering Statistics to Aspectually Classify Sentences with a Genetic Algorithm

    This paper presents a method for large corpus analysis to semantically classify an entire clause. In particular, we use cooccurrence statistics among similar clauses to determine the aspectual class of an input clause. The process examines linguistic features of clauses that are relevant to aspectual classification. A genetic algorithm determines what combinations of linguistic features to use for this task.
    Comment: postscript, 9 pages, Proceedings of the Second International Conference on New Methods in Language Processing, Oflazer and Somers (eds.)
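
    The abstract does not spell out the feature set or the fitness function, so the following is only a minimal sketch of the general idea of genetic-algorithm feature selection for clause-level aspectual classification. The feature names, toy data, scoring rule, and GA parameters are all invented for illustration.

```python
# Minimal sketch: a genetic algorithm searches over subsets of linguistic
# features (bitmasks) and keeps the subsets that classify toy clauses best.
# Everything here (features, data, fitness) is an illustrative assumption.
import random

random.seed(0)

FEATURES = ["has_progressive", "has_duration_adverbial", "has_definite_object",
            "is_stative_verb", "has_frequency_adverbial", "has_perfect"]

# Toy clauses: feature vector + aspectual class (0 = state, 1 = event).
DATA = [([1, 0, 1, 0, 0, 0], 1), ([0, 1, 0, 1, 0, 0], 0),
        ([1, 0, 1, 0, 1, 0], 1), ([0, 0, 0, 1, 0, 1], 0),
        ([1, 1, 1, 0, 0, 0], 1), ([0, 1, 0, 1, 1, 0], 0)]

def fitness(mask):
    """Accuracy of a trivial threshold classifier restricted to the
    features selected by `mask` (1 = feature used)."""
    correct = 0
    for vec, label in DATA:
        score = sum(v for v, m in zip(vec, mask) if m)
        pred = 1 if score >= 1 else 0
        correct += (pred == label)
    return correct / len(DATA)

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(mask, rate=0.1):
    return [1 - bit if random.random() < rate else bit for bit in mask]

def run_ga(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in FEATURES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = max(pop, key=fitness)
    return [f for f, m in zip(FEATURES, best) if m], fitness(best)

if __name__ == "__main__":
    selected, acc = run_ga()
    print("selected features:", selected, "toy accuracy:", acc)
```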

    Automatic Identification of Aspectual Classes across Verbal Readings

    The automatic prediction of aspectual classes is very challenging for verbs whose aspectual value varies across readings, which are the rule rather than the exception. This paper offers a new perspective on this problem by using a machine learning approach and a rich morpho-syntactic and semantic valency lexicon. In contrast to previous work, where the aspectual value of corpus clauses is determined on the basis of features retrieved from the corpus, we use features extracted from the lexicon, and aim to predict the aspectual value of verbal readings rather than verbs. Studying the performance of the classifiers on a set of manually annotated verbal readings, we found that our lexicon provided enough information to reliably predict the aspectual value of verbs across their readings. We additionally tested our predictions for unseen predicates through a task-based evaluation, using them in the automatic detection of temporal relation types in the TempEval 2007 tasks for French. These experiments also confirmed the reliability of our aspectual predictions, even for unseen verbs.
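
    As a rough illustration of the setup described here (classifying verbal readings from lexicon-derived features rather than corpus features), the sketch below trains an off-the-shelf classifier on hand-made feature dictionaries. The feature names, example readings, and aspect labels are placeholders, not the paper's actual lexicon or label set.

```python
# Sketch only: predict the aspectual value of a verbal *reading* from
# features that would come from a valency lexicon entry.  All features,
# readings, and labels below are invented for the example.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# One training instance per verbal reading, not per verb.
readings = [
    ({"verb": "courir", "has_direct_object": False, "telic_frame": False}, "activity"),
    ({"verb": "courir", "has_direct_object": True,  "telic_frame": True},  "accomplishment"),
    ({"verb": "savoir", "has_direct_object": True,  "telic_frame": False}, "state"),
    ({"verb": "exploser", "has_direct_object": False, "telic_frame": True}, "achievement"),
]

X_dicts, y = zip(*readings)
vec = DictVectorizer()
X = vec.fit_transform(X_dicts)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Predict the aspectual value of an unseen reading from its lexicon features.
new_reading = {"verb": "manger", "has_direct_object": True, "telic_frame": True}
print(clf.predict(vec.transform([new_reading])))
```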

    An Overview of Syntactic Tense & Aspect: From both Grammatical & Lexical Perspectives

    Language can be complicated even within a single language such as English. Rules of grammar, construction, and syntax are used to express ideas clearly so that others understand the intention behind them. However, these rules can lead to challenges in ensuring that ideas are effectively communicated and interpreted, particularly because word choice in the context of grammar and syntax rules can affect the way an expression is interpreted. This can be illustrated through an examination of the perfective aspect. The purpose of this research is to provide an overview of aspect and tense from both the grammatical and lexical perspectives.

    Animation From Instructions

    We believe that computer animation in the form of narrated animated simulations can provide an engaging, effective and flexible medium for instructing agents in the performance of tasks. However, we argue that the only way to achieve the kind of flexibility needed to instruct agents of varying capabilities to perform tasks with varying demands in workplaces of varying layout is to drive both animation and narration from a common representation that embodies the same conceptualization of tasks and actions as Natural Language itself. To this end, we are exploring the use of Natural Language instructions to drive animated simulations. In this paper, we discuss the relationship between instructions and behavior that underlies our work and the overall structure of our system. We then describe in somewhat more detail three aspects of the system: the representation used by the Simulator, the operation of the Simulator, and the Motion Generators used in the system.
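
    The following toy sketch is meant only to illustrate the architectural claim that narration and animation can be driven from one common representation; the class names, fields, and back-end functions are invented and bear no relation to the actual Simulator or Motion Generators.

```python
# Toy illustration: one task representation feeds both a narration back-end
# and an animation back-end.  Names and fields are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class ActionRepresentation:
    agent: str
    action: str
    obj: str
    location: str

def narrate(rep: ActionRepresentation) -> str:
    # Narration back-end: render the representation as text.
    return f"{rep.agent} {rep.action}s the {rep.obj} at the {rep.location}."

def animate(rep: ActionRepresentation) -> list:
    # Simulator back-end: expand the same representation into motion
    # directives (stand-ins for what motion generators would consume).
    return [f"walk_to({rep.location})", f"reach_for({rep.obj})", f"{rep.action}({rep.obj})"]

rep = ActionRepresentation(agent="agent1", action="lift", obj="box", location="workbench")
print(narrate(rep))
print(animate(rep))
```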

    Doing time : inducing temporal graphs

    Thesis (M. Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (leaves 51-53).
    We consider the problem of constructing a directed acyclic graph that encodes temporal relations found in a text. The unit of our analysis is a temporal segment, a fragment of text that maintains temporal coherence. The strength of our approach lies in its ability to simultaneously optimize pairwise ordering preferences and global constraints on the graph topology. Our learning method achieves 83% F-measure in temporal segmentation and 84% accuracy in inferring temporal relations between two segments.
    By Philip James Bramsen, M.Eng.
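
    The abstract describes combining pairwise ordering preferences with global constraints on graph topology. As a simplified stand-in (not the thesis's actual inference method), the sketch below greedily adds the highest-scoring "A before B" edges while enforcing acyclicity; the segment names and scores are made up.

```python
# Simplified illustration of pairwise preferences + a global acyclicity
# constraint: add the highest-scoring ordering edges that keep the graph a DAG.

# Pairwise preference scores: (earlier, later) -> confidence (invented values).
scores = {
    ("seg1", "seg2"): 0.9,
    ("seg2", "seg3"): 0.8,
    ("seg3", "seg1"): 0.7,   # would close a cycle; should be rejected
    ("seg1", "seg3"): 0.6,
}

def reachable(graph, src, dst):
    """True if dst can be reached from src by following edges."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

graph = {}
for (a, b), _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    if not reachable(graph, b, a):          # adding a->b must not create a cycle
        graph.setdefault(a, []).append(b)

print(graph)   # {'seg1': ['seg2', 'seg3'], 'seg2': ['seg3']}
```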

    SEAFACT: Semantic Analysis for Animation of Cooking Tasks

    SEAFACT is a natural language interface to a computer-generated animation system. SEAFACT operates in the domain of cooking tasks. The domain is limited to a mini-world consisting of a small set of verbs which were chosen because they involve rather complex arm movements that are interesting to animate. A linguistic analysis of the language found in recipes, included here, was used to define the domain. SEAFACT allows the user to specify tasks in this domain using a small subset of English. The system then analyzes the English input and produces a representation of the task which can drive lower-level motion synthesis procedures. The output of the system contains the non-geometric information needed to schedule task start and end times, describe concurrent actions, and provide reach, grasp, and motion goals.
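
    As a rough guess at what such a non-geometric task record might look like, the sketch below defines a small data structure with timing, concurrency, and reach/grasp goals. The field names and values are invented and do not reflect SEAFACT's actual output format.

```python
# Illustrative data structure only: not SEAFACT's real representation.
from dataclasses import dataclass, field

@dataclass
class TaskStep:
    verb: str                        # e.g. "stir"
    obj: str                         # object acted on, e.g. "batter"
    start: float                     # scheduled start time (seconds)
    end: float                       # scheduled end time (seconds)
    concurrent_with: list = field(default_factory=list)   # steps running in parallel
    reach_goal: str = ""             # symbolic reach target
    grasp_goal: str = ""             # symbolic grasp target

step = TaskStep(verb="stir", obj="batter", start=0.0, end=5.0,
                concurrent_with=["hold bowl"], reach_goal="bowl rim",
                grasp_goal="spoon handle")
print(step)
```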

    Decoding algorithms for complex natural language tasks

    Thesis (M. Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 79-85).
    This thesis focuses on developing decoding techniques for complex Natural Language Processing (NLP) tasks. The goal of decoding is to find an optimal or near-optimal solution given a model that defines the goodness of a candidate. The task is challenging because in a typical problem the search space is large and the dependencies between elements of the solution are complex. The goal of this work is two-fold. First, we are interested in developing decoding techniques with strong theoretical guarantees. We develop a decoding model based on the Integer Linear Programming paradigm which is guaranteed to compute the optimal solution and is capable of accounting for a wide range of global constraints. As an alternative, we also present a novel randomized algorithm which can guarantee an arbitrarily high probability of finding the optimal solution. We apply these methods to the task of constructing temporal graphs and to the task of title generation. Second, we are interested in carefully investigating the relations between learning and decoding. We build on the Perceptron framework to integrate the learning and decoding procedures into a single unified process. We use the resulting model to automatically generate tables-of-contents, structures with deep hierarchies and rich contextual dependencies. In all three natural language tasks, our experimental results demonstrate that theoretically grounded and stronger decoding strategies perform better than existing methods. As a final contribution, we have made the source code for these algorithms publicly available for the NLP research community.
    By Pawan Deshpande, M.Eng.
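
    Of the techniques listed here, the perceptron-based integration of learning and decoding is the easiest to illustrate compactly. The sketch below is a generic structured perceptron with exhaustive decoding over a toy label set; it is not the thesis's ILP or randomized decoder, and the data, features, and labels are invented.

```python
# Generic structured-perceptron sketch: decoding runs inside the learning
# loop, and weights are updated toward the gold structure.  Toy data only.
from itertools import product
from collections import defaultdict

LABELS = ["B", "I"]

def features(words, labels):
    feats = defaultdict(float)
    prev = "<s>"
    for w, y in zip(words, labels):
        feats[f"word={w}|label={y}"] += 1.0
        feats[f"prev={prev}|label={y}"] += 1.0
        prev = y
    return feats

def decode(words, weights):
    """Exhaustive argmax over label sequences (fine for toy-sized inputs)."""
    return max((list(seq) for seq in product(LABELS, repeat=len(words))),
               key=lambda seq: sum(weights.get(f, 0.0) * v
                                   for f, v in features(words, seq).items()))

def train(data, epochs=5):
    weights = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = decode(words, weights)
            if pred != gold:                       # update only on mistakes
                for f, v in features(words, gold).items():
                    weights[f] += v
                for f, v in features(words, pred).items():
                    weights[f] -= v
    return weights

data = [(["new", "york", "is", "big"], ["B", "I", "B", "B"]),
        (["san", "francisco", "sleeps"], ["B", "I", "B"])]
w = train(data)
print(decode(["new", "york", "sleeps"], w))
```

    In practice exhaustive search is replaced by an efficient decoder (dynamic programming, ILP, or randomized search), which is exactly the design space the thesis explores.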

    Tense, aspect and temporal reference

    English exhibits a rich apparatus of tense, aspect, time adverbials and other expressions that can be used to order states of affairs with respect to each other, or to locate them at a point in time with respect to the moment of speech. Ideally one would want a semantics for these expressions to demonstrate that an orderly relationship exists between any one expression and the meanings it conveys. Yet most existing linguistic and formal semantic accounts leave something to be desired in this respect, describing natural language temporal categories as being full of ambiguities and indeterminacies, apparently escaping a uniform semantic description. It will be argued that this anomaly stems from the assumption that the semantics of these expressions is directly related to the linear conception of time familiar from temporal logic or physics, an assumption which can be seen to underlie most of the current work on tense and aspect. According to these theories, the cognitive work involved in the processing of temporal discourse consists of the ordering of events as points or intervals on a time line or a set of time lines. There are, however, good reasons for wondering whether this time concept really is the one that our linguistic categories are most directly related to; it will be argued that a semantics of temporally referring expressions and a theory of their use in defining the temporal relations of events require a different and more complex structure underlying the meaning representations than is commonly assumed. A semantics will be developed, based on the assumption that categories like tense, aspect, aspectual adverbials and propositions refer to a mental representation of events that is structured on other than purely temporal principles, and to which the notion of a nucleus or consequentially related sequence of preparatory process, goal event and consequent state is central. It will be argued that the identification of the correct ontology is a logical preliminary to the choice of any particular formal representation scheme, as well as being essential in the design of natural language front-ends for temporal databases. It will be shown how the ontology developed here can be implemented in a database that contains time-related information about events and that is to be queried by means of natural language utterances.
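
    The nucleus structure described above (preparatory process, goal event, consequent state) lends itself to a small illustration. The sketch below is a deliberately simplified assumption: the example event and the viewpoint functions only gesture at the idea that aspectual categories such as the progressive and the perfect pick out different parts of the nucleus, and they do not reproduce the thesis's formalism.

```python
# Minimal sketch of an event "nucleus" and two aspectual viewpoints on it.
# The example event and the mapping functions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Nucleus:
    preparatory_process: str
    goal_event: str
    consequent_state: str

climb = Nucleus(preparatory_process="climbing towards the summit",
                goal_event="reaching the summit",
                consequent_state="being at the summit")

def progressive(n: Nucleus) -> str:
    # Roughly: the progressive focuses on the preparatory process.
    return n.preparatory_process

def perfect(n: Nucleus) -> str:
    # Roughly: the perfect focuses on the consequent state.
    return n.consequent_state

print(progressive(climb))   # climbing towards the summit
print(perfect(climb))       # being at the summit
```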