12 research outputs found

    Mining Large-scale Event Knowledge from Web Text

    This paper addresses the problem of the automatic acquisition of semantic relations between events. While previous work on the automatic acquisition of semantic relations has relied on annotated text corpora, it remains unclear how to develop more generic methods for identifying related event pairs and extracting event arguments (especially the predicate, subject, and object). Motivated by this limitation, we develop a three-phase approach that acquires causal knowledge from Web text. First, we use explicit connective markers (such as “because”) as linguistic cues to discover causally related events. Next, we extract the event arguments based on local dependency parse trees of event expressions. In the last step, we propose a statistical model to measure the potential causal relations. The results of our empirical evaluations on a large-scale Web text corpus show that (a) the use of local dependency trees substantially improves both the accuracy and recall of the event-argument extraction task, and (b) our measure improves on the traditional PMI method.
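The comparison against PMI in the last step can be made concrete. Below is a minimal sketch, assuming invented event names and counts (none taken from the paper), of how a PMI-style association score over connective-linked event pairs might be computed:

```python
import math
from collections import Counter

def pmi(pair_count, count_a, count_b, total_pairs, total_events):
    """Pointwise mutual information: log( P(a,b) / (P(a) * P(b)) )."""
    p_ab = pair_count / total_pairs
    p_a = count_a / total_events
    p_b = count_b / total_events
    return math.log(p_ab / (p_a * p_b))

# Invented counts: how often each event occurs, and how often a pair is
# linked by an explicit causal connective such as "because".
event_counts = Counter({"rain_fall": 50, "ground_wet": 40, "stock_rise": 60})
pair_counts = Counter({("rain_fall", "ground_wet"): 30,
                       ("rain_fall", "stock_rise"): 2})
total_events = sum(event_counts.values())
total_pairs = sum(pair_counts.values())

for (a, b), c in pair_counts.items():
    score = pmi(c, event_counts[a], event_counts[b], total_pairs, total_events)
    print(a, b, round(score, 2))  # the causally linked pair scores higher
```

The paper's own measure refines this kind of raw co-occurrence score; PMI is shown here only as the baseline it is compared against.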

    What's in a Message?

    In this paper we present the first step in a larger series of experiments for the induction of predicate/argument structures. The structures we are inducing are very similar to the conceptual structures used in Frame Semantics (for example, in FrameNet). These structures are called messages, and they were previously used in the context of a multi-document summarization system for evolving events. The proposed series of experiments consists of two stages. In the first stage we extract a representative vocabulary of words. This vocabulary is then used in the second stage, in which we apply various clustering approaches to it in order to identify clusters of predicates and arguments (or frames and semantic roles, in the jargon of Frame Semantics). This paper presents and evaluates the first stage in detail.
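As a rough illustration of the two-stage pipeline, the sketch below extracts a frequency-filtered vocabulary and then groups vocabulary words by shared sentence contexts. The sentences, the stopword list, and the naive single-link grouping are all invented stand-ins, not the paper's actual clustering approaches:

```python
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "of", "in", "to"}

def extract_vocabulary(sentences, min_count=2):
    """Stage 1: keep content words that occur often enough to be informative."""
    counts = Counter(w for s in sentences for w in s.split()
                     if w not in STOPWORDS)
    return {w for w, c in counts.items() if c >= min_count}

def cluster_by_context(sentences, vocab):
    """Stage 2 (a naive stand-in for the paper's clustering): put two words in
    the same cluster if one occurs in the other's sentence contexts or the
    two share a context word."""
    contexts = defaultdict(set)
    for s in sentences:
        words = [w for w in s.split() if w in vocab]
        for w in words:
            contexts[w].update(x for x in words if x != w)
    clusters = []
    for w in sorted(vocab):
        for cluster in clusters:
            if any(v in contexts[w] or contexts[w] & contexts[v]
                   for v in cluster):
                cluster.add(w)
                break
        else:
            clusters.append({w})
    return clusters

sentences = ["troops attacked the city", "rebels attacked the town",
             "troops entered the city", "prices rose quickly",
             "prices fell quickly"]
vocab = extract_vocabulary(sentences)
print(cluster_by_context(sentences, vocab))
```

On this toy input the military and financial words land in separate clusters; the paper's second stage applies real clustering algorithms to a much richer vocabulary.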

    A methodology for the semiautomatic annotation of EPEC-RolSem, a Basque corpus labeled at predicate level following the PropBank-VerbNet model

    In this article we describe the methodology developed for the semiautomatic annotation of EPEC-RolSem, a Basque corpus labeled at predicate level following the PropBank-VerbNet model. The methodology presented is the product of a detailed theoretical study of the semantic nature of verbs in Basque and of their similarities to and differences from verbs in other languages. As part of the proposed methodology, we are creating a Basque lexicon based on the PropBank-VerbNet model that we have named the Basque Verb Index (BVI). Our work thus dovetails with the general trend toward building lexicons from tagged corpora that is evident in work on other languages. EPEC-RolSem and BVI are two important resources for the computational semantic processing of Basque; as far as the authors are aware, they are also the first resources of their kind developed for Basque. In addition, each entry in BVI is linked to the corresponding verb entry in well-known resources such as PropBank, VerbNet, WordNet, Levin’s classification, and FrameNet. We have also implemented several automatic processes to aid in creating and annotating the BVI, including processes designed to facilitate the task of manual annotation.
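To picture the cross-resource linking the abstract describes, here is a hypothetical sketch of what a BVI entry might look like. The field names and the specific mappings for the Basque verb erosi are illustrative assumptions, not taken from the actual BVI:

```python
from dataclasses import dataclass

@dataclass
class BVIEntry:
    """Hypothetical shape of a Basque Verb Index entry: a Basque verb with a
    role-labeled argument structure and links to external resources."""
    lemma: str
    roles: list          # PropBank-style arguments paired with VerbNet roles
    propbank: str = ""
    verbnet: str = ""
    wordnet: str = ""
    levin_class: str = ""
    framenet: str = ""

# All values below are illustrative guesses, not taken from the actual BVI.
entry = BVIEntry(
    lemma="erosi",       # Basque: "to buy"
    roles=["Arg0:Agent", "Arg1:Theme", "Arg2:Source"],
    propbank="buy.01",
    verbnet="get-13.5.1",
    wordnet="buy%2:40:00::",
    levin_class="13.5.1",
    framenet="Commerce_buy",
)
print(entry.lemma, entry.roles)
```

The point of such a record is that a single Basque verb entry gives downstream tools a bridge into all five English-centric resources at once.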

    Processing Units in Conversation: A Comparative Study of French and Mandarin Data


    Exploiting Cross-Lingual Representations For Natural Language Processing

    Traditional approaches to supervised learning require a generous amount of labeled data for good generalization. While such annotation-heavy approaches have proven useful for some Natural Language Processing (NLP) tasks in high-resource languages (like English), they are unlikely to scale to languages where collecting labeled data is difficult and time-consuming. Translating supervision available in English is also not a viable solution, because developing a good machine translation system requires expensive-to-annotate resources that are not available for most languages. In this thesis, I argue that cross-lingual representations are an effective means of extending NLP tools to languages beyond English without resorting to generous amounts of annotated data or expensive machine translation. These representations can be learned in an inexpensive manner, often from signals completely unrelated to the task of interest. I begin with a review of different ways of inducing such representations using a variety of cross-lingual signals, and study algorithmic approaches for using them in a diverse set of downstream tasks. Examples of tasks covered in this thesis include learning representations to transfer a trained model across languages for document classification, to assist monolingual lexical semantics tasks like word sense induction, to identify asymmetric lexical relationships like hypernymy between words in different languages, and to combine supervision across languages through a shared feature space for cross-lingual entity linking. In all these applications, the representations make information expressed in other languages available in English, while requiring minimal additional supervision in the language of interest.
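One inexpensive signal of the kind the abstract mentions is a small bilingual dictionary: a standard technique learns a linear map from source-language word vectors onto the target-language vectors of their translations. The toy sketch below, with invented two-dimensional "embeddings", learns such a map by plain gradient descent; it is a generic stand-in, not the thesis's specific method:

```python
def learn_linear_map(src, tgt, dim=2, lr=0.1, steps=500):
    """Learn a dim x dim matrix W mapping source-language vectors onto the
    target-language vectors of their translations, by gradient descent on
    the summed squared error ||xW - y||^2 over the dictionary pairs."""
    W = [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for _ in range(steps):
        grad = [[0.0] * dim for _ in range(dim)]
        for x, y in zip(src, tgt):
            pred = [sum(x[i] * W[i][j] for i in range(dim)) for j in range(dim)]
            for i in range(dim):
                for j in range(dim):
                    grad[i][j] += 2 * x[i] * (pred[j] - y[j])
        for i in range(dim):
            for j in range(dim):
                W[i][j] -= lr * grad[i][j] / len(src)
    return W

def apply_map(W, x):
    """Map a source-language vector into the target space."""
    return [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(len(W[0]))]

# Invented 2-d "embeddings" for three translation pairs; the learned map
# should send each source vector close to its target counterpart.
src = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
tgt = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
W = learn_linear_map(src, tgt)
print([round(v, 3) for v in apply_map(W, src[0])])
```

Once the map is learned from a few hundred dictionary pairs, every source-language vector can be projected into the target space, which is what lets a model trained in one language score inputs from another.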

    Annotation syntaxico-sémantique des actants en corpus spécialisé

    Semantic role annotation assigns labels such as Agent, Patient, Instrument, Location, or Destination to the participants, actants or circumstants (arguments or adjuncts), of a predicative lexical unit. This task requires rich lexical resources or large corpora of sentences annotated manually by linguists, on which automatic approaches (statistical or machine learning) can rely. Previous work in this area has focused mostly on English, which has rich resources such as PropBank, VerbNet, and FrameNet that have been used to feed automated annotation systems. Annotation in other languages, for which no manually annotated corpus is available, often relies on projection from the English FrameNet. A resource such as FrameNet is all but indispensable for automated annotation systems, and the manual annotation of thousands of sentences by linguists is a tedious and time-consuming task. In this thesis we propose an automatic system to assist linguists in this task, so that they need only validate the annotations the system proposes. Our work considers only verbs, which are more likely than other predicative units (adjectives and nouns) to be accompanied by actants realized in sentences. These verbs are specialized terms from the computer science and Internet domains (e.g., access, configure, browse, download) whose actantial structures have been manually enriched with semantic roles. The actantial structure of verbal lexical units is described according to the principles of Mel’čuk’s Explanatory Combinatorial Lexicology (ECL) and draws in part (with regard to semantic roles) on the notion of Frame Element as described in Fillmore’s Frame Semantics (FS). What these two theories have in common is that they both lead to the construction of dictionaries different from those produced by traditional approaches.
The verbal lexical units from the computer science and Internet domains, manually annotated in several contexts, constitute our specialized corpus. Our system, which automatically assigns semantic roles to actants, is based on rules and on classifiers trained on more than 2,300 contexts. We restrict ourselves to a limited list of roles because some roles in our corpus do not have enough manually annotated examples. The system handles the roles Patient, Agent, and Destination, each of which has more than 300 examples; the remaining roles, each with fewer than 100 annotated examples, are grouped into a catch-all class we named Autre. We subdivided the annotation task into subtasks: identifying the participants, actants and circumstants, and assigning semantic roles only to the actants that contribute to the meaning of the verbal lexical unit. We parsed the sentences of our corpus with the Syntex parser to extract the syntactic information describing the participants of a verbal lexical unit in each sentence; this information serves as the features of our learning model. We proposed two techniques for participant identification: a rule-based technique, for which we extracted some thirty rules, and a machine learning technique. The same techniques were used to distinguish actants from circumstants. For the task of assigning semantic roles to actants, we proposed a semi-supervised clustering of instances, which we compared to a semantic role classification method, using CHAMELEON, a bottom-up (agglomerative) hierarchical clustering algorithm.
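As an illustration of the rule-based side of such a system, the sketch below assigns roles from syntactic features of the kind a parser like Syntex might output. The feature names and the three rules are invented for illustration; the thesis used some thirty hand-extracted rules plus trained classifiers:

```python
def assign_role(participant):
    """Toy rule-based role assignment over syntactic features of the kind a
    dependency parser might provide. The feature names and rules here are
    invented for illustration only."""
    rel = participant.get("relation")       # syntactic relation to the verb
    prep = participant.get("preposition")   # governing preposition, if any
    if rel == "subject":
        return "Agent"
    if rel == "object":
        return "Patient"
    if rel == "oblique" and prep in {"to", "into", "onto"}:
        return "Destination"
    return "Autre"  # catch-all class for under-represented roles

# Example: "the user downloads the file onto the disk"
participants = [
    {"text": "the user", "relation": "subject"},
    {"text": "the file", "relation": "object"},
    {"text": "the disk", "relation": "oblique", "preposition": "onto"},
]
print([assign_role(p) for p in participants])
```

A classifier-based variant would replace the hand-written conditions with a model trained on the same syntactic features, which is the comparison the thesis carries out.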