58 research outputs found

    GeneSis: A Generative Approach to Substitutes in Context

    The lexical substitution task aims at generating a list of suitable replacements for a target word in context, ideally keeping the meaning of the modified text unchanged. While interest in the task has grown in recent years, the paucity of annotated data prevents the fine-tuning of neural models on the task, hindering the full exploitation of recently introduced powerful architectures such as language models. Furthermore, lexical substitution is usually evaluated in a framework that is strictly bound to a limited vocabulary, making it impossible to credit appropriate, but out-of-vocabulary, substitutes. To address these issues, we propose GENESIS (Generating Substitutes in contexts), the first generative approach to lexical substitution. Thanks to a seq2seq model, we generate substitutes for a word according to the context it appears in, attaining state-of-the-art results on different benchmarks. Moreover, our approach allows silver data to be produced to further improve the performance of lexical substitution systems. Along with an extensive analysis of the GENESIS results, we also present a human evaluation of the generated substitutes in order to assess their quality. We release the fine-tuned models, the generated datasets and the code to reproduce the experiments at https://github.com/SapienzaNLP/genesis
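
    To make the generative formulation concrete, here is a minimal sketch of in-context substitute generation with a generic pre-trained seq2seq model from the transformers library. The model name, the target-marking scheme and the decoding settings are illustrative assumptions, not the paper's actual configuration; the released fine-tuned models are at the GitHub link above.

        # Minimal sketch of generative lexical substitution with a seq2seq model.
        # NOTE: the model name, target-marking scheme, and decoding settings are
        # illustrative assumptions; they are not GeneSis's actual configuration.
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        model_name = "facebook/bart-large"  # stand-in; GeneSis releases fine-tuned models
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

        # Mark the target word in its context (hypothetical input format).
        context = "The <t> bright </t> student answered every question."
        inputs = tokenizer(context, return_tensors="pt")

        # Beam search yields a ranked list of candidate substitutes.
        outputs = model.generate(
            **inputs,
            num_beams=10,
            num_return_sequences=5,
            max_new_tokens=5,
        )
        candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
        print(candidates)  # e.g. ["smart", "clever", ...] after task fine-tuning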

    XL-AMR: Enabling Cross-Lingual AMR Parsing with Transfer Learning Techniques

    Abstract Meaning Representation (AMR) is a popular formalism for natural language that represents the meaning of a sentence as a semantic graph. It is agnostic about how to derive meanings from strings, and for this reason it lends itself well to the encoding of semantics across languages. However, cross-lingual AMR parsing is a hard task, because training data are scarce in languages other than English and the existing English AMR parsers are not directly suited to a cross-lingual setting. In this work we tackle these two problems so as to enable cross-lingual AMR parsing: we explore different transfer learning techniques for producing automatic AMR annotations across languages, and we develop a cross-lingual AMR parser, XL-AMR, which can be trained on the produced data and does not rely on AMR aligners or source-copy mechanisms, as is commonly the case in English AMR parsing. The results of XL-AMR significantly surpass those previously reported for Chinese, German, Italian and Spanish. Finally, we provide a qualitative analysis which sheds light on the suitability of AMR across languages. We release XL-AMR at github.com/SapienzaNLP/xl-amr
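
    To make the graph formalism concrete, the sketch below decodes the standard textbook AMR example for "The boy wants to go" with the penman library; the sentence and graph are the usual illustrative example, not material from the XL-AMR paper.

        # Sketch: an AMR semantic graph in PENMAN notation, decoded with the
        # `penman` library. The example is the standard textbook one, not XL-AMR data.
        import penman

        amr = """
        (w / want-01
           :ARG0 (b / boy)
           :ARG1 (g / go-02
                    :ARG0 b))
        """  # "The boy wants to go."

        graph = penman.decode(amr)
        for source, role, target in graph.triples:
            print(source, role, target)
        # w :instance want-01
        # w :ARG0 b
        # b :instance boy
        # ...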

    Evaluating Multilingual Sentence Representation Models in a Real Case Scenario

    In this paper, we present an evaluation of sentence representation models on the paraphrase detection task. The evaluation is designed to simulate a real-world problem of plagiarism and is based on one of the most important cases of forgery in modern history: the so-called "Protocols of the Elders of Zion". The sentence pairs for the evaluation are taken from the infamous forged text "Protocols of the Elders of Zion" (Protocols), by unknown authors, and from "Dialogue in Hell between Machiavelli and Montesquieu" by Maurice Joly. Scholars have demonstrated that the first text plagiarizes the second, identifying all the forged parts on qualitative grounds. Following this evidence, we organized the rephrased texts into pairs and asked native speakers to quantify the level of similarity of each pair. We used this material to evaluate sentence representation models in two languages, English and French, and on three tasks: similarity correlation, paraphrase identification, and paraphrase retrieval. Our evaluation aims at encouraging the development of benchmarks based on real-world problems, as a means to prevent problems connected to AI hype and to use NLP technologies for social good. Through our evaluation, we are able to confirm that the infamous Protocols are indeed a plagiarized text but, as we show, we encounter several problems connected with the convoluted nature of the task, which is very different from the one reported in standard benchmarks of paraphrase detection and sentence similarity. Code and data are available at https://github.com/roccotrip/protocols
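
    All three evaluation tasks reduce to scoring sentence pairs with an embedding model. The sketch below shows the similarity-correlation setup with the sentence-transformers library; the model name, the toy sentence pairs and the human scores are placeholders, not the paper's actual data.

        # Sketch: scoring sentence pairs with a multilingual embedding model and
        # correlating with human similarity judgments. Model name and data are
        # placeholders, not the paper's actual setup.
        import numpy as np
        from scipy.stats import spearmanr
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

        pairs = [  # hypothetical (Protocols sentence, Dialogue sentence) pairs
            ("How can the masses be made to obey?", "How are the people to be governed?"),
            ("The press must serve our ends.", "Newspapers shape public opinion."),
            ("Gold is the lever of power.", "The weather was fine that day."),
        ]
        human_scores = [4.5, 3.0, 0.5]  # placeholder similarity ratings

        emb_a = model.encode([a for a, _ in pairs])
        emb_b = model.encode([b for _, b in pairs])
        cos = np.sum(emb_a * emb_b, axis=1) / (
            np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1)
        )
        rho, _ = spearmanr(cos, human_scores)
        print(f"Spearman correlation with human judgments: {rho:.2f}")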

    Representation of Jews and Anti-Jewish Bias in 19th-Century French Public Discourse: Distant and Close Reading

    We explore, through the lens of distant reading, the evolution of discourse on Jews in France during the 19th century. We analyze a large textual corpus of heterogeneous sources (literary works, periodicals, songs, essays, historical narratives) to trace how Jews are associated with different semantic domains, and how such associations shift over time. Our analysis deals with three key aspects of these changes: the overall transformation of embedding spaces, the trajectories of word associations, and the comparative projection of different religious groups onto different, historically relevant semantic dimensions or streams of discourse. This allows us to show changes in the association between words and semantic domains (referring, e.g., to economic and moral behaviors), the evolution of stereotypes, and the dynamics of bias over a long time span characterized by major historical transformations. We suggest that the analysis of large textual corpora can be fruitfully used in dialogue with more traditional close reading approaches, by pointing to opportunities for in-depth analyses that mobilize more qualitative methods and a detailed inspection of the sources that distant reading inevitably tends to aggregate. We offer a short example of such a dialogue between approaches in our discussion of the Second Empire transformations, where we mobilize the historian's tools to start disentangling the complex interactions between changes in French society, the nature of the sources, and representations of Jews. While our example is limited in scope, we foresee large potential payoffs in the cooperative interaction between distant and close reading
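
    One common way to operationalize the projection onto semantic dimensions described above is to score a target word against seed-word sets defining a semantic axis, in one embedding space per time slice. The sketch below shows this with gensim word vectors; the model paths, the seed lexicons and the target word are illustrative assumptions, not the study's actual resources.

        # Sketch: projecting a target word onto a semantic axis defined by seed
        # sets, one embedding model per time slice. Paths and lexicons are
        # illustrative, not the study's actual resources.
        import numpy as np
        from gensim.models import KeyedVectors

        economy_seeds = ["argent", "banque", "commerce"]   # hypothetical French seeds
        morality_seeds = ["vertu", "honneur", "morale"]

        def axis_score(kv, word, positive, negative):
            """Cosine of `word` with the (positive - negative) semantic axis."""
            pos = np.mean([kv[w] for w in positive if w in kv], axis=0)
            neg = np.mean([kv[w] for w in negative if w in kv], axis=0)
            axis = pos - neg
            v = kv[word]
            return float(v @ axis / (np.linalg.norm(v) * np.linalg.norm(axis)))

        for decade in ["1830", "1860", "1890"]:  # hypothetical per-decade models
            kv = KeyedVectors.load(f"embeddings_{decade}.kv")
            print(decade, axis_score(kv, "juif", economy_seeds, morality_seeds))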

    A Game-Theoretic Approach to Word Sense Disambiguation

    This article presents a new model for word sense disambiguation formulated in terms of evolutionary game theory, where each word to be disambiguated is represented as a node on a graph whose edges represent word relations, and senses are represented as classes. The words simultaneously update their class membership preferences according to the senses that neighboring words are likely to choose. We use distributional information to weigh the influence that each word has on the decisions of the others, and semantic similarity information to measure the strength of compatibility among the choices. With this information we can formulate the word sense disambiguation problem as a constraint satisfaction problem and solve it using tools derived from game theory, maintaining textual coherence. The model is based on two ideas: similar words should be assigned to similar classes, and the meaning of a word does not depend on all the words in a text but just on some of them. The article provides an in-depth motivation of the idea of modeling the word sense disambiguation problem in terms of game theory, illustrated by an example. It also presents an extensive analysis of the combinations of similarity measures to use in the framework and a comparison with state-of-the-art systems. The results show that our model outperforms state-of-the-art algorithms and can be applied to different tasks and in different scenarios
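
    The update rule sketched in this abstract (words iteratively revising their sense preferences in response to weighted neighbors) is the kind of dynamics typically implemented with the replicator equation from evolutionary game theory. Below is a minimal NumPy sketch under simplifying assumptions: a tiny toy graph with random word-similarity weights and a random sense-compatibility matrix standing in for the real distributional and semantic resources.

        # Sketch: replicator-dynamics sense assignment on a toy word graph.
        # Word-similarity weights and sense-compatibility payoffs are random
        # placeholders for the real distributional/semantic resources.
        import numpy as np

        rng = np.random.default_rng(0)
        n_words, n_senses = 4, 3

        W = rng.random((n_words, n_words))      # word-word similarity (influence)
        np.fill_diagonal(W, 0.0)
        Z = rng.random((n_senses, n_senses))    # sense-sense compatibility (payoff)

        # x[i] is word i's probability distribution over its candidate senses.
        x = np.full((n_words, n_senses), 1.0 / n_senses)

        for _ in range(100):
            # Payoff for each sense of each word, accumulated from weighted neighbors.
            payoff = np.zeros_like(x)
            for i in range(n_words):
                for j in range(n_words):
                    payoff[i] += W[i, j] * (Z @ x[j])
            # Replicator update: senses with above-average payoff gain probability.
            x = x * payoff
            x /= x.sum(axis=1, keepdims=True)

        print(x.argmax(axis=1))  # the selected sense index for each word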

    Linguistically Based QA by Dynamic LOD Access from Logical Form

    We present a system for Question Answering which computes a prospective answer from Logical Forms (henceforth LFs) produced by a full-fledged NLP system for text understanding, and then maps the result onto schemata in SPARQL to be used for accessing the Semantic Web. As an intermediate step, and whenever there are complex concepts to be mapped, the system looks for a corresponding amalgam in YAGO classes. This is what happens when the query to be constructed has [president,'United States'] as its goal, and the amalgam search produces the complex concept [PresidentOfTheUnitedStates]. In case no class can be recovered, as for instance in the query related to the complex structure [5th,president,'United States'], the system knows that the cardinal '5th' behaves like a quantifier restricting the class [PresidentOfTheUnitedStates]. LFs are organized with a restricted ontology made up of eight types: FOCus, PREDicate, ARGument, MODifier, ADJunct, QUANTifier, INTensifier, CARDinal. In addition, every argument carries a Semantic Role to tell Subject from Object and Referential from non-Referential predicates. Another important step in the computation of the final LF is the translation of the interrogative pronoun into a corresponding semantic class word taken from general nouns, in our case the highest concepts of the WordNet hierarchy. The result is mapped into classes, properties, and restrictions (filters). For instance, the question "Who was the wife of President Lincoln?" becomes the final LF be-[focus-person, arg-[wife/theme_bound], arg-['Lincoln'/theme-[mod-[pred-['President']]]]], which is then turned into the SPARQL expression ?x dbpedia-owl:spouse :Abraham_Lincoln, where "dbpedia-owl:spouse" is produced by searching the DBpedia properties and, in case of failure, by looking into the synset associated with the concept WIFE. The concept "Abraham_Lincoln", in turn, is derived from DBpedia by the association of a property and an entity name, "President" and "Lincoln", which contextualizes the reference of the name to the appropriate referent in the world. It is precisely the internal structure of the Logical Form that enables us to produce a suitable and meaningful context for concept disambiguation. Logical Forms are the final output of a complex system for text understanding, GETARUNS, which can deal with different levels of syntactic and semantic ambiguity in the generation of a final structure by accessing computational lexica equipped with subcategorization frames and appropriate selectional restrictions applied to the attachment of complements and adjuncts. The system also produces pronominal binding and instantiates implicit arguments, if needed, in order to complete the required Predicate-Argument structure licensed by the semantic component
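
    To show the final mapping step concretely, the sketch below completes the triple pattern from the abstract into a runnable query against the public DBpedia endpoint using the SPARQLWrapper library. The endpoint URL and the dbo:/dbr: prefixes (the modern equivalents of the older dbpedia-owl: and resource namespaces) are standard DBpedia conventions, not details taken from the paper.

        # Sketch: executing the SPARQL pattern from the example above against the
        # public DBpedia endpoint. Endpoint and prefixes are standard conventions,
        # not details specified in the paper.
        from SPARQLWrapper import SPARQLWrapper, JSON

        sparql = SPARQLWrapper("https://dbpedia.org/sparql")
        sparql.setQuery("""
            PREFIX dbo: <http://dbpedia.org/ontology/>
            PREFIX dbr: <http://dbpedia.org/resource/>
            SELECT ?x WHERE { ?x dbo:spouse dbr:Abraham_Lincoln }
        """)
        sparql.setReturnFormat(JSON)

        results = sparql.query().convert()
        for binding in results["results"]["bindings"]:
            print(binding["x"]["value"])  # e.g. .../resource/Mary_Todd_Lincoln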

    From logical forms to SPARQL query with GETARUNS

    We present a system for Question Answering which computes a prospective answer from Logical Forms produced by a full-fledged NLP system for text understanding, and then maps the result onto schemata in SPARQL to be used for accessing the Semantic Web. As an intermediate step, and whenever there are complex concepts to be mapped, the system looks for a corresponding amalgam in YAGO classes. It is precisely the internal structure of the Logical Form that enables us to produce a suitable and meaningful context for concept disambiguation. Logical Forms are the final output of a complex system for text understanding, GETARUNS, which can deal with different levels of syntactic and semantic ambiguity in the generation of a final structure by accessing computational lexica equipped with subcategorization frames and appropriate selectional restrictions applied to the attachment of complements and adjuncts. The system also produces pronominal binding and instantiates implicit arguments, if needed, in order to complete the required Predicate-Argument structure licensed by the semantic component

    AAA: Fair Evaluation for Abuse Detection Systems Wanted


    Process simulation for the design and scale up of heterogeneous catalytic process: Kinetic modelling issues

    Process simulation represents an important tool for plant design and optimization, applied either to well-established or to newly developed processes. Suitable thermodynamic packages should be selected in order to properly describe the behavior of reactors and unit operations and to precisely define phase equilibria. Moreover, a detailed and representative kinetic scheme should be available to correctly predict the dependence of the process on its main variables. This review points out models and methods for kinetic analysis specifically applied to the simulation of catalytic processes, as a basis for process design and optimization. Attention is also paid to microkinetic modelling and to methods based on first principles, which elucidate mechanisms and independently calculate thermodynamic and kinetic parameters. Different case studies support the discussion. At first, we have selected two basic examples from industrial chemistry practice, namely ammonia and methanol synthesis, which may be described through a relatively simple reaction pathway and the corresponding available kinetic schemes. Then, a more complex reaction network is discussed in depth to describe the conversion of bioethanol into syngas/hydrogen or into building blocks such as ethylene. In this case, lumped kinetic schemes completely fail to describe the process behavior; more detailed (e.g., microkinetic) schemes should therefore be implemented in the simulator. However, the correct definition of all the kinetic data when complex microkinetic mechanisms are used often leads to unreliable, highly correlated parameters. In such cases, a greater effort to independently estimate some relevant kinetic/thermodynamic data through Density Functional Theory (DFT)/ab initio methods may be helpful to improve the process description
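
    The warning about highly correlated parameters is easy to reproduce even in the simplest setting: when fitting an Arrhenius-type rate law, the pre-exponential factor and the activation energy are strongly coupled. The sketch below illustrates this with scipy on synthetic data; the rate law, parameter values and data are illustrative, not taken from the review's case studies.

        # Sketch: fitting an Arrhenius rate law to synthetic data and inspecting
        # the parameter correlation the review warns about. Values are illustrative.
        import numpy as np
        from scipy.optimize import curve_fit

        R = 8.314  # gas constant, J/(mol K)

        def arrhenius(T, k0, Ea):
            """Rate constant k = k0 * exp(-Ea / (R T))."""
            return k0 * np.exp(-Ea / (R * T))

        # Synthetic "experimental" data: k0 = 1e8 1/s, Ea = 80 kJ/mol, 2% noise.
        T = np.linspace(500.0, 700.0, 15)
        rng = np.random.default_rng(1)
        k_obs = arrhenius(T, 1e8, 80e3) * (1 + 0.02 * rng.standard_normal(T.size))

        popt, pcov = curve_fit(arrhenius, T, k_obs, p0=[1e7, 70e3])
        corr = pcov[0, 1] / np.sqrt(pcov[0, 0] * pcov[1, 1])
        print(f"k0 = {popt[0]:.3g} 1/s, Ea = {popt[1]:.3g} J/mol")
        print(f"correlation(k0, Ea) = {corr:.3f}")  # typically close to +1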

    Opinion and Factivity Analysis of Italian political discourse

    The success of a newspaper article with public opinion can be measured by the degree to which the journalist is able to report and, if needed, modify attitudes, opinions, feelings and political beliefs. We present a symbolic system for Italian, derived from GETARUNS, which integrates a range of natural language processing tools with the intent to characterise print press discourse from a semantic and pragmatic point of view. This has been done on some 500K words of text, extracted from three Italian newspapers, in order to characterize their stance on a deep political crisis. We tried two different approaches: a lexicon-based approach to semantic polarity, using off-the-shelf dictionaries with the addition of manually supervised domain-related concepts; and a feature-based semantic and pragmatic approach, which computes propositional-level analyses with the intent to better characterize important components such as factuality and subjectivity. Results are quite revealing and confirm otherwise common knowledge about the political stance of each newspaper on such topics as the change of government that took place at the end of 2011
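
    As a rough illustration of the first, lexicon-based approach, the sketch below scores a sentence by looking up word polarities in a sentiment dictionary; the tiny Italian lexicon and the averaging scheme are placeholders, not the paper's actual resources.

        # Sketch: lexicon-based polarity scoring. The mini Italian lexicon and the
        # averaging scheme are placeholders, not the paper's actual resources.
        import re

        # Hypothetical polarity lexicon: word -> score in [-1, 1].
        POLARITY = {
            "crisi": -0.8, "fallimento": -0.9, "fiducia": 0.7,
            "riforma": 0.4, "scandalo": -0.9, "successo": 0.8,
        }

        def polarity(text: str) -> float:
            """Average polarity of the lexicon words found in `text`."""
            tokens = re.findall(r"\w+", text.lower())
            scores = [POLARITY[t] for t in tokens if t in POLARITY]
            return sum(scores) / len(scores) if scores else 0.0

        print(polarity("La crisi di governo è un fallimento politico"))  # negative
        print(polarity("La riforma è stata un successo"))                # positive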