53 research outputs found

    Unpublished Roman inscriptions from Tarragona [Inscripciones romanas inéditas de Tarragona]

    Get PDF

    LLL: a lattice basis reduction algorithm [LLL. Algoritme de reducció de bases de xarxes]

    Get PDF
    Bachelor's degree final project in Mathematics, Faculty of Mathematics, Universitat de Barcelona, Year: 2017, Advisor: Artur Travesa i Grau. [en] The LLL algorithm is a powerful tool for reducing lattice bases in polynomial time, introduced by Arjen Lenstra, Hendrik Lenstra and László Lovász in 1982. We study its implementation and prove its polynomial-time behaviour. Finally, we show its use in factorizing polynomials with rational coefficients, along with some computational examples
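    As a rough illustration of the technique the thesis studies, here is a minimal, unoptimized LLL sketch in Python over exact rationals. The function names and the example basis are illustrative, not taken from the thesis; the Gram-Schmidt data is recomputed after every change, which is correct but far from the efficient bookkeeping an actual implementation would use:

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(B):
    # Gram-Schmidt orthogonalization: returns orthogonal vectors B* and
    # projection coefficients mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>.
    n = len(B)
    Bs = [list(map(Fraction, b)) for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu

def lll(basis, delta=Fraction(3, 4)):
    # Textbook LLL with the classic delta = 3/4.
    B = [list(map(Fraction, b)) for b in basis]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b_k
            q = round(mu[k][j])
            if q != 0:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                             # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]    # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B
```

    On the classic example basis (1, 1, 1), (-1, 0, 2), (3, 5, 6), this returns a basis of short, nearly orthogonal integer vectors.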

    Cross-lingual AMR Aligner: Paying Attention to Cross-Attention

    Full text link
    This paper introduces a novel aligner for Abstract Meaning Representation (AMR) graphs that can scale cross-lingually, and is thus capable of aligning units and spans in sentences of different languages. Our approach leverages modern Transformer-based parsers, which inherently encode alignment information in their cross-attention weights, allowing us to extract this information during parsing. This eliminates the need for English-specific rules or the Expectation Maximization (EM) algorithm that have been used in previous approaches. In addition, we propose a guided supervised method using alignment to further enhance the performance of our aligner. We achieve state-of-the-art results in the benchmarks for AMR alignment and demonstrate our aligner's ability to obtain them across multiple languages. Our code will be available at https://www.github.com/Babelscape/AMR-alignment. Comment: ACL 2023. Please cite the authors correctly using both last names ("Martínez Lorenzo", "Huguet Cabot").
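    The idea of reading alignments off cross-attention can be sketched as follows. This is a schematic illustration with made-up weights, not the authors' actual extraction pipeline, which operates inside a Transformer parser:

```python
import numpy as np

def align_from_cross_attention(attn, src_tokens, tgt_nodes):
    # attn: (num_tgt, num_src) cross-attention weights, e.g. averaged
    # over a chosen subset of decoder layers and heads.
    # Each target-side node is aligned to the source token it attends
    # to most strongly.
    assert attn.shape == (len(tgt_nodes), len(src_tokens))
    return {node: src_tokens[int(np.argmax(attn[i]))]
            for i, node in enumerate(tgt_nodes)}

# Toy example: two AMR nodes over a three-token sentence.
weights = np.array([[0.1, 0.8, 0.1],    # node "cat" attends to "cat"
                    [0.1, 0.2, 0.7]])   # node "sit-01" attends to "sat"
alignment = align_from_cross_attention(weights, ["the", "cat", "sat"],
                                       ["cat", "sit-01"])
```

    A real pipeline would take the attention tensors from the parser's forward pass and map subword positions back to word spans before the argmax.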

    Us vs. Them: A Dataset of Populist Attitudes, News Bias and Emotions

    Get PDF
    Computational modelling of political discourse tasks has become an increasingly important area of research in natural language processing. Populist rhetoric has risen across the political sphere in recent years; however, computational approaches to it have been scarce due to its complex nature. In this paper, we present the new Us vs. Them dataset, consisting of 6861 Reddit comments annotated for populist attitudes, and the first large-scale computational models of this phenomenon. We investigate the relationship between populist mindsets and social groups, as well as a range of emotions typically associated with these. We set a baseline for two tasks related to populist attitudes and present a set of multi-task learning models that leverage and demonstrate the importance of emotion and group identification as auxiliary tasks. Comment: Camera-ready version in EACL 2021.
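    The multi-task setup described above, a shared encoder with one prediction head per task, can be sketched schematically. The real models build on a pretrained transformer encoder; the layer sizes, task names, and the plain-NumPy forward pass here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiTaskSketch:
    # One shared encoder; separate linear heads for the main task
    # (populist attitudes) and the auxiliary tasks (emotion, group).
    def __init__(self, d_in, d_hidden, task_sizes):
        self.W_enc = rng.normal(size=(d_in, d_hidden)) * 0.1
        self.heads = {task: rng.normal(size=(d_hidden, n_classes)) * 0.1
                      for task, n_classes in task_sizes.items()}

    def forward(self, x):
        h = np.tanh(x @ self.W_enc)                # shared representation
        return {task: h @ W for task, W in self.heads.items()}

model = MultiTaskSketch(d_in=768, d_hidden=128,
                        task_sizes={"populism": 2, "emotion": 8, "group": 4})
logits = model.forward(rng.normal(size=(16, 768)))  # a batch of 16 inputs
# Training would minimize a weighted sum of the per-task losses, so the
# auxiliary emotion and group signals shape the shared representation.
```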

    Intraneural perineurioma with intramandibular presentation: a histological, immunohistochemical and cytogenetic study [Perineurioma intraneural de presentación intramandibular: estudio histológico, inmunohistoquímico y citogenético]

    Get PDF
    We report a case of an intramandibular intraneural perineurioma developed in the left dentary nerve. This rare tumour, whose neoplastic versus reactive origin has been debated, shows a typical histological, immunohistochemical and ultrastructural appearance: concentric whorls of EMA+ and PS100− perineurial cells around nerve fibers; an intraosseous location in the head and neck region is exceptional. This tumour must be distinguished from extraneural or soft-tissue perineurioma, also composed of perineurial cells but with a distinct clinical presentation and histological appearance; from other peripheral nerve sheath tumours more common at this location; and from localized hypertrophic neuropathy, a reactive process frequently identified with intraneural perineurioma. Fluorescence in situ hybridization provides cytogenetic evidence for the neoplastic nature of this tumour

    Incorporating Graph Information in Transformer-based AMR Parsing

    Full text link
    Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that aims at providing a semantic graph abstraction representing a given text. Current approaches are based on autoregressive language models such as BART or T5, fine-tuned through Teacher Forcing to obtain a linearized version of the AMR graph from a sentence. In this paper, we present LeakDistill, a model and method that explores a modification to the Transformer architecture, using structural adapters to explicitly incorporate graph information into the learned representations and improve AMR parsing performance. Our experiments show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing through self-knowledge distillation, even without the use of additional data. We release the code at http://www.github.com/sapienzanlp/LeakDistill. Comment: ACL 2023. Please cite the authors correctly using both last names ("Martínez Lorenzo", "Huguet Cabot").
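    The self-knowledge-distillation idea, in which the same model's graph-informed ("teacher") path supervises its plain-text ("student") path, can be illustrated with the usual KL-divergence distillation loss. Everything below (the names, the toy logits, the weighting) is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL(p || q) for two discrete distributions
    return float(np.sum(p * np.log(p / q)))

# Teacher path: the encoder sees word-to-node graph structure at train time.
teacher_logits = np.array([2.0, 0.5, -1.0])
# Student path: same weights, but no graph information in the input.
student_logits = np.array([1.2, 0.8, -0.5])

p_teacher = softmax(teacher_logits)
p_student = softmax(student_logits)
distill_loss = kl(p_teacher, p_student)  # pulls student toward teacher
# total loss ~ cross_entropy(student, gold) + lam * distill_loss,
# so at inference time the plain-text path alone performs well.
```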

    RED^FM: a Filtered and Multilingual Relation Extraction Dataset

    Full text link
    Relation Extraction (RE) is a task that identifies relationships between entities in a text, enabling the acquisition of relational facts and bridging the gap between natural language and structured knowledge. However, current RE models often rely on small datasets with low coverage of relation types, particularly when working with languages other than English. In this paper, we address the above issue and provide two new resources that enable the training and evaluation of multilingual RE systems. First, we present SRED^FM, an automatically annotated dataset covering 18 languages, 400 relation types, 13 entity types, totaling more than 40 million triplet instances. Second, we propose RED^FM, a smaller, human-revised dataset for seven languages that allows for the evaluation of multilingual RE systems. To demonstrate the utility of these novel datasets, we experiment with the first end-to-end multilingual RE model, mREBEL, that extracts triplets, including entity types, in multiple languages. We release our resources and model checkpoints at https://www.github.com/babelscape/rebel. Comment: ACL 2023. Please cite the authors correctly using both last names ("Huguet Cabot", "Ngonga Ngomo").
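    An end-to-end seq2seq RE model of this kind emits triplets as a linearized string that is parsed back into structured form. The marker format below is hypothetical (the actual mREBEL linearization, including its entity-type markers, is defined in the released code), but it shows the decode-then-parse pattern:

```python
import re

def parse_triplets(decoded):
    # Hypothetical linearization: "<triplet> head <subj> tail <obj> relation",
    # repeated once per extracted triplet.
    triplets = []
    for chunk in decoded.split("<triplet>")[1:]:
        m = re.match(r"\s*(.*?)\s*<subj>\s*(.*?)\s*<obj>\s*(.*?)\s*$", chunk)
        if m:
            triplets.append(tuple(m.groups()))
    return triplets

out = parse_triplets("<triplet> Paris <subj> France <obj> capital of")
```

    Malformed chunks (e.g. a truncated generation missing a marker) simply fail the match and are skipped, which is a common way to keep decoding robust.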