1,083 research outputs found

    Learning Affect with Distributional Semantic Models

    The affective content of a text depends on the valence and emotion values of its words. At the same time, a word's distributional properties deeply influence its affective content: for instance, a word may become negatively loaded because it tends to co-occur with other negative expressions. Lexical affective values are used as features in sentiment analysis systems and are typically estimated with hand-crafted resources (e.g. WordNet Affect), which have limited coverage. In this paper we show how distributional semantic models can effectively be used to bootstrap emotive embeddings for Italian words and then to compute affective scores with respect to eight basic emotions. We also show how these emotive scores can be used to learn the positive vs. negative valence of words and to model behavioral data.
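    One way to bootstrap an affective score from a distributional model, sketched below under assumptions of our own (toy vectors and an invented seed lexicon, not the paper's resources or method), is to propagate known emotion values from seed words to new words via embedding similarity:

```python
import numpy as np

# Toy 3-dimensional vectors for a handful of Italian words; a real system
# would load them from a distributional semantic model trained on a corpus.
embeddings = {
    "terribile": np.array([0.9, 0.1, 0.0]),
    "gioia":     np.array([0.1, 0.9, 0.2]),
    "felice":    np.array([0.2, 0.8, 0.3]),
    "paura":     np.array([0.85, 0.15, 0.05]),
}

# Invented seed lexicon: known scores for one emotion (say, "fear").
seeds = {"terribile": 0.9, "gioia": 0.05}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def affective_score(word):
    """Similarity-weighted average of the seed words' emotion values."""
    v = embeddings[word]
    weights = {s: cosine(v, embeddings[s]) for s in seeds}
    total = sum(weights.values())
    return sum(weights[s] * seeds[s] for s in seeds) / total

print(affective_score("paura"))   # high: near the negative seed
print(affective_score("felice"))  # low: near the positive seed
```

    Words close to negatively loaded seeds inherit a high score for the emotion, which mirrors the co-occurrence intuition in the abstract.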

    On the detection of nearly optimal solutions in the context of single-objective space mission design problems

    When making decisions, having multiple options available for a possible realization of the same project can be advantageous. One way to increase the number of interesting choices is to consider, in addition to the optimal solution x*, nearly optimal or approximate solutions as well; these alternative solutions differ from x*, and may lie in different regions of the design space, but stay within a certain proximity of its function value f(x*). The scope of this article is the efficient computation and discretization of the set E of ε-approximate solutions for scalar optimization problems. To accomplish this task, two strategies to archive and update the data of the search procedure will be suggested and investigated. To emphasize data-storage efficiency, a way to manage significant and insignificant parameters is also presented. Further on, differential evolution will be used together with the new archivers for the computation of E. Finally, the behaviour of the archivers, as well as the efficiency of the resulting search procedure, will be demonstrated on some academic functions as well as on three models related to space mission design.
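    A generic archiver of this kind can be sketched as follows. This is our own minimal illustration, not the paper's exact update rule: a candidate is kept if its value is within eps of the best value seen so far, and the archive is discretized by requiring archived points to be at least delta apart in design space.

```python
import numpy as np

def update_archive(archive, x, fx, f_best, eps, delta):
    """One archiver step (a sketch): keep candidates within eps of the best
    known value, spaced at least delta apart in design space."""
    f_best = min(f_best, fx)
    # prune entries no longer eps-approximate after the bound improved
    archive = [(a, fa) for (a, fa) in archive if fa <= f_best + eps]
    if fx <= f_best + eps:
        if all(np.linalg.norm(x - a) >= delta for (a, _) in archive):
            archive.append((x, fx))
    return archive, f_best

# toy run on f(x) = x^2 with candidates produced by some search procedure
f = lambda x: float(x @ x)
archive, f_best = [], float("inf")
for x in [np.array([2.0]), np.array([0.5]), np.array([-0.4]), np.array([0.1])]:
    archive, f_best = update_archive(archive, x, f(x), f_best, eps=0.3, delta=0.2)
print([float(a[0]) for a, _ in archive])  # three spaced near-optimal points
```

    Note how the early candidate x = 2 is evicted once better solutions shrink the bound f_best + eps, while several distinct near-optimal points survive as alternatives.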

    Preface to the fifth Workshop on Natural Language for Artificial Intelligence (NL4AI)

    Preface to the fifth Workshop on Natural Language for Artificial Intelligence (NL4AI).

    GQA-it: Italian Question Answering on Image Scene Graphs

    The recent breakthroughs in the field of deep learning have led to state-of-the-art results in several Computer Vision and Natural Language Processing tasks such as Visual Question Answering (VQA). Nevertheless, the training requirements in cross-linguistic settings are not completely satisfied at the moment: datasets suitable for training VQA systems in non-English languages are still not available, which represents a significant barrier for most neural methods. This paper explores the possibility of acquiring, in a semi-automatic fashion, a large-scale dataset for VQA in Italian. It consists of more than 1M question-answer pairs over 80k images, with a test set of 3,000 manually validated question-answer pairs. To the best of our knowledge, the models trained on this dataset represent the first attempt to approach VQA in Italian, with experimental results comparable with those obtained on the original English material.

    Lessons Learned from EVALITA 2020 and Thirteen Years of Evaluation of Italian Language Technology

    This paper provides a summary of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2020), which was held online on December 17th due to the COVID-19 pandemic. The 2020 edition of EVALITA included 14 different tasks belonging to five research areas, namely: (i) Affect, Hate, and Stance; (ii) Creativity and Style; (iii) New Challenges in Long-standing Tasks; (iv) Semantics and Multimodality; (v) Time and Diachrony. This paper provides a description of the tasks and the key findings from the analysis of participant outcomes. Moreover, it provides a detailed analysis of the participants and task organizers, which demonstrates the growing interest in this campaign. Finally, a detailed analysis of the evaluation of tasks across the past seven editions is provided; this allows us to assess how the research carried out by the Italian Computational Linguistics community has evolved in terms of popular tasks and paradigms during the last 13 years.

    Quantum Modular $\hat{Z}^G$-Invariants

    Full text link
    We study the quantum modular properties of $\hat{Z}^G$-invariants of closed three-manifolds. Higher-depth quantum modular forms are expected to play a central role for general three-manifolds and gauge groups $G$. In particular, we conjecture that for plumbed three-manifolds whose plumbing graphs have $n$ junction nodes with definite signature and for a rank-$r$ gauge group $G$, $\hat{Z}^G$ is related to a quantum modular form of depth $nr$. We prove this for $G=SU(3)$ and for an infinite class of three-manifolds (weakly negative Seifert with three exceptional fibers). We also investigate the relation between the quantum modularity of $\hat{Z}^G$-invariants of the same three-manifold for different gauge groups $G$. We conjecture a recursive relation among the iterated Eichler integrals relevant for $\hat{Z}^G$ with $G=SU(2)$ and $SU(3)$, for negative Seifert manifolds with three exceptional fibers. This is reminiscent of the recursive structure among the mock modular forms playing the role of Vafa-Witten invariants for $SU(N)$. We prove the conjecture when the three-manifold is moreover an integral homology sphere.
    Comment: 72 pages, 5 tables
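    As background (recalled here in Zagier's standard formulation, not taken from the abstract itself): a quantum modular form of weight $k$ is a function $f:\mathbb{Q}\to\mathbb{C}$ whose failure of modularity is controlled, in the sense that for each $\gamma=\begin{psmallmatrix}a&b\\c&d\end{psmallmatrix}\in SL_2(\mathbb{Z})$ the cocycle

```latex
h_\gamma(x) \;=\; f(x) \;-\; (cx+d)^{-k}\, f\!\left(\frac{ax+b}{cx+d}\right)
```

    extends to a function on $\mathbb{R}$ with better analytic properties (e.g. real-analytic away from finitely many points). Higher-depth variants, which the abstract invokes, relax this by allowing $h_\gamma$ to lie in a space built from quantum modular forms of strictly lower depth.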

    Evaluating Pre-Trained Transformers on Italian Administrative Texts

    In recent years, Transformer-based models have been widely used in NLP for various downstream tasks and in different domains. However, a language model explicitly built for the Italian administrative language is still lacking. Therefore, in this paper we compare the performance of five different Transformer models, pre-trained on general-purpose texts, on two main tasks in the Italian administrative domain: Named Entity Recognition and multi-label document classification on Public Administration (PA) documents. We evaluate the performance of each model on both tasks to identify the best model in this particular domain. We also discuss the effect of model size and pre-training data on performance on domain data. Our evaluation identifies UmBERTo as the best-performing model, with an accuracy of 0.71 and an F1 score of 0.89 for multi-label document classification, and an F1 score of 0.87 for NER-PA.
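    The multi-label setup being evaluated can be illustrated with a minimal sketch (invented numbers, not the paper's model or data): a classifier emits one logit per PA document category, independent sigmoids with a 0.5 threshold turn logits into label sets, and micro-averaged F1 scores the result.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_labels(logits, threshold=0.5):
    # one independent binary decision per category (multi-label, not softmax)
    return (sigmoid(logits) >= threshold).astype(int)

def micro_f1(y_true, y_pred):
    # pool true/false positives and negatives over all documents and labels
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

logits = np.array([[2.1, -1.0, 0.3],    # document 1, three categories
                   [-0.5, 1.7, -2.0]])  # document 2
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = predict_labels(logits)
print(y_pred.tolist())          # predicted label sets
print(micro_f1(y_true, y_pred)) # 1.0 on this toy example
```

    The sigmoid-per-label head is the usual design choice here because a PA document can belong to several categories at once, which a softmax over categories could not express.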

    MicroRNA Roles in Cell Reprogramming Mechanisms

    Cell reprogramming is a groundbreaking technology that, in a few decades, generated a new paradigm in biomedical science. To date, we can use cell reprogramming to potentially generate every cell type by converting somatic cells and suitably modulating the expression of key transcription factors. This approach can be used to convert skin fibroblasts into pluripotent stem cells as well as into a variety of differentiated and medically relevant cell types, including cardiomyocytes and neural cells. The molecular mechanisms underlying such striking cell phenotype changes are still largely unknown, but in the last decade it has been proven that cell reprogramming approaches are significantly influenced by non-coding RNAs. Specifically, this review focuses on the role of microRNAs in the reprogramming processes that lead to the generation of pluripotent stem cells, neurons, and cardiomyocytes. As highlighted here, forced expression of non-coding RNAs can be sufficient to support some cell reprogramming processes, and, therefore, we also discuss how these molecular determinants could be used in the future for biomedical purposes.

    WiC-ITA at EVALITA2023: Overview of the EVALITA2023 Word-in-Context for ITAlian Task

    WiC-ITA is a shared task proposed at the EVALITA 2023 campaign. The task focuses on the meaning of words in specific contexts and has been modelled as both a binary classification and a ranking problem. Overall, 4 groups took part in the two subtasks, submitting 9 different runs. In this report, we describe how the task was set up, report the system results, and discuss them.
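    The binary subtask can be pictured with a minimal, hypothetical baseline (invented vectors and threshold, not any participant's system): given vectors for the same target word in two sentences, predict "same meaning" when their cosine similarity clears a tuned threshold.

```python
import numpy as np

def same_meaning(v1, v2, threshold=0.7):
    """Binary Word-in-Context decision via cosine similarity (toy baseline)."""
    cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    return cos >= threshold

# "calcio" as the sport vs. "calcio" as the chemical element;
# the vectors are made up -- a real system would take contextual
# embeddings of the target word from a neural encoder.
sport_1 = np.array([0.9, 0.1, 0.0])
sport_2 = np.array([0.8, 0.2, 0.1])
element = np.array([0.1, 0.1, 0.9])

print(same_meaning(sport_1, sport_2))  # same sense: True
print(same_meaning(sport_1, element))  # different senses: False
```

    The ranking subtask would instead order sentence pairs by the raw similarity score rather than thresholding it.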