
    On the Readability of Kernel-based Deep Learning Models in Semantic Role Labeling Tasks over Multiple Languages

    Sentence embeddings are effective input vectors for the neural learning of a number of inferences about content and meaning. Unfortunately, most such decision processes are epistemologically opaque, owing to the limited interpretability of the neural models trained over these embeddings. In this paper, we concentrate on the readability of neural models, discussing an embedding technique (the Nyström methodology) that corresponds to the reconstruction of a sentence in a kernel space capturing grammatical and lexical semantic information. From this method, we build a Kernel-based Deep Architecture with inherently high interpretability, as the proposed embedding is derived from examples, i.e., landmarks, that are both human-readable and labeled. Its integration with an explanation methodology, Layer-wise Relevance Propagation, supports the automatic compilation of argumentations for the architecture's decisions, expressed in the form of analogies with activated landmarks. Quantitative evaluation on the Semantic Role Labeling task, in both English and Italian, suggests that explanations based on semantic and syntagmatic structures are rich and constitute convincing arguments, as they effectively help the user decide whether or not to trust the machine's decisions.
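    The Nyström construction the abstract refers to can be sketched in a few lines: each input is embedded by its kernel similarities to a small set of landmark examples, normalized by the inverse square root of the landmark Gram matrix, so every embedding dimension traces back to one human-readable landmark. This is a minimal sketch only; the RBF kernel, the landmark count, and all names here are illustrative assumptions, not the paper's configuration (the paper uses a kernel encoding grammatical and lexical semantic information).

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma=0.5):
        # Pairwise RBF kernel between rows of X and rows of Y.
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def nystrom_embed(X, landmarks, gamma=0.5):
        # Nyström embedding: project each example onto the span of the
        # landmarks, so each output dimension corresponds to one landmark.
        K_ll = rbf_kernel(landmarks, landmarks, gamma)   # m x m landmark Gram matrix
        K_xl = rbf_kernel(X, landmarks, gamma)           # n x m similarities to landmarks
        # Inverse square root of K_ll via SVD; the floor on singular
        # values guards against rank deficiency.
        U, s, Vt = np.linalg.svd(K_ll)
        s_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(s, 1e-12)))
        return K_xl @ U @ s_inv_sqrt @ Vt                # n x m embedding

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    landmarks = X[:10]   # in the paper these are labeled, human-readable examples
    emb = nystrom_embed(X, landmarks)
    print(emb.shape)     # one dimension per landmark: (100, 10)
    ```

    By construction, `emb @ emb.T` reproduces the Nyström approximation of the full kernel matrix, which is what lets the downstream deep architecture operate in (an approximation of) the kernel space while staying traceable to the landmarks.
    
    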

    Argument Mining into Active Learning Systematic Reviews: unlocking the synergy between MARGOT and ASReview

    Active learning enhances the systematic review process by efficiently screening large numbers of titles and abstracts, combining machine learning with human expertise. However, the intricacy of full-text (traditional) abstracts can lead to issues such as token restrictions and longer processing times. In light of these challenges, this thesis harnesses argument mining to distill salient information from abstracts and thereby refine the screening process. I therefore propose an integration between ASReview LAB, an active learning tool for systematic reviews, and MARGOT, an argumentation mining tool. This approach leverages computational argumentation, illustrating its value in literature processing. On this basis, I conducted an experiment on several benchmark datasets, employing machine learning techniques to extract features from both traditional and argument-mined abstracts; these features informed subsequent classification models. I then tested the consistency of the experiment and conducted quantitative and qualitative analyses spotlighting the benefits of argument-mined abstracts. Results indicate only marginal differences between traditional and argument-mined abstracts; yet, in several scenarios, argument-mined abstracts elevate the overall quality of the systematic reviews. Furthermore, the efficiency of machine learning models depends heavily on the intrinsic attributes of the data they process.
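    The screening workflow the abstract describes can be sketched as a minimal active-learning loop: vectorize the abstracts, seed the model with one relevant and one irrelevant record, then repeatedly retrain and surface the record the classifier considers most likely relevant for human review. The toy corpus, TF-IDF features, logistic regression, and certainty-based sampling here are illustrative assumptions standing in for ASReview's configurable pipeline, and the oracle labels stand in for the human reviewer; MARGOT's role would be to replace the raw abstracts with their extracted claims and evidence before vectorization.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy corpus: a few "relevant" (1) and "irrelevant" (0) abstracts.
    abstracts = [
        "argument mining extracts claims and evidence from text",
        "active learning prioritises records for human screening",
        "a study of bird migration patterns in coastal wetlands",
        "systematic reviews benefit from machine learning triage",
        "recipes for baking sourdough bread at home",
        "computational argumentation supports literature screening",
    ]
    labels = np.array([1, 1, 0, 1, 0, 1])

    X = TfidfVectorizer().fit_transform(abstracts)

    # Seed the model with one relevant and one irrelevant record.
    labeled = [0, 2]
    pool = [i for i in range(len(abstracts)) if i not in labeled]

    while pool:
        clf = LogisticRegression().fit(X[labeled], labels[labeled])
        # Certainty-based sampling: surface the pooled record most
        # likely to be relevant for the (simulated) reviewer first.
        probs = clf.predict_proba(X[pool])[:, 1]
        nxt = pool[int(np.argmax(probs))]
        # A human reviewer would label the record here; this sketch
        # reads the label from the oracle instead.
        labeled.append(nxt)
        pool.remove(nxt)

    print(labeled)  # screening order: seeds first, then model-ranked records
    ```

    The benchmark comparison in the thesis amounts to running a loop like this twice, once on traditional abstracts and once on argument-mined ones, and comparing how early the relevant records surface.
    
    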