
    Exploring figurative language recognition: a comprehensive study of human and machine approaches

    Full text link
    Bachelor's thesis (Treballs Finals de Grau), Modern Languages and Literatures, Faculty of Philology, Universitat de Barcelona, 2022-2023. Supervisor: Elisabet Comelles Pujadas. Figurative language (FL) plays a significant role in human communication. Understanding and interpreting FL is essential for humans to fully grasp the intended message, appreciate cultural nuances, and engage in effective interaction. For machines, comprehending FL presents a challenge due to its complexity and ambiguity. Enabling machines to understand FL has become increasingly important: sentiment analysis, text classification, and social media monitoring, for instance, benefit from accurately recognizing figurative expressions in order to capture subtle emotions and extract meaningful insights. Machine translation likewise requires the ability to convey FL accurately, so that translations reflect the intended meaning and cultural nuances. Therefore, developing computational methods that enable machines to understand and interpret FL is crucial. By bridging the gap between human and machine understanding of FL, we can enhance communication, improve language-based applications, and unlock new possibilities in human-machine interaction. Keywords: figurative language, NLP, human-machine communication.
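
    The machine-side difficulty described above can be made concrete with a deliberately naive baseline. The sketch below flags figurative expressions by literal lookup in a tiny idiom lexicon; the lexicon and example sentences are hypothetical, invented for illustration, and are not from the thesis.

```python
# A deliberately naive figurative-language (FL) detector: flag sentences
# containing a known idiom via literal substring lookup. The lexicon and
# example sentences are hypothetical, for illustration only.

IDIOM_LEXICON = {
    "break the ice",
    "spill the beans",
    "cost an arm and a leg",
}

def flags_figurative(sentence: str) -> bool:
    """Return True if the sentence literally contains a known idiom."""
    lowered = sentence.lower()
    return any(idiom in lowered for idiom in IDIOM_LEXICON)

examples = [
    "Let me spill the beans about the launch.",  # figurative: detected
    "He broke the ice with a joke.",             # figurative: missed (inflected form)
    "Please spill the beans into the bowl.",     # literal: wrongly flagged
]

for sentence in examples:
    print(flags_figurative(sentence), "-", sentence)
```

    Even this small example shows the brittleness the abstract points to: surface matching cannot handle inflection or literal uses of idiomatic strings, let alone novel metaphor, which is why context-sensitive, learned approaches are needed.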

    Discourse-Driven Meaning Construction in Neosemantic Noun-to-Verb Conversions

    Get PDF
    Neosemantic noun-to-verb conversions such as beer → to beer, door → to door, pink → to pink, etc., constitute a particularly interesting field of study for Cognitive Linguistics in that they call for a discourse-guided and context-based analysis of meaning construction. The present article takes a closer look at the cognitive motivation for the conversion process involved in these noun-verb alternations, with a view to explaining the semantics of some conversion formations in relation to the user-centred discourse context. The analysis developed in this article draws from the combined insights of Fauconnier and Turner's (2002) Conceptual Integration Theory and Langacker's (2005, 2008) Current Discourse Space.

    Dealing with Data for RE: Mitigating Challenges while using NLP and Generative AI

    Full text link
    Across today's dynamic business landscape, enterprises face an ever-increasing range of challenges: a constantly evolving regulatory environment, growing demand for personalization within software applications, and a heightened emphasis on governance. In response to these multifaceted demands, large enterprises have been adopting automation that spans from the optimization of core business processes to the enhancement of customer experiences. Indeed, Artificial Intelligence (AI) has emerged as a pivotal element of modern software systems. In this context, data plays an indispensable role. AI-centric software systems based on supervised learning and operating at an industrial scale require large volumes of training data to perform effectively. Moreover, the incorporation of generative AI has led to a growing demand for adequate evaluation benchmarks. Our experience in this field has revealed that the requirement for large datasets for training and evaluation introduces a host of intricate challenges. This book chapter explores the evolving landscape of Software Engineering (SE) in general, and Requirements Engineering (RE) in particular, in this era of AI integration. We discuss challenges that arise while integrating Natural Language Processing (NLP) and generative AI into enterprise-critical software systems. The chapter provides practical insights, solutions, and examples to equip readers with the knowledge and tools necessary for effectively building solutions with NLP at their core. We reflect on how these text data-centric tasks sit alongside the traditional RE process, and we highlight new RE tasks that may be necessary for handling the increasingly important text data-centricity involved in developing software systems. Comment: 24 pages, 2 figures; to be published in the NLP for Requirements Engineering Book.
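
    One of the data challenges sketched above, building adequate evaluation benchmarks, can be illustrated in miniature. The snippet below scores a stand-in requirements classifier against a tiny hand-labeled benchmark; the keyword heuristic, label set, and example requirements are hypothetical assumptions for illustration, not taken from the chapter.

```python
# Minimal sketch of benchmarking an NLP component for RE: score a
# stand-in classifier on a small hand-labeled set. The heuristic
# classifier, labels, and requirements below are hypothetical.

def classify(requirement: str) -> str:
    """Stand-in for a trained model: crude keyword cues for quality attributes."""
    cues = ("within", "at least", "no more than", "availability")
    return "non-functional" if any(c in requirement.lower() for c in cues) else "functional"

benchmark = [
    ("The system shall export reports as PDF.", "functional"),
    ("Search results shall appear within 2 seconds.", "non-functional"),
    ("The service shall guarantee 99.9% availability.", "non-functional"),
    ("Users shall be able to reset their passwords.", "functional"),
]

correct = sum(classify(text) == gold for text, gold in benchmark)
print(f"accuracy: {correct / len(benchmark):.2f}")  # 1.00 on this toy set
```

    Keeping such a labeled set small but representative is exactly where the chapter's data-procurement concerns bite: the benchmark is only as trustworthy as the labels and coverage behind it.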

    Design considerations for a hierarchical semantic compositional framework for medical natural language understanding

    Full text link
    Medical natural language processing (NLP) systems are a key enabling technology for transforming Big Data from clinical report repositories into information used to support disease models and validate intervention methods. However, current medical NLP systems fall considerably short when faced with the task of logically interpreting clinical text. In this paper, we describe a framework inspired by mechanisms of human cognition in an attempt to jump the NLP performance curve. The design centers on a hierarchical semantic compositional model (HSCM), which provides an internal substrate for guiding the interpretation process. The paper describes insights from four key cognitive aspects: semantic memory, semantic composition, semantic activation, and hierarchical predictive coding. We discuss the design of a generative semantic model and an associated semantic parser used to transform a free-text sentence into a logical representation of its meaning. The paper discusses supportive and antagonistic arguments for the key features of the architecture as a long-term foundational framework.
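
    To make the parser's goal concrete, here is a toy, rule-based illustration of mapping a free-text clinical sentence to a nested logical form. The pattern, predicate names, and sentence are hypothetical and vastly simpler than the HSCM the paper proposes; only the idea of composing a logical representation is taken from the abstract.

```python
# Toy illustration of semantic parsing: map a 'X denies Y' clinical
# pattern to a nested logical form, with negation scoping over the
# symptom predicate. Pattern and predicates are hypothetical.

from dataclasses import dataclass

@dataclass
class Pred:
    name: str
    args: tuple

    def __repr__(self):
        return f"{self.name}({', '.join(map(repr, self.args))})"

def parse(sentence: str) -> Pred:
    """Compose a logical form for sentences of the shape '<subject> denies <symptom>'."""
    tokens = sentence.rstrip(".").lower().split()
    subj_end = tokens.index("denies")
    subject = " ".join(tokens[:subj_end])
    symptom = " ".join(tokens[subj_end + 1:])
    # Hierarchical composition: the negation operator wraps the inner predicate.
    return Pred("not", (Pred("has_symptom", (subject, symptom)),))

print(parse("Patient denies chest pain."))
# -> not(has_symptom('patient', 'chest pain'))
```

    A real system faces exactly what this toy dodges: ambiguity, long-range scope, and the need for semantic memory to resolve terms, which is where the paper's cognitive machinery comes in.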

    A Computable Economist’s Perspective on Computational Complexity

    Get PDF
    A computable economist's view of the world of computational complexity theory is described. This means the model of computation underpinning theories of computational complexity plays a central role. The emergence of computational complexity theories from diverse traditions is emphasised. The unifications that emerged in the modern era were codified by means of the notions of efficiency of computations, non-deterministic computations, completeness, reducibility, and verifiability; the latter three concepts had their origins in what may be called 'Post's Program of Research for Higher Recursion Theory'. Approximations, computations, and constructions are also emphasised. The recent real model of computation as a basis for studying computational complexity in the domain of the reals is also presented and discussed, albeit critically. A brief sceptical section on algorithmic complexity theory is included in an appendix.
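
    The notion of verifiability mentioned above can be shown in a few lines: for an NP problem such as Boolean satisfiability, a proposed certificate can be checked in time polynomial (here, linear) in the formula size, even though finding such a certificate may be intractable. The CNF encoding below is an illustrative convention, not from the paper.

```python
# Verifiability in miniature: checking a SAT certificate is cheap even
# when finding one is hard. A CNF formula is a list of clauses; each
# literal is a (variable, is_positive) pair. Encoding is illustrative.

formula = [
    [("x", True), ("y", False)],   # (x OR NOT y)
    [("x", False), ("z", True)],   # (NOT x OR z)
]

def verify(assignment: dict, clauses: list) -> bool:
    """Check a certificate in time linear in the size of the formula."""
    return all(
        any(assignment[var] == positive for var, positive in clause)
        for clause in clauses
    )

certificate = {"x": True, "y": True, "z": True}
print(verify(certificate, formula))  # True: the certificate satisfies the formula
```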

    SynthDST: Synthetic Data is All You Need for Few-Shot Dialog State Tracking

    Full text link
    In-context learning with Large Language Models (LLMs) has emerged as a promising avenue of research in Dialog State Tracking (DST). However, the best-performing in-context learning methods involve retrieving and adding similar examples to the prompt, requiring access to labeled training data. Procuring such training data for a wide range of domains and applications is time-consuming, expensive, and, at times, infeasible. While zero-shot learning requires no training data, it significantly lags behind the few-shot setup. Thus, we ask: can we efficiently generate synthetic data for any dialogue schema to enable few-shot prompting? Addressing this question, we propose SynthDST, a data generation framework tailored for DST that utilizes LLMs. Our approach requires only the dialogue schema and a few hand-crafted dialogue templates to synthesize natural, coherent, free-flowing dialogues with DST annotations. Few-shot learning using data from SynthDST results in a 4-5% improvement in Joint Goal Accuracy over the zero-shot baseline on MultiWOZ 2.1 and 2.4. Remarkably, our few-shot learning approach recovers nearly 98% of the performance compared to the few-shot setup using human-annotated training data. Our synthetic data and code can be accessed at https://github.com/apple/ml-synthdst. Comment: 9 pages, 4 figures, EACL 2024 main conference.
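
    A schematic of the schema-plus-template idea might look like the sketch below. This is not the authors' released pipeline; the slot names, values, and template strings are hypothetical stand-ins for a MultiWOZ-style restaurant domain.

```python
# Schematic of schema-plus-template synthesis for DST: fill hand-crafted
# templates with slot values drawn from a dialogue schema, and emit the
# matching dialogue-state annotation. Slots, values, and templates are
# hypothetical stand-ins.

import random

schema = {
    "restaurant_area": ["north", "centre", "east"],
    "restaurant_food": ["thai", "italian", "british"],
}

templates = [
    ("I'd like some {restaurant_food} food in the {restaurant_area}.",
     ("restaurant_food", "restaurant_area")),
    ("Can you find a {restaurant_food} place?", ("restaurant_food",)),
]

def synthesize(n: int, seed: int = 0):
    """Yield n (utterance, dialogue_state) pairs with DST annotations."""
    rng = random.Random(seed)
    for _ in range(n):
        text, slots = rng.choice(templates)
        state = {slot: rng.choice(schema[slot]) for slot in slots}
        yield text.format(**state), state

for utterance, state in synthesize(2):
    print(utterance, "->", state)
```

    On top of such template skeletons, an LLM would then paraphrase and extend the turns to obtain the natural, free-flowing dialogues the abstract describes, while the slot annotations carry over unchanged.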

    Physics Avoidance & Cooperative Semantics: Inferentialism and Mark Wilson’s Engagement with Naturalism Qua Applied Mathematics

    Get PDF
    Mark Wilson argues that the standard categorizations of "Theory T thinking" (logic-centered conceptions of scientific organization, canonized by the logical empiricists in the mid-twentieth century) dampen the understanding and appreciation of the strategic subtleties at work within science. By "Theory T thinking," we mean the simplistic methodology on which mathematical science allegedly supplies 'processes' that parallel nature's own in a tidily isomorphic fashion, and on which "Theory T's" feigned rigor and methodological dogmas advance inadequate discrimination, failing to distinguish between explanatory structures that are architecturally distinct. One of Wilson's main goals is to reverse such premature exclusions; thus, early on, Wilson returns to John Locke's original physical concerns regarding material science and the congeries of descriptive concerns involved in capturing the varied phenomena (i.e., cohesion, elasticity, fracture, and the transmission of coherent work) encountered amongst ordinary solids like wood and steel. Of course, Wilson methodologically updates such a purview by appealing to the multiscalar techniques of modern computing, drawing from Robert Batterman's work on the greediness of scales and Jim Woodward's insights on causation.
