Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures within which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.
Similarity of Semantic Relations
There are at least two kinds of similarity. Relational similarity is
correspondence between relations, in contrast with attributional similarity,
which is correspondence between attributes. When two words have a high
degree of attributional similarity, we call them synonyms. When two pairs
of words have a high degree of relational similarity, we say that their
relations are analogous. For example, the word pair mason:stone is analogous
to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA),
a method for measuring relational similarity. LRA has potential applications in many
areas, including information extraction, word sense disambiguation,
and information retrieval. Recently the Vector Space Model (VSM) of information
retrieval has been adapted to measuring relational similarity,
achieving a score of 47% on a collection of 374 college-level multiple-choice
word analogy questions. In the VSM approach, the relation between a pair of words is
characterized by a vector of frequencies of predefined patterns in a large corpus.
LRA extends the VSM approach in three ways: (1) the patterns are derived automatically
from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency
data, and (3) automatically generated synonyms are used to explore variations of the
word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the
average human score of 57%. On the related problem of classifying semantic relations, LRA
achieves similar gains over the VSM.
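The vector-space mechanics described in this abstract can be sketched in a few lines: each word pair is represented by a vector of pattern frequencies, SVD smooths the frequency matrix, and relational similarity is the cosine between pair vectors. The pairs, patterns and counts below are illustrative stand-ins, not data from the paper.

```python
import numpy as np

# Toy pattern-frequency matrix: rows are word pairs, columns are
# corpus patterns such as "X cuts Y". All counts are made up for
# illustration; LRA derives its patterns and counts from a corpus.
pairs = ["mason:stone", "carpenter:wood", "doctor:patient"]
patterns = ["X cuts Y", "X carves Y", "X works with Y", "X treats Y"]
freq = np.array([
    [12.0, 30.0, 8.0, 0.0],   # mason:stone
    [15.0, 25.0, 10.0, 0.0],  # carpenter:wood
    [0.0,  0.0,  9.0, 40.0],  # doctor:patient
])

# SVD smoothing: keep the top-k singular values and reconstruct,
# reducing noise in the raw frequencies (as in LSA, and in LRA).
U, s, Vt = np.linalg.svd(freq, full_matrices=False)
k = 2
smoothed = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Relational similarity = cosine between the pairs' pattern vectors.
sim_analogous = cosine(smoothed[0], smoothed[1])  # mason:stone vs carpenter:wood
sim_unrelated = cosine(smoothed[0], smoothed[2])  # mason:stone vs doctor:patient
print(sim_analogous > sim_unrelated)  # the analogous pair scores higher
```

The actual method additionally mines patterns automatically and explores synonym-based reformulations of the word pairs; this sketch shows only the frequency-vector and SVD-smoothing core shared with the VSM approach.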
Human-Level Performance on Word Analogy Questions by Latent Relational Analysis
This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, machine translation, and information retrieval. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason/stone is analogous to the pair carpenter/wood; the relations between mason and stone are highly similar to the relations between carpenter and wood. Past work on semantic similarity measures has mainly been concerned with attributional similarity. For instance, Latent Semantic Analysis (LSA) can measure the degree of similarity between two words, but not between two relations. Recently the Vector Space Model (VSM) of information retrieval has been adapted to the task of measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus (they are not predefined), (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data (it is also used this way in LSA), and (3) automatically generated synonyms are used to explore reformulations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying noun-modifier relations, LRA achieves similar gains over the VSM, while using a smaller corpus.
Travels with the Flying Dutchman: marketing managers, marketing planning and the metaphors of practice
A review of the literature on strategic marketing planning reveals that the manner in which it is carried out in practice does not appear to reflect the way in which it is written about in texts. It is also clear that the exploration of marketing processes in organisations has been seriously neglected from a phenomenological perspective. In order to explore this area, and the lived reality of planning from marketing managers' perspectives, a research methodology was adopted using the phenomenological interview. A key research question focused the investigation on determining what successful marketing decision-making expertise actually consists of, if it is not the explicit skills and knowledge embedded in the rational-technical model of planning.
The subsequent phenomenological analysis of the interviews demonstrated that the complexity of marketing planning and individual action cannot be collapsed into a textual model. What managers drew on was a qualitative, locally constructed knowledge base. Marketing decision making and action were found to be based within a locally enacted hermeneutical circle of talk, relationships, tacit knowledge and emergent issues, where the plans they wrote acted as cues to action rather than as prescriptive guides. Based on these findings, a revised theoretical framework is proposed for understanding marketing planning. This framework draws on the socially constructed metaphors used by the marketing managers in this study to explain their practical activity. It is argued that this theoretical approach offers up ideas for action to other marketers, rather than prescriptions. It also indicates that much marketing activity is successful yet diverse, both in form and style.
Computational Models (of Narrative) for Literary Studies
In recent decades a growing body of literature in Artificial Intelligence (AI) and Cognitive Science (CS) has approached the problem of narrative understanding by means of computational systems. Narrative, in fact, is a ubiquitous element of our everyday activity, and the ability to generate and understand stories, and their structures, is a crucial cue of our intelligence. However, despite the fact that, from a historical standpoint, narrative (and narrative structures) have been an important topic of investigation in both these areas, a more comprehensive approach coupling them with narratology, digital humanities and literary studies was still lacking.
With the aim of filling this gap, a multidisciplinary effort has been made in recent years to create an international meeting open to computer scientists, psychologists, digital humanists, linguists, narratologists, and others. This event, named CMN (for Computational Models of Narrative), was launched in 2009 by the MIT scholars Mark A. Finlayson and Patrick H. Winston.
Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction
We develop a natural language interface for human robot interaction that
implements reasoning about deep semantics in natural language. To realize the
required deep analysis, we employ methods from cognitive linguistics, namely
the modular and compositional framework of Embodied Construction Grammar (ECG)
[Feldman, 2009]. Using ECG, robots are able to solve fine-grained reference
resolution problems and other issues related to deep semantics and
compositionality of natural language. This also includes verbal interaction
with humans to clarify commands and queries that are too ambiguous to be
executed safely. We implement our NLU framework as a ROS package and present
proof-of-concept scenarios with different robots, as well as a survey on the
state of the art.
Unpacking capabilities underlying design (thinking) process
Engineering graduates must know how to frame and solve non-routine problems. While design classes explicitly teach problem framing and solving, this is lacking throughout much of the rest of the engineering curriculum and is often relegated to capstone classes at the end of the students' educational experience. This paper explores problem framing and solving through the lens of experiential learning theory. It captures core problem framing and solving approaches from critical, design and systems thinking, and concludes with a table of learning outcomes that might be drawn upon in designing an engineering curriculum that more fully develops the problem framing and solving capabilities of its students.
Does the "most sinfully decadent cake ever" taste good? Answering Yes/No Questions from Figurative Contexts
Figurative language is commonplace in natural language, and while making
communication memorable and creative, can be difficult to understand. In this
work, we investigate the robustness of Question Answering (QA) models on
figurative text. Yes/no questions, in particular, are a useful probe of
figurative language understanding capabilities of large language models. We
propose FigurativeQA, a set of 1000 yes/no questions with figurative and
non-figurative contexts, extracted from the domains of restaurant and product
reviews. We show that state-of-the-art BERT-based QA models exhibit an average
performance drop of up to 15% points when answering questions from figurative
contexts, as compared to non-figurative ones. While models like GPT-3 and
ChatGPT are better at handling figurative texts, we show that further
performance gains can be achieved by automatically simplifying the figurative
contexts into their non-figurative (literal) counterparts. We find that the
best overall model is ChatGPT with chain-of-thought prompting to generate
non-figurative contexts. Our work provides a promising direction for building
more robust QA models with figurative language understanding capabilities.
Comment: Accepted at RANLP 202
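The evaluation the abstract describes, scoring a yes/no QA model separately on figurative and literal contexts and reporting the accuracy gap, can be illustrated with a minimal sketch. The keyword "model" and the four examples below are hypothetical stand-ins, not the FigurativeQA benchmark or the models studied in the paper.

```python
def keyword_model(context: str, question: str) -> str:
    """Toy stand-in for a QA model: answers 'yes' iff the context
    sounds positive by a crude keyword test. A literal-minded model
    like this misses figurative phrasings such as 'sinfully decadent'."""
    positive = ("good", "great", "delicious", "recommend")
    return "yes" if any(w in context.lower() for w in positive) else "no"

examples = [
    # (context, question, gold answer, is_figurative) -- all invented
    ("The cake was the most sinfully decadent thing ever.",
     "Does the cake taste good?", "yes", True),
    ("The service moved at a glacial pace.",
     "Was the service fast?", "no", True),
    ("The cake was delicious and I recommend it.",
     "Does the cake taste good?", "yes", False),
    ("The service was slow and unhelpful.",
     "Was the service fast?", "no", False),
]

def accuracy(subset):
    correct = sum(keyword_model(c, q) == gold for c, q, gold, _ in subset)
    return correct / len(subset)

# Split by context type and measure the figurative-context penalty.
figurative = [e for e in examples if e[3]]
literal = [e for e in examples if not e[3]]
drop = accuracy(literal) - accuracy(figurative)
print(f"literal={accuracy(literal):.2f} "
      f"figurative={accuracy(figurative):.2f} drop={drop:.2f}")
```

The paper's simplification strategy corresponds to rewriting each figurative context into its literal counterpart before calling the model, which in this sketch would move the figurative examples into the easy case.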
Computational Analysis and Generation of Slogans
In advertising, slogans are used to improve consumers' recall of the advertised product and to distinguish it from others on the market. Creating effective slogans is a resource-demanding task for humans. In this thesis, we describe a novel method for automatically generating slogans, given a target concept (e.g., car) and an adjectival property to express (e.g., elegant) as input. Furthermore, we propose a method for generating nominal metaphors, using a metaphor-interpretation model, to enable the generation of metaphorical slogans. The slogan generation method extracts skeletons from existing slogans; it then fills a skeleton with suitable words by employing several linguistic resources (such as a repository of grammatical and semantic relations, and language models) and genetic algorithms, while optimizing multiple objectives such as semantic relatedness, language correctness and the use of rhetorical devices.
We evaluate the metaphor and slogan generation methods using a crowdsourcing platform. On a 5-point Likert scale, we ask online judges to rate the generated metaphors, together with three other metaphors produced by other methods, on how well they communicate the intended meaning. The slogan generation method is evaluated by asking crowdsourced judges to rate generated slogans from five perspectives: 1) how well the slogan relates to the topic, 2) how correct the slogan's language is, 3) how metaphorical the slogan is, 4) how engaging, attractive and memorable it is, and 5) how good the slogan is overall. These questions are chosen to investigate the effects of relatedness to the product and the highlighted property, the use of rhetorical devices, and language correctness on the overall appreciation of a slogan. In the same way, we evaluate existing slogans created by real people. Based on the evaluations, we analyse the method as a whole, together with the individual optimization functions, and provide insights about existing slogans. The results of our evaluations show that our metaphor generation method can produce apt metaphors. For the slogan generator, the results demonstrate that the method succeeded in producing at least one effective slogan for every evaluated input. Nevertheless, there is room to improve the method, as discussed at the end of the thesis.
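The skeleton-filling search this thesis describes can be sketched as a small genetic algorithm: a slot template stands in for an extracted skeleton, a toy vocabulary stands in for the linguistic resources, and the fitness combines a toy semantic-relatedness score with a rhetorical-device (alliteration) bonus. Every name, word list and weight below is illustrative, not taken from the thesis.

```python
import random

random.seed(0)

# Skeleton extracted from an (invented) existing slogan; <...> marks slots.
SKELETON = ["The", "<ADJ>", "<NOUN>", "that", "feels", "<ADJ2>"]
VOCAB = {  # toy stand-in for grammatical/semantic resources
    "<ADJ>":  ["sleek", "bold", "quiet", "smart"],
    "<NOUN>": ["car", "ride", "machine", "engine"],
    "<ADJ2>": ["elegant", "effortless", "alive", "electric"],
}
TARGET = {"car", "elegant"}  # target concept and property to express
SLOTS = ("<ADJ>", "<NOUN>", "<ADJ2>")

def fill(genes):
    """Render a slogan by dropping the genes into the skeleton's slots."""
    it = iter(genes)
    return " ".join(next(it) if tok in VOCAB else tok for tok in SKELETON)

def fitness(genes):
    # Toy relatedness: overlap with the target words.
    relatedness = len(set(fill(genes).lower().split()) & TARGET)
    # Toy rhetorical device: bonus if two slot words alliterate.
    alliteration = len({w[0] for w in genes}) < len(genes)
    return relatedness + 0.5 * alliteration

def random_genes():
    return [random.choice(VOCAB[s]) for s in SLOTS]

def mutate(genes):
    child = list(genes)
    i = random.randrange(len(child))
    child[i] = random.choice(VOCAB[SLOTS[i]])
    return child

# Simple (mu + lambda) evolutionary loop optimizing the combined score.
population = [random_genes() for _ in range(20)]
for _ in range(50):
    population += [mutate(random.choice(population)) for _ in range(20)]
    population.sort(key=fitness, reverse=True)
    population = population[:20]

best = population[0]
print(fill(best))
```

The thesis's version additionally scores language correctness with language models and uses crossover over richer skeletons; this sketch only shows the fill-and-optimize loop in miniature.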