Demonstration of a prototype for a conversational companion for reminiscing about images
This work was funded by the Companions project (2006-2009) sponsored by the European Commission as part of the Information Society Technologies (IST) programme under EC grant number IST-FP6-034434. This paper describes an initial prototype demonstrator of a Companion, designed as a platform for novel approaches to the following: 1) the use of Information Extraction (IE) techniques to extract the content of incoming dialogue utterances after an Automatic Speech Recognition (ASR) phase; 2) the conversion of the input to Resource Description Framework (RDF) form to allow the generation of new facts from existing ones, under the control of a Dialogue Manager (DM) that also has access to stored knowledge and to open knowledge accessed in real time from the web, all in RDF form; 3) a DM implemented as a stack-and-network virtual machine that models mixed initiative in dialogue control; and 4) a tuned dialogue act detector based on corpus evidence. The prototype platform was evaluated, and we describe this briefly; it is also designed to support more extensive forms of emotion detection carried by both speech and lexical content, as well as extended forms of machine learning.
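As a rough illustration of how these stages fit together, the sketch below passes a recognised utterance through a toy IE step into RDF-style triples, which a stack-based dialogue manager then uses to take the initiative. All names and the extraction rule are hypothetical; this is a minimal sketch of the architecture described above, not the Companions project's code.

```python
# Minimal sketch of the described pipeline: IE -> RDF-style triples -> stack-based DM.
# extract_triples() and DialogueManager are illustrative placeholders, not project APIs.

import re
from collections import deque

def extract_triples(utterance: str):
    """Toy IE step: map a recognised utterance to (subject, predicate, object) triples."""
    # e.g. "This is my sister Anna" -> ("Anna", "isA", "sister")
    m = re.search(r"this is my (\w+) (\w+)", utterance, re.IGNORECASE)
    if m:
        relation, name = m.group(1), m.group(2)
        return [(name, "isA", relation)]
    return []

class DialogueManager:
    """Toy stack-based DM: stored triples can push follow-up moves (system initiative)."""

    def __init__(self):
        self.kb = set()        # RDF-style triples accumulated from the dialogue
        self.stack = deque()   # pending dialogue moves

    def update(self, utterance: str) -> str:
        for triple in extract_triples(utterance):
            self.kb.add(triple)
            # Simple initiative rule: ask about any newly mentioned person.
            self.stack.append(f"Tell me more about {triple[0]}.")
        return self.stack.pop() if self.stack else "I see. Please go on."

if __name__ == "__main__":
    dm = DialogueManager()
    print(dm.update("This is my sister Anna"))   # -> "Tell me more about Anna."
```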
Zero-shot and Few-shot Learning with Instruction-following LLMs for Claim Matching in Automated Fact-checking
The claim matching (CM) task can benefit an automated fact-checking pipeline by putting together claims that can be resolved with the same fact-check. In this work, we are the first to explore zero-shot and few-shot learning approaches to the task. We consider CM as a binary classification task and experiment with a set of instruction-following large language models (GPT-3.5-turbo, Gemini-1.5-flash, Mistral-7B-Instruct, and Llama-3-8B-Instruct), investigating prompt templates. We introduce a new CM dataset, ClaimMatch, which will be released upon acceptance. We put LLMs to the test in the CM task and find that it can be tackled by leveraging more mature yet similar tasks such as natural language inference or paraphrase detection. We also propose a pipeline for CM, which we evaluate on texts of different lengths.
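As a rough sketch of how CM can be framed as zero-shot binary classification with an instruction-following LLM, the snippet below builds a yes/no prompt over a claim pair and maps the model's answer to a boolean label. The prompt wording and the call_llm helper are assumptions for illustration, not the prompt templates or pipeline evaluated in the paper.

```python
# Zero-shot claim matching as binary classification via prompting (illustrative sketch).

ZERO_SHOT_TEMPLATE = (
    "You are given two claims.\n"
    "Claim 1: {claim1}\n"
    "Claim 2: {claim2}\n"
    "Can both claims be resolved by the same fact-check? Answer Yes or No."
)

def call_llm(prompt: str) -> str:
    """Placeholder for a call to an instruction-following LLM
    (e.g. GPT-3.5-turbo, Gemini-1.5-flash, Mistral-7B-Instruct, Llama-3-8B-Instruct)."""
    raise NotImplementedError("Wire this up to the model provider of your choice.")

def claims_match(claim1: str, claim2: str) -> bool:
    """Return True if the model judges that one fact-check resolves both claims."""
    prompt = ZERO_SHOT_TEMPLATE.format(claim1=claim1, claim2=claim2)
    answer = call_llm(prompt).strip().lower()
    return answer.startswith("yes")
```

A few-shot variant would prepend labelled claim pairs to the same template before the query pair.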
A Neural Model for Compositional Word Embeddings and Sentence Processing
We propose a new neural model for word embeddings, which uses unitary matrices as the primary device for encoding lexical information. It uses simple matrix multiplication to derive matrices for larger units, yielding a sentence processing model that is strictly compositional, does not lose information over time steps, and is transparent, in the sense that word embeddings can be analysed regardless of context. This model does not employ activation functions, and so the network is fully accessible to analysis by the methods of linear algebra at each point in its operation on an input sequence. We test it in two NLP agreement tasks and obtain rule-like perfect accuracy, with greater stability than current state-of-the-art systems. Our proposed model goes some way towards offering a class of computationally powerful deep learning systems that can be fully understood and compared to human cognitive processes for natural language learning and representation.
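The core compositional mechanism can be illustrated in a few lines: treat each word embedding as a unitary matrix and compose a sentence by ordered matrix multiplication, which preserves unitarity and hence loses no information over time steps. The matrix dimension and the random initialisation below are assumptions for illustration; the paper's embeddings are learned, not sampled.

```python
# Toy illustration: words as unitary matrices, sentences as ordered matrix products.

import numpy as np

def random_unitary(n: int, rng: np.random.Generator) -> np.ndarray:
    """Sample an n x n unitary matrix: Q from the QR decomposition of a random
    complex matrix is unitary."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(z)
    return q

rng = np.random.default_rng(0)
dim = 4  # illustrative embedding dimension
lexicon = {w: random_unitary(dim, rng) for w in ["the", "dogs", "bark"]}

# Strict left-to-right composition by matrix multiplication, with no activation function.
sentence = ["the", "dogs", "bark"]
rep = np.eye(dim, dtype=complex)
for word in sentence:
    rep = rep @ lexicon[word]

# A product of unitary matrices is unitary, so no information is lost over time steps.
print(np.allclose(rep.conj().T @ rep, np.eye(dim)))   # True
```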
