Intuitive querying of e-Health data repositories
At the centre of the Clinical e-Science Framework (CLEF) project is a repository of well-organised, detailed clinical histories, encoded as data that will be available for use in clinical care and in-silico medical experiments. An integral part of the CLEF workbench is a tool that allows biomedical researchers and clinicians to query the repository of patient data in an intuitive way. This paper describes the CLEF query editing interface, which uses natural language generation techniques to alleviate some of the problems generally faced by natural language and graphical query interfaces. The query interface also incorporates an answer renderer that dynamically generates responses in both natural language text and graphics.
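The paper's core idea, rendering a structured query back to the user as readable English, can be sketched as follows. This is a minimal stdlib-only illustration; the entity names, constraint format, and function are invented for this sketch and do not reflect the actual CLEF data model or interface.

```python
# Hypothetical sketch of NLG-style query rendering: a structured query is
# verbalised as an English sentence so the user can confirm its meaning.

def render_query(query):
    """Turn a simple attribute/constraint query into a readable sentence."""
    parts = [f"{attr} {op} {val}" for attr, op, val in query["constraints"]]
    return f"Find all {query['entity']} where " + " and ".join(parts) + "."

query = {
    "entity": "patients",
    "constraints": [
        ("diagnosis", "is", "breast cancer"),
        ("age", "is over", "50"),
    ],
}
print(render_query(query))
# "Find all patients where diagnosis is breast cancer and age is over 50."
```

Verbalising the query in this way lets a clinician check what will actually be retrieved without reading a formal query language.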
Using graph transformation algorithms to generate natural language equivalents of icons expressing medical concepts
A graphical language addresses the need to communicate medical information in
a synthetic way. Medical concepts are expressed by icons conveying fast visual
information about patients' current state or about the known effects of drugs.
In order to increase the visual language's acceptance and usability, a natural
language generation interface is currently being developed. In this context,
this paper describes the use of an informatics method, graph transformation,
to prepare data consisting of concepts in an OWL-DL ontology for use in a
natural language generation component. The OWL concept may be considered as a
star-shaped graph with a central node. The method transforms it into a graph
representing the deep semantic structure of a natural language phrase. This
work may be of future use in other contexts where ontology concepts have to be
mapped to semi-formalized natural language expressions.
Comment: Presented at the TSD 2014 conference: Text, Speech and Dialogue, 17th
international conference. Brno, Czech Republic, September 8-12, 2014. 10
pages, 7 figures
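The star-graph rewrite described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual transformation rules: the concept, role names, and output tree shape are invented, and real OWL-DL handling would go through an ontology library rather than plain dictionaries.

```python
# Illustrative sketch: an OWL-DL concept viewed as a star-shaped graph (one
# central node with labelled role edges) is rewritten into a tree mirroring
# the deep semantic structure of a noun phrase.

def star_to_phrase_graph(concept):
    """Rewrite a star-shaped concept graph into a head-centred semantic tree."""
    tree = {"head": concept["label"], "dependents": []}
    for role, filler in concept["edges"]:
        # Each ontology role becomes a semantic dependency on the head.
        tree["dependents"].append({"relation": role, "head": filler})
    return tree

# A toy concept: roughly "disorder located in the liver, caused by a drug"
concept = {
    "label": "disorder",
    "edges": [("hasLocation", "liver"), ("hasCause", "drug")],
}
print(star_to_phrase_graph(concept))
```

A downstream realiser could then map each dependency relation to a preposition or modifier when producing the phrase.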
Natural language generation in the LOLITA system: an engineering approach
Natural Language Generation (NLG) is the automatic generation of Natural Language (NL) by computer in order to meet communicative goals. One aim of NL processing (NLP) is to allow more natural communication with a computer and, since communication is a two-way process, an NL system should be able to produce as well as interpret NL text. This research concerns the design and implementation of an NLG module for the LOLITA system. LOLITA (Large scale, Object-based, Linguistic Interactor, Translator and Analyser) is a general-purpose base NLP system which performs core NLP tasks and upon which prototype NL applications have been built. As part of this encompassing project, this research shares some of its properties and methodological assumptions: the LOLITA generator has been built following Natural Language Engineering principles, uses LOLITA's SemNet representation as input, and is implemented in the functional programming language Haskell. As in other generation systems, the adopted solution utilises a two-component architecture. However, in order to avoid problems which occur at the interface between traditional planning and realisation modules (known as the generation gap), the distribution of tasks between the planner and plan-realiser is different: the plan-realiser, in the absence of detailed planning instructions, must perform some tasks (such as the selection and ordering of content) which are more traditionally performed by a planner. This work largely concerns the development of the plan-realiser and its interface with the planner. Another aspect of the solution is the use of Abstract Transformations, which act on the SemNet input before realisation, giving an increased ability to create paraphrases. The research has led to a practical working solution which has greatly increased the power of the LOLITA system.
The research also investigates how NLG systems can be evaluated, and the advantages and disadvantages of using a functional language for the generation task.
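The division of labour the abstract describes, a realiser that takes over content selection and ordering when the plan is underspecified, can be sketched minimally. All names here are invented for illustration; LOLITA itself is written in Haskell and operates over SemNet, not strings.

```python
# Minimal sketch (invented names) of a two-component generator in which the
# plan-realiser falls back to its own content selection and ordering when the
# planner supplies no detailed instructions, as in the LOLITA design.

def planner(goal):
    # A deliberately shallow plan: only the communicative goal, no ordering.
    return {"goal": goal, "ordering": None}

def plan_realiser(plan, facts):
    # In the absence of planning instructions, the realiser itself selects
    # content (facts mentioning the goal) and imposes an order (alphabetical).
    relevant = [f for f in facts if plan["goal"] in f]
    order = plan["ordering"] or sorted(relevant)
    return " ".join(order)

facts = ["the patient smokes", "the patient has a cough", "the weather is mild"]
print(plan_realiser(planner("patient"), facts))
```

Pushing these decisions into the realiser is precisely how the design sidesteps the "generation gap" at the planner/realiser interface.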
SKATE: A Natural Language Interface for Encoding Structured Knowledge
In Natural Language (NL) applications, there is often a mismatch between what
the NL interface is capable of interpreting and what a lay user knows how to
express. This work describes a novel natural language interface that reduces
this mismatch by refining natural language input through successive,
automatically generated semi-structured templates. In this paper we describe
how our approach, called SKATE, uses a neural semantic parser to parse NL input
and suggest semi-structured templates, which are recursively filled to produce
fully structured interpretations. We also show how SKATE integrates with a
neural rule-generation model to interactively suggest and acquire commonsense
knowledge. We provide a preliminary coverage analysis of SKATE for the task of
story understanding, and then describe a current business use-case of the tool
in a specific domain: COVID-19 policy design.Comment: Accepted at IAAI-2
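The recursive template-filling loop that SKATE describes can be sketched abstractly. This is a hedged stdlib-only illustration of the general mechanism, slots that may themselves contain nested templates, filled top-down; the template shapes, slot names, and suggestion source are invented and do not reflect SKATE's actual parser or data structures.

```python
# Sketch of recursive semi-structured template filling: a parse proposes a
# template whose open slots are filled from suggestions; a slot's value may
# itself be a template, so filling recurses until fully structured.

def fill(template, suggestions):
    """Recursively resolve slots; a slot value may itself be a template."""
    filled = {}
    for slot, value in template.items():
        if value is None:                # open slot: consult suggestions
            value = suggestions[slot]
        if isinstance(value, dict):      # nested template: recurse
            value = fill(value, suggestions)
        filled[slot] = value
    return filled

template = {"event": "infect", "agent": None,
            "patient": {"entity": None, "quantifier": "some"}}
suggestions = {"agent": "virus", "entity": "person"}
print(fill(template, suggestions))
```

In the real system, the suggestions would come from a neural semantic parser and the user would confirm each filled slot interactively.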
Inseq: An Interpretability Toolkit for Sequence Generation Models
Past work in natural language processing interpretability focused mainly on
popular classification tasks while largely overlooking generation settings,
partly due to a lack of dedicated tools. In this work, we introduce Inseq, a
Python library to democratize access to interpretability analyses of sequence
generation models. Inseq enables intuitive and optimized extraction of models'
internal information and feature importance scores for popular decoder-only and
encoder-decoder Transformer architectures. We showcase its potential by
adopting it to highlight gender biases in machine translation models and locate
factual knowledge inside GPT-2. Thanks to its extensible interface supporting
cutting-edge techniques such as contrastive feature attribution, Inseq can
drive future advances in explainable natural language generation, centralizing
good practices and enabling fair and reproducible model evaluations.
Comment: Library: https://github.com/inseq-team/inseq, Documentation:
https://inseq.readthedocs.io, v0.
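The feature-importance scores the abstract mentions can be illustrated with a toy occlusion attribution. Note this is not Inseq's API (Inseq wraps real Transformer models); the scoring function and names below are invented, stdlib-only stand-ins for the underlying idea: remove each input token and measure how much a model score drops.

```python
# Toy occlusion attribution: a token's importance is the drop in a (toy)
# model score when that token is removed from the input.

def toy_score(tokens):
    # Stand-in for a model's output score: counts sentiment-bearing words.
    positive = {"good", "great"}
    return sum(1.0 for t in tokens if t in positive)

def occlusion_importance(tokens):
    base = toy_score(tokens)
    return {t: base - toy_score([u for u in tokens if u != t])
            for t in tokens}

print(occlusion_importance(["a", "great", "model"]))
# Tokens whose removal lowers the score most are the most important.
```

Libraries like Inseq compute analogous scores (via gradients, attention, or perturbation) over real decoder-only and encoder-decoder models, and handle the extraction and visualisation that this sketch omits.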