Automatic extraction of knowledge from web documents
A large amount of the digital information available is written as text documents in the form of web pages, reports, papers, emails, etc. Extracting the knowledge of interest from such documents, drawn from multiple sources, in a timely fashion is therefore crucial. This paper provides an update on the Artequakt system, which uses natural language tools to automatically extract knowledge about artists from multiple documents based on a predefined ontology. The ontology represents the type and form of knowledge to extract. This knowledge is then used to generate tailored biographies. The information extraction process of Artequakt is detailed and evaluated in this paper.
Computational acquisition of knowledge in small-data environments: a case study in the field of energetics
The UK’s defence industry is accelerating its implementation of artificial intelligence, including
expert systems and natural language processing (NLP) tools designed to supplement human
analysis. This thesis examines the limitations of NLP tools in small-data environments (common
in defence) in the defence-related energetic-materials domain. A literature review identifies
the domain-specific challenges of developing an expert system (specifically an ontology). The
absence of domain resources such as labelled datasets and, most significantly, the preprocessing
of text resources are identified as challenges. To address the latter, a novel general-purpose
preprocessing pipeline specifically tailored for the energetic-materials domain is developed. The
effectiveness of the pipeline is evaluated.
A study of the subjective concept of importance examines whether NLP tools in data-limited environments can supplement or entirely replace human analysis. A methodology for directly comparing the ability of NLP tools and experts to identify important points in a text is presented. Results show that the study participants exhibit little agreement, even on which points in the text are important. The NLP tools, the expert (the author of the text being examined) and the participants agree only on general statements; as a group, however, the participants agreed with the expert. In data-limited environments, the extractive-summarisation tools examined cannot effectively identify the important points in a technical document as an expert would.
A methodology for the classification of journal articles by the technology readiness level (TRL)
of the described technologies in a data-limited environment is proposed. Techniques to overcome
challenges with using real-world data such as class imbalances are investigated. A methodology
to evaluate the reliability of human annotations is presented. Analysis identifies a lack of
agreement and consistency in the expert evaluation of document TRL.
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures adopted in which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.
Text Summarisation: From Human Activity to Computer Program. The Problem of Tacit Knowledge
In this article I discuss whether the human activity of text summarisation can be successfully simulated in a computer. In order to write a computer program that produces high-quality summaries it becomes necessary to specify the cognitive processes involved when humans summarise text. As texts can be summarised in many different ways, evaluation of summaries becomes an important aspect in the discussion. The article discusses relevant factors in such an evaluation process. It turns out that humans, when summarising texts, make use of knowledge which is not readily open to scrutiny; it is tacit knowledge. This makes it very difficult to produce computer-generated summaries which are as good as those produced by skilled humans. New developments within artificial intelligence, relying on network processing techniques, may offer solutions to the problem of dealing with tacit knowledge. At present, acceptable computer summaries may be generated by programs combining accessible human knowledge of the summarisation process and knowledge about text.
Automatic abstracting: a review and an empirical evaluation
Abstracts are a fundamental tool in information retrieval. As condensed representations, they facilitate conservation of the increasingly precious search time and space of scholars, allowing them to manage more effectively an ever-growing deluge of documentation. Traditionally, abstracts have been the product of human intellectual effort; attempts to automate the abstracting process began in 1958. Two identifiable automatic abstracting techniques emerged which
reflect differing levels of ambition regarding simulation of the human abstracting process,
namely sentence extraction and text summarisation. This research paradigm has recently
diversified further, with a cross-fertilisation of methods. Commercial systems are beginning
to appear, but automatic abstracting is still mainly confined to an experimental arena.
The purpose of this study is firstly to chart the historical development and current state of
both manual and automatic abstracting; and secondly, to devise and implement an empirical
user-based evaluation to assess the adequacy of automatic abstracts derived from sentence
extraction techniques according to a set of utility criteria. [Continues.]
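As a rough illustration of the sentence-extraction approach this abstract surveys, a minimal frequency-based scorer in the spirit of the earliest (1958-era) methods can be sketched as follows. The function name, stopword list, and scoring formula below are illustrative assumptions for the sketch, not the systems evaluated in the study.

```python
import re
from collections import Counter

# Minimal illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "in", "to", "is", "are",
             "and", "that", "as", "for", "on", "it", "this", "be"}

def extract_summary(text, num_sentences=2):
    """Score each sentence by the average corpus frequency of its
    content words, then return the top-scoring sentences in their
    original order (a simple sentence-extraction abstract)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)
    scored = []
    for i, s in enumerate(sentences):
        tokens = [w for w in re.findall(r"[a-z']+", s.lower())
                  if w not in STOPWORDS]
        score = sum(freq[t] for t in tokens) / (len(tokens) or 1)
        scored.append((score, i, s))
    # Pick the highest-scoring sentences, then restore document order.
    top = sorted(sorted(scored, reverse=True)[:num_sentences],
                 key=lambda t: t[1])
    return " ".join(s for _, _, s in top)
```

Because extracted sentences are reproduced verbatim and merely reordered by score, such abstracts preserve the author's wording but can lack the coherence of a human-written summary, which is exactly the gap the user-based evaluation described above sets out to measure.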