Factoid question answering for spoken documents
In this dissertation, we present a factoid question answering system, specifically tailored for Question Answering (QA) on spoken documents.
This work explores, for the first time, which techniques can be robustly adapted from the usual QA on written documents to the more difficult spoken documents scenario. More specifically, we study new information retrieval (IR) techniques designed for speech, and utilize several levels of linguistic information for the speech-based QA task. These include named-entity detection with phonetic information, syntactic parsing applied to speech transcripts, and the use of coreference resolution.
Our approach is largely based on supervised machine learning techniques, with special focus on the answer extraction step, and makes little use of handcrafted knowledge. Consequently, it should be easily adaptable to other domains and languages.
As part of the work behind this thesis, we have also promoted and coordinated the creation of an evaluation framework for the task of QA on spoken documents. The framework, named QAst, provides multi-lingual corpora, evaluation questions, and answer keys. These corpora were used in the QAst evaluations held at the CLEF workshops in 2007, 2008 and 2009, thus helping the development of state-of-the-art techniques for this particular topic.
The presented QA system and all its modules are extensively evaluated on the European Parliament Plenary Sessions (EPPS) English corpus, composed of manual transcripts and automatic transcripts obtained by three different Automatic Speech Recognition (ASR) systems that exhibit significantly different word error rates. This data belongs to the CLEF 2009 track for QA on speech transcripts.
The main results confirm that syntactic information is very useful for learning to rank answer candidates, improving results on both manual and automatic transcripts unless the ASR quality is very low. Overall, the performance of our system is comparable to or better than the state of the art on this corpus, confirming the validity of our approach.
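The abstract's central claim, that supervised learning over syntactic and other features can rank answer candidates, can be illustrated with a minimal sketch. The feature set, the toy data, and the use of scikit-learn's LogisticRegression below are illustrative assumptions, not the thesis's actual features or learner.

```python
# Minimal sketch: supervised ranking of answer candidates for factoid QA.
# Assumed features per (question, candidate) pair: a passage retrieval score,
# whether the candidate's named-entity type matches the expected answer type,
# and a syntactic (dependency-tree) distance to the question focus.
from sklearn.linear_model import LogisticRegression

X_train = [
    [0.9, 1, 1],   # high retrieval score, NE type match, syntactically close
    [0.4, 1, 3],
    [0.7, 0, 2],
    [0.2, 0, 5],
]
y_train = [1, 1, 0, 0]  # 1 = correct answer, 0 = incorrect

ranker = LogisticRegression()
ranker.fit(X_train, y_train)

# At answer-extraction time, score every candidate and return the best one.
candidates = {"Strasbourg": [0.8, 1, 1], "1999": [0.6, 0, 4]}
scores = {name: ranker.predict_proba([feats])[0][1]
          for name, feats in candidates.items()}
print(max(scores, key=scores.get))
```

On noisy ASR transcripts the same ranking step applies; only the reliability of the underlying features (retrieval scores, named-entity detection, parses) degrades, which is consistent with the reported drop in performance when ASR quality is very low.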
Structured Named Entities
The names of people, locations, and organisations play a central role in language, and named entity recognition (NER) has been widely studied, and successfully incorporated, into natural language processing (NLP) applications. The most common variant of NER involves identifying and classifying proper noun mentions of these and miscellaneous entities as linear spans in text. Unfortunately, this version of NER is no closer to a detailed treatment of named entities than chunking is to a full syntactic analysis. NER, so construed, reflects neither the syntactic nor semantic structure of NE mentions, and provides insufficient categorical distinctions to represent that structure. Representing this nested structure, where a mention may contain mentions of other entities, is critical for applications such as coreference resolution. The lack of this structure creates spurious ambiguity in the linear approximation.
Research in NER has been shaped by the size and detail of the available annotated corpora. The existing structured named entity corpora are either small, in specialist domains, or in languages other than English. This thesis presents our Nested Named Entity (NNE) corpus of named entities and numerical and temporal expressions, taken from the WSJ portion of the Penn Treebank (PTB, Marcus et al., 1993). We use the BBN Pronoun Coreference and Entity Type Corpus (Weischedel and Brunstein, 2005a) as our basis, manually annotating it with a principled, fine-grained, nested annotation scheme and detailed annotation guidelines. The corpus comprises over 279,000 entities over 49,211 sentences (1,173,000 words), including 118,495 top-level entities.
Our annotations were designed using twelve high-level principles that guided the development of the annotation scheme and difficult decisions for annotators. We also monitored the semantic grammar that was being induced during annotation, seeking to identify and reinforce common patterns to maintain consistent, parsimonious annotations. The result is a scheme of 118 hierarchical fine-grained entity types and nesting rules, covering all capitalised mentions of entities, and numerical and temporal expressions. Unlike many corpora, we have developed detailed guidelines, including extensive discussion of the edge cases, in an ongoing dialogue with our annotators, which is critical for consistency and reproducibility.
We annotated independently from the PTB bracketing, allowing annotators to choose spans which were inconsistent with the PTB conventions and errors, and only referred back to it to resolve genuine ambiguity consistently. We merged our NNE with the PTB, requiring some systematic and one-off changes to both annotations. This allows the NNE corpus to complement other PTB resources, such as PropBank, and inform PTB-derived corpora for other formalisms, such as CCG and HPSG. We compare this corpus against BBN.
We consider several approaches to integrating the PTB and NNE annotations, which affect the sparsity of grammar rules and the visibility of syntactic and NE structure. We explore their impact on parsing the NNE and merged variants using the Berkeley parser (Petrov et al., 2006), which performs surprisingly well without specialised NER features. We experiment with flattening the NNE annotations into linear NER variants with stacked categories, and explore the ability of a maximum entropy and a CRF NER system to reproduce them. The CRF performs substantially better, but is infeasible to train on the enormous stacked category sets. The flattened output of the Berkeley parser is almost competitive with the CRF.
Our results demonstrate that the NNE corpus is feasible for statistical models to reproduce. We invite researchers to explore new, richer models of (joint) parsing and NER on this complex and challenging task. Our nested named entity corpus will improve a wide range of NLP tasks, such as coreference resolution and question answering, allowing automated systems to understand and exploit the true structure of named entities.
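The flattening experiment mentioned above can be illustrated with a small sketch. The stacked-tag format (BIO labels joined with '|', outermost category first) and the example categories are assumptions for illustration; the thesis's actual conversion may differ.

```python
# Minimal sketch: flatten nested named entity annotations into one linear,
# token-level label per token, with stacked categories for nested spans.
# Each annotation is (start_token, end_token_exclusive, label).

def flatten_nested_entities(tokens, annotations):
    """Return one stacked BIO tag per token, outermost span first."""
    stacks = [[] for _ in tokens]
    # Apply outer (longer) spans before inner ones at the same start position.
    for start, end, label in sorted(annotations, key=lambda a: (a[0], -(a[1] - a[0]))):
        for i in range(start, end):
            prefix = "B" if i == start else "I"
            stacks[i].append(f"{prefix}-{label}")
    return ["|".join(stack) if stack else "O" for stack in stacks]

tokens = ["University", "of", "Sydney", "researchers"]
annotations = [(0, 3, "ORG:EDU"), (2, 3, "LOC:CITY")]  # city nested inside an organisation
print(flatten_nested_entities(tokens, annotations))
# ['B-ORG:EDU', 'I-ORG:EDU', 'I-ORG:EDU|B-LOC:CITY', 'O']
```

A linear CRF tagger can then be trained directly on these stacked labels, which is exactly where the enormous category sets that make CRF training infeasible come from.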
A Survey on Semantic Processing Techniques
Semantic processing is a fundamental research domain in computational
linguistics. In the era of powerful pre-trained language models and large
language models, the advancement of research in this domain appears to be
decelerating. However, the study of semantics is multi-dimensional in
linguistics. The research depth and breadth of computational semantic
processing can be greatly improved with new technologies. In this survey, we analyze five semantic processing tasks, namely word sense disambiguation,
anaphora resolution, named entity recognition, concept extraction, and
subjectivity detection. We study relevant theoretical research in these fields,
advanced methods, and downstream applications. We connect the surveyed tasks
with downstream applications because this may inspire future scholars to fuse
these low-level semantic processing tasks with high-level natural language
processing tasks. The review of theoretical research may also inspire new tasks
and technologies in the semantic processing domain. Finally, we compare the
different semantic processing techniques and summarize their technical trends,
application trends, and future directions.
Comment: Published in Information Fusion, Volume 101, 2024, 101988, ISSN 1566-2535. The equal contribution mark is missing in the published version due to the publication policies. Please contact Prof. Erik Cambria for details.
AXMEDIS 2008
The AXMEDIS International Conference series aims to explore all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection and rights management, and to address the latest developments and future trends of the technologies and their applications, impacts and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas, and findings which could contribute to academic research and also benefit business and industrial communities. In the Internet and digital era, cross-media production and distribution represent key developments and innovations, fostered by emergent technologies to ensure better value for money while optimising productivity and market coverage.
Learning with Joint Inference and Latent Linguistic Structure in Graphical Models
Constructing end-to-end NLP systems requires the processing of many types of linguistic information prior to solving the desired end task. A common approach to this problem is to construct a pipeline, one component for each task, with each system's output becoming input for the next. This approach poses two problems. First, errors propagate, and, much like the childhood game of "telephone", combining systems in this manner can lead to unintelligible outcomes. Second, each component task requires annotated training data to act as supervision for training the model. These annotations are often expensive and time-consuming to produce, may differ from each other in genre and style, and may not match the intended application.
In this dissertation we present a general framework for constructing and reasoning on joint graphical model formulations of NLP problems. Individual models are composed using weighted Boolean logic constraints, and inference is performed using belief propagation. The systems we develop are composed of two parts: one a representation of syntax, the other a desired end task (semantic role labeling, named entity recognition, or relation extraction). By modeling these problems jointly, both models are trained in a single, integrated process, with uncertainty propagated between them. This mitigates the accumulation of errors typical of pipelined approaches.
Additionally, we propose a novel marginalization-based training method in which the error signal from end task annotations is used to guide the induction of a constrained latent syntactic representation. This allows training in the absence of syntactic training data, where the latent syntactic structure is instead optimized to best support the end task predictions. We find that across many NLP tasks this training method offers performance comparable to fully supervised training of each individual component, and in some instances improves upon it by learning latent structures which are more appropriate for the task.
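The marginalization-based training idea can be written compactly. The notation below is a generic latent-variable formulation (input x, end-task labels y, latent syntactic structure z drawn from a constrained set Z(x)), not the dissertation's exact objective:

```latex
% Latent syntax z is never observed, so it is summed out and the
% end-task (marginal) likelihood is maximised over the training data D.
\begin{aligned}
p_\theta(y \mid x) &= \sum_{z \in \mathcal{Z}(x)} p_\theta(y \mid z, x)\, p_\theta(z \mid x), \\
\theta^{\ast} &= \arg\max_{\theta} \sum_{(x, y) \in \mathcal{D}} \log p_\theta(y \mid x).
\end{aligned}
```

In a graphical model the sums over z are computed (approximately) by belief propagation, so the error signal from the end-task annotations flows back into the latent syntactic representation, which is what allows training without syntactic supervision.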
Toponym Resolution in Text
Institute for Communicating and Collaborative Systems
Background. In the area of Geographic Information Systems (GIS), a shared discipline between
informatics and geography, the term geo-parsing is used to describe the process of identifying
names in text, which in computational linguistics is known as named entity recognition
and classification (NERC). The term geo-coding is used for the task of mapping from implicitly
geo-referenced datasets (such as structured address records) to explicitly geo-referenced
representations (e.g., using latitude and longitude). However, present-day GIS systems provide
no automatic geo-coding functionality for unstructured text.
In Information Extraction (IE), processing of named entities in text has traditionally been seen
as a two-step process comprising a flat text span recognition sub-task and an atomic classification
sub-task; relating the text span to a model of the world has been ignored by evaluations
such as MUC or ACE (Chinchor (1998); U.S. NIST (2003)).
However, spatial and temporal expressions refer to events in space-time, and the grounding of
events is a precondition for accurate reasoning. Thus, automatic grounding can improve many
applications such as automatic map drawing (e.g. for choosing a focus) and question answering
(e.g., for questions like How far is London from Edinburgh?, given a story in which both occur
and can be resolved). Whereas temporal grounding has received considerable attention in the
recent past (Mani and Wilson (2000); Setzer (2001)), robust spatial grounding has long been
neglected.
Concentrating on geographic names for populated places, I define the task of automatic
Toponym Resolution (TR) as computing the mapping from occurrences of names for places as
found in a text to a representation of the extensional semantics of the location referred to (its
referent), such as a geographic latitude/longitude footprint.
The task of mapping from names to locations is hard due to insufficient and noisy databases,
and a large degree of ambiguity: common words need to be distinguished from proper names
(geo/non-geo ambiguity), and the mapping between names and locations is ambiguous (London
can refer to the capital of the UK or to London, Ontario, Canada, or to about forty other
Londons on earth). In addition, names of places and the boundaries referred to change over
time, and databases are incomplete.
Objective. I investigate how referentially ambiguous spatial named entities can be grounded,
or resolved, with respect to an extensional coordinate model robustly on open-domain news
text.
I begin by comparing the few algorithms proposed in the literature and, by comparing semi-formal, reconstructed descriptions of them, I factor out a shared repertoire of linguistic heuristics (e.g. rules, patterns) and extra-linguistic knowledge sources (e.g. population sizes). I then
investigate how to combine these sources of evidence to obtain a superior method. I also investigate
the noise effect introduced by the named entity tagging step that toponym resolution
relies on in a sequential system pipeline architecture.
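To make this inventory of heuristics concrete, here is a minimal sketch of toponym disambiguation by gazetteer lookup followed by a population-size bias, one of the extra-linguistic knowledge sources mentioned above. The tiny in-memory gazetteer and the single heuristic are purely illustrative; the thesis catalogues and combines a much richer set of evidence.

```python
# Minimal sketch: resolve a toponym to one candidate referent by looking it up
# in a gazetteer and preferring the most populous candidate. Toy gazetteer only;
# real gazetteers are large, noisy, and contain near-duplicate entries.
GAZETTEER = {
    "London": [
        {"lat": 51.507, "lon": -0.128, "path": "London > UK", "population": 8_800_000},
        {"lat": 42.984, "lon": -81.246, "path": "London > Ontario > Canada", "population": 400_000},
    ],
}

def resolve_toponym(name, gazetteer=GAZETTEER):
    """Return the candidate referent with the largest population, or None if unknown."""
    candidates = gazetteer.get(name, [])
    if not candidates:
        return None
    return max(candidates, key=lambda c: c["population"])

referent = resolve_toponym("London")
print(referent["path"], referent["lat"], referent["lon"])
# London > UK 51.507 -0.128
```

In the pipeline setting studied here, the input names come from a named entity tagging step, so tagging errors propagate directly into this resolution step.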
Scope. In this thesis, I investigate a present-day snapshot of terrestrial geography as represented
in the gazetteer defined here and, accordingly, a collection of present-day news text. I limit
the investigation to populated places; geo-coding of artifact names (e.g. airports or bridges),
compositional geographic descriptions (e.g. 40 miles SW of London, near Berlin), for instance,
is not attempted. Historic change is a major factor affecting gazetteer construction and ultimately
toponym resolution. However, this is beyond the scope of this thesis.
Method. While a small number of previous attempts have been made to solve the toponym
resolution problem, these were either not evaluated, or evaluation was done by manual inspection
of system output instead of curating a reusable reference corpus.
Since the relevant literature is scattered across several disciplines (GIS, digital libraries,
information retrieval, natural language processing) and descriptions of algorithms are mostly
given in informal prose, I attempt to systematically describe them and aim at a reconstruction
in a uniform, semi-formal pseudo-code notation for easier re-implementation. A systematic
comparison leads to an inventory of heuristics and other sources of evidence.
In order to carry out a comparative evaluation procedure, an evaluation resource is required.
Unfortunately, to date no gold standard has been curated in the research community. To this
end, a reference gazetteer and an associated novel reference corpus with human-labeled referent
annotation are created.
These are subsequently used to benchmark a selection of the reconstructed algorithms and
a novel re-combination of the heuristics catalogued in the inventory.
I then compare the performance of the same TR algorithms under three different conditions,
namely applying them to (i) the output of human named entity annotation, (ii) automatic annotation
using an existing Maximum Entropy sequence tagging model, and (iii) a naïve toponym lookup
procedure in a gazetteer.
Evaluation. The algorithms implemented in this thesis are evaluated in an intrinsic or
component evaluation. To this end, I define a task-specific matching criterion to be used with
traditional Precision (P) and Recall (R) evaluation metrics. This matching criterion is lenient
with respect to numerical gazetteer imprecision in situations where one toponym instance is
marked up with different gazetteer entries in the gold standard and the test set, respectively, but
where these refer to the same candidate referent, caused by multiple near-duplicate entries in
the reference gazetteer.
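A minimal sketch of such a lenient matching criterion is given below. The flat coordinate tolerance and the simple per-instance precision/recall computation are illustrative assumptions, not the exact matching procedure used in the thesis.

```python
# Minimal sketch: count a predicted referent as correct if it is the same
# gazetteer entry as the gold referent, or a near-duplicate whose coordinates
# lie within a small tolerance (in degrees).
def referents_match(gold, predicted, tolerance_deg=0.05):
    """Lenient match: identical gazetteer entries, or coordinates within tolerance."""
    if gold["id"] == predicted["id"]:
        return True
    return (abs(gold["lat"] - predicted["lat"]) <= tolerance_deg and
            abs(gold["lon"] - predicted["lon"]) <= tolerance_deg)

def precision_recall(gold_referents, predicted_referents):
    """Toy P/R over aligned toponym instances; None means the system made no prediction."""
    correct = sum(1 for g, p in zip(gold_referents, predicted_referents)
                  if p is not None and referents_match(g, p))
    n_predicted = sum(1 for p in predicted_referents if p is not None)
    precision = correct / n_predicted if n_predicted else 0.0
    recall = correct / len(gold_referents) if gold_referents else 0.0
    return precision, recall

gold = [{"id": "gb-london-1", "lat": 51.507, "lon": -0.128}]
pred = [{"id": "gb-london-2", "lat": 51.510, "lon": -0.130}]  # near-duplicate gazetteer entry
print(precision_recall(gold, pred))  # (1.0, 1.0)
```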
Main Contributions. The major contributions of this thesis are as follows:
• A new reference corpus in which instances of location named entities have been manually
annotated with spatial grounding information for populated places, and an associated
reference gazetteer, from which the assigned candidate referents are chosen. This reference
gazetteer provides numerical latitude/longitude coordinates (such as 51°32′ North, 0°5′ West) as well as hierarchical path descriptions (such as London > UK) with respect
to a geographic taxonomy with worldwide coverage, constructed by combining several large but noisy gazetteers. This corpus contains news stories and comprises two sub-corpora,
a subset of the REUTERS RCV1 news corpus used for the CoNLL shared task (Tjong
Kim Sang and De Meulder (2003)), and a subset of the Fourth Message Understanding
Contest (MUC-4; Chinchor (1995)), both available pre-annotated with a gold standard.
This corpus will be made available as a reference evaluation resource;
• a new method and implemented system to resolve toponyms that is capable of robustly
processing unseen text (open-domain online newswire text) and grounding toponym instances
in an extensional model using longitude and latitude coordinates and hierarchical
path descriptions, using internal (textual) and external (gazetteer) evidence;
• an empirical analysis of the relative utility of various heuristic biases and other sources
of evidence with respect to the toponym resolution task when analysing free news genre
text;
• a comparison between a replicated method as described in the literature, which functions
as a baseline, and a novel algorithm based on minimality heuristics; and
• several exemplary prototypical applications to show how the resulting toponym resolution
methods can be used to create visual surrogates for news stories, a geographic exploration
tool for news browsing, geographically-aware document retrieval and to answer
spatial questions (How far...?) in an open-domain question answering system. These
applications are only demonstrative in character, as a thorough quantitative, task-based (extrinsic) evaluation of the utility of automatic toponym resolution is beyond the scope of this thesis and is left for future work.
Methods of sentence extraction, abstraction and ordering for automatic text summarization
In this thesis, we have developed several techniques for tackling both the extractive and abstractive text summarization tasks. We implement a rank-based extractive sentence selection algorithm. To ensure pure sentence abstraction, we propose several novel sentence abstraction techniques which jointly perform sentence compression, fusion, and paraphrasing at the sentence level. We also model abstractive compression generation as a sequence-to-sequence (seq2seq) problem using an encoder-decoder framework. Furthermore, we apply our sentence abstraction techniques to multi-document abstractive text summarization. We also propose a greedy sentence ordering algorithm to maintain summary coherence and increase readability. We introduce an optimal solution to the summary length limit problem. Our experiments demonstrate that the methods bring significant improvements over state-of-the-art methods. At the end of this thesis, we also introduce a new concept called "Reader Aware Summary", which can generate summaries for some critical readers (e.g. non-native readers).
Natural Sciences and Engineering Research Council (NSERC) of Canada and the University of Lethbridge.
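As an illustration of the extractive side described in the abstract above, here is a minimal sketch of rank-based sentence selection followed by position-based reordering. The frequency-based scoring and the fixed sentence budget are assumptions for illustration, not the thesis's actual ranking model or its greedy ordering algorithm.

```python
# Minimal sketch: score sentences by average content-word frequency, keep the
# top-k, then restore original document order so the extract reads coherently.
import re
from collections import Counter

def extractive_summary(text, k=2):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0.0

    # Rank all sentences, take the k best, then sort them back into document order.
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)[:k]
    return " ".join(sentences[i] for i in sorted(ranked))

doc = ("The parliament debated the new budget. The budget increases research funding. "
       "Some members opposed the budget. The weather in Strasbourg was mild.")
print(extractive_summary(doc, k=2))
```

An abstractive system would instead rewrite the selected content, for example with a seq2seq encoder-decoder as mentioned above, rather than copying sentences verbatim.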
Distributed collaborative structuring
Making Inter- and Intranet resources available in a structured way is one of the most important and challenging problems today. An underlying structure allows users to search for information, documents or relationships without a clearly defined information need. While search and filtering technology is becoming more and more powerful, the development of such explorative access methods lags behind. This work is concerned with the development of large-scale data mining methods that make it possible to structure information spaces based on loosely coupled user annotations and navigation patterns. An essential challenge that has not yet been fully addressed in this context is heterogeneity. Different users and user groups often have different preferences and needs regarding how to access an information collection. While current Business Intelligence, Information Retrieval or Content Management solutions allow for a certain degree of personalization, these approaches are still very static. This considerably limits their applicability in heterogeneous environments. This work is based on a novel paradigm, called collaborative structuring. This term is chosen as a generalization of the term collaborative filtering. Instead of only filtering items, collaborative structuring allows users to organize information spaces in a loosely coupled way, based on patterns emerging through data mining. A first contribution of the work is to define the conceptual notion of collaborative structuring as a combinatorial optimization problem and to relate it to existing research in the areas of data and web mining. As a second contribution, highly scalable, distributed optimization strategies are proposed and analyzed. Finally, the proposed approaches are quantitatively evaluated against existing methods using several real-world data sets. Practical experience from two application areas is also given, namely information access for heterogeneous expert communities and collaborative media organization.
Essays on monetary policy
This is a summary of the four chapters that comprise this D.Phil. thesis. This thesis
examines two major aspects of policy. The first two chapters examine monetary policy communication. The second two examine the causes and consequences of a time-varying reaction
function of the central bank.
1. Central Bank Communication and Higher Moments
In this first chapter, I investigate which parts of central bank communication affect the
higher moments of expectations embedded in financial market pricing.
Much of the literature on central bank communication has focused on how communication
impacts the conditional expected mean of future policy. But this chapter asks how central
bank communication affects the second and third moments of the financial market’s perceived
distribution of future policy decisions. I use high frequency changes in option-prices around
Bank of England communications to show that communication affects higher moments of the
distribution of expectations. I find that the relevant communication in the case of the Bank
of England is primarily confined to the information contained in the Q&A and Statement,
rather than the longer Inflation Report.
2. Mark My Words: The Transmission of Central Bank Communication to the General Public via the Print Media
In the second chapter, jointly with James Brookes, I ask how central banks can change
their communication in order to receive greater newspaper coverage, if that is indeed an objective of theirs.
We use computational linguistics combined with an event-study methodology to measure
the extent of news coverage a central bank communication receives, and the textual features
that might cause a communication to be more (or less) likely to be considered newsworthy.
We consider the case of the Bank of England, and estimate the relationship between news
coverage and central bank communication implied by our model. We find that the interaction
between the state of the economy and the way in which the Bank of England writes its
communication is important for determining news coverage. We provide concrete suggestions
for ways in which central bank communication can increase its news coverage by improving
readability in line with our results.
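A heavily simplified sketch of the kind of estimation described in this chapter is given below. The feature set (a readability proxy, an economy-state indicator, and their interaction), the ordinary-least-squares estimator, and the toy numbers are all illustrative assumptions, not the chapter's actual specification.

```python
# Minimal sketch: regress newspaper coverage of a central bank communication on
# a readability proxy, an economy-state indicator, and their interaction.
import numpy as np

# Toy data: one row per communication event.
readability = np.array([45.0, 60.0, 30.0, 55.0, 40.0])  # e.g. a reading-ease score
bad_state   = np.array([1.0, 0.0, 1.0, 0.0, 1.0])        # 1 = economic stress at release
coverage    = np.array([12.0, 5.0, 9.0, 6.0, 11.0])      # number of news articles

X = np.column_stack([
    np.ones_like(readability),    # intercept
    readability,
    bad_state,
    readability * bad_state,      # does readability matter more in bad times?
])
beta, *_ = np.linalg.lstsq(X, coverage, rcond=None)
print(dict(zip(["const", "readability", "bad_state", "interaction"], beta.round(3))))
```

The chapter's finding, that the interaction between the state of the economy and how the Bank writes drives coverage, corresponds to the interaction term in a specification of this general shape.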
3. Uncertainty and Time-varying Monetary Policy
In the third chapter, together with Michael McMahon, I investigate the links between
uncertainty and the reaction function of the Federal Reserve.
US macroeconomic evidence points to higher economic volatility being positively correlated with more aggressive monetary policy responses. This represents a challenge for “good
policy” explanations of the Great Moderation which map a more aggressive monetary response to reduced volatility. While some models of monetary policy under uncertainty can
match this comovement qualitatively, these models do not, on their own, account for the
reaction-function changes quantitatively for reasonable changes in uncertainty. We present a
number of alternative sources of uncertainty that we believe should be more prevalent in the
literature on monetary policy.
4. The Element(s) of Surprise
In the final chapter, together with Michael McMahon, I analyse the implications for monetary surprises of time-varying reaction functions.
Monetary policy surprises are driven by several separate forces. We argue that many of
the surprises in monetary policy instruments are driven by unexpected changes in the reaction
function of policymakers. We show that these reaction function surprises are fundamentally
different from monetary policy shocks in their effect on the economy, are likely endogenous
to the state, and unable to be removed using current orthogonalisation procedures. As a result,
monetary policy surprises should not be used to measure the effect of a monetary policy
“shock” to the economy. We find evidence for reaction function surprises in the features
of the high frequency asset price surprise data and in analysing the text of a major US
economic forecaster. Further, we show that periods in which an estimated macro model
suggests policymakers have switched reaction functions provide the majority of variation in
monetary policy surprises.