Lexical Adaptation of Link Grammar to the Biomedical Sublanguage: a Comparative Evaluation of Three Approaches
We study the adaptation of Link Grammar Parser to the biomedical sublanguage
with a focus on domain terms not found in a general parser lexicon. Using two
biomedical corpora, we implement and evaluate three approaches to addressing
unknown words: automatic lexicon expansion, the use of morphological clues, and
disambiguation using a part-of-speech tagger. We evaluate each approach
separately for its effect on parsing performance and consider combinations of
these approaches. In addition to a 45% increase in parsing efficiency, we find
that the best approach, incorporating information from a domain part-of-speech
tagger, offers a statistically significant 10% relative decrease in error. The
adapted parser is available under an open-source license at
http://www.it.utu.fi/biolg
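The morphological-clue and POS-tagger strategies described above can be sketched as follows. This is a toy illustration, not the biolg parser's actual lexicon code: the suffix table, category names, and fallback policy are all assumptions for the example.

```python
# Toy sketch of unknown-word handling (assumed suffix clues, not biolg's):
# try morphological suffix clues first, then fall back to the label from a
# domain part-of-speech tagger, and finally default to "noun", since nouns
# dominate unknown biomedical terms.
SUFFIX_CLUES = [
    ("ase", "noun"),       # kinase, polymerase
    ("ated", "verb"),      # phosphorylated, methylated
    ("ic", "adjective"),   # genomic, mitotic
]

def guess_category(word, tagger_label=None):
    """Return a lexical category for an out-of-lexicon word."""
    for suffix, category in SUFFIX_CLUES:
        if word.endswith(suffix):
            return category
    # Disambiguate with the external domain POS tag when clues fail.
    return tagger_label or "noun"

print(guess_category("polymerase"))     # -> "noun" (suffix clue)
print(guess_category("xyzzy", "verb"))  # -> "verb" (tagger fallback)
```

Combining the two sources in this order mirrors the abstract's finding that the tagger-based approach resolves the cases morphology alone cannot.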
Automatic evaluation of generation and parsing for machine translation with automatically acquired transfer rules
This paper presents a new method of evaluation for generation and parsing components of transfer-based MT systems where the transfer rules have been automatically
acquired from parsed sentence-aligned bitext corpora. The method provides a means of quantifying the upper bound imposed on the MT system by the quality of the parsing
and generation technologies for the target language. We include experiments to calculate this upper bound for both handcrafted and automatically induced parsing and generation technologies currently in use by transfer-based MT systems.
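The upper-bound idea can be illustrated with a minimal round-trip check: parse each reference target sentence, regenerate text from the parse, and score the output against the original reference. With transfer assumed perfect, the resulting score bounds what the full MT pipeline can achieve. The `parse` and `generate` functions below are toy stand-ins, not the paper's components.

```python
# Toy stand-ins for the target-language parser and generator.
def parse(sentence):
    return sentence.lower().split()   # "parse": a flat token list

def generate(tokens):
    return " ".join(tokens)           # "generate": join tokens back up

def exact_match_upper_bound(references):
    """Fraction of references survived by a parse -> generate round trip."""
    hits = sum(generate(parse(r)) == r.lower() for r in references)
    return hits / len(references)

refs = ["The cat sat", "Parsing is fun"]
print(exact_match_upper_bound(refs))  # 1.0 for this lossless toy pipeline
```

A real evaluation would use the system's own parser and generator and a graded metric rather than exact match, but the bounding logic is the same.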
Learning to select data for transfer learning with Bayesian Optimization
Domain similarity measures can be used to gauge adaptability and select
suitable data for transfer learning, but existing approaches define ad hoc
measures that are deemed suitable for respective tasks. Inspired by work on
curriculum learning, we propose to \emph{learn} data selection measures using
Bayesian Optimization and evaluate them across models, domains and tasks. Our
learned measures outperform existing domain similarity measures significantly
on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We
show the importance of complementing similarity with diversity, and that
learned measures are -- to some degree -- transferable across models, domains,
and even tasks.
Comment: EMNLP 2017. Code available at: https://github.com/sebastianruder/learn-to-select-dat
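The selection scheme can be sketched as follows: score each candidate training example by a weighted sum of similarity and diversity features, keep the top-k, and tune the weights against downstream task accuracy. Note the hedges: the paper searches the weights with Bayesian Optimization, whereas plain random search stands in for it here, and the toy feature values are assumptions.

```python
import random

def select(examples, weights, k):
    """Return the k examples with the highest weighted feature score."""
    score = lambda ex: sum(w * f for w, f in zip(weights, ex["features"]))
    return sorted(examples, key=score, reverse=True)[:k]

def tune_weights(examples, evaluate, n_trials=50, k=2, n_features=2, seed=0):
    """Search for selection weights that maximize the evaluate() callback.
    Random search stands in for the paper's Bayesian Optimization."""
    rng = random.Random(seed)
    best_w, best_score = None, float("-inf")
    for _ in range(n_trials):
        w = [rng.uniform(-1.0, 1.0) for _ in range(n_features)]
        s = evaluate(select(examples, w, k))
        if s > best_score:
            best_w, best_score = w, s
    return best_w, best_score

# Toy pool: features are (similarity-to-target-domain, diversity).
pool = [
    {"id": 0, "features": (0.9, 0.1)},
    {"id": 1, "features": (0.1, 0.9)},
    {"id": 2, "features": (0.2, 0.2)},
]
print([ex["id"] for ex in select(pool, [1.0, 1.0], 2)])  # [0, 1]
```

Weighting both feature types reflects the abstract's point that similarity must be complemented with diversity.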
Introduction to the special issue on cross-language algorithms and applications
With the increasingly global nature of our everyday interactions, the need for multilingual technologies to support efficient and effective information access and communication cannot be overemphasized. Computational modeling of language has been the focus of
Natural Language Processing, a subdiscipline of Artificial Intelligence. One of the current challenges for this discipline is to design methodologies and algorithms that are cross-language in order to create multilingual technologies rapidly. The goal of this JAIR special
issue on Cross-Language Algorithms and Applications (CLAA) is to present leading research in this area, with emphasis on developing unifying themes that could lead to the development of the science of multi- and cross-lingualism. In this introduction, we provide the reader with the motivation for this special issue and summarize the contributions of the papers that have been included. The selected papers cover a broad range of cross-lingual technologies including machine translation, domain and language adaptation for sentiment
analysis, cross-language lexical resources, dependency parsing, information retrieval and knowledge representation. We anticipate that this special issue will serve as an invaluable resource for researchers interested in topics of cross-lingual natural language processing.
An improved hidden vector state model approach and its adaptation in extracting protein interaction information from biomedical literature
A large quantity of knowledge that biological researchers need to unveil the mechanisms of life hides in the literature, such as journal articles, reports and books. Many approaches to extracting information from unstructured text, such as pattern matching and shallow and full parsing, have been proposed, especially for biomedical applications. In this paper, we present an information extraction system for protein-protein interactions that employs a semantic parser using the Hidden Vector State (HVS) model. We found that it performed better than other established statistical methods, achieving 58.3% recall and 76.8% precision. Moreover, the purely data-driven HVS model can be easily adapted to other domains, a capability rarely offered by other approaches. Experimental results show that a model trained on one domain can still produce satisfactory results when shifted to another domain with only a small amount of adaptation training data.
Robustness issues in a data-driven spoken language understanding system
Robustness is a key requirement in spoken language understanding (SLU) systems. Human speech is often ungrammatical and ill-formed, and there will frequently be a mismatch between training and test data. This paper discusses robustness and adaptation issues in a statistically-based SLU system which is entirely data-driven. To test robustness, the system has been tested on data from the Air Travel Information Service (ATIS) domain which has been artificially corrupted with varying levels of additive noise. Although the speech recognition performance degraded steadily, the system did not fail catastrophically. Indeed, the rate at which the end-to-end performance of the complete system degraded was significantly slower than that of the actual recognition component. In a second set of experiments, the ability to rapidly adapt the core understanding component of the system to a different application within the same broad domain has been tested. Using only a small amount of training data, experiments have shown that a semantic parser based on the Hidden Vector State (HVS) model originally trained on the ATIS corpus can be straightforwardly adapted to the somewhat different DARPA Communicator task using standard adaptation algorithms. The paper concludes by suggesting that the results presented provide initial support for the claim that an SLU system which is statistically-based and trained entirely from data is intrinsically robust and can be readily adapted to new applications.
Task Driven Generative Modeling for Unsupervised Domain Adaptation: Application to X-ray Image Segmentation
Automatic parsing of anatomical objects in X-ray images is critical to many
clinical applications, in particular image-guided intervention and workflow
automation. Existing deep network models require a large amount of labeled
data. However, obtaining accurate pixel-wise labeling in X-ray images relies
heavily on skilled clinicians due to the large overlaps of anatomy and the
complex texture patterns. On the other hand, organs in 3D CT scans preserve
clearer structures as well as sharper boundaries and thus can be easily
delineated. In this paper, we propose a novel model framework for learning
automatic X-ray image parsing from labeled CT scans. Specifically, a Dense
Image-to-Image network (DI2I) for multi-organ segmentation is first trained on
X-ray like Digitally Reconstructed Radiographs (DRRs) rendered from 3D CT
volumes. Then we introduce a Task Driven Generative Adversarial Network
(TD-GAN) architecture to achieve simultaneous style transfer and parsing for
unseen real X-ray images. TD-GAN consists of a modified cycle-GAN substructure
for pixel-to-pixel translation between DRRs and X-ray images and an added
module leveraging the pre-trained DI2I to enforce segmentation consistency. The
TD-GAN framework is general and can be easily adapted to other learning tasks.
In the numerical experiments, we validate the proposed model on 815 DRRs and
153 topograms. While the vanilla DI2I without any adaptation fails completely
on segmenting the topograms, the proposed model does not require any topogram
labels and achieves a promising average Dice score of 85%, approaching the
accuracy of supervised training (88%).
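The 85% and 88% figures above are average Dice scores, which measure the overlap between a predicted binary segmentation mask and the ground truth: Dice = 2|A ∩ B| / (|A| + |B|). A minimal pure-Python version of the metric (the masks below are made-up examples):

```python
def dice(pred, target, eps=1e-7):
    """Dice coefficient for flat binary masks (lists of 0/1).
    eps avoids division by zero when both masks are empty."""
    inter = sum(p and t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

pred   = [1, 1, 0, 0, 1, 0]  # predicted organ mask (flattened)
target = [1, 0, 0, 0, 1, 0]  # ground-truth mask
print(round(dice(pred, target), 2))  # 0.8
```

In practice the score is computed per organ over 2D pixel arrays and averaged, but the formula is the same.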
Hybrid language processing in the Spoken Language Translator
The paper presents an overview of the Spoken Language Translator (SLT)
system's hybrid language-processing architecture, focussing on the way in which
rule-based and statistical methods are combined to achieve robust and efficient
performance within a linguistically motivated framework. In general, we argue
that rules are desirable in order to encode domain-independent linguistic
constraints and achieve high-quality grammatical output, while corpus-derived
statistics are needed if systems are to be efficient and robust; further, that
hybrid architectures are superior from the point of view of portability to
architectures which only make use of one type of information. We address the
topics of ``multi-engine'' strategies for robust translation; robust bottom-up
parsing using pruning and grammar specialization; rational development of
linguistic rule-sets using balanced domain corpora; and efficient supervised
training by interactive disambiguation. All work described is fully implemented
in the current version of the SLT-2 system.
Comment: 4 pages, uses icassp97.sty; to appear in ICASSP-97; see http://www.cam.sri.com for related material.