CFO: A Framework for Building Production NLP Systems
This paper introduces a novel orchestration framework, called CFO
(Computation Flow Orchestrator), for building, experimenting with, and
deploying interactive NLP (Natural Language Processing) and IR (Information
Retrieval) systems to production environments. We then demonstrate a question
answering system built with this framework that combines state-of-the-art
BERT-based MRC (Machine Reading Comprehension) with IR components to enable
end-to-end answer retrieval. Results from the demo system are shown to be of
high quality in both academic and industry domain-specific settings. Finally,
we discuss best practices for (pre-)training BERT-based MRC models for
production systems.
Comment: http://ibm.biz/cfo_framewor
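The orchestration idea described above — chaining IR and MRC components into one end-to-end answer-retrieval flow — can be sketched as a pipeline of composable stages. The sketch below is purely illustrative: the `Flow` class, stage names, and toy retriever/reader are assumptions and do not reflect CFO's actual API.

```python
# Hypothetical sketch of a computation-flow pipeline (retriever -> reader),
# in the spirit of the system described in the abstract. Illustrative only.
from typing import Callable, List


class Flow:
    """Chains processing stages; each stage maps a state dict to a state dict."""

    def __init__(self) -> None:
        self.stages: List[Callable[[dict], dict]] = []

    def stage(self, fn: Callable[[dict], dict]) -> "Flow":
        self.stages.append(fn)
        return self

    def run(self, state: dict) -> dict:
        for fn in self.stages:
            state = fn(state)
        return state


# Toy stand-ins for the IR and MRC components.
def retrieve(state: dict) -> dict:
    docs = {"bert": "BERT is a pre-trained language model."}
    state["passage"] = docs.get(state["query"].split()[-1].lower(), "")
    return state


def read(state: dict) -> dict:
    # A production system would run a BERT-based MRC model here.
    state["answer"] = state["passage"].split(" is ")[-1].rstrip(".")
    return state


pipeline = Flow().stage(retrieve).stage(read)
result = pipeline.run({"query": "What is BERT"})
print(result["answer"])  # a pre-trained language model
```

Decomposing the system into stages like this is what lets an orchestrator swap IR or MRC components independently when moving from experimentation to production.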
The TechQA Dataset
We introduce TechQA, a domain-adaptation question answering dataset for the
technical support domain. The TechQA corpus highlights two real-world issues
from the automated customer support domain. First, it contains actual questions
posed by users on a technical forum, rather than questions generated
specifically for a competition or a task. Second, it has a real-world size --
600 training, 310 dev, and 490 evaluation question/answer pairs -- thus
reflecting the cost of creating large labeled datasets with actual data.
Consequently, TechQA is meant to stimulate research in domain adaptation rather
than being a resource to build QA systems from scratch. The dataset was
obtained by crawling the IBM Developer and IBM DeveloperWorks forums for
questions with accepted answers that appear in a published IBM Technote, a
technical document that addresses a specific technical issue. We also release a
collection of the 801,998 publicly available Technotes as of April 4, 2019 as a
companion resource that might be used for pretraining, to learn representations
of the IT domain language.
Comment: Long version of conference paper to be submitted
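A dataset of this shape — forum questions paired with answer spans in Technotes — might be represented as records like the following. The field names are illustrative assumptions, not the dataset's actual release format; only the split sizes come from the abstract.

```python
# Minimal sketch of a record layout for domain-adaptation QA pairs such as
# TechQA's. Field names are hypothetical; split sizes are from the abstract.
from dataclasses import dataclass


@dataclass
class QAPair:
    question: str      # actual question posed on a technical forum
    technote_id: str   # published IBM Technote containing the accepted answer
    answer_span: str   # answer text within the Technote, if answerable


# Split sizes reported in the abstract.
SPLITS = {"train": 600, "dev": 310, "eval": 490}
total = sum(SPLITS.values())
print(total)  # 1400
```

At roughly 1,400 pairs in total, the dataset is sized for fine-tuning a model pre-trained elsewhere, which is exactly the domain-adaptation setting the abstract emphasizes.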
Generation-Focused Table-Based Intermediate Pre-training for Free-Form Question Answering
Question answering over semi-structured tables has attracted significant attention in the NLP community.
However, most existing work focuses on questions that can be answered with a short-form answer, i.e., the answer is often a table cell or an aggregation of multiple cells.
This mismatches the intent of users who want to ask more complex questions that require free-form answers, such as explanations.
To bridge this gap, pre-trained sequence-to-sequence language models such as T5 have recently been used to generate free-form answers from the question and table inputs.
However, these pre-trained language models are weaker at encoding table cells and schemas.
To mitigate this issue, in this work, we present an intermediate pre-training framework, Generation-focused Table-based Intermediate Pre-training (GENTAP), that jointly learns representations of natural language questions and tables.
GENTAP is trained with two generation objectives that strengthen question understanding and table representation for complex questions.
Experimental results show that models leveraging the GENTAP framework outperform existing baselines on the FETAQA benchmark.
The pre-trained models are useful not only for free-form question answering but also for the few-shot data-to-text generation task, demonstrating good transfer ability by achieving new state-of-the-art results.
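Feeding a table to a sequence-to-sequence model requires serializing the question and table cells into one input string. The sketch below shows one common linearization scheme; the exact serialization GENTAP uses may differ, and the function and separators here are illustrative assumptions.

```python
# Hedged sketch of question-plus-table linearization for a seq2seq model.
# The serialization format is illustrative, not GENTAP's actual scheme.
from typing import List


def linearize(question: str, header: List[str], rows: List[List[str]]) -> str:
    """Flatten a question and a table into a single source string."""
    parts = [f"question: {question}", "header: " + " | ".join(header)]
    for i, row in enumerate(rows, 1):
        parts.append(f"row {i}: " + " | ".join(row))
    return " ".join(parts)


src = linearize(
    "Which country hosted the 2016 Olympics, and where?",
    ["Year", "Host city", "Country"],
    [["2016", "Rio de Janeiro", "Brazil"]],
)
print(src)
```

A flat serialization like this is precisely where plain T5-style models struggle — cell boundaries and schema structure are only implicit in the separators — which motivates an intermediate pre-training stage that teaches the encoder to exploit them.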