QADiver: Interactive Framework for Diagnosing QA Models
Question answering (QA), the task of extracting the answer to a given natural-language question from text, has been actively studied, and existing models have shown promise of outperforming human performance when trained and evaluated on the SQuAD dataset. However, such performance may not be replicated in real-world settings; diagnosing the cause is then necessary but non-trivial due to the complexity of the models. We thus propose a web-based UI that shows how each model contributes to QA performance by integrating visualization and analysis tools for model explanation. We expect this framework to help QA researchers refine and improve their models.
Comment: AAAI 2019 Demonstration
MRCLens: an MRC Dataset Bias Detection Toolkit
Many recent neural models have shown remarkable empirical results in Machine Reading Comprehension (MRC), but evidence suggests that the models sometimes exploit dataset biases to make predictions and fail to generalize to out-of-sample data. While many approaches have been proposed to address this issue from the computational perspective, such as new architectures or training procedures, we believe a method that allows researchers to discover biases and adjust the data or the models at an earlier stage will be beneficial. Thus, we introduce MRCLens, a toolkit that detects whether biases exist before users train the full model. For the convenience of introducing the toolkit, we also provide a categorization of common biases in MRC.
Comment: DataPerf workshop at IMC
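The abstract does not describe the toolkit's interface, so the following is a minimal sketch, in Python, of one kind of check such a toolkit can run: a word-matching bias probe that measures how often a trivial lexical-overlap heuristic already locates the answer sentence. The function name, the example schema (question, sentences, answer_sentence_idx), and the heuristic itself are assumptions made for illustration, not the MRCLens API.

    # Hedged illustration of a dataset bias probe, not the MRCLens API.
    # If a no-comprehension heuristic (pick the passage sentence with the
    # largest lexical overlap with the question) already finds the answer
    # sentence most of the time, the dataset has a word-matching bias that
    # a model could exploit without reading.
    def overlap_bias_rate(examples):
        """examples: dicts with 'question', 'sentences' (list of passage
        sentences), and 'answer_sentence_idx' (gold sentence index)."""
        hits = total = 0
        for ex in examples:
            q_tokens = set(ex["question"].lower().split())
            overlaps = [len(q_tokens & set(s.lower().split()))
                        for s in ex["sentences"]]
            predicted = max(range(len(overlaps)), key=overlaps.__getitem__)
            hits += int(predicted == ex["answer_sentence_idx"])
            total += 1
        return hits / total  # a rate near 1.0 flags a strong bias

A rate far above chance suggests the data, rather than the model, should be adjusted first, which is the early stage of intervention the toolkit targets.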
COOL, a Context Outlooker, and its Application to Question Answering and other Natural Language Processing Tasks
Vision outlookers improve the performance of vision transformers, which
implement a self-attention mechanism, by adding outlook attention, a form of
local attention.
In natural language processing, as in computer vision and other domains,
transformer-based models constitute the state of the art for most processing
tasks. In this domain, too, many authors have argued for and demonstrated the
importance of local context.
We present and evaluate an outlook attention mechanism, COOL, for natural
language processing. COOL adds, on top of the self-attention layers of a
transformer-based model, outlook attention layers that encode local syntactic
context based on word proximity and that capture more pair-wise constraints
than the dynamic convolution operations used by existing approaches.
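As a rough illustration of this mechanism, here is a minimal 1D outlook-attention layer in PyTorch, adapted from the vision outlooker to token sequences: the k x k local attention map is predicted directly from each token by a linear layer, with no query-key dot product. The class name, the single-head design, and the default window size are assumptions made for the sketch; this is not the authors' COOL implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Outlook1D(nn.Module):
        """Sketch of 1D outlook attention over tokens (window size k, odd)."""
        def __init__(self, dim, k=3):
            super().__init__()
            self.k = k
            self.v = nn.Linear(dim, dim)        # value projection
            self.attn = nn.Linear(dim, k * k)   # attention from the token itself
            self.proj = nn.Linear(dim, dim)

        def forward(self, x):                   # x: (batch, seq_len, dim)
            B, N, C = x.shape
            k, pad = self.k, self.k // 2
            # gather each token's local window of values: (B, N, k, C)
            v = F.pad(self.v(x), (0, 0, pad, pad))    # pad along the sequence
            v = v.unfold(1, k, 1).permute(0, 1, 3, 2)
            # per-token k x k attention over window positions, no dot products
            a = self.attn(x).view(B, N, k, k).softmax(dim=-1)
            out = a @ v                               # (B, N, k, C)
            # fold the overlapping windows back onto the sequence
            y = x.new_zeros(B, N + 2 * pad, C)
            for j in range(k):
                y[:, j:j + N] += out[:, :, j]
            return self.proj(y[:, pad:pad + N])

    # Usage: layer = Outlook1D(dim=64); out = layer(torch.randn(2, 10, 64))

Such a layer can sit on top of a pre-trained encoder's self-attention stack, which is how the abstract describes COOL being applied.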
A comparative empirical evaluation of a COOL implementation against different
transformer-based approaches confirms that it improves over baselines using
the neural language models alone on various natural language processing tasks,
including question answering. The proposed approach is competitive with
state-of-the-art methods.
A study of the very high order natural user language (with AI capabilities) for the NASA space station common module
The requirements are identified for a very high order natural language to be used by crew members on board the Space Station. The hardware facilities, databases, real-time processes, and software support are discussed. The operations and capabilities that will be required in both normal (routine) and abnormal (nonroutine) situations are evaluated. A structure and syntax for an interface (front-end) language to satisfy the above requirements are recommended.
Situated Sentence Processing: The Coordinated Interplay Account and a Neurobehavioral Model
Crocker MW, Knoeferle P, Mayberry M. Situated Sentence Processing: The Coordinated Interplay Account and a Neurobehavioral Model. Brain and Language. 2010;112(3):189-201.