669,367 research outputs found
On the automated interpretation and indexing of American football
This work combines natural language understanding and image processing with incremental learning to develop a system that can automatically interpret and index American football. We have developed a model for representing the spatio-temporal characteristics of multiple objects in dynamic scenes in this domain. Our representation combines expert knowledge, domain knowledge, spatial knowledge, and temporal knowledge. We also present an incremental learning algorithm that improves the knowledge base and keeps previously developed concepts consistent with new data. The advantages of the incremental learning algorithm are that it does not split concepts and that it generates a compact conceptual hierarchy which does not store instances.
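The abstract does not specify the algorithm's details, but an incremental learner that avoids storing instances can be sketched with running-mean concept prototypes. Everything below (the class name, the distance threshold, the 2-D points) is a hypothetical illustration, not the paper's method:

```python
import numpy as np

class IncrementalConcepts:
    """Toy incremental learner: per-concept prototypes only, no stored instances."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold   # max distance to join an existing concept
        self.prototypes = {}         # concept id -> running-mean vector
        self.counts = {}             # concept id -> number of absorbed instances

    def update(self, x):
        """Absorb one observation; return the concept id it was assigned to."""
        x = np.asarray(x, dtype=float)
        best, dist = None, float("inf")
        for cid, proto in self.prototypes.items():
            d = np.linalg.norm(x - proto)
            if d < dist:
                best, dist = cid, d
        if best is None or dist > self.threshold:
            # Too far from every concept: create a new one (never split old ones).
            cid = len(self.prototypes)
            self.prototypes[cid] = x.copy()
            self.counts[cid] = 1
            return cid
        # Running-mean update keeps the hierarchy compact: the instance is
        # folded into the prototype and then discarded.
        self.counts[best] += 1
        self.prototypes[best] += (x - self.prototypes[best]) / self.counts[best]
        return best

learner = IncrementalConcepts(threshold=1.0)
for point in [[0, 0], [0.1, 0.1], [5, 5]]:
    learner.update(point)
print(len(learner.prototypes))  # 2
```

The first two nearby points merge into one concept; the distant third point spawns a second, so existing concepts stay consistent as data arrives.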
ExBERT: An External Knowledge Enhanced BERT for Natural Language Inference
Neural language representation models such as BERT, pretrained on large-scale unstructured corpora, lack explicit grounding to real-world commonsense knowledge and are often unable to remember facts required for reasoning and inference. Natural Language Inference (NLI) is a challenging reasoning task that relies on common human understanding of language and real-world commonsense knowledge. We introduce a new model for NLI called External Knowledge Enhanced BERT (ExBERT), which enriches the contextual representation with real-world commonsense knowledge from external knowledge sources and enhances BERT’s language understanding and reasoning capabilities. ExBERT takes full advantage of contextual word representations obtained from BERT and employs them to retrieve relevant external knowledge from knowledge graphs and to encode the retrieved external knowledge. Our model adaptively incorporates the external knowledge context required for reasoning over the inputs. Extensive experiments on the challenging SciTail and SNLI benchmarks demonstrate the effectiveness of ExBERT: in comparison to the previous state-of-the-art, we obtain an accuracy of 95.9% on SciTail and 91.5% on SNLI.
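The retrieve-then-encode pattern the abstract describes can be sketched with a toy knowledge graph and a stand-in encoder. The `KG` dictionary, the hash-based `encode` function, and the concatenation step are all illustrative assumptions, not ExBERT's actual architecture:

```python
import numpy as np

# Hypothetical toy knowledge graph: head entity -> list of (relation, tail).
KG = {
    "rain": [("causes", "wet ground")],
    "umbrella": [("used_for", "rain protection")],
}

def retrieve_knowledge(tokens):
    """Retrieve KG triples whose head matches an input token."""
    triples = []
    for tok in tokens:
        triples.extend((tok, rel, tail) for rel, tail in KG.get(tok, []))
    return triples

def encode(text, dim=8):
    """Stand-in for a contextual encoder (a real system would use BERT)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def knowledge_enhanced_representation(tokens):
    """Concatenate the sentence encoding with mean-pooled triple encodings."""
    context = encode(" ".join(tokens))
    triples = retrieve_knowledge(tokens)
    if triples:
        knowledge = np.mean([encode(" ".join(tr)) for tr in triples], axis=0)
    else:
        knowledge = np.zeros_like(context)
    return np.concatenate([context, knowledge])

premise = ["rain", "makes", "the", "street", "slippery"]
rep = knowledge_enhanced_representation(premise)
print(rep.shape)  # (16,)
```

The key idea carried over from the abstract is that retrieval is driven by the input itself, and the retrieved knowledge is encoded and fused with the contextual representation rather than used as raw text.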
GECKA3D: A 3D Game Engine for Commonsense Knowledge Acquisition
Commonsense knowledge representation and reasoning are key to fields such as
artificial intelligence and natural language understanding. Since commonsense
consists of information that humans take for granted, gathering it is an
extremely difficult task. In this paper, we introduce a novel 3D game engine
for commonsense knowledge acquisition (GECKA3D) which aims to collect
commonsense from game designers through the development of serious games.
GECKA3D integrates the potential of serious games and games with a purpose.
This provides a platform for the acquisition of re-usable and multi-purpose
knowledge, and also enables the development of games that can provide
entertainment value and teach players something meaningful about the actual
world they live in.
Knowledge-driven Natural Language Understanding of English Text and its Applications
Understanding the meaning of a text is a fundamental challenge of natural
language understanding (NLU) research. An ideal NLU system should process a
language in a way that is not exclusive to a single task or a dataset. Keeping
this in mind, we have introduced a novel knowledge-driven semantic
representation approach for English text. By leveraging the VerbNet lexicon, we
are able to map the syntax tree of the text to its commonsense meaning represented
using basic knowledge primitives. The general purpose knowledge represented
from our approach can be used to build any reasoning based NLU system that can
also provide justification. We applied this approach to construct two NLU
applications that we present here: SQuARE (Semantic-based Question Answering
and Reasoning Engine) and StaCACK (Stateful Conversational Agent using
Commonsense Knowledge). Both these systems work by "truly understanding" the
natural language text they process and both provide natural language
explanations for their responses while maintaining high accuracy. Comment: Preprint. Accepted by the 35th AAAI Conference (AAAI-21) Main Track.
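Mapping a parse to knowledge primitives via a verb lexicon can be illustrated with a tiny VerbNet-style fragment. The `VERB_PRIMITIVES` table and the `to_primitives` helper are hypothetical stand-ins; the real VerbNet lexicon and the paper's mapping are far richer:

```python
# Hypothetical fragment of a VerbNet-style lexicon: each verb maps to
# templates over thematic roles (Agent, Theme, Recipient).
VERB_PRIMITIVES = {
    "give": ["transfer(Agent, Theme, Recipient)"],
    "move": ["motion(Theme)", "location_change(Theme)"],
}

def to_primitives(subject, verb, objects):
    """Instantiate a simple (subject, verb, objects) parse as knowledge primitives."""
    facts = []
    for template in VERB_PRIMITIVES.get(verb, []):
        fact = (template
                .replace("Agent", subject)
                .replace("Theme", objects[0])
                .replace("Recipient", objects[1] if len(objects) > 1 else "_"))
        facts.append(fact)
    return facts

print(to_primitives("alice", "give", ["book", "bob"]))
# ['transfer(alice, book, bob)']
```

Because the output is a set of explicit relational facts rather than an opaque vector, a downstream reasoner can both answer questions over it and cite the facts it used as a justification, which is the property the abstract calls "truly understanding".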
UW-BHI at MEDIQA 2019: An Analysis of Representation Methods for Medical Natural Language Inference
Recent advances in distributed language modeling have led to large
performance increases on a variety of natural language processing (NLP) tasks.
However, it is not well understood how these methods may be augmented by
knowledge-based approaches. This paper compares the performance and internal
representation of an Enhanced Sequential Inference Model (ESIM) between three
experimental conditions based on the representation method: Bidirectional
Encoder Representations from Transformers (BERT), Embeddings of Semantic
Predications (ESP), or Cui2Vec. The methods were evaluated on the Medical
Natural Language Inference (MedNLI) subtask of the MEDIQA 2019 shared task.
This task relied heavily on semantic understanding and thus served as a
suitable evaluation set for the comparison of these representation methods.
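The experimental design (one inference model, three interchangeable representation methods) can be sketched by holding the downstream encoder fixed and swapping only the embedding lookup. The random embedding tables and the mean-pooling "encoder" below are toy stand-ins for BERT, ESP, and Cui2Vec and for ESIM respectively:

```python
import numpy as np

def make_embedder(vocab, dim, seed):
    """Stand-in for a pretrained embedding table (one per experimental condition)."""
    rng = np.random.default_rng(seed)
    table = {word: rng.standard_normal(dim) for word in vocab}
    return lambda tokens: np.stack([table[t] for t in tokens])

def sentence_vector(embed, tokens):
    """Minimal stand-in for the shared inference model: mean-pool token vectors."""
    return embed(tokens).mean(axis=0)

vocab = ["patient", "has", "a", "fever", "infection"]
# Three conditions differing ONLY in the representation method.
conditions = {name: make_embedder(vocab, 16, seed)
              for seed, name in enumerate(["BERT", "ESP", "Cui2Vec"])}

premise = ["patient", "has", "a", "fever"]
for name, embed in conditions.items():
    vec = sentence_vector(embed, premise)
    print(name, vec.shape)
```

Keeping everything downstream of the embedding identical is what makes the three conditions comparable: any performance difference on MedNLI can then be attributed to the representation method alone.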
- …