Word graphs: The third set
This is the third paper in a series on natural language processing in terms of knowledge graphs. A word is a basic unit in natural language processing, which is why we study word graphs. Word graphs were already built for prepositions and adwords (including adjectives, adverbs and Chinese quantity words) in two earlier papers. In this paper, we propose the concept of the logic word and classify logic words into groups in terms of their semantics and the way they are used to describe reasoning processes. A start is made with building a lexicon of logic words in terms of knowledge graphs.
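For concreteness, here is a minimal sketch (purely illustrative) of how a single logic word might be stored as a small labelled graph in such a lexicon; the node names, relation label and semantic group are assumptions, not the paper's actual word graphs.

```python
# Minimal sketch of a logic-word entry as a small labelled graph (illustrative).
logic_word_graph = {
    "word": "therefore",
    "group": "consequence",                       # assumed semantic group
    "nodes": ["premise", "conclusion"],
    "edges": [("premise", "CAU", "conclusion")],  # assumed causal/inferential link type
}

def describe(graph):
    for src, rel, dst in graph["edges"]:
        print(f'{graph["word"]}: {src} -[{rel}]-> {dst}')

describe(logic_word_graph)  # therefore: premise -[CAU]-> conclusion
```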
Constructing Ontology-Based Cancer Treatment Decision Support System with Case-Based Reasoning
Decision support is a probabilistic and quantitative method designed for
modeling problems in situations with ambiguity. Computer technology can be
employed to provide clinical decision support and treatment recommendations.
The problem with natural language applications is that they lack formality and
their interpretation is not consistent. Conversely, ontologies can capture the
intended meaning and specify modeling primitives. A Disease Ontology (DO) that
pertains to cancer's clinical stages and their corresponding information
components is utilized to improve the reasoning ability of a decision support
system (DSS). The proposed DSS uses Case-Based Reasoning (CBR) to consider
disease manifestations and provides physicians with treatment solutions from
similar previous cases for reference. The proposed DSS supports natural
language processing (NLP) queries. The DSS obtained 84.63% accuracy in disease
classification with the help of the ontology.
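A minimal sketch of the case-based retrieval step described above, assuming a toy case base and cosine similarity over manifestation features; the feature names, cases and similarity measure are illustrative, not the paper's implementation.

```python
# Minimal case-based reasoning retrieval sketch (illustrative only).
# Cases map disease manifestations (assumed feature names) to a past treatment.
import math

CASE_BASE = [
    {"manifestations": {"tumor_size_cm": 2.1, "lymph_nodes": 1, "metastasis": 0},
     "stage": "II", "treatment": "surgery + adjuvant chemotherapy"},
    {"manifestations": {"tumor_size_cm": 4.8, "lymph_nodes": 3, "metastasis": 1},
     "stage": "IV", "treatment": "systemic chemotherapy"},
]

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query):
    """Return the stored case most similar to the query manifestations."""
    return max(CASE_BASE, key=lambda c: cosine(query, c["manifestations"]))

if __name__ == "__main__":
    new_patient = {"tumor_size_cm": 2.4, "lymph_nodes": 1, "metastasis": 0}
    best = retrieve(new_patient)
    print(best["stage"], "->", best["treatment"])  # II -> surgery + adjuvant chemotherapy
```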
Research on Reasoning and Modeling of Solving Mathematics Situation Word Problems of Primary Schools
This research developed a web-based reasoning system for mathematical situation word problems using natural language processing technology. Our system performs morphological analysis, syntax analysis, semantic analysis and rule judgment to infer the semantic structure and operational structure of situation word problems. It also adopts MathML and SVG to provide a web-based illustration of the solving procedure for mathematical situation word problems. Keywords: situation word problem; natural language processing; MathML; SVG
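A toy sketch of the analysis-and-inference idea, under the assumption that a one-step problem's operational structure can be cued by keywords; the cue list and regular-expression extraction are illustrative and far simpler than the morphological, syntactic and semantic pipeline the paper describes.

```python
# Toy sketch: extract quantities and a keyword-cued operation from a one-step
# word problem, then emit its operational structure. Cues are assumptions.
import re

OPERATORS = {
    "altogether": "+", "in all": "+", "more than": "+",
    "left": "-", "fewer": "-", "gave away": "-",
}

def solve(problem):
    numbers = [int(n) for n in re.findall(r"\d+", problem)]
    op = next((sym for cue, sym in OPERATORS.items() if cue in problem.lower()), None)
    if op is None or len(numbers) != 2:
        return None  # outside this toy sketch's coverage
    a, b = numbers
    result = a + b if op == "+" else a - b
    return {"structure": f"{a} {op} {b}", "answer": result}

print(solve("Tom had 12 apples and gave away 5 apples. How many are left?"))
# -> {'structure': '12 - 5', 'answer': 7}
```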
Visual Question Answering: A Survey of Methods and Datasets
Visual Question Answering (VQA) is a challenging task that has received
increasing attention from both the computer vision and the natural language
processing communities. Given an image and a question in natural language, it
requires reasoning over visual elements of the image and general knowledge to
infer the correct answer. In the first part of this survey, we examine the
state of the art by comparing modern approaches to the problem. We classify
methods by their mechanism to connect the visual and textual modalities. In
particular, we examine the common approach of combining convolutional and
recurrent neural networks to map images and questions to a common feature
space. We also discuss memory-augmented and modular architectures that
interface with structured knowledge bases. In the second part of this survey,
we review the datasets available for training and evaluating VQA systems. The
various datasets contain questions at different levels of complexity, which
require different capabilities and types of reasoning. We examine in depth the
question/answer pairs from the Visual Genome project, and evaluate the
relevance of the structured annotations of images with scene graphs for VQA.
Finally, we discuss promising future directions for the field, in particular
the connection to structured knowledge bases and the use of natural language
processing models.
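As an illustration of the common CNN+RNN joint-embedding approach the survey compares, here is a minimal PyTorch sketch; the dimensions, vocabulary sizes and element-wise fusion are assumptions, and the image features are taken to be pre-extracted CNN activations.

```python
# Minimal joint-embedding VQA baseline sketch (all sizes are assumptions).
import torch
import torch.nn as nn

class JointEmbeddingVQA(nn.Module):
    def __init__(self, vocab_size=10000, num_answers=1000,
                 img_dim=2048, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.img_proj = nn.Linear(img_dim, hidden_dim)   # pre-extracted CNN features
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, img_feats, question_tokens):
        _, (h, _) = self.lstm(self.embed(question_tokens))
        q = h[-1]                                  # final question state
        v = torch.tanh(self.img_proj(img_feats))   # image in the common space
        return self.classifier(q * v)              # element-wise fusion -> answer scores

model = JointEmbeddingVQA()
scores = model(torch.randn(4, 2048), torch.randint(0, 10000, (4, 12)))
print(scores.shape)  # torch.Size([4, 1000])
```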
A discriminative approach to grounded spoken language understanding in interactive robotics
Spoken Language Understanding in Interactive Robotics provides computational models of human-machine communication based on vocal input. However, robots operate in specific environments, and the correct interpretation of spoken sentences depends on the physical, cognitive and linguistic aspects triggered by the operational environment. Grounded language processing should exploit both the physical constraints of the context and the knowledge assumptions of the robot, including the subjective perception of the environment that explicitly affects linguistic reasoning. In this work, a standard linguistic pipeline for semantic parsing is extended toward a form of perceptually informed natural language processing that combines discriminative learning and distributional semantics. Empirical results show up to a 40% relative error reduction.
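A rough, purely illustrative sketch of perceptually informed interpretation: a discriminative scorer ranks candidate groundings of a command by combining a distributional-similarity feature with features from the robot's perceived scene. The feature set, weights and scene are assumptions, not the paper's model.

```python
# Illustrative scorer combining distributional similarity with perceptual features.
def score(candidate, command_word_sim, scene, weights):
    feats = {
        "lexical_similarity": command_word_sim[candidate],      # distributional semantics
        "object_visible": 1.0 if candidate in scene["visible"] else 0.0,
        "object_reachable": 1.0 if candidate in scene["reachable"] else 0.0,
    }
    return sum(weights[k] * v for k, v in feats.items())

scene = {"visible": {"mug", "book"}, "reachable": {"mug"}}
command_word_sim = {"mug": 0.7, "book": 0.8, "lamp": 0.9}  # similarity to "cup" (made up)
weights = {"lexical_similarity": 1.0, "object_visible": 0.8, "object_reachable": 0.5}

ranked = sorted(command_word_sim,
                key=lambda c: score(c, command_word_sim, scene, weights), reverse=True)
print(ranked)  # ['mug', 'book', 'lamp'] -- perception overrides pure word similarity
```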
Graph Neural Networks with Generated Parameters for Relation Extraction
Recently, progress has been made towards improving relational reasoning in the
machine learning field. Among existing models, graph neural networks (GNNs) are
among the most effective approaches for multi-hop relational reasoning. In
fact, multi-hop relational reasoning is indispensable in many natural language
processing tasks such as relation extraction. In this paper, we propose to
generate the parameters of graph neural networks (GP-GNNs) according to natural
language sentences, which enables GNNs to perform relational reasoning over
unstructured text inputs. We verify GP-GNNs in relation extraction from text.
Experimental results on a human-annotated dataset and two distantly supervised
datasets show that our model achieves significant improvements compared to
baselines. We also perform a qualitative analysis to demonstrate that our model
could discover more accurate relations by multi-hop relational reasoning.
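A rough PyTorch sketch of the core GP-GNN idea as stated in the abstract: the matrix used to propagate information along each entity-pair edge is generated from an encoding of the sentence mentioning that pair. The encoder, dimensions and propagation loop are assumptions, not the authors' implementation.

```python
# Sketch: edge propagation matrices are generated from sentence encodings.
import torch
import torch.nn as nn

class GeneratedParamGNN(nn.Module):
    def __init__(self, sent_dim=256, node_dim=64, hops=2):
        super().__init__()
        self.hops = hops
        self.node_dim = node_dim
        # Maps a sentence encoding to a node_dim x node_dim propagation matrix.
        self.param_gen = nn.Linear(sent_dim, node_dim * node_dim)

    def forward(self, node_states, edges, edge_sent_enc):
        # node_states: (num_nodes, node_dim); edges: list of (src, dst) index pairs
        # edge_sent_enc: (num_edges, sent_dim) encodings of the sentence for each edge
        A = self.param_gen(edge_sent_enc).view(-1, self.node_dim, self.node_dim)
        h = node_states
        for _ in range(self.hops):
            msgs = torch.zeros_like(h)
            for k, (src, dst) in enumerate(edges):
                msgs[dst] = msgs[dst] + A[k] @ h[src]  # propagate along generated edge
            h = torch.tanh(msgs)
        return h  # multi-hop node representations for relation classification

gnn = GeneratedParamGNN()
out = gnn(torch.randn(3, 64), [(0, 1), (1, 2)], torch.randn(2, 256))
print(out.shape)  # torch.Size([3, 64])
```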
Language Without Words: A Pointillist Model for Natural Language Processing
This paper explores two separate questions: Can we perform natural language
processing tasks without a lexicon?; and, Should we? Existing natural language
processing techniques are either based on words as units or use units such as
grams only for basic classification tasks. How close can a machine come to
reasoning about the meanings of words and phrases in a corpus without using any
lexicon, based only on grams?
Our own motivation for posing this question is based on our efforts to find
popular trends in words and phrases from online Chinese social media. This form
of written Chinese uses so many neologisms, creative character placements, and
combinations of writing systems that it has been dubbed the "Martian Language."
Readers must often use visual cues, audible cues from reading out loud, and
their knowledge and understanding of current events to understand a post. For
analysis of popular trends, the specific problem is that it is difficult to
build a lexicon when the invention of new ways to refer to a word or concept is
easy and common. For natural language processing in general, we argue in this
paper that new uses of language in social media will challenge machines'
abilities to operate with words as the basic unit of understanding, not only in
Chinese but potentially in other languages.
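A lexicon-free sketch in the spirit of the pointillist view: track which character n-grams rise in frequency between two windows of posts, with no word segmentation at all. The window split, threshold and example posts are illustrative assumptions.

```python
# Lexicon-free trend detection over character n-grams (illustrative).
from collections import Counter

def char_ngrams(text, n=2):
    text = "".join(text.split())          # drop whitespace; no word segmentation
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def trending(old_posts, new_posts, n=2, min_count=2):
    old = Counter(g for p in old_posts for g in char_ngrams(p, n))
    new = Counter(g for p in new_posts for g in char_ngrams(p, n))
    rising = {g: c for g, c in new.items() if c >= min_count and c > old.get(g, 0)}
    return sorted(rising, key=rising.get, reverse=True)

old = ["天气不错", "今天上班"]
new = ["给力给力", "今天真给力", "太给力了"]
print(trending(old, new))  # the bigram "给力" surfaces without any lexicon
```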
Explicit Reasoning over End-to-End Neural Architectures for Visual Question Answering
Many vision and language tasks require commonsense reasoning beyond
data-driven image and natural language processing. Here we adopt Visual
Question Answering (VQA) as an example task, where a system is expected to
answer a question in natural language about an image. Current state-of-the-art
systems attempted to solve the task using deep neural architectures and
achieved promising performance. However, the resulting systems are generally
opaque and they struggle in understanding questions for which extra knowledge
is required. In this paper, we present an explicit reasoning layer on top of a
set of penultimate neural network based systems. The reasoning layer enables
reasoning and answering questions where additional knowledge is required, and
at the same time provides an interpretable interface to the end users.
Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based
engine to reason over a basket of inputs: visual relations, the semantic parse
of the question, and background ontological knowledge from word2vec and
ConceptNet. Experimental analysis of the answers and the key evidential
predicates generated on the VQA dataset validate our approach.
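A toy illustration of the soft-logic flavour of such a reasoning layer: predicate truth values in [0, 1] are combined with a Lukasiewicz conjunction, as in Probabilistic Soft Logic, to score candidate answers. The predicates, values and single rule are assumptions, not the paper's engine.

```python
# Toy soft-logic scoring of candidate answers (illustrative values).
def l_and(a, b):
    """Lukasiewicz conjunction of two soft truth values."""
    return max(0.0, a + b - 1.0)

# Soft evidence per candidate answer: confidence of a detected visual relation
# in the image, and word-vector similarity to the parsed question focus.
visual_rel = {"umbrella": 0.9, "frisbee": 0.3}
word_sim   = {"umbrella": 0.8, "frisbee": 0.6}

# Rule sketch: visual_rel(x) AND word_sim(question, x) -> answer(x)
support = {x: l_and(visual_rel[x], word_sim[x]) for x in visual_rel}
best = max(support, key=support.get)
print(support)                     # {'umbrella': 0.7, 'frisbee': 0.0}
print("predicted answer:", best)   # umbrella
```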
