Investigating graphs in textual case-based reasoning
Advances in Case-Based Reasoning: Proceedings of the Seventh European Conference, ECCBR 2004, pp. 573-586.

Textual case-based reasoning (TCBR) provides the ability to reason
with domain-specific knowledge when experiences exist in text. Ideally, we
would like to find an inexpensive way to automatically, efficiently, and
accurately represent textual documents as cases. One of the challenges,
however, is that current automated methods for manipulating text are not always
useful: they are either expensive (those based on natural language processing)
or insensitive to word order and negation (those based on statistics) when
interpreting textual sources. Recently, Schenker et al. [1] introduced an
algorithm that converts textual documents into graphs while preserving the
order and structure of the source text in the graph representation.
Unfortunately, the resulting graphs cannot be used as cases because they do not
take domain knowledge into consideration. Thus, the goal of this study is to
investigate the potential benefit, if any, of this new algorithm to TCBR. For
this purpose, we conducted an experiment to evaluate variations of the
algorithm for TCBR. We discuss the potential contribution of this algorithm to
existing TCBR approaches.
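The core idea of such a text-to-graph conversion can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: nodes are the unique terms of a document, and a directed edge links each word to the word that immediately follows it, so word order survives in a way a bag-of-words vector cannot capture.

```python
from collections import defaultdict

def text_to_graph(text):
    """Build a directed word-adjacency graph from a document.

    Nodes are unique lowercased terms; an edge (prev, nxt) records that
    `prev` immediately precedes `nxt` somewhere in the text, with a count.
    """
    words = text.lower().split()
    edges = defaultdict(int)  # (prev, nxt) -> frequency
    for prev, nxt in zip(words, words[1:]):
        edges[(prev, nxt)] += 1
    return set(words), dict(edges)

nodes, edges = text_to_graph("the cat sat on the mat")
# ("the", "cat") and ("the", "mat") are distinct outgoing edges of "the",
# so the representation distinguishes word orderings that a term-frequency
# vector would collapse together.
```

Negation handling, in this view, comes for free: "not useful" and "useful" yield different edges, whereas a statistical bag-of-words treats them as near-identical.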
Thick 2D Relations for Document Understanding
We use a propositional language of qualitative rectangle relations to detect the reading order from document images. To this end, we define the notion of a document encoding rule and we analyze possible formalisms to express document encoding rules, such as LaTeX and SGML. Document encoding rules expressed in the propositional language of rectangles are used to build a reading order detector for document images. In order to achieve robustness and avoid brittleness when applying the system to real-life document images, the notion of a thick boundary interpretation for a qualitative relation is introduced. The framework is tested on a collection of heterogeneous document images, showing recall rates up to 89%.
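The thick-boundary idea can be illustrated on a single axis. In a crisp interpretation, the interval relation "a meets b" requires the endpoints to coincide exactly, which scanned documents almost never satisfy; a thick boundary treats endpoints within some tolerance as touching. The sketch below is an assumption about how such relations could be implemented, with an illustrative pixel tolerance, not the paper's actual formalization:

```python
def meets(a, b, thickness=5):
    """Thick-boundary 'meets': a's end and b's start touch within tolerance.

    a and b are (start, end) intervals along one axis, in pixels.
    """
    return abs(a[1] - b[0]) <= thickness

def before(a, b, thickness=5):
    """Thick-boundary 'before': a ends clearly before b starts."""
    return b[0] - a[1] > thickness

# A 3-pixel gap between text blocks still counts as "meets":
meets((0, 100), (103, 200))   # touching, within the thick boundary
before((0, 100), (200, 300))  # clearly separated
```

Composing such per-axis relations over the x- and y-projections of page rectangles is what makes a qualitative reading-order rule robust to scanning noise.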
Automatic case acquisition from texts for process-oriented case-based reasoning
This paper introduces a method for the automatic acquisition of a rich case
representation from free text for process-oriented case-based reasoning. Case
engineering is among the most complicated and costly tasks in implementing a
case-based reasoning system. This is especially so for process-oriented
case-based reasoning, where more expressive case representations are generally
used and, in our opinion, actually required for satisfactory case adaptation.
In this context, the ability to acquire cases automatically from procedural
texts is a major step forward in order to reason on processes. We therefore
detail a methodology that makes case acquisition from processes described as
free text possible, with special attention given to assembly instruction texts.
This methodology extends the techniques we used to extract actions from cooking
recipes. We argue that techniques taken from natural language processing are
required for this task, and that they give satisfactory results. An evaluation
based on our implemented prototype extracting workflows from recipe texts is
provided.

Comment: In press, publication expected in 201
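The starting point of action extraction from procedural text can be sketched very simply: procedural sentences are typically imperative, so a first pass can match sentence-initial verbs against a domain lexicon. The verb list and the pattern below are hypothetical illustrations; the paper argues for full natural language processing techniques rather than this kind of shallow matching.

```python
import re

# Hypothetical action lexicon; a real system would rely on NLP tooling
# (POS tagging, parsing) rather than a fixed verb list.
ACTIONS = {"mix", "bake", "attach", "insert", "screw", "chop", "stir"}

def extract_actions(text):
    """Return (verb, argument) pairs for each sentence that starts
    with a known imperative verb."""
    steps = []
    for sentence in re.split(r"[.;]\s*", text):
        words = sentence.strip().split()
        if words and words[0].lower() in ACTIONS:
            steps.append((words[0].lower(), " ".join(words[1:])))
    return steps

extract_actions("Chop the onions. Stir gently; bake for 20 minutes.")
# yields an ordered list of (action, argument) steps, i.e. the skeleton
# of a workflow case for process-oriented CBR
```

The ordered action list is exactly the kind of structure a workflow-based case representation needs; the hard part, which motivates NLP techniques, is resolving anaphora ("it", "the mixture") and implicit arguments across steps.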
Improving Retrieval-Based Question Answering with Deep Inference Models
Question answering is one of the most important and difficult applications at
the border of information retrieval and natural language processing, especially
when we talk about complex science questions which require some form of
inference to determine the correct answer. In this paper, we present a two-step
method that combines information retrieval techniques optimized for question
answering with deep learning models for natural language inference in order to
tackle the multi-choice question answering in the science domain. For each
question-answer pair, we use standard retrieval-based models to find relevant
candidate contexts and decompose the main problem into two different
sub-problems. First, assign correctness scores for each candidate answer based
on the context using retrieval models from Lucene. Second, we use deep learning
architectures to compute if a candidate answer can be inferred from some
well-chosen context consisting of sentences retrieved from the knowledge base.
In the end, all these solvers are combined using a simple neural network to
predict the correct answer. This proposed two-step model outperforms the best
retrieval-based solver by over 3% in absolute accuracy.

Comment: 8 pages, 2 figures, 8 tables, accepted at IJCNN 201
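The final combination step can be sketched as follows. Each candidate answer receives a retrieval-based correctness score and an inference score, which a small model merges into one ranking. The linear weighting below is an illustrative stand-in for the paper's learned neural combiner, and the weights are assumptions, not the paper's parameters:

```python
def combine(retrieval_scores, inference_scores, w_ret=0.4, w_inf=0.6):
    """Merge per-candidate retrieval and inference scores.

    Returns (index_of_best_candidate, combined_scores). A learned model
    would fit w_ret / w_inf (and possibly nonlinearities) on training data.
    """
    combined = [w_ret * r + w_inf * i
                for r, i in zip(retrieval_scores, inference_scores)]
    best = max(range(len(combined)), key=combined.__getitem__)
    return best, combined

# Candidate 0 looks best to retrieval alone, but inference flips the ranking:
combine([0.9, 0.2, 0.1, 0.3], [0.1, 0.8, 0.2, 0.3])
```

This is the essential payoff of the two-step design: a candidate that merely shares vocabulary with the retrieved context can be demoted when it is not actually entailed by that context.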
Exploring the mathematics of motion through construction and collaboration
In this paper we give a detailed account of the design principles and construction of activities designed for learning about the relationships between position, velocity and acceleration, and corresponding kinematics graphs. Our approach is model-based, that is, it focuses attention on the idea that students constructed their own models – in the form of programs – to formalise and thus extend their existing knowledge. In these activities, students controlled the movement of objects in a programming environment, recording the motion data and plotting corresponding position-time and velocity-time graphs. They shared their findings on a specially-designed web-based collaboration system, and posted cross-site challenges to which others could react. We present learning episodes that provide evidence of students making discoveries about the relationships between different representations of motion. We conjecture that these discoveries arose from their activity in building models of motion and their participation in classroom and online communities.
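A student model of motion of the kind described might look like the following sketch (the function and its parameters are illustrative, not taken from the paper's environment): velocity is updated by accumulating acceleration, and position by accumulating velocity, producing the samples behind position-time and velocity-time graphs.

```python
def simulate(a=2.0, dt=0.1, steps=50):
    """Step an object with constant acceleration `a`, sampling (t, x, v).

    Under constant acceleration the velocity-time graph is a straight
    line and the position-time graph a parabola, which is exactly the
    relationship between the representations students can discover.
    """
    t, x, v = 0.0, 0.0, 0.0
    samples = []
    for _ in range(steps):
        samples.append((t, x, v))
        v += a * dt  # velocity is the running sum of acceleration
        x += v * dt  # position is the running sum of velocity
        t += dt
    return samples
```

Plotting the second and third columns of the samples against the first yields the paired kinematics graphs the activities are built around.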
Dynamic Graph Generation Network: Generating Relational Knowledge from Diagrams
In this work, we introduce a new algorithm for analyzing a diagram, which
contains visual and textual information in an abstract and integrated way.
Whereas diagrams contain richer information compared with individual
image-based or language-based data, proper solutions for automatically
understanding them have not been proposed due to their innate characteristics
of multi-modality and arbitrariness of layouts. To tackle this problem, we
propose a unified diagram-parsing network for generating knowledge from
diagrams based on an object detector and a recurrent neural network designed
for a graphical structure. Specifically, we propose a dynamic graph-generation
network that is based on dynamic memory and graph theory. We explore the
dynamics of information in a diagram with activation of gates in gated
recurrent unit (GRU) cells. On publicly available diagram datasets, our model
demonstrates a state-of-the-art result that outperforms other baselines.
Moreover, further experiments on question answering show the potential of the
proposed method for various applications.
Drawing OWL 2 ontologies with Eddy the editor
In this paper we introduce Eddy, a new open-source tool for the graphical editing of OWL 2 ontologies. Eddy is specifically designed for creating ontologies in Graphol, a completely visual ontology language that is equivalent to OWL 2. Thus, in Eddy ontologies are easily drawn as diagrams, rather than written as sets of formulas, as commonly happens in popular ontology design and engineering environments.
This makes Eddy particularly suited for use by people who are more familiar with diagrammatic languages for conceptual modeling than with typical ontology formalisms, as is often required in non-academic and industrial contexts. Eddy provides intuitive functionalities for specifying Graphol diagrams, guarantees their syntactic correctness, and allows for exporting them in standard OWL 2 syntax. A user evaluation study we conducted shows that Eddy is perceived as an easy and intuitive tool for ontology specification.