Didactic Networks: A proposal for e-learning content generation
The Didactic Networks proposed in this paper are based on previous publications in the field of RSR (Rhetorical-Semantic Relations). RSR is a set of primitive relations used for building a specific kind of semantic network for artificial intelligence applications on the web: the RSN (Rhetorical-Semantic Networks). We bring the RSR application into focus in the field of e-learning by defining Didactic Networks as a new set of semantic patterns oriented to the development of e-learning applications. The different lines we offer in our research fall mainly into three levels:
• The most basic one is in the field of computational linguistics and relates to logical operations on RSR (RSR inverses and plurals, RSR combinations, etc.) once they have been created. The application of Walter Bosma's results regarding rhetorical distance and its treatment as semantically weighted networks is one of the important issues here.
• In parallel, we have been working on the creation of a knowledge representation and storage model, and a data architecture capable of supporting the definition of knowledge networks based on RSR.
• The third strategic line is at the meso-level: the formulation of a molecular structure of knowledge based on the most frequently used patterns. The main contribution at this level is the set of Fundamental Cognitive Networks (FCN), an application of Novak's mental-maps proposal.
This paper is part of this third intermediate level, and the Fundamental Didactic Networks (FDN) are the result of applying rhetorical-theory procedures to instructional theory. We have formulated a general set of RSR capable of building discourse, making it possible to express any concept, procedure or principle in terms of knowledge nodes and RSRs. Instructional knowledge can then be elaborated in the same way.
This network structure, expressing instructional knowledge in terms of RSR, makes it possible to develop web-learning lessons semi-automatically, as well as other utilities oriented towards the exploitation of semantic structure, such as automatic question-answering systems.
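The knowledge-node-plus-relation structure described above can be sketched as a small labelled graph. This is a minimal illustration only; the relation labels and node names here are invented examples, not the paper's actual RSR inventory.

```python
# Minimal sketch of a semantic network whose edges carry
# rhetorical-semantic relation (RSR) labels.  Relation names such as
# "cause" and "exemplify" are illustrative assumptions.

class RSRNetwork:
    def __init__(self):
        self.edges = []  # triples: (source node, relation label, target node)

    def add(self, source, relation, target):
        self.edges.append((source, relation, target))

    def related(self, node):
        """Return all (relation, target) pairs leaving a knowledge node."""
        return [(rel, tgt) for src, rel, tgt in self.edges if src == node]

net = RSRNetwork()
net.add("photosynthesis", "cause", "oxygen production")
net.add("photosynthesis", "exemplify", "energy conversion")
print(net.related("photosynthesis"))
# → [('cause', 'oxygen production'), ('exemplify', 'energy conversion')]
```

A didactic pattern would then be a recurring shape over such triples, e.g. a concept node linked by "cause" and "exemplify" edges to supporting nodes.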
Detecting and Explaining Causes From Text For a Time Series Event
Explaining underlying causes or effects about events is a challenging but
valuable task. We define a novel problem of generating explanations of a time
series event by (1) searching cause and effect relationships of the time series
with textual data and (2) constructing a connecting chain between them to
generate an explanation. To detect causal features from text, we propose a
novel method based on the Granger causality of time series between features
extracted from text such as N-grams, topics, sentiments, and their composition.
The generation of the sequence of causal entities requires a commonsense
causative knowledge base with efficient reasoning. To ensure good
interpretability and appropriate lexical usage, we combine symbolic and neural
representations, using a neural reasoning algorithm trained on commonsense
causal tuples to predict the next cause step. Our quantitative and human
analyses show empirical evidence that our method successfully extracts
meaningful causal relationships between time series and textual features
and generates appropriate explanations between them.
Comment: Accepted at EMNLP 201
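The Granger-causality idea in this abstract — testing whether a text-derived feature series helps predict a target time series — can be illustrated with a crude lag-1 regression check. This is an illustrative sketch under simplifying assumptions (single lag, least squares, no significance test), not the paper's actual statistical procedure; the synthetic "sentiment" series is invented for the example.

```python
import numpy as np

def lag1_granger_ratio(y, x):
    """Crude lag-1 Granger-style check: does adding x[t-1] reduce the
    least-squares error of predicting y[t] from y[t-1] alone?
    Returns restricted_error / full_error; values well above 1 suggest
    the text feature x carries predictive information about y."""
    Y = y[1:]
    restricted = np.column_stack([np.ones(len(Y)), y[:-1]])          # y history only
    full = np.column_stack([np.ones(len(Y)), y[:-1], x[:-1]])        # plus text feature
    def sse(A):
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return float(r @ r)
    return sse(restricted) / sse(full)

rng = np.random.default_rng(0)
x = rng.normal(size=200)           # e.g. a daily sentiment score from text
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):            # y is driven by yesterday's x plus noise
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(lag1_granger_ratio(y, x))    # substantially > 1 here: x helps predict y
```

A full treatment would use a proper F-test over multiple lags (e.g. `statsmodels.tsa.stattools.grangercausalitytests`) and repeat the check for each candidate text feature (n-grams, topics, sentiments).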
Proceedings of QG2010: The Third Workshop on Question Generation
These are the peer-reviewed proceedings of "QG2010, The Third Workshop on Question Generation". The workshop included a special track for "QGSTEC2010: The First Question Generation Shared Task and Evaluation Challenge".
QG2010 was held as part of The Tenth International Conference on Intelligent Tutoring Systems (ITS2010).
Joint Video and Text Parsing for Understanding Events and Answering Queries
We propose a framework for parsing video and text jointly for understanding
events and answering user queries. Our framework produces a parse graph that
represents the compositional structures of spatial information (objects and
scenes), temporal information (actions and events) and causal information
(causalities between events and fluents) in the video and text. The knowledge
representation of our framework is based on a spatial-temporal-causal And-Or
graph (S/T/C-AOG), which jointly models possible hierarchical compositions of
objects, scenes and events as well as their interactions and mutual contexts,
and specifies the prior probabilistic distribution of the parse graphs. We
present a probabilistic generative model for joint parsing that captures the
relations between the input video/text, their corresponding parse graphs and
the joint parse graph. Based on the probabilistic model, we propose a joint
parsing system consisting of three modules: video parsing, text parsing and
joint inference. Video parsing and text parsing produce two parse graphs from
the input video and text respectively. The joint inference module produces a
joint parse graph by performing matching, deduction and revision on the video
and text parse graphs. The proposed framework has the following objectives:
Firstly, we aim at deep semantic parsing of video and text that goes beyond the
traditional bag-of-words approaches; Secondly, we perform parsing and reasoning
across the spatial, temporal and causal dimensions based on the joint S/T/C-AOG
representation; Thirdly, we show that deep joint parsing facilitates subsequent
applications such as generating narrative text descriptions and answering
queries in the forms of who, what, when, where and why. We empirically
evaluated our system based on comparison against ground truth as well as
accuracy of query answering, and obtained satisfactory results.
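The And-Or graph underlying the S/T/C-AOG representation can be sketched as a small recursive structure: an And-node decomposes an entity into all of its parts, while an Or-node selects one alternative. The node labels below are invented examples, not taken from the paper, and the sketch omits the probabilistic priors the paper attaches to parse graphs.

```python
# Illustrative And-Or graph sketch: enumerating the alternative parse
# graphs (as leaf-label sets) that a tiny event AOG admits.

class Node:
    def __init__(self, label, kind="leaf", children=None):
        self.label = label
        self.kind = kind               # "and", "or", or "leaf"
        self.children = children or []

def parse_choices(node):
    """Enumerate the leaf-label sets of every parse the AOG admits."""
    if node.kind == "leaf":
        return [[node.label]]
    if node.kind == "and":             # combine one choice from each child
        combos = [[]]
        for child in node.children:
            combos = [c + extra for c in combos for extra in parse_choices(child)]
        return combos
    # "or": any single child's parses stand on their own
    return [p for child in node.children for p in parse_choices(child)]

event = Node("drink", "and", [
    Node("agent", "or", [Node("person"), Node("robot")]),
    Node("object", "or", [Node("cup"), Node("bottle")]),
])
print(parse_choices(event))            # 2 x 2 = 4 alternative parses
```

Joint inference, in these terms, amounts to picking the Or-branch assignments best supported by both the video parse and the text parse.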
Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering
We present a new kind of question answering dataset, OpenBookQA, modeled
after open book exams for assessing human understanding of a subject. The open
book that comes with our questions is a set of 1329 elementary level science
facts. Roughly 6000 questions probe an understanding of these facts and their
application to novel situations. This requires combining an open book fact
(e.g., metals conduct electricity) with broad common knowledge (e.g., a suit of
armor is made of metal) obtained from other sources. While existing QA datasets
over documents or knowledge bases, being generally self-contained, focus on
linguistic understanding, OpenBookQA probes a deeper understanding of both the
topic---in the context of common knowledge---and the language it is expressed
in. Human performance on OpenBookQA is close to 92%, but many state-of-the-art
pre-trained QA methods perform surprisingly poorly, worse than several simple
neural baselines we develop. Our oracle experiments designed to circumvent the
knowledge retrieval bottleneck demonstrate the value of both the open book and
additional facts. We leave it as a challenge to solve the retrieval problem in
this multi-hop setting and to close the large gap to human performance.
Comment: Published as a conference long paper at EMNLP 201
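The combination the dataset targets — an open-book fact plus broad common knowledge — is exactly what trivial baselines fail to capture. For contrast, here is a toy token-overlap baseline (an illustrative sketch only, not one of the paper's baselines); the fact and question strings paraphrase the example in the abstract.

```python
# Toy retrieval baseline: score each answer choice by its token overlap
# with the question plus one retrieved open-book fact.  Real systems must
# additionally bridge to common knowledge (e.g. armor is made of metal).

def tokens(text):
    return set(text.lower().replace("?", "").split())

def best_choice(question, fact, choices):
    support = tokens(question) | tokens(fact)
    return max(choices, key=lambda c: len(tokens(c) & support))

fact = "a metal object conducts electricity"
question = "Which object would conduct electricity?"
choices = ["a metal suit of armor", "a wooden spoon", "a glass cup"]
print(best_choice(question, fact, choices))   # → a metal suit of armor
```

This succeeds only because the choice shares a surface token with the fact; OpenBookQA's questions are built so that such lexical shortcuts usually fail, which is why multi-hop retrieval and reasoning are required.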