Extracting Formal Models from Normative Texts
We are concerned with the analysis of normative texts - documents based on
the deontic notions of obligation, permission, and prohibition. Our goal is to
make queries about these notions and verify that a text satisfies certain
properties concerning causality of actions and timing constraints. This
requires taking the original text and building a representation (model) of it
in a formal language, in our case the C-O Diagram formalism. We present an
experimental, semi-automatic aid that helps to bridge the gap between a
normative text in natural language and its C-O Diagram representation. Our
approach consists of using dependency structures obtained from the
state-of-the-art Stanford Parser, and applying our own rules and heuristics in
order to extract the relevant components. The result is a tabular data
structure where each sentence is split into suitable fields, which can then be
converted into a C-O Diagram. The process is not fully automatic however, and
some post-editing is generally required of the user. We apply our tool and
perform experiments on documents from different domains, and report an initial
evaluation of the accuracy and feasibility of our approach.

Comment: Extended version of conference paper at the 21st International Conference on Applications of Natural Language to Information Systems (NLDB 2016). arXiv admin note: substantial text overlap with arXiv:1607.0148
A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures
This deliverable is a survey of the IT techniques relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Although the deliverable's authors have tried to avoid overly technical language and to explain every concept referred to, the deliverable may still seem rather technical to readers not yet familiar with the techniques it describes.
From Finite Automata to Regular Expressions and Back--A Summary on Descriptional Complexity
The equivalence of finite automata and regular expressions dates back to the
seminal paper of Kleene on events in nerve nets and finite automata from 1956.
In the present paper we tour a fragment of the literature and summarize results
on upper and lower bounds on the conversion of finite automata to regular
expressions and vice versa. We also briefly recall the known bounds for the
removal of spontaneous transitions (epsilon-transitions) on non-epsilon-free
nondeterministic devices. Moreover, we report on recent results on the average
case descriptional complexity bounds for the conversion of regular expressions
to finite automata and brand new developments on the state elimination
algorithm that converts finite automata to regular expressions.

Comment: In Proceedings AFL 2014, arXiv:1405.527