Semantic process mining tools: core building blocks
Process mining aims at discovering new knowledge based on information hidden in event logs. Two important enablers for such analysis are powerful process mining techniques and the omnipresence of event logs in today's information systems. Most information systems supporting (structured) business processes (e.g., ERP, CRM, and workflow systems) record events in some form (e.g., transaction logs, audit trails, and database tables). Process mining techniques use event logs for all kinds of analysis, e.g., auditing, performance analysis, and process discovery. Although current process mining techniques and tools are quite mature, the analysis they support is somewhat limited because it is based purely on the labels in logs. These techniques therefore cannot benefit from the actual semantics behind the labels, which could enable more accurate and robust analysis. Existing analysis techniques are purely syntax-oriented, i.e., much time is spent on filtering, translating, interpreting, and modifying event logs for a particular question. This paper presents the core building blocks necessary to enable semantic process mining techniques and tools. Although the approach is highly generic, we focus on a particular process mining technique and show how it can be extended and implemented in the ProM framework.
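The difference between label-based and semantics-aware analysis can be illustrated with a toy sketch (this is not the ProM API; the event log, labels, and ontology below are invented for illustration):

```python
# Toy illustration of syntactic vs. semantic event-log filtering.
# A miniature event log: each trace is a list of activity labels.
event_log = [
    ["register request", "check ticket", "decide", "reject request"],
    ["file claim", "examine ticket", "decide", "pay compensation"],
]

def filter_syntactic(log, label):
    """Purely label-based: only exact string matches are found."""
    return [t for t in log if label in t]

# A hypothetical ontology mapping labels to shared concepts; with such
# annotations, traces that use different labels for the same task match.
ontology = {
    "register request": "Intake", "file claim": "Intake",
    "check ticket": "Review", "examine ticket": "Review",
    "decide": "Decision",
    "reject request": "Outcome", "pay compensation": "Outcome",
}

def filter_semantic(log, concept):
    """Concept-based: match any label annotated with the concept."""
    return [t for t in log if any(ontology.get(e) == concept for e in t)]

print(len(filter_syntactic(event_log, "check ticket")))  # 1: exact label only
print(len(filter_semantic(event_log, "Review")))         # 2: both traces match
```

The semantic filter answers the analyst's question ("all traces with a review step") directly, where the syntactic one would require manually enumerating every label variant.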
Learning Linear Temporal Properties
We present two novel algorithms for learning formulas in Linear Temporal
Logic (LTL) from examples. The first learning algorithm reduces the learning
task to a series of satisfiability problems in propositional Boolean logic and
produces a smallest LTL formula (in terms of the number of subformulas) that is
consistent with the given data. Our second learning algorithm, on the other
hand, combines the SAT-based learning algorithm with classical algorithms for
learning decision trees. The result is a learning algorithm that scales to
real-world scenarios with hundreds of examples, but can no longer guarantee to
produce minimal consistent LTL formulas. We compare both learning algorithms
and demonstrate their performance on a wide range of synthetic benchmarks.
Additionally, we illustrate their usefulness on the task of understanding
executions of a leader election protocol.
A Monte Carlo simulation of the Sudbury Neutrino Observatory proportional counters
The third phase of the Sudbury Neutrino Observatory (SNO) experiment added an
array of 3He proportional counters to the detector. The purpose of this Neutral
Current Detection (NCD) array was to observe neutrons resulting from
neutral-current solar neutrino-deuteron interactions. We have developed a
detailed simulation of the current pulses from the NCD array proportional
counters, from the primary neutron capture on 3He through the NCD array
signal-processing electronics. This NCD array Monte Carlo simulation was used
to model the alpha-decay background in SNO's third-phase 8B solar-neutrino
measurement. Comment: 38 pages; submitted to the New Journal of Physics
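The simulation chain described above can be caricatured in a few lines. This is a schematic toy, not the SNO collaboration's code: the drift model, shaping constant, and geometry are invented placeholders; only the n + 3He -> p + t reaction energy is a physical constant.

```python
# Toy Monte Carlo: neutron capture -> ionization arrivals -> shaped pulse.
import random

Q_KEV = 764.0          # energy released in n + 3He -> p + t (573 + 191 keV)
DRIFT_SIGMA_US = 0.3   # toy longitudinal drift smearing (placeholder value)
TAU_SHAPING_US = 1.0   # toy RC shaping time constant (placeholder value)

def simulate_pulse(n_ionization=200, dt_us=0.05, n_samples=200, rng=random):
    """One toy event: ionization arrival times -> shaped waveform."""
    # Primary ionization from the p/t tracks, smeared by drift to the wire.
    arrivals = [abs(rng.gauss(1.0, DRIFT_SIGMA_US)) for _ in range(n_ionization)]
    raw = [0.0] * n_samples
    for t in arrivals:
        k = int(t / dt_us)
        if k < n_samples:
            raw[k] += Q_KEV / n_ionization   # equal energy per electron (toy)
    # Simple RC low-pass as a stand-in for the readout electronics.
    shaped, y = [], 0.0
    alpha = dt_us / (TAU_SHAPING_US + dt_us)
    for x in raw:
        y += alpha * (x - y)
        shaped.append(y)
    return shaped

random.seed(1)
pulse = simulate_pulse()
print(f"peak sample: {max(pulse):.2f} keV-equivalent")
```

The real simulation models track physics, space charge, and the full electronics chain; the point of the sketch is only the pipeline structure: sample physics, propagate to the wire, then convolve with an electronics response.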
Correct-by-Construction Tactical Planners for Automated Cars
One goal of developing automated cars is to completely free people from driving tasks. Automated cars that require no human driver need to handle all traffic situations that a human driver is expected to handle, and possibly more. Although human drivers cause a lot of traffic accidents, they still have a very low accident and failure rate that automated systems must match.
Tactical planners are responsible for making discrete decisions during the coming seconds or minute. As with all subsystems in an automated car, these planners need to be supported by a credible and convincing argument of their correctness. The planners' decisions affect the environment, and the planners need to interact with other road users in a feedback loop, so the correctness of the planners depends on their behavior in relation to other drivers and the environment over time. One possibility to ascertain their correctness is to deploy the planners in real traffic; to be sufficiently certain that a tactical planner is safe by that method, it would need to be tested for 255 million miles without having an accident.
Formal methods can, in contrast to testing, mathematically prove that the requirements are fulfilled. Hence, they are a promising alternative for making credible arguments about tactical planners' correctness. The topic of this thesis is how formal methods can be used in the automotive industry to design safe tactical planners. What is interesting is both how automotive systems should be modeled in formal frameworks, and how formal methods can be used practically within the automotive development process.
The main findings of this thesis are that it is natural to express desired properties of tactical planners in formal languages and to use formal methods to prove their correctness.
Model Checking, Reactive Synthesis, and Supervisory Control Theory have been used in the design and development process of tactical planners, and all three methods have their benefits, depending on the application.
Formal synthesis is an especially interesting class of formal methods because it can automatically generate a planner from requirements and models. Formal synthesis removes the need to manually develop and implement the planner, so development effort can be directed to formalizing good requirements on the planner and good assumptions about the environment. However, formal synthesis has two limitations: the resulting planner is a black box that is difficult to inspect, and it is difficult to find a level of abstraction that allows both detailed requirements and generic planners.
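What "formal synthesis" means here can be sketched on a toy finite model rather than an automotive one: given a transition system where the planner chooses the actions, compute the largest set of states from which the unsafe set can always be avoided (a safety-game fixed point, as in supervisory control and reactive synthesis). The states and actions below are invented for illustration.

```python
# Toy transition system: state -> {action: next_state}.
transitions = {
    "cruise":    {"keep": "cruise", "overtake": "alongside"},
    "alongside": {"accelerate": "ahead", "hesitate": "crash"},
    "ahead":     {"merge": "cruise"},
    "crash":     {},
}
unsafe = {"crash"}

def synthesize_safe_region(transitions, unsafe):
    """Greatest fixed point: states with some action staying safe forever."""
    safe = set(transitions) - unsafe
    while True:
        # A state survives if at least one of its actions leads back into
        # the current safe set (the planner gets to pick the action).
        keep = {s for s in safe
                if any(nxt in safe for nxt in transitions[s].values())}
        if keep == safe:
            return safe
        safe = keep

print(sorted(synthesize_safe_region(transitions, unsafe)))
# -> ['ahead', 'alongside', 'cruise']
```

Restricting the planner to actions that stay inside the computed region yields a controller that is safe by construction; the "black box" limitation mentioned above arises because, at realistic model sizes, the synthesized strategy is far too large to inspect by hand.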
The succinctness of first-order logic on linear orders
Succinctness is a natural measure for comparing the strength of different logics. Intuitively, a logic L_1 is more succinct than another logic L_2 if all properties that can be expressed in L_2 can be expressed in L_1 by formulas of (approximately) the same size, but some properties can be expressed in L_1 by (significantly) smaller formulas.
We study the succinctness of logics on linear orders. Our first theorem is concerned with the finite variable fragments of first-order logic. We prove that:
(i) Up to a polynomial factor, the 2- and the 3-variable fragments of first-order logic on linear orders have the same succinctness. (ii) The 4-variable fragment is exponentially more succinct than the 3-variable fragment. Our second main result compares the succinctness of first-order logic on linear orders with that of monadic second-order logic. We prove that the fragment of monadic second-order logic that has the same expressiveness as first-order logic on linear orders is non-elementarily more succinct than first-order logic.
Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas
We demonstrate a reinforcement learning agent which uses a compositional
recurrent neural network that takes as input an LTL formula and determines
satisfying actions. The input LTL formulas have never been seen before, yet the
network performs zero-shot generalization to satisfy them. This is a novel form
of multi-task learning for RL agents where agents learn from one diverse set of
tasks and generalize to a new set of diverse tasks. The formulation of the
network enables this capacity to generalize. We demonstrate this ability in two
domains. In a symbolic domain, the agent finds a sequence of letters that is
accepted. In a Minecraft-like environment, the agent finds a sequence of
actions that conform to the formula. While prior work could learn to execute
one formula reliably given examples of that formula, we demonstrate how to
encode all formulas reliably. This could form the basis of new multitask agents
that discover sub-tasks and execute them without any additional training, as
well as agents that follow more complex linguistic commands. The structures
required for this generalization are specific to LTL formulas, which opens up
an interesting theoretical question: what structures are required in neural
networks for zero-shot generalization to different logics? Comment: Accepted in IROS 202
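The compositional idea can be sketched in the symbolic domain: assemble the agent to mirror the parse tree of the input formula, so that unseen formulas are handled by recombining per-operator components. The paper's components are learned recurrent modules; the stand-ins below are hand-written formula-progression rules (a simplified version that drops partially progressed branches), so this is an illustration of the structure, not the learned system.

```python
# Toy compositional agent: pick letters that progress an LTL-like formula.
LETTERS = ["a", "b", "c"]
TRUE, FALSE = ("true",), ("false",)

def progress(f, letter):
    """Rewrite formula f after observing `letter` (one step)."""
    op = f[0]
    if op in ("true", "false"):
        return f
    if op == "ap":
        return TRUE if f[1] == letter else FALSE
    if op == "X":
        return f[1]
    if op == "F":   # simplified: F g stays F g unless g is discharged now
        return TRUE if progress(f[1], letter) == TRUE else f
    if op == "and":
        l, r = progress(f[1], letter), progress(f[2], letter)
        if FALSE in (l, r):
            return FALSE
        if l == TRUE:
            return r
        if r == TRUE:
            return l
        return ("and", l, r)
    raise ValueError(op)

def run(formula, max_steps=10):
    """Greedy agent: emit letters until the formula is discharged."""
    word = []
    for _ in range(max_steps):
        if formula == TRUE:
            return word
        nxts = [(letter, progress(formula, letter)) for letter in LETTERS]
        # Prefer a letter that discharges the formula outright, else any
        # letter that does not falsify it.
        choice = next(((l, g) for l, g in nxts if g == TRUE),
                      next(((l, g) for l, g in nxts if g != FALSE), None))
        if choice is None:
            return None
        word.append(choice[0])
        formula = choice[1]
    return word if formula == TRUE else None

# Satisfy "eventually b, and the second letter is c".
print(run(("and", ("F", ("ap", "b")), ("X", ("ap", "c")))))
# -> ['a', 'c', 'b']
```

The zero-shot property in the paper corresponds to the fact that `run` works for any formula built from the supported operators, because behavior is composed from the formula's structure rather than memorized per formula.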