Mining declarative models using time intervals
A common problem in process mining is the interpretation of the time stamp of events, e.g., whether it represents the moment of recording or the moment of occurrence. Often, this interpretation is left implicit. In this paper, we make this interpretation explicit using time intervals: an event occurs somewhere during a time window. The time window may be fine, e.g., a single point in time, or coarse, such as a day. As each event is related to an activity within some process, we obtain for each activity a set of intervals in which the activity occurred. Based on these sets of intervals, we define ordering and simultaneousness relations. These relations form the basis for the discovery of a declarative process model describing the behavior in the event log.
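The interval-based relations described above can be sketched in a few lines. This is an illustrative assumption of how such relations might be defined, not the paper's actual formalization; the activity names, interval endpoints, and the "definitely before" / "possibly simultaneous" semantics are invented for the example.

```python
# Hypothetical sketch: deriving ordering and overlap relations between
# activities from sets of time intervals. Each activity maps to a list of
# (start, end) windows in which it occurred.

def definitely_before(a, b):
    """Every interval of activity a ends before every interval of b starts."""
    return max(end for _, end in a) < min(start for start, _ in b)

def possibly_simultaneous(a, b):
    """Some interval of a overlaps some interval of b."""
    return any(s1 <= e2 and s2 <= e1 for s1, e1 in a for s2, e2 in b)

# Coarse time windows: activity A occurred on day 1, B on day 3, C on day 1.
A = [(1, 1)]
B = [(3, 3)]
C = [(1, 1)]
print(definitely_before(A, B))      # True
print(possibly_simultaneous(A, C))  # True
```

A coarser window (e.g., a whole day) simply widens the (start, end) pair, which makes `possibly_simultaneous` more permissive and `definitely_before` stricter.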
Business Process Configuration According to Data Dependency Specification
Configuration techniques have been used in several fields, such as the design of business process models. Sometimes these models depend on data dependencies, making it easier to describe what has to be done rather than how. Configuration models enable a declarative representation of business processes, deciding the most appropriate workflow in each case. Unfortunately, data dependencies among the activities, and how they can affect the correct execution of the process, have been overlooked in the declarative specifications and configurable systems found in the literature. In order to find the best process configuration for optimizing the execution time of processes according to data dependencies, we propose the use of the Constraint Programming paradigm, with the aim of obtaining an adaptable imperative model as a function of the data dependencies of the activities described declaratively. (Funding: Ministerio de Ciencia y Tecnología TIN2015-63502-C3-2-R; Fondo Europeo de Desarrollo Regional)
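The paper applies Constraint Programming; as a much simpler stand-in for that idea, the sketch below derives an execution schedule from data dependencies by computing earliest finish times over the dependency graph (a longest-path computation). Activity names, durations, and dependencies are invented for illustration and are not from the paper.

```python
# Illustrative sketch only: activities can run as soon as the activities
# producing their input data have finished, so the minimal execution time
# (makespan) is the length of the critical path through the dependencies.
from functools import lru_cache

durations = {"A": 2, "B": 3, "C": 1, "D": 2}
# deps[x] = activities whose output data x needs before it can start
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

@lru_cache(maxsize=None)
def earliest_finish(act):
    start = max((earliest_finish(d) for d in deps[act]), default=0)
    return start + durations[act]

makespan = max(earliest_finish(a) for a in durations)
print(makespan)  # 7: A -> (B || C) -> D, critical path A, B, D
```

A real CP formulation would additionally explore alternative configurations (e.g., optional activities or alternative data sources) rather than a single fixed dependency graph.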
Data-aware Synthetic Log Generation for Declarative Process Models
In Business Process Management, process mining is a class of techniques for learning process structure from an execution log. This structure is represented as a process model, either procedural or declarative. Examples of declarative languages are Declare, DPIL and DCR Graphs. In order to test and improve process mining algorithms, many logs with different parameters are required, and it is not always possible to obtain enough real logs. This is where artificial logs are useful. Techniques exist for log generation from DPIL and Declare-based models, but there are no tools for generating logs from MP-Declare, a multi-perspective version of Declare with data support. This thesis introduces an approach to log generation from MP-Declare models using two different model checkers: Alloy and NuSMV. In order to improve performance, we applied optimizations to baseline approaches available in the literature. All of the discussed techniques are implemented and tested using existing conformance checking tools and our own tests. To evaluate the performance of our generators and compare them with existing ones, we measured the time required for generating a log and how it changes with different parameters and models. We also designed several metrics for computing log variability, and applied them to the reviewed generators.
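To make the log-generation idea concrete: a naive alternative to the thesis's model-checking approach (Alloy, NuSMV) is to enumerate candidate traces and keep those satisfying the constraints. The sketch below does this for a single Declare "response" constraint (every occurrence of a is eventually followed by b); the alphabet and trace length are illustrative assumptions.

```python
# Minimal sketch, not the thesis tool: brute-force generation of a
# synthetic log of all length-3 traces satisfying response(a, b).
from itertools import product

def satisfies_response(trace, a, b):
    pending = False  # True while an occurrence of a awaits a later b
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

alphabet = ["a", "b", "c"]
log = [t for t in product(alphabet, repeat=3)
       if satisfies_response(t, "a", "b")]
print(len(log))  # 14 of the 27 candidate traces satisfy the constraint
```

Enumeration explodes combinatorially with trace length and alphabet size, which is precisely why the thesis relies on model checkers and on optimizing the baseline approaches.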
Design-time Models for Resiliency
Resiliency in process-aware information systems is based on the availability of recovery flows and alternative data for coping with missing data. In this paper, we discuss an approach to process and information modeling to support the specification of recovery flows and alternative data. In particular, we focus on processes using sensor data from different sources. The proposed model can be adopted to specify resiliency levels of information systems, based on event-based and temporal constraints.
Dura
The reactive event processing language developed in the context of this project was called DEAL in previous documents. When we chose this name for our language, it was not being used by other authors working in the same research area (complex event processing). In the meantime, however, it has appeared in publications by other authors, and because we have not yet used the name in publications ourselves, we cannot claim to have been the first to use it. In order to avoid ambiguities and name conflicts in future publications, we decided to rename our language to Dura, which stands for "Declarative uniform reactive event processing language". The title of this deliverable has therefore been updated to "Dura - Concepts and Examples".
Logistic Knowledge Tracing: A Constrained Framework for Learner Modeling
Adaptive learning technology solutions often use a learner model to trace
learning and make pedagogical decisions. The present research introduces a
formalized methodology for specifying learner models, Logistic Knowledge
Tracing (LKT), that consolidates many extant learner modeling methods. The
strength of LKT is the specification of a symbolic notation system for
alternative logistic regression models that is powerful enough to specify many
extant models in the literature and many new models. To demonstrate the
generality of LKT, we fit 12 models, some variants of well-known models and
some newly devised, to 6 learning technology datasets. The results indicated
that no single learner model was best in all cases, further justifying a broad
approach that considers multiple learner model features and the learning
context. The models presented here avoid student-level fixed parameters to
increase generalizability. We also introduce features to stand in for these
intercepts. We argue that to be maximally applicable, a learner model needs to
adapt to student differences, rather than needing to be pre-parameterized with
the level of each student's ability.
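The logistic-regression core behind the learner models that LKT generalizes can be sketched as follows. This is a hedged illustration of an AFM/PFA-style feature set (per-skill success and failure counts); the coefficients and feature choices are invented for the example, not fit to the paper's datasets, and the function name is our own.

```python
# Sketch: probability of a correct response as a logistic function of
# prior successes and failures on the skill being practiced.
import math

def predict_correct(successes, failures, intercept=-0.5,
                    w_success=0.3, w_failure=-0.2):
    logit = intercept + w_success * successes + w_failure * failures
    return 1.0 / (1.0 + math.exp(-logit))

# Predicted probability of a correct answer rises with prior successes.
print(round(predict_correct(0, 0), 3))  # 0.378
print(round(predict_correct(5, 1), 3))  # 0.69
```

Note that this toy model has no student-level intercept, in the spirit of the abstract's point: rather than a pre-parameterized per-student ability term, observable practice-history features carry the individual differences.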
A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures
This deliverable is a survey of the IT techniques that are relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Even though the deliverable's authors have tried to avoid overly technical language and to explain every concept referred to, the deliverable might still seem rather technical to readers not yet familiar with the techniques it describes.