396 research outputs found
Enhancing declarative process models with DMN decision logic
Modeling dynamic, human-centric, non-standardized, and knowledge-intensive business processes with imperative process modeling approaches is very challenging. Declarative process modeling approaches are more appropriate for such processes, as they offer the run-time flexibility these cases typically require. However, by means of a realistic healthcare process in this category, we demonstrate in this paper that current declarative approaches do not capture all the details needed. More specifically, they lack a way to model decision logic, which is essential to fully capture these processes. We propose a new declarative language, Declare-R-DMN, which combines the declarative process modeling language Declare-R with the recently adopted OMG standard Decision Model and Notation (DMN). Aside from supporting the functionality of both languages, Declare-R-DMN also creates bridges between them. We show that using this language results in process models that encapsulate much more knowledge, while still offering the same flexibility.
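To give a flavour of the declarative side of such a combination, the sketch below checks a Declare-style "response" constraint over an event trace. This is a minimal illustration in plain Python, not the Declare-R-DMN syntax; the event names are invented.

```python
# Sketch: checking a Declare-style response(a, b) constraint over a trace.
# response(a, b): every occurrence of activity a is eventually followed by b.

def response_holds(trace, a, b):
    """Return True if every a in the trace is eventually followed by a b."""
    pending = False
    for event in trace:
        if event == a:
            pending = True       # an a occurred; a later b is now required
        elif event == b:
            pending = False      # the obligation is discharged
    return not pending

trace = ["register", "examine", "treat", "discharge"]
print(response_holds(trace, "examine", "treat"))    # True
print(response_holds(trace, "discharge", "treat"))  # False
```

Because constraints like this only restrict orderings rather than prescribe a fixed control flow, any trace not violating them is allowed, which is the source of the run-time flexibility the abstract refers to.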
Integrating BPMN and DMN: Modeling and Analysis
The operational backbone of modern organizations is the target of business process management, where business process models are produced to describe how the organization should react to events and coordinate the execution of activities so as to satisfy its business goals. At the same time, operational decisions are made by considering internal and external contextual factors, according to decision models that are typically based on declarative, rule-based specifications describing how input configurations correspond to output results. The increasing importance and maturity of these two intertwined dimensions, processes and decisions, have led to a wide range of data-aware models and associated methodologies, such as BPMN for processes and DMN for operational decisions. While it is important to analyze these two aspects independently, several authors have pointed out that it is also crucial to analyze them in combination. In this paper, we provide a native, formal definition of DBPMN models, namely data-aware and decision-aware processes that build on BPMN and DMN S-FEEL, illustrating their use and giving their formal execution semantics via an encoding into Data Petri nets (DPNs). By exploiting this encoding, we then build on previous work in which we lifted the classical notion of soundness of processes to this richer, data-aware setting, and show how the abstraction and verification techniques devised for DPNs can be directly used for DBPMN models. This paves the way towards even richer forms of analysis, beyond assessing soundness, that are based on the same techniques.
Context-Aware Verification of DMN
The Decision Model and Notation (DMN) standard is a user-friendly notation for decision logic. Many tools are available to verify the correctness of DMN decision tables. However, most of these look at a table in isolation, with little or no regard for its context. In this work, we argue for the importance of context, and extend the formal verification criteria to include it. We identify two forms of context, namely in-model context and background knowledge. We also present our own context-aware verification tool, implemented in our DMN-IDP interface, and show that this context-aware approach allows us to perform more thorough verification than any other available tool.
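Two of the classic table-level checks such verifiers perform are overlap detection (two rules matching the same input) and completeness (no input left unmatched). A minimal sketch for a one-input table, where each rule condition is a half-open numeric interval; the rule sets and domain are illustrative, not taken from the paper:

```python
# Sketch: verifying a one-input decision table for overlapping and missing rules.
# Rules are (low, high) half-open intervals over a numeric input.

def find_overlaps(rules):
    """Return pairs of rule indices whose input intervals intersect."""
    overlaps = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            (lo1, hi1), (lo2, hi2) = rules[i], rules[j]
            if max(lo1, lo2) < min(hi1, hi2):
                overlaps.append((i, j))
    return overlaps

def find_gaps(rules, domain):
    """Return sub-intervals of the domain covered by no rule."""
    lo, hi = domain
    points = sorted({lo, hi} | {p for r in rules for p in r if lo <= p <= hi})
    gaps = []
    for a, b in zip(points, points[1:]):
        if not any(rlo <= a and b <= rhi for rlo, rhi in rules):
            gaps.append((a, b))
    return gaps

rules = [(0, 50), (40, 100)]                      # hypothetical age bands
print(find_overlaps(rules))                        # [(0, 1)]: both match ages 40-49
print(find_gaps([(0, 50), (60, 100)], (0, 100)))   # [(50, 60)]: no rule covers 50-59
```

Context-awareness, as the abstract argues, goes beyond this: a gap or overlap may be harmless if in-model context or background knowledge rules out the offending inputs.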
AI-Enhanced Hybrid Decision Management
The Decision Model and Notation (DMN) modeling language allows the precise specification of business decisions and business rules. DMN is readily understandable by business users involved in decision management. However, as the models grow complex, the cognitive abilities of humans threaten manual maintainability and comprehensibility. Proper design of the decision logic thus requires comprehensive automated analysis of, e.g., all possible cases the decision shall cover, correlations between inputs and outputs, and the importance of inputs for deriving the output. In the paper, the authors explore the mutual benefits of combining human-driven DMN decision modeling with the computational power of Artificial Intelligence for DMN model analysis and improved comprehension. The authors propose a model-driven approach that uses DMN models to generate Machine Learning (ML) training data and show how the trained ML models can inform human decision modelers by superimposing the feature importance within the original DMN models. An evaluation with multiple real DMN models from an insurance company demonstrates the feasibility and utility of the approach.
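The pipeline can be sketched in miniature: enumerate the input space of a rule table to generate labelled training data, then estimate per-input importance. The decision table, inputs, and the crude flip-based importance measure below are all invented for illustration; the paper trains actual ML models instead.

```python
# Sketch: generate training data from a DMN-like rule table, then estimate each
# input's importance as the fraction of samples where perturbing only that input
# changes the decision. Table and thresholds are hypothetical.

import itertools

def decide(age, smoker):
    # Hypothetical unique-hit decision table for an insurance premium class.
    if age >= 60 or smoker:
        return "high"
    return "low"

# 1. Generate labelled training data by enumerating the input space.
samples = [((age, smoker), decide(age, smoker))
           for age, smoker in itertools.product(range(18, 80), [False, True])]

# 2. Crude importance: how often does perturbing one input flip the output?
def importance(flip):
    flips = sum(decide(*flip(age, smoker)) != label
                for (age, smoker), label in samples)
    return flips / len(samples)

age_imp = importance(lambda a, s: (a + 25, s))    # shift age by 25 years
smoker_imp = importance(lambda a, s: (a, not s))  # toggle the smoker flag
print(f"age: {age_imp:.2f}, smoker: {smoker_imp:.2f}")  # age: 0.20, smoker: 0.68
```

Superimposing such scores on the original table would, as the abstract suggests, point the modeler at the inputs that actually drive the decision.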
Evolving Graphs with Semantic Neutral Drift
We introduce the concept of Semantic Neutral Drift (SND) for genetic programming (GP), where we exploit equivalence laws to design semantics-preserving mutations guaranteed to preserve individuals' fitness scores. A number of digital circuit benchmark problems have been implemented with rule-based graph programs and empirically evaluated, demonstrating quantitative improvements in evolutionary performance. Analysis reveals that the benefits of the designed SND reside in more complex processes than simple growth of individuals, and that there are circumstances where it is beneficial to choose otherwise detrimental parameters for a GP system if that facilitates the inclusion of SND.
Applying the Decision Model and Notation in Practice: A Method to Design and Specify Business Decisions and Business Logic
Proper decision-making is one of the most important capabilities of an organization. It is therefore important to make explicit all decisions that are relevant for an organization to manage. In 2015 the Object Management Group published the Decision Model and Notation (DMN) standard, which focuses on modelling business decisions and the underlying business logic. DMN is being adopted at an increasing rate; however, theory does not adequately cover activities or methods to guide practitioners modelling with DMN. To tackle this problem, this paper presents a method to guide the modelling process of business decisions with DMN. The method has been validated and improved in an experiment with thirty participants. Building on this method, future research could focus on further validation and improvement with more participants from different industries.
On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems
Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data have also become the cornerstone of strategic decision-making processes in businesses. For this purpose, numerous techniques exist for extracting knowledge and value from data. For example, optimisation algorithms excel at supporting decision-making processes that improve the use of resources, time, and costs in the organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. The support of Big Data technologies (which are based on distributed environments) is therefore required, given the volume, variety, and velocity of the data. To extract value from the data, a set of techniques or activities is then applied in an orderly way and at different stages. This set of techniques or activities, which facilitates the acquisition, preparation, and analysis of data, is known in the literature as Big Data pipelines.
In this thesis, improvements to three stages of Big Data pipelines are tackled: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, focusing on each stage, or from a more complex, global perspective that implies coordinating these stages to create data workflows.
The first stage to improve is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with various levels of nested structures, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. This thesis therefore aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases: one is a general-purpose data transformation language, while the other is aimed at extracting event logs in a standard format for process mining algorithms.
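The kind of transformation such a Data Preparation language targets can be illustrated by flattening records with nested structures into flat columns. The DSL itself is not shown here; this is plain Python sketching the underlying operation, with an invented example record.

```python
# Sketch: flattening nested dicts and arrays into dotted/indexed column names,
# the kind of complex-structure transformation a Data Preparation DSL automates.

def flatten(record, prefix=""):
    """Recursively flatten nested dicts and lists into a flat dict."""
    flat = {}
    if isinstance(record, dict):
        for key, value in record.items():
            flat.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(record, list):
        for i, value in enumerate(record):
            flat.update(flatten(value, f"{prefix}{i}."))
    else:
        flat[prefix.rstrip(".")] = record   # leaf value: emit a column
    return flat

order = {"id": 7, "lines": [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]}
print(flatten(order))
# {'id': 7, 'lines.0.sku': 'A', 'lines.0.qty': 2, 'lines.1.sku': 'B', 'lines.1.qty': 1}
```

A DSL lets analysts express such reshaping declaratively instead of writing recursive traversal code by hand for every new schema.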
The second area for improvement concerns the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results. Optimisation algorithms are a clear example: if the data are not sufficiently accurate and complete, the search space can be severely affected. This thesis therefore formulates a methodology for modelling Data Quality rules adjusted to the context of use, as well as a tool that facilitates the automation of their assessment. This makes it possible to discard the data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps select actions to improve the usability of the data.
The third and last proposal involves the Data Analysis stage. Here, this thesis faces the challenge of supporting the use of optimisation problems in Big Data pipelines. There is a lack of methodological solutions for computing exhaustive optimisation problems (i.e., those that guarantee finding an optimal solution by exploring the whole search space) in distributed environments. Solving this type of problem in the Big Data context is computationally complex, and can be NP-complete, for two reasons. On the one hand, the search space can grow significantly as the amount of data processed by the optimisation algorithms increases; this challenge is addressed through a technique for generating and grouping problems with distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial, so a proposal is presented for a particular case of this type of scenario.
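The map-reduce shape of distributing an exhaustive search can be sketched as follows: partition the search space, solve each partition independently (as an executor would in a cluster), and reduce to the global optimum. The problem, chunking scheme, and function names are illustrative, not the thesis's actual technique.

```python
# Sketch: exhaustive optimisation distributed over partitions of the search space.
# Each partition could run on a separate worker; here we loop sequentially.

def best_in_chunk(chunk, objective):
    """Exhaustively evaluate one partition of the search space (the 'map' step)."""
    return min(chunk, key=objective)

def distributed_minimum(search_space, objective, n_chunks=4):
    chunks = [search_space[i::n_chunks] for i in range(n_chunks)]
    local_optima = [best_in_chunk(c, objective) for c in chunks if c]
    return min(local_optima, key=objective)   # the 'reduce' step

space = list(range(-100, 101))
print(distributed_minimum(space, lambda x: (x - 7) ** 2))   # 7
```

Because every point is evaluated, the result is guaranteed optimal; the engineering challenge the thesis addresses is doing this when the search space itself is defined over distributed data.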
As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools that are integrated with the Apache Spark data processing engine. The solutions have been validated through tests and use cases with real datasets.
DMN for Data Quality Measurement and Assessment
Data Quality assessment is aimed at evaluating the suitability of a dataset for an intended task. The extensive literature on data quality describes various methodologies for assessing data quality by means of data profiling techniques over whole datasets. Our investigations aim to provide solutions to the need for automatically assessing the level of quality of the individual records of a dataset, where data profiling tools do not provide an adequate level of information. As it is often easier to describe when a record has sufficient quality than to calculate a qualitative indicator, we propose a semi-automatic, business-rule-guided data quality assessment methodology for every record. This involves first listing the business rules that describe the data (data requirements), then those describing how to produce measures (business rules for data quality measurements), and finally those defining how to assess the level of data quality of a dataset (business rules for data quality assessment). The main contribution of this paper is the adoption of the OMG standard DMN (Decision Model and Notation) to support the description of data quality requirements and their automatic assessment using existing DMN engines.
Ministerio de Ciencia y Tecnología RTI2018-094283-B-C33; Ministerio de Ciencia y Tecnología RTI2018-094283-B-C31; European Regional Development Fund SBPLY/17/180501/00029
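In the spirit of this approach, a record-level quality decision can be written as a DMN-style decision table with a unique hit policy: each record's attributes feed business rules that output a quality level. The rules, fields, and thresholds below are invented for illustration; a real deployment would express them in DMN and evaluate them with a DMN engine.

```python
# Sketch: a DMN-style, unique-hit decision table assigning a data quality level
# to each record. Field names and rules are hypothetical.

def dq_level(record):
    """Business rules for data quality assessment of a single record."""
    complete = all(record.get(f) is not None for f in ("name", "email", "age"))
    valid_age = isinstance(record.get("age"), int) and 0 <= record["age"] <= 120
    if complete and valid_age:
        return "usable"
    if complete:
        return "needs review"     # complete but fails a validity rule
    return "rejected"             # missing required fields

records = [
    {"name": "Ada", "email": "ada@example.org", "age": 36},
    {"name": "Bob", "email": "bob@example.org", "age": 999},
    {"name": None, "email": "x@example.org", "age": 20},
]
print([dq_level(r) for r in records])   # ['usable', 'needs review', 'rejected']
```

Expressing the same rules in DMN rather than code is what makes them inspectable by business users and executable by off-the-shelf DMN engines.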
- …