
    An evolutionary technique to approximate multiple optimal alignments

    The alignment of observed and modeled behavior is an essential aid for organizations, since it opens the door for root-cause analysis and enhancement of processes. The state-of-the-art technique for computing alignments has exponential time and space complexity, hindering its applicability to medium and large instances. Moreover, the existence of multiple optimal alignments is often perceived as a drawback, while in reality it may provide a more comprehensive picture of how the model explains the observed behavior, from which other techniques may benefit. This paper presents a novel evolutionary technique for approximating multiple optimal alignments. Remarkably, the memory footprint of the proposed technique is bounded, an unprecedented guarantee with respect to the state-of-the-art methods for the same task. The technique is implemented in a tool, and experiments on several benchmarks are provided.
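
    The abstract does not spell out the algorithm, so the following is only a hedged Python sketch of the general idea: candidate model runs are evolved against a single trace, each run is scored by the cost of its cheapest alignment with the trace, and a fixed-size population keeps the memory footprint bounded. The directly-follows model, the trace, the genetic operator, and all parameters are invented for the illustration.

```python
import random

# Toy directly-follows model: any walk from START to END is a model run.
# Model, trace and parameters are assumptions invented for this sketch.
MODEL = {"START": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d", "a"], "d": ["END"]}

def mutate(run, model, max_len=20):
    """Keep a random prefix of a valid run, then re-sample the rest by a
    random walk over the model (an empty run yields a fresh random run)."""
    keep = random.randrange(len(run) + 1)
    prefix = list(run[:keep])
    state = prefix[-1] if prefix else "START"
    while state != "END" and len(prefix) < max_len:
        state = random.choice(model[state])
        if state != "END":
            prefix.append(state)
    return tuple(prefix)

def alignment_cost(trace, run):
    """Cost of the cheapest alignment pairing trace and run: matching events
    are sync moves (cost 0); everything else is a log or model move (cost 1)."""
    prev = list(range(len(run) + 1))
    for i, a in enumerate(trace, 1):
        cur = [i]
        for j, b in enumerate(run, 1):
            best = min(prev[j] + 1, cur[j - 1] + 1)
            if a == b:
                best = min(best, prev[j - 1])
            cur.append(best)
        prev = cur
    return prev[-1]

def approximate_alignments(trace, model, pop_size=30, generations=150, keep=5):
    """Evolutionary search with a hard cap on the population, so the memory
    footprint stays bounded; ties at the best cost approximate the set of
    multiple optimal alignments."""
    population = {mutate((), model) for _ in range(pop_size)}
    for _ in range(generations):
        population |= {mutate(run, model) for run in population}
        population = set(sorted(population,
                                key=lambda r: alignment_cost(trace, r))[:pop_size])
    ranked = sorted(population, key=lambda r: alignment_cost(trace, r))
    best = alignment_cost(trace, ranked[0])
    return [r for r in ranked if alignment_cost(trace, r) == best][:keep]

trace = ("a", "x", "d")  # "x" is not a model activity: every run deviates
for run in approximate_alignments(trace, MODEL):
    print(run, alignment_cost(trace, run))
```

    On this toy input the runs ("a", "b", "d") and ("a", "c", "d") tie at the best cost, which is exactly the multiple-optima situation the paper argues is informative rather than problematic.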

    Approximate computation of alignments of business processes through relaxation labelling

    A fundamental problem in conformance checking is aligning event data with process models. Unfortunately, existing techniques for this task are either computationally complex or applicable only to restricted classes of models. In practice, this means that for large inputs, current techniques often fail to produce a result. In this paper we propose a method to approximate alignments for unconstrained process models, which relies on relaxation labelling techniques applied on top of a partial order representation of the process model. The implementation of the proposed technique achieves a speed-up of several orders of magnitude with respect to the approaches in the literature (either optimal or approximate), often with a reasonable trade-off on the cost of the obtained alignment.
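
    As a hedged illustration of relaxation labelling in this setting (not the authors' implementation), the sketch below keeps, for each trace event, a probability distribution over model tasks and iteratively reinforces labels supported by the neighbouring events under a compatibility function derived from a toy ordering relation. The model, the trace, and all parameters are assumptions.

```python
# Hypothetical sketch: relaxation labelling maps each trace event to a model
# task. The tasks, ordering relation and parameters are invented for this demo.
TASKS = ["a", "b", "c", "d"]
FOLLOWS = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}  # toy model ordering

def compatibility(t1, t2):
    # support for task t1 immediately preceding task t2 in the model
    return 1.0 if (t1, t2) in FOLLOWS else 0.1

def relaxation_labelling(trace, iterations=20):
    # initial distributions: biased towards the task matching the event label
    p = [[0.7 if t == e else 0.3 / (len(TASKS) - 1) for t in TASKS]
         for e in trace]
    for _ in range(iterations):
        new_p = []
        for i, dist in enumerate(p):
            support = []
            for k, t in enumerate(TASKS):
                s = 0.0
                if i > 0:            # support from the left neighbour's labels
                    s += sum(p[i - 1][m] * compatibility(TASKS[m], t)
                             for m in range(len(TASKS)))
                if i < len(p) - 1:   # support from the right neighbour's labels
                    s += sum(p[i + 1][m] * compatibility(t, TASKS[m])
                             for m in range(len(TASKS)))
                support.append(dist[k] * (1.0 + s))
            total = sum(support)
            new_p.append([v / total for v in support])
        p = new_p
    return [TASKS[max(range(len(TASKS)), key=d.__getitem__)] for d in p]

# "x" is a noisy event: its label is resolved by the context of its neighbours
print(relaxation_labelling(["a", "x", "d"]))
```

    Because each update is local and cheap, the iteration scales roughly linearly in the trace length, which is consistent with the large speed-ups the paper reports over search-based alignment computation.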

    Repairing Alignments of Process Models

    Process mining represents a collection of data-driven techniques that support the analysis, understanding and improvement of business processes. A core branch of process mining is conformance checking, i.e., assessing to what extent a business process model conforms to observed business process execution data. Alignments are the de facto standard instrument for computing such conformance statistics. However, computing alignments is a combinatorial problem and hence extremely costly. At the same time, many process models share a similar structure and/or a great deal of behavior. For collections of such models, computing alignments from scratch is inefficient, since large parts of the alignments are likely to be the same. This paper presents a technique that exploits process model similarity and repairs existing alignments by updating those parts that do not fit a given process model. The technique effectively reduces the size of the combinatorial alignment problem, and hence decreases computation time significantly. Moreover, the potential loss of optimality is limited and stays within acceptable bounds.
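
    A hedged sketch of the repair intuition, with each process model simplified to a single model run so the mechanics stay visible: the old alignment is reused wherever the two runs agree, and only the segment where they differ is realigned from scratch. The move encoding, the helper names, and the example models are assumptions, not the paper's method.

```python
# Moves are ("sync", a), ("log", a) or ("model", a); models are reduced to
# single runs so the repair idea stays visible. Everything here is a sketch.

def align(trace, run):
    """Optimal alignment of a trace against one model run via edit-distance DP
    (sync moves cost 0, log and model moves cost 1)."""
    m, n = len(trace), len(run)
    d = [[i + j if i == 0 or j == 0 else 0 for j in range(n + 1)]
         for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1)
            if trace[i - 1] == run[j - 1]:
                d[i][j] = min(d[i][j], d[i - 1][j - 1])
    moves, i, j = [], m, n
    while i or j:  # trace back the cheapest path into a move sequence
        if i and j and trace[i - 1] == run[j - 1] and d[i][j] == d[i - 1][j - 1]:
            moves.append(("sync", trace[i - 1])); i -= 1; j -= 1
        elif j and d[i][j] == d[i][j - 1] + 1:
            moves.append(("model", run[j - 1])); j -= 1
        else:
            moves.append(("log", trace[i - 1])); i -= 1
    return moves[::-1]

def repair(old_alignment, old_run, new_run):
    """Reuse the old alignment where the two runs agree; realign the middle."""
    p = 0  # longest common prefix of the two runs
    while p < min(len(old_run), len(new_run)) and old_run[p] == new_run[p]:
        p += 1
    s = 0  # longest common suffix (disjoint from the prefix)
    while (s < min(len(old_run), len(new_run)) - p
           and old_run[-1 - s] == new_run[-1 - s]):
        s += 1
    head, mid, tail, done = [], [], [], 0  # done = model steps consumed so far
    for kind, act in old_alignment:
        if done < p:
            head.append((kind, act))
        elif done >= len(old_run) - s:
            tail.append((kind, act))
        else:
            mid.append((kind, act))
        if kind in ("sync", "model"):
            done += 1
    mid_trace = [a for k, a in mid if k in ("sync", "log")]
    return head + align(mid_trace, list(new_run[p:len(new_run) - s])) + tail

old_run, new_run = ("a", "b", "d"), ("a", "c", "d")  # the model changed b -> c
trace = ("a", "b", "d")
print(repair(align(trace, old_run), old_run, new_run))
```

    Only the changed middle segment is realigned, so the combinatorial work shrinks to the part of the model that actually differs, which mirrors the computation-time reduction the paper claims.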

    Supply and demand prediction by 3PL for assortment planning

    This study underscores the critical role of predictive capabilities in third-party logistics (3PL) companies for assortment planning, particularly within the rapidly evolving e-commerce sector and business-to-business (B2B) flows. It combines a comprehensive literature review on the forecasting capabilities of 3PL firms with empirical research across nine logistics facilities, leveraging statistical tools and the ARIMA_PLUS algorithm to evaluate the precision and dependability of the demand and supply forecasts these companies generate. The research reveals that 3PLs can produce accurate demand and supply forecasts using advanced forecasting tools. The effectiveness of these forecasts is closely linked to the quality of the available data and the expertise of the personnel involved. Challenges arise in forecasting for smaller order volumes, which are more common in e-commerce flows. The study also highlights that technological advancements and investments in data analytics are pivotal in enhancing forecast accuracy. The investigation focuses on a select group of 3PL companies, potentially limiting the generalizability of the findings, and it underscores the necessity for further exploration into how technological innovations impact forecasting capabilities. By emphasizing the significance of 3PL firms' predictive abilities for e-commerce assortment planning, this paper addresses a notable gap in existing research. Its insights are valuable for businesses contemplating logistics outsourcing and for 3PL providers aiming to advance their forecasting proficiency. The findings stress the importance of integrating advanced forecasting models and analytics to stay competitive in the dynamic e-commerce landscape.
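
    As a rough illustration of the kind of forecast the study evaluates, the sketch below fits a classical ARIMA model with statsmodels in place of BigQuery's ARIMA_PLUS used by the authors; the synthetic weekly series, the (2, 1, 2) model order, and the 12-week holdout horizon are all assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic weekly order volumes for one facility: trend + yearly seasonality
# + noise (assumed data; the study's facility data is not public here).
rng = np.random.default_rng(42)
t = np.arange(104)
weeks = pd.date_range("2023-01-01", periods=104, freq="W")
series = pd.Series(500 + 2 * t + 50 * np.sin(2 * np.pi * t / 52)
                   + rng.normal(0, 20, size=104), index=weeks)

# Hold out the last 12 weeks and forecast them from the rest
train, test = series[:-12], series[-12:]
forecast = ARIMA(train, order=(2, 1, 2)).fit().forecast(steps=12)

# Mean absolute percentage error as a simple accuracy check
mape = float(np.mean(np.abs(forecast.values - test.values) / test.values) * 100)
print(f"12-week demand forecast MAPE: {mape:.1f}%")
```

    The holdout evaluation matches the study's emphasis that forecast quality must be measured against realized volumes, and that accuracy degrades as order volumes (and thus the signal in the series) get smaller.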

    Clustering Process Instances Based on Conformance Checking Techniques

    As event data becomes a ubiquitous source of information, data science techniques represent an unprecedented opportunity to analyze and react to the processes that generate this data. Process Mining is an emerging field that bridges the gap between traditional data analysis techniques, like Data Mining, and Business Process Management. One core value of Process Mining is the discovery of formal process models, like Petri nets or BPMN models, which attempt to make sense of the events recorded in logs. Due to the complexity of event data, automated process discovery algorithms tend to create dense process models which are hard for humans to interpret. Fortunately, Conformance Checking, a sub-field of Process Mining, enables relating observed and modeled behavior, so that humans can map these two pieces of process information. Conformance checking is possible through alignment artefacts, which associate process models and event logs. Different types of alignment artefacts exist, namely alignments, multi-alignments and anti-alignments. Currently, only alignments are deeply addressed in the literature. An alignment relates the process model to a given process instance. However, because many behaviors exist in logs, identifying an alignment per process instance hinders the readability of the log-to-model relationships.

    The present thesis proposes to exploit conformance checking artefacts for clustering the process executions recorded in event logs, thereby extracting a restricted number of modeled representatives. Data clustering is a common method for extracting information from dense and complex data. By grouping objects into clusters by similarity, data clustering enables mining simpler datasets which capture the similarities and the differences contained in the data. Using conformance checking artefacts in a clustering approach allows a reliable process model to serve as a baseline for grouping the process instances. Hence, the discovered clusters are associated with modeled artefacts, which we call model-based trace variants, providing opportune log-to-model explanations.

    From this motivation, we have elaborated a set of methods for computing conformance checking artefacts. The first contribution is the computation of a unique modeled behavior that represents a set of process instances, namely a multi-alignment. Then, we propose several alignment-based clustering approaches which provide clusters of process instances associated with a modeled artefact. Finally, we highlight the interest of anti-alignments for extracting deviations of process models with respect to the log. This latter artefact enables estimating model precision, and we show its impact in model-based clustering. We provide SAT encodings for all the proposed techniques. Heuristic algorithms are then added to cope with the computing capacity of today's computers, at the expense of losing optimality.
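
    The thesis computes its artefacts with SAT encodings; the sketch below only illustrates the clustering intuition in plain Python: each trace is assigned to the candidate model run (standing in for a model-based trace variant) it aligns to most cheaply, so every cluster comes with a modeled representative. The runs, the log, and the cost function are assumptions for the example.

```python
# Hypothetical sketch: group log traces around modeled representatives by
# alignment cost. Runs and traces are toy assumptions, not thesis data.

def alignment_cost(trace, run):
    """Cheapest alignment cost (sync moves 0, log/model moves 1)
    computed as an edit distance between the trace and the run."""
    prev = list(range(len(run) + 1))
    for i, a in enumerate(trace, 1):
        cur = [i]
        for j, b in enumerate(run, 1):
            best = min(prev[j] + 1, cur[j - 1] + 1)
            if a == b:
                best = min(best, prev[j - 1])
            cur.append(best)
        prev = cur
    return prev[-1]

def cluster_by_variant(log, variants):
    """Assign each trace to the model-based variant it aligns to most cheaply."""
    clusters = {v: [] for v in variants}
    for trace in log:
        rep = min(variants, key=lambda v: alignment_cost(trace, v))
        clusters[rep].append(trace)
    return clusters

variants = [("a", "b", "d"), ("a", "c", "d")]  # modeled representatives
log = [("a", "b", "d"), ("a", "b", "b", "d"), ("a", "c", "d"), ("c", "d")]
for rep, traces in cluster_by_variant(log, variants).items():
    print(rep, "<-", traces)
```

    Each cluster is explained by its modeled representative rather than by a raw centroid, which is the log-to-model readability benefit the thesis pursues; the exact artefacts (multi-alignments, anti-alignments) require the SAT-based machinery it describes.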