
    An integer programming based approach for diagnosing workflows

    Workflow analysis is indispensable for capturing modeling errors in workflow designs. Although several analysis approaches for workflows have been defined in the past, these approaches do not give precise feedback, making it hard for a designer to pinpoint the exact cause of a modeling error. In this paper we introduce a novel approach for analyzing and diagnosing workflows based on integer programming (IP). Each workflow model is translated into a set of IP constraints. Faulty control-flow connectors can then be detected easily by relaxing the corresponding constraints. We show that this approach is correct, and illustrate it with realistic examples where the CPLEX tool is used to solve the IP formulations. Moreover, the approach is flexible and can be extended to handle a variety of new constraints, as well as to support new workflow patterns. Its features complement those of existing approaches.
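
    To illustrate the encoding idea, here is a minimal sketch (an assumed toy encoding using PuLP/CBC rather than the paper's CPLEX formulation): each task gets a binary "executes" variable, connector semantics become linear constraints, and a slack variable per connector is minimized; nonzero slack in the optimum points at the connector whose semantics cannot be satisfied.

    # Minimal sketch: diagnosing a mismatched AND-split / XOR-join pair by
    # constraint relaxation. The encoding is illustrative, not the paper's.
    import pulp

    prob = pulp.LpProblem("workflow_diagnosis", pulp.LpMinimize)
    x = {t: pulp.LpVariable(t, cat="Binary") for t in ["start", "A", "B", "join", "end"]}
    slack = {c: pulp.LpVariable("s_" + c, lowBound=0) for c in ["and_split", "xor_join"]}

    prob += pulp.lpSum(slack.values())            # minimise total relaxation
    prob += x["start"] == 1                       # the process is started
    prob += x["end"] == 1                         # and must complete
    # AND-split after start: both branches A and B must run (relaxable)
    prob += x["A"] + x["B"] + slack["and_split"] >= 2 * x["start"]
    # XOR-join before end: at most one branch may arrive (relaxable)
    prob += x["A"] + x["B"] - slack["xor_join"] <= 1
    prob += x["join"] >= x["end"]                 # reaching end requires the join
    prob += x["join"] <= x["A"] + x["B"]          # the join needs a completed branch

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for c, s in slack.items():                    # nonzero slack = suspect connector
        print(c, pulp.value(s))

    Here the AND-split demands both branches while the XOR-join admits only one, so one unit of slack is unavoidable and the solver attributes it to one of the two connectors, mirroring the kind of precise feedback the paper aims for.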

    Diagnosing correctness of semantic workflow models

    To model operational business processes accurately, workflow models need to capture both the control-flow and dataflow perspectives. Checking the correctness of such workflow models and giving precise feedback in case of errors is challenging due to the interplay between these different perspectives. In this paper, we propose a fully automated approach for diagnosing the correctness of semantic workflow models, in which the semantics of activities are specified with pre- and postconditions. The control-flow and dataflow perspectives of a semantic workflow are modeled in an integrated way using Artificial Intelligence techniques (Integer Programming and Constraint Programming). The approach has been implemented in the DiagFlow tool, which reads and diagnoses annotated XPDL models, using a state-of-the-art constraint solver as back end. Using this novel approach, complex semantic workflow models can be verified and diagnosed in an efficient way. Ministerio de Educación y Ciencia TIN2009-1371
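
    A minimal sketch of how pre- and postconditions over dataflow can be checked with constraint programming (assuming an OR-Tools CP-SAT encoding, not necessarily DiagFlow's): each activity relates the data state before and after it; an indicator per precondition is maximized, so any indicator forced to 0 points at the activity whose precondition cannot hold.

    # Illustrative encoding: a sequence of two activities over an integer
    # data item "amount", with preconditions tracked by indicator booleans.
    from ortools.sat.python import cp_model

    m = cp_model.CpModel()
    amount = [m.NewIntVar(-100, 100, "amount_%d" % i) for i in range(3)]
    ok = [m.NewBoolVar("pre_ok_%d" % i) for i in range(2)]

    m.Add(amount[0] == 10)                        # initial dataflow state
    # Activity 1: pre amount >= 0, post amount' = amount - 30
    m.Add(amount[0] >= 0).OnlyEnforceIf(ok[0])
    m.Add(amount[1] == amount[0] - 30)
    # Activity 2: pre amount >= 0, post amount' = amount * 2
    m.Add(amount[1] >= 0).OnlyEnforceIf(ok[1])
    m.Add(amount[2] == amount[1] * 2)

    m.Maximize(sum(ok))                           # satisfy as many pres as possible
    solver = cp_model.CpSolver()
    solver.Solve(m)
    for i, b in enumerate(ok):
        if not solver.Value(b):                   # 0 = unavoidably violated pre
            print("precondition of activity %d is violated" % (i + 1))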

    Diagnosis of Errors in Stalled Inter-Organizational Workflow Processes

    Fault-tolerant inter-organizational workflow processes help participant organizations complete their business activities and operations efficiently, without extended delays. The stalling of inter-organizational workflow processes is a common hurdle that causes organizations immense losses and operational difficulties. The complexity of software requirements, the inability of workflow systems to handle exceptions properly, and inadequate process modeling are the leading causes of errors in workflow processes. This dissertation is about diagnosing errors in stalled inter-organizational workflow processes. Its goals and objectives were achieved by designing a fault-tolerant software architecture for the workflow system components/modules relevant to exception handling and troubleshooting (i.e., workflow process designer, workflow engine, workflow monitoring, workflow administrative panel, service integration, and workflow client). The complexity and improper implementation of software requirements were addressed by building a framework of guiding principles and best practices for modeling and designing inter-organizational workflow processes. Theoretical and empirical/experimental research methodologies were used to find the root causes of errors in stalled workflow processes. Error detection and diagnosis are critical steps that can then be used to design a strategy for resolving the stalled processes. Diagnosing errors in stalled workflow processes was in scope for this dissertation; resolving the stalled processes was out of scope. The software architecture facilitated automatic and semi-automatic diagnosis of errors in stalled workflow processes from both real-time and historical perspectives. The empirical/experimental study was grounded by creating state-of-the-art inter-organizational workflow processes using an API-based workflow system, a low-code workflow automation platform, a supported high-level programming language, and a storage system. The empirical/experimental measurements and dissertation goals were explained by collecting, analyzing, and interpreting the workflow data. The methodology was evaluated on its ability to diagnose errors successfully (i.e., identify the root cause) in stalled processes caused by web service failures in inter-organizational workflow processes. Fourteen datasets were created to analyze, verify, and validate the hypotheses and the software architecture: seven for end-to-end IOWF process scenarios, including IOWF web service consumption, and seven for IOWF web services alone. The results of the data analysis strongly supported and validated the software architecture and hypotheses. The guiding principles and best practices for workflow process modeling and design point to opportunities to prevent processes from stalling. The outcome of the dissertation, i.e., the diagnosis of errors in stalled inter-organizational processes, can be utilized to resolve these stalled processes.
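
    A minimal, hypothetical sketch of the historical-diagnosis idea described above: scan an event log for process instances whose last event is older than a timeout, and report the last recorded web-service error as the suspected root cause. The field names (case_id, timestamp, status, detail) are assumptions for illustration, not the dissertation's actual schema.

    # Toy stalled-process detector over an assumed event-log layout.
    from datetime import datetime, timedelta

    TIMEOUT = timedelta(hours=1)
    NOW = datetime(2024, 1, 1, 12, 0)

    log = [
        {"case_id": "c1", "timestamp": datetime(2024, 1, 1, 9, 0),
         "status": "SERVICE_ERROR", "detail": "HTTP 504 from partner billing API"},
        {"case_id": "c2", "timestamp": datetime(2024, 1, 1, 11, 55),
         "status": "OK", "detail": "task completed"},
    ]

    last = {}
    for event in log:                             # keep the last event per instance
        prev = last.get(event["case_id"])
        if prev is None or event["timestamp"] > prev["timestamp"]:
            last[event["case_id"]] = event

    for case_id, event in last.items():
        if NOW - event["timestamp"] > TIMEOUT:    # stalled: no progress in time
            cause = event["detail"] if event["status"] != "OK" else "unknown"
            print("%s stalled; suspected cause: %s" % (case_id, cause))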

    Automating correctness verification of artifact-centric business process models

    Context: The artifact-centric methodology has emerged over the last few years as a new paradigm to support business process management. In this paradigm, business processes are described from the point of view of the artifacts that are manipulated during the process. Objective: One of the research challenges in this area is verifying the correctness of this kind of business process model, where the model is formed of various artifacts that interact with one another. Method: In this paper, we propose a fully automated approach for verifying the correctness of artifact-centric business process models, taking into account that the state (lifecycle) and the values of each artifact (numerical data described by pre- and postconditions) influence the values and the state of the others. The lifecycles of the artifacts and the numerical data they manage are modeled using the Constraint Programming paradigm, an Artificial Intelligence technique. Results: Two correctness notions for artifact-centric business process models are distinguished (reachability and weak termination), and novel verification algorithms are developed to check them. The algorithms are complete: neither false positives nor false negatives are generated. Moreover, the algorithms offer a precise diagnosis of the detected errors, indicating the execution causing the error where the lifecycle gets stuck. Conclusion: To the best of our knowledge, this paper presents the first verification approach for artifact-centric business process models that integrates pre- and postconditions, which define the behavior of the services, with numerical data verification when the model is formed of more than one artifact. The approach can detect errors not detectable by other approaches. Ministerio de Educación y Ciencia TIN2009-1371
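
    A minimal sketch of the weak-termination notion (an illustrative explicit-state check, not the paper's CP encoding): two artifact lifecycles interact through a guard, and weak termination holds iff every reachable combined state can still reach the final state. The states and the guard are invented for illustration.

    # Toy check: the invoice may only become "paid" once the order is "shipped".
    from collections import deque

    def successors(state):
        order, invoice = state
        out = []
        if order == "created":
            out.append(("shipped", invoice))
        if invoice == "issued" and order == "shipped":   # inter-artifact guard
            out.append((order, "paid"))
        return out

    def reachable(start):
        seen, frontier = {start}, deque([start])
        while frontier:
            s = frontier.popleft()
            for nxt in successors(s):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    FINAL = ("shipped", "paid")
    states = reachable(("created", "issued"))
    stuck = [s for s in states if FINAL not in reachable(s)]
    print("weak termination holds" if not stuck else "stuck states: %s" % stuck)

    A state listed in stuck is exactly the kind of diagnosis the paper's algorithms report: an execution where some lifecycle gets stuck and can no longer terminate.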

    Multi-criteria decision analysis for non-conformance diagnosis: A priority-based strategy combining data and business rules

    Business process analytics and verification have become a major challenge for companies, especially when process data is stored across different systems. It is important to ensure Business Process Compliance in both the data-flow perspective and the business rules that govern the organisation. In verifying data-flow accuracy, the conformance of data to business rules is a key element, since it is essential to fulfil the policies and statements that govern corporate behaviour. The inclusion of business rules in an existing, already deployed process, which therefore already counts on stored data, requires checking the business rules against the data to guarantee compliance. If an inconsistency is detected, the source of the problem should be determined by discerning whether it is due to an erroneous rule or to erroneous data. To automate this, we propose a diagnosis methodology that follows the incorporation of business rules and simultaneously combines the business rules with the data produced during the execution of the company's processes. Because of the high number of possible explanations of a fault (data and/or business rules), the likelihood of each fault is used to propose an ordered list of explanations. To reduce these possibilities, we rely on a ranking calculated by means of an AHP (Analytic Hierarchy Process) and incorporate the experience described by users and/or experts. The proposed methodology is based on the Constraint Programming paradigm and is evaluated using a real example. Ministerio de Ciencia y Tecnología RTI2018–094283-B-C3
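
    A minimal sketch of the AHP ranking step (the criteria and judgment values are assumptions for illustration): a pairwise comparison matrix over three criteria is reduced to priority weights via the common geometric-mean approximation, and candidate fault explanations are then ordered by their weighted score.

    import numpy as np

    # Saaty-scale pairwise judgments for three illustrative criteria:
    # rule reliability vs. data source quality vs. expert experience.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    gm = A.prod(axis=1) ** (1 / A.shape[0])       # row geometric means
    weights = gm / gm.sum()                       # normalised criterion weights

    # Each candidate explanation scored per criterion (0..1, invented values).
    candidates = {
        "business rule R7 is wrong": np.array([0.2, 0.9, 0.6]),
        "data record D42 is wrong":  np.array([0.8, 0.3, 0.7]),
    }
    ranked = sorted(candidates.items(), key=lambda kv: kv[1] @ weights, reverse=True)
    for name, score in ranked:                    # ordered list of explanations
        print("%.3f  %s" % (score @ weights, name))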

    Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets

    A critical system must accomplish its mission despite the presence of security issues. Systems of this kind are usually deployed in heterogeneous environments, where they may be subject to intrusion attempts, theft of confidential information, or other kinds of attacks. Systems generally have to be redesigned after a security incident occurs, which can lead to severe consequences, such as the huge cost of reimplementing or reprogramming the whole system, as well as potential economic losses. Security must therefore be conceived as an integral part of system development and as an intrinsic part of what the system must do (i.e., a non-functional requirement of the system). Thus, when designing critical systems it is essential to study the attacks that may occur and to plan how to react to them, in order to keep the system's functional and non-functional requirements satisfied. Even when security issues are considered, it is also necessary to account for the costs incurred to guarantee a given security level in critical systems. Indeed, security costs can be a very relevant factor, since they can span several dimensions, such as budget, performance, and reliability. Many of the critical systems that incorporate fault-tolerance techniques (FT systems) to deal with security issues are complex systems that use resources which can be compromised (i.e., can fail) through the activation of faults and/or errors caused by possible attacks. Such systems can be modelled as discrete event systems with shared resources, also called resource allocation systems. This thesis focuses on FT systems with shared resources modelled by Petri nets (PNs). These systems are generally so large that the exact computation of their performance becomes a very complex computational task, due to the state-space explosion problem. As a consequence, any task requiring an exhaustive exploration of the state space is uncomputable (within a reasonable time) for large systems. The main contributions of this thesis are threefold. First, it provides several models, using the Unified Modelling Language (UML) and Petri nets, that help bring security and fault-tolerance concerns to the foreground during the system design phase, thereby enabling, for example, the analysis of the trade-off between security and performance. Second, it provides several algorithms to compute performance (also under failure conditions) through upper performance bounds, thus avoiding the state-space explosion problem. Finally, it provides algorithms to compute how to compensate for the performance degradation that arises when a fault-tolerant system faces an unexpected situation.
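
    As an illustration of performance bounds that sidestep state-space exploration, here is a minimal sketch of one classical bound of this kind (not necessarily the algorithms developed in the thesis): for a timed marked graph, the cycle time is bounded below by the maximum over circuits of (sum of transition delays on the circuit) / (tokens in the circuit), so throughput is at most the reciprocal. The circuits and values are invented.

    delays = {"t1": 2.0, "t2": 3.0, "t3": 1.0}    # mean firing delays

    # Each circuit: (transitions on the circuit, initial tokens on its places).
    circuits = [
        (["t1", "t2"], 1),         # a resource loop holding a single token
        (["t1", "t2", "t3"], 2),   # the main processing loop
    ]

    cycle_time = max(sum(delays[t] for t in ts) / tokens for ts, tokens in circuits)
    print("throughput upper bound: %.3f firings per time unit" % (1 / cycle_time))

    Here the single-token resource loop dominates (cycle time 5.0), bounding throughput at 0.2, without ever enumerating reachable markings.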

    Basis marking representation of Petri net reachability spaces and its application to the reachability problem

    In this paper a compact representation of the reachability graph of a Petri net is proposed. The transition set of a Petri net is partitioned into the subsets of explicit and implicit transitions, in such a way that the subnet induced by the implicit transitions does not contain directed cycles. The firing of implicit transitions can be abstracted away, so that the reachability set of the net can be completely characterized by a subset of reachable markings called basis markings. We show that determining a max-cardinality T_I basis partition is an NP-hard problem, whereas a max-set T_I basis partition can be determined in polynomial time. The generalized version of the marking reachability problem in a Petri net can be solved by a practically efficient algorithm based on the basis reachability graph. Finally, this approach is further extended to unbounded nets.
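
    A minimal sketch of the structural condition stated above: a candidate set T_I of implicit transitions is admissible only if the subnet it induces contains no directed cycles. Here the induced subnet is approximated as a directed graph over T_I (an edge t -> t' when some place connects them; the graph is invented) and checked by depth-first search.

    def has_cycle(adj):
        WHITE, GRAY, BLACK = 0, 1, 2              # unvisited / on stack / done
        color = {v: WHITE for v in adj}
        def dfs(v):
            color[v] = GRAY
            for w in adj[v]:
                if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                    return True                   # back edge = directed cycle
            color[v] = BLACK
            return False
        return any(color[v] == WHITE and dfs(v) for v in adj)

    # Invented induced subnet over candidate implicit transitions:
    induced = {"t1": ["t2"], "t2": ["t3"], "t3": []}
    print("admissible T_I" if not has_cycle(induced) else "contains a directed cycle")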

    7th Workshop on Knowledge Engineering and Software Engineering (KESE7) at the 14th Conference of the Spanish Association for Artificial Intelligence (CAEPIA 2011)

    Technical Report TR-2011/1, Department of Languages and Computation, University of Almería, November 2011. Joaquín Cañadas, Grzegorz J. Nalepa, Joachim Baumeister (Editors). The seventh workshop on Knowledge Engineering and Software Engineering (KESE7) was held at the Conference of the Spanish Association for Artificial Intelligence (CAEPIA-2011) in La Laguna (Tenerife), Spain, and brought together researchers and practitioners from both fields of software engineering and artificial intelligence. The intention was to give ample space for exchanging the latest research results as well as knowledge about practical experience. University of Almería, Almería, Spain; AGH University of Science and Technology, Kraków, Poland; University of Würzburg, Würzburg, Germany

    Technology Readiness Levels for Machine Learning Systems

    The development and deployment of machine learning (ML) systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end. This lack of diligence can lead to technical debt, scope creep and misaligned objectives, model misuse and failures, and expensive consequences. Engineering systems, on the other hand, follow well-defined processes and testing standards to streamline development for high-quality, reliable results. The extreme is spacecraft systems, where mission-critical measures and robustness are ingrained in the development process. Drawing on experience in both spacecraft engineering and ML (from research through product, across domain areas), we have developed a proven systems engineering approach for machine learning development and deployment. Our "Machine Learning Technology Readiness Levels" (MLTRL) framework defines a principled process to ensure robust, reliable, and responsible systems while being streamlined for ML workflows, including key distinctions from traditional software engineering. Moreover, MLTRL defines a lingua franca for people across teams and organizations to work collaboratively on artificial intelligence and machine learning technologies. Here we describe the framework and elucidate it with several real-world use cases of developing ML methods from basic research through productization and deployment, in areas such as medical diagnostics, consumer computer vision, satellite imagery, and particle physics.
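
    A minimal sketch of how graduated readiness levels with review gates might be tracked in code. The framework defines staged levels with gated progression; the level numbers and the specific checks encoded here are invented placeholders, not MLTRL's actual requirements.

    from dataclasses import dataclass, field

    @dataclass
    class TRLGate:
        level: int
        checks: list = field(default_factory=list)   # (name, passed) pairs

        def passed(self):
            return all(ok for _, ok in self.checks)

    gates = [
        TRLGate(4, [("offline metrics meet target", True),
                    ("data provenance documented", True)]),
        TRLGate(5, [("robustness tests under distribution shift", False)]),
    ]

    current = 3
    for gate in sorted(gates, key=lambda g: g.level):
        if not gate.passed():                        # a failed gate blocks promotion
            failed = [n for n, ok in gate.checks if not ok]
            print("blocked at level %d: %s" % (gate.level, failed))
            break
        current = gate.level
    print("technology readiness level: %d" % current)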