    Existence dependency-based domain modeling for improving stateless process enactment

    In a process-enabled service-oriented architecture, a process engine typically stores the state of process instances during enactment. As an alternative, stateless process enactment derives the process state from the state of business objects, which are organized in a domain model. The business objects are referred to in the pre- and post-conditions of activities, which determine when an activity is enabled and completed, respectively. Although the latter approach has multiple benefits over the former, the repeated state (re)calculations degrade performance, and formulating clear conditions is not self-evident if typical domain modeling techniques (e.g. UML or ER) are adopted. In this paper we show that by adopting a specific domain modeling technique, based on the notion of existence dependency between business objects, the performance and comprehensibility issues can be dealt with effectively. We illustrate the technique using a real-world case from the insurance domain and analyze the emerging duality between process modeling and domain modeling.
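
    The enablement mechanism described above can be sketched in a few lines: activity pre-conditions are predicates over business-object states, and the set of enabled activities is recomputed from the domain model on demand rather than read from stored process state. All names (`BusinessObject`, `assess`, `settle`) are hypothetical illustrations, not the paper's actual model:

```python
# A minimal sketch of stateless enactment: process state is never stored;
# activity enablement is derived on demand from business-object states.
# Class, activity, and state names are hypothetical illustrations.

class BusinessObject:
    def __init__(self, name, state):
        self.name = name
        self.state = state

class Activity:
    def __init__(self, name, pre, post):
        self.name = name
        self.pre = pre      # predicate over the domain model: is the activity enabled?
        self.post = post    # state change applied to business objects on completion

def enabled(activities, domain):
    # Recomputed from the domain model on every call -- no stored process state.
    return [a for a in activities if a.pre(domain)]

# Hypothetical insurance example: a claim must be registered before assessment,
# and assessed before settlement.
domain = {"claim": BusinessObject("claim", "registered")}

activities = [
    Activity("assess",
             pre=lambda d: d["claim"].state == "registered",
             post=lambda d: setattr(d["claim"], "state", "assessed")),
    Activity("settle",
             pre=lambda d: d["claim"].state == "assessed",
             post=lambda d: setattr(d["claim"], "state", "settled")),
]

print([a.name for a in enabled(activities, domain)])  # ['assess']
activities[0].post(domain)                            # complete 'assess'
print([a.name for a in enabled(activities, domain)])  # ['settle']
```

    Because nothing beyond the business objects is persisted, restarting the engine loses no process state; the trade-off, as the abstract notes, is the cost of repeatedly recomputing `enabled`.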


    IT-supported business process negotiation, reconciliation and execution for cross-organisational e-business collaboration

    In modern enterprises, workflow technology is commonly used for business process automation. Established business processes represent successful business practice and have become a crucial part of corporate assets. In the Internet era, more and more organisations choose electronic business as their preferred way of conducting business. In response to the increasing demand for cross-organisational business automation, especially from the B2B electronic commerce community, the concept of collaboration between automated business processes, i.e. workflow collaboration, is emerging; without it, automation would remain confined within individual organisations and cross-organisational collaboration would still have to be carried out manually. However, much previous research overlooks the acquisition of compatible workflows at build time and simply assumes that compatibility is achieved through face-to-face negotiation, followed by a design-from-scratch approach that creates collaborative workflows based on the agreement resulting from the negotiation. Such a resource-intensive and error-prone approach can hardly keep up with the pace of today's marketplace, with its increasing transaction volume and complexity. This thesis identifies the requirements for cross-organisational workflow collaboration (COWCO) through an integrated approach, proposes a comprehensive supporting framework, explains the key enabling techniques of the framework, and implements and evaluates them in the form of a prototype system, COWCO-Guru. With the support of such a framework, cross-organisational workflow collaboration can be managed and conducted with reduced human effort, which will further facilitate cross-organisational e-business, especially B2B e-commerce practices.
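
    One build-time compatibility check of the kind the thesis motivates can be sketched as follows: two workflows can collaborate only if every message one side sends is received, in the same order, by the other side. The check and the message names are a hypothetical illustration, not COWCO-Guru's actual algorithm:

```python
# A minimal sketch of build-time workflow compatibility checking.
# Each workflow is a list of ("send" | "recv", message) steps; the two
# workflows are compatible when each side's send sequence matches the
# other side's receive sequence. Message names are hypothetical.

def compatible(wf_a, wf_b):
    sends_a = [m for act, m in wf_a if act == "send"]
    recvs_a = [m for act, m in wf_a if act == "recv"]
    sends_b = [m for act, m in wf_b if act == "send"]
    recvs_b = [m for act, m in wf_b if act == "recv"]
    return sends_a == recvs_b and sends_b == recvs_a

buyer  = [("send", "order"), ("recv", "invoice")]
seller = [("recv", "order"), ("send", "invoice")]
print(compatible(buyer, seller))   # True
print(compatible(buyer, buyer))    # False
```

    Automating even this simple mirror-image check at build time removes one round of the manual negotiation the abstract criticises; a real system would also have to reconcile message vocabularies and branching behaviour.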

    A bottom-up process management environment dedicated to process actors

    Companies increasingly adopt process management environments, which offer promising perspectives for more flexible and efficient process execution. Traditional process management environments embody a top-down approach in which process modeling is performed by process designers and process enacting is performed by process actors. Due to this separation, there is often a gap between process models and their real enactments. As a consequence, the operational adoption of top-down process environments has remained low, especially in the system and software industry, because they are not directly relevant to process actors' needs. In order to facilitate the usage of process environments for process actors, this thesis presents a user-centric, bottom-up approach that integrates process actors into the process management life cycle by allowing them to perform both the modeling and the enacting of their real processes. To this end, first, a bottom-up approach based on the artifact-centric modeling paradigm is proposed that allows each process actor to easily describe the process fragment containing the activities carried out by his or her role. The global process is thus decomposed into several fragments belonging to different roles. Each fragment can be modeled independently of the other fragments and can be added progressively to the process model; process modeling therefore becomes less complex and more partial. Moreover, a process fragment models only the structural aspect of a role's activities, without anticipating the behavior of these activities; the process model is therefore less prescriptive.
Second, a data-driven process engine was developed to enact activities coming from different process fragments. This engine does not require predefined work-sequence relations among activities in order to synchronize them, but deduces such dependencies from the artifacts the activities exchange at enactment time. A graph structure named the Process Dependency Graph (PDG) stores enactment-time process information and establishes the dependencies among process elements, reflecting at any moment the current state of the process execution. Third, the process environment is extended to handle the unforeseen changes that inevitably occur during process enactment. The result is a change-aware process environment that allows process actors to report emergent changes, analyze possible impacts, and notify the people affected by the changes.
In summary, in our bottom-up approach a process is split into several fragments that are separately modeled and enacted by process actors. The data-driven process engine, which uses the availability of working artifacts to synchronize activities, enables process fragments to be enacted independently, even for a partially modeled process in which some fragments are missing. The global process emerges progressively at enactment time from the execution of its fragments. This new approach, with its simpler modeling and more flexible enactment, better integrates process actors into the process management life cycle, and hence makes process management systems more attractive and useful for them.
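
    The artifact-driven synchronization idea can be sketched as follows: the engine keeps no predefined ordering between activities and instead deduces dependencies, at enactment time, from which artifacts each activity produced or consumed. The class, activity, and artifact names below are hypothetical illustrations, not the thesis's actual PDG implementation:

```python
# A minimal sketch of artifact-driven activity synchronization: dependencies
# between activities are deduced from the artifacts they exchange at enactment
# time, never declared up front. Names are hypothetical.
from collections import defaultdict

class ProcessDependencyGraph:
    def __init__(self):
        self.producer = {}              # artifact -> activity that produced it
        self.edges = defaultdict(set)   # activity -> activities it depends on

    def record(self, activity, consumed, produced):
        # An activity depends on the producers of every artifact it consumed.
        for artifact in consumed:
            if artifact in self.producer:
                self.edges[activity].add(self.producer[artifact])
        for artifact in produced:
            self.producer[artifact] = activity

    def can_start(self, consumed):
        # An activity is enabled as soon as its input artifacts are available.
        return all(artifact in self.producer for artifact in consumed)

pdg = ProcessDependencyGraph()
pdg.record("write_spec", consumed=[], produced=["spec.doc"])
print(pdg.can_start(["spec.doc"]))      # True: 'design' may start now
pdg.record("design", consumed=["spec.doc"], produced=["model.uml"])
print(pdg.edges["design"])              # {'write_spec'} -- deduced, not predefined
```

    Because `can_start` only asks whether the input artifacts exist, fragments from different roles can be enacted independently, and the dependency graph emerges as a by-product of execution.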

    An overview of S-OGSA: A Reference Semantic Grid Architecture

    The Grid's vision of sharing diverse resources in a flexible, coordinated and secure manner, through the dynamic formation and disbanding of virtual communities, depends strongly on metadata. Currently, Grid metadata is generated and used in an ad hoc fashion, much of it buried in the Grid middleware's code libraries and database schemas. This ad hoc expression and use of metadata causes chronic dependency on human intervention during the operation of Grid machinery, leading to systems which are brittle when faced with frequent syntactic changes in resource coordination and sharing protocols. The Semantic Grid is an extension of the Grid in which rich resource metadata is exposed and handled explicitly, and shared and managed via Grid protocols. Layering an explicit semantic infrastructure over the Grid infrastructure potentially leads to increased interoperability and greater flexibility. In recent years, several projects have embraced the Semantic Grid vision. However, the Semantic Grid lacks a reference architecture or any kind of systematic framework for designing Semantic Grid components or applications. The Open Grid Services Architecture (OGSA) aims to define a core set of capabilities and behaviours for Grid systems. We propose a reference architecture that extends OGSA to support the explicit handling of semantics, and define the associated knowledge services to support a spectrum of service capabilities. Guided by a set of design principles, Semantic-OGSA (S-OGSA) defines a model, the capabilities and the mechanisms for the Semantic Grid. We conclude by highlighting the commonalities and differences of the proposed architecture with respect to other Grid frameworks.

    Declarative techniques for modeling and mining business processes

    Organizations today face an apparent contradiction. Although they have invested heavily in information systems that automate their business processes, these systems seem to leave them less able to gain good insight into how those processes actually unfold. Poor insight into business processes threatens their flexibility and compliance. Flexibility is important because continuously changing market conditions force organizations to adapt their business processes quickly and smoothly. In addition, organizations must be able to guarantee that their operations comply with the laws, guidelines, and standards imposed on them. Scandals such as the recently uncovered fraud at the French bank Société Générale demonstrate the importance of compliance and flexibility. By producing forged documents and circumventing fixed control points, a single trader was able to turn a risk-free arbitrage on price differences in futures into risky, speculative trading in these financial derivatives. The unhedged, unauthorized positions remained hidden for a long time due to deficient internal controls and shortcomings in IT security and access control. To prevent such fraud in the future, it is first of all necessary to gain insight into the operational processes of the bank and the related control processes. In this text we discuss two approaches that can be used to increase insight into business processes: process modeling and process mining. The research aimed to develop techniques for process modeling and process mining that are declarative. Process modeling is the manual construction of a formal model that describes a relevant aspect of a business process, based on information largely obtained from interviews.
Process models must provide adequate information about the business processes in order to be usefully employed in their design, implementation, execution, and analysis. The challenge is to develop new process modeling languages that provide adequate information to realize this goal. Declarative process languages make the information about business concerns explicit. We characterize and motivate declarative process languages, and examine a number of existing techniques. Furthermore, we introduce a generalizing framework for declarative process modeling within which existing process languages can be positioned. This framework is called the EM-BrA²CE framework, for 'Enterprise Modeling using Business Rules, Agents, Activities, Concepts and Events'. It consists of a formal ontology and a formal execution model, and lays the ontological foundation for the languages and techniques developed later in the dissertation. Process mining is the automatic construction of a process model from the so-called event logs of information systems. Today, many processes are recorded by information systems in event logs. Event logs record, in chronological order, who performed which activity and when. The analysis of event logs can yield an accurate picture of what actually happens within an organization. To be useful, mined process models must meet criteria such as accuracy, comprehensibility, and justifiability. Existing process mining techniques focus mainly on the first criterion: accuracy. Declarative process mining techniques also address the comprehensibility and justifiability of the mined models.
Declarative process mining techniques are more comprehensible because they attempt to represent process models using declarative representations. Moreover, declarative techniques increase the justifiability of the mined models, because they allow the prior knowledge, inductive bias, and language bias of a learning algorithm to be configured. Inductive logic programming (ILP) is a learning technique that is inherently declarative. In the text we show how process mining can be represented as an ILP classification problem that learns the logical conditions under which an event takes place (a positive event) or does not take place (a negative event). Many event logs naturally contain no negative events indicating that a particular activity could not take place. To address this problem, we describe a technique for generating artificial negative events, called AGNEs (process discovery by Artificially Generated Negative Events). The generation of artificial negative events amounts to a configurable inductive bias. The AGNEs technique has been implemented as a mining plugin in the ProM framework. By representing process discovery as a first-order classification problem on event logs with artificial negative events, the traditional metrics for quantifying precision and recall can be applied to quantify the precision and recall of a process model with respect to an event log. In the text we propose two new metrics. These new metrics, in combination with existing metrics, were used for an extensive evaluation of the AGNEs process discovery technique in both experimental and real-life settings.
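
    The core idea behind artificially generated negative events can be sketched as follows: at each position in a trace, any activity that is never observed to follow the same prefix anywhere in the log is recorded as a negative event. This is a simplified illustration with a hypothetical log, not the actual, configurable AGNEs algorithm:

```python
# A minimal sketch of artificial negative event generation: an activity that
# never follows a given trace prefix anywhere in the log becomes a negative
# event at that position. The event log below is hypothetical.

def negative_events(log):
    alphabet = {act for trace in log for act in trace}
    # Collect, for every observed prefix, the activities seen to follow it.
    follows = {}
    for trace in log:
        for i, act in enumerate(trace):
            follows.setdefault(tuple(trace[:i]), set()).add(act)
    negatives = []
    for trace in log:
        for i in range(len(trace)):
            prefix = tuple(trace[:i])
            for act in sorted(alphabet - follows[prefix]):
                negatives.append((prefix, act))   # 'act' could not happen here
    return negatives

log = [["register", "assess", "settle"],
       ["register", "reject"]]
for prefix, act in negative_events(log):
    print(prefix, "->", act, "(negative)")
```

    With positives (the recorded events) and these induced negatives, process discovery becomes an ordinary classification problem, which is what makes the standard precision and recall metrics mentioned above applicable.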

    Multi-agent communication for the realization of business processes

    As the Internet and information technologies expand further into daily business activities, new solutions and techniques are required to cope with the growing complexity. One area that has gained attention is system and organization interoperability, together with Service Oriented Architectures (SOA), where Web Services have become a preferred technology. Although these techniques have proved able to solve problems of low-level integration of heterogeneous systems, there has been little advance at higher levels of integration, such as how to govern complex conversations between participants that are autonomous and cannot depend on a ruling or orchestrating system. Multi-agent research has studied techniques for content-rich communication, negotiation, autonomous problem solving and conversation protocols, and these techniques solve some of the problems that emerge when integrating autonomous systems to perform complex business processes. The present research work provides a solution for the realization of complex business processes between heterogeneous autonomous participants using multi-agent technology. We developed an integration of Web Services and agent-based technologies, along with a model for creating conversation protocols that respects the autonomy of participants. A modeling tool has been developed to create conversation protocols in a modular and reusable manner, and BDI-agent implementations that communicate over Web Services are automatically generated from these models.
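
    The notion of a conversation protocol that respects participant autonomy can be sketched as a finite-state machine that every participant evaluates locally, so no central orchestrator is needed. The contract-net-style states, roles, and messages below are hypothetical illustrations, not the thesis's actual protocol model:

```python
# A minimal sketch of a decentralized conversation protocol: each autonomous
# participant advances its own copy of a shared finite-state machine and can
# locally reject messages the protocol does not allow. Names are hypothetical.

PROTOCOL = {
    # (state, sender_role, message) -> next state
    ("start",     "buyer",  "call_for_proposal"): "proposing",
    ("proposing", "seller", "propose"):           "deciding",
    ("deciding",  "buyer",  "accept"):            "done",
    ("deciding",  "buyer",  "reject"):            "done",
}

class Participant:
    def __init__(self, role):
        self.role = role
        self.state = "start"

    def step(self, sender_role, message):
        key = (self.state, sender_role, message)
        if key not in PROTOCOL:
            raise ValueError(f"{message!r} not allowed in state {self.state!r}")
        self.state = PROTOCOL[key]
        return self.state

buyer, seller = Participant("buyer"), Participant("seller")
for sender, msg in [("buyer", "call_for_proposal"), ("seller", "propose"),
                    ("buyer", "accept")]:
    for p in (buyer, seller):            # both sides track the conversation
        p.step(sender, msg)
print(buyer.state, seller.state)         # done done
```

    Because the rule table is data rather than code, such protocols can be modeled once and reused, which mirrors the modular, reusable protocol models the abstract describes.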

    'Home was Congo': Refugees and Durable Displacement in the Borderlands of 1,000 Hills

    As forced migrants linger at the borders of the world's conflicts, refugees from the Democratic Republic of Congo in Rwanda remain in camps where they have waited for 'durable solutions' to their geographic and political existence for nearly two decades. Protracted displacement such as this results from processes at the local, state, regional, and international levels, with consequences reverberating across each of these levels, including insecurity, expenditure of already limited resources, and strained interstate political relationships. As refugees' stays extend over increasingly long periods of time, situations once assumed to be temporary take on a semblance of permanence. Forced displacement increasingly becomes a durable, lived instance of conflict spillover, articulating the wider human impact of these vital, and often understudied, outcomes of conflict and power struggles. Using a qualitative approach within a specific site of displacement in the Great Lakes Region of Africa, this study engages with notions of sovereignty, post-colonialism, social constructivism, burden-sharing, and social stratification to uncover possible motivations for making refugee situations permanent. Home to approximately 16,000 Congolese refugees, Kiziba Camp in western Rwanda serves as a microcosm through which one can observe these multi-layered humanitarian aid and refugee hosting processes. By analyzing semi-structured interview and ethnographic data collected in Kiziba Camp in 2011, 2013, and 2014, interviews with key Rwandan government representatives, and existing media sources and nongovernmental organization reports, this research links the pursuit and maintenance of state sovereignty, as well as aspects of local-level social construction, with processes that contribute to protracted displacement.
Analyses of these original data reveal intentional and unintentional factors, emanating from state foreign and domestic policies, NGO disaster and humanitarian assistance rhetoric, and refugees' own conceptualizations of citizenship, identity, and belonging, that contribute to the durability of refugee displacement. Through the personal narratives of community leaders in Kiziba Camp, this study begins to develop a theory about state dependency on refugee hosting and the agency of refugees to imagine and define themselves, and about how these factors contribute to a form of displacement that becomes increasingly durable over time.

    Active provenance for data intensive research

    The role of provenance information in data-intensive research is a significant topic of discussion among technical experts and scientists. Typical use cases addressing traceability, versioning and reproducibility of research findings are extended with more interactive scenarios, in support, for instance, of computational steering and results management. In this thesis we investigate the impact that lineage records can have on the early phases of the analysis, for instance when performed through near-real-time systems and Virtual Research Environments (VREs) tailored to the requirements of a specific community. By positioning provenance at the centre of the computational research cycle, we highlight the importance of mechanisms at the data scientists' side that, by integrating with the abstractions offered by processing technologies such as scientific workflows and data-intensive tools, facilitate the experts' contribution to the lineage at runtime. Ultimately, by encouraging the tuning and use of provenance for rapid feedback, the thesis aims at improving the synergy between different user groups to increase productivity and understanding of their processes. We present a model of provenance, called S-PROV, that uses and further extends PROV and ProvONE. The relationships and properties characterising the workflow abstractions and their concrete executions are re-elaborated to include aspects related to delegation, distribution and steering of stateful streaming operators. The model is supported by the Active framework for tuneable and actionable lineage, which ensures the user's engagement by fostering rapid exploitation. Here, concepts such as provenance types, configuration and explicit state management allow users to capture complex provenance scenarios and to activate selective controls based on domain and user-defined metadata.
We outline how the traces are recorded in a new comprehensive system, called S-ProvFlow, which enables different classes of consumers to explore the provenance data with services and tools for monitoring, in-depth validation and comprehensive visual analytics. The work of this thesis is discussed in the context of an existing computational framework and the experience gained in implementing provenance-aware tools for seismology and climate VREs. It will continue to evolve through newly funded projects, thereby providing generic and user-centred solutions for data-intensive research.
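
    The idea of tuneable, selective lineage capture can be sketched as follows: each operator invocation is recorded only when a user-defined rule over its metadata fires, so the expert controls provenance granularity at runtime. The decorator, operator, and metadata names below are a hypothetical illustration, not the actual S-PROV/S-ProvFlow API:

```python
# A minimal sketch of selective, user-tuneable lineage capture for streaming
# operators: a user-supplied rule on the operator's metadata decides whether
# each invocation is recorded. All names are hypothetical.
import time

LINEAGE = []

def provenance(rule):
    # 'rule' decides, from the operator's metadata, whether to record lineage.
    def wrap(fn):
        def run(data, **metadata):
            out = fn(data, **metadata)
            if rule(metadata):
                LINEAGE.append({"operator": fn.__name__,
                                "metadata": metadata,
                                "inputs": data,
                                "output": out,
                                "time": time.time()})
            return out
        return run
    return wrap

# Only record runs on the stations the user flagged as interesting.
@provenance(rule=lambda m: m.get("station") in {"STA1"})
def detrend(samples, **metadata):
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]

detrend([1.0, 2.0, 3.0], station="STA1")     # captured
detrend([4.0, 5.0, 6.0], station="STA2")     # skipped by the rule
print(len(LINEAGE), LINEAGE[0]["operator"])  # 1 detrend
```

    Pushing the selection rule to the data scientist's side, rather than logging everything and filtering later, is what keeps capture cheap enough to be useful during the early, exploratory phases of an analysis.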