Automated web services composition with the event calculus
As web services proliferate and grow more complex, manually preparing the web service compositions that describe the communication and integration between them is becoming an overwhelming job. This paper analyzes the use of the Event Calculus, one of the logical action-effect definition languages, for the automated preparation and execution of web service compositions. In this context, the abductive planning capabilities of the Event Calculus are utilized. It is shown that composite process definitions in OWL-S can be translated into Event Calculus axioms, so that planning with generic process definitions is possible within this framework. © 2008 Springer-Verlag Berlin Heidelberg
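The core Event Calculus machinery the abstract refers to (fluents initiated and terminated by events, with a law of inertia in between) can be sketched as a small discrete evaluator. This is a minimal illustration, not the paper's OWL-S translation; the service and fluent names are hypothetical.

```python
def holds_at(fluent, t, narrative, initiates, terminates, initially=()):
    """A fluent holds at time t if it held initially or was initiated by an
    earlier event and not terminated since (the law of inertia)."""
    state = set(initially)
    for time in sorted(k for k in narrative if k < t):
        for event in narrative[time]:
            state -= terminates.get(event, set())   # effects of termination
            state |= initiates.get(event, set())    # effects of initiation
    return fluent in state

# Hypothetical web-service narrative: invoking 'bookFlight' initiates the
# fluent 'flightBooked'; 'cancel' terminates it.
initiates = {"bookFlight": {"flightBooked"}, "bookHotel": {"hotelBooked"}}
terminates = {"cancel": {"flightBooked"}}
narrative = {0: ["bookFlight"], 1: ["bookHotel"], 2: ["cancel"]}

holds_at("flightBooked", 2, narrative, initiates, terminates)  # True
holds_at("flightBooked", 3, narrative, initiates, terminates)  # False
```

Abductive planning, as used in the paper, runs this relation in reverse: given a goal fluent that must hold, it searches for a narrative of service invocations that makes it hold.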
Ontological Formalization for Workflow-based Computational Experiments
Workflow-based computational experiments are a widespread way to organize distributed simulations, but a lack of IT experience and skills is a critical issue that scientists usually face. In this paper we describe the reasoning capabilities obtained from the proposed hierarchical structure for formalizing expert knowledge. The contribution of this paper is the ontological representation of a structure that enables end-users to work with domain models composed of fine-grained domain and infrastructural entities in order to generate an executable workflow. Forecasting storm surges and decision support for gate manoeuvring is presented as the paper's use case.
A cross organisation compatible workflows generation and execution framework
With the development of the Internet, the demand for electronic and online commerce has increased. This has, in turn, increased the demand for business process automation. In this paper, we look at the use of workflows for business process automation. An automatically generated workflow can save the time and resources needed to run online businesses. In general, because of the interdependencies between their activities, multiple business organisations need to collaborate and coordinate their activities with each other. This gives rise to the need for workflow collaboration across organisations. Current systems for workflow collaboration are only capable of reconciling the existing workflows of the collaborating organisations, while automatic workflow generation systems only generate workflows for individual organisations and cannot generate compatible workflows for multiple collaborating organisations. To overcome this problem, we present a framework that is able to generate multiple sets of compatible workflows for multiple collaborating organisations. The proposed framework supports runtime enactment and runtime collaboration of the generated workflows, enabling users to save the time and resources that would otherwise be spent modelling, reconciling and reengineering workflows.
DISC: A declarative framework for self-healing Web services composition
Web services composition design, verification and monitoring are active and widely studied research directions. Little work, however, has been done on integrating these related dimensions using a unified formalism. In this paper we propose a declarative event-oriented framework, called DISC, that serves as a unified framework to bridge the gap between process design, verification and monitoring. The proposed framework allows a composition design to accommodate various aspects such as data relationships and constraints, dynamic binding of Web services, compliance regulations, and security or temporal requirements. It then allows the composition design to be instantiated, verified and executed, and the process to be monitored while in execution. The effect of run-time violations can also be computed and a set of recovery actions taken, enabling self-healing Web services composition.
Web Service Mining and Verification of Properties: An approach based on Event Calculus
Web services are becoming more and more complex, involving numerous interacting business objects within complex distributed processes. In order to fully explore Web service business opportunities while ensuring correct and reliable execution, Web services interactions must be analyzed and tracked so that they can be well understood and controlled. The work described in this paper is a contribution to these issues for Web-services-based process applications. This article describes a novel way of applying process mining techniques to Web services logs in order to enable "Web service intelligence". Our work applies Web service log-based analysis and process mining techniques to provide semantic knowledge about the context of, and the reasons for, discrepancies between process models and their related instances.
Automatic Dynamic Web Service Composition: A Survey and Problem Formalization
The aim of Web service composition is to arrange multiple services into workflows that satisfy complex user needs. Due to the huge number of Web services and the need to satisfy dynamically varying user goals, it is necessary to perform the composition automatically. The objective of this article is to give an overview of the issues of automatic dynamic Web service composition. We discuss the issues related to the semantics of services, which is important for automatic Web service composition. We propose a problem formalization contributing to the formal definition of pre-/post-conditions, with possible value restrictions, and their relation to the semantics of services. We also provide an overview of several existing approaches to the Web service composition problem, discuss the current achievements in the field, and outline some open research areas.
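The pre-/post-condition view of composition that the survey formalizes can be illustrated as a forward state-space search: each service is applicable when its preconditions hold, and applying it adds its postconditions. This is a simplified sketch under that assumption; the service names and facts are hypothetical, not taken from the article.

```python
from collections import deque

def compose(services, initial, goal):
    """Breadth-first search for a service sequence whose cumulative
    postconditions turn `initial` into a state satisfying `goal`.
    services: {name: (preconditions, postconditions)}, each a set of facts."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if set(goal) <= state:
            return plan
        for name, (pre, post) in services.items():
            if pre <= state:                      # service is applicable
                nxt = frozenset(state | post)     # monotonic effect model
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                                   # no composition exists

services = {
    "geocode": ({"address"}, {"coordinates"}),
    "weather": ({"coordinates"}, {"forecast"}),
}
compose(services, {"address"}, {"forecast"})  # ['geocode', 'weather']
```

Real formalizations also handle value restrictions and non-monotonic effects, which this monotonic sketch deliberately omits.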
Runtime monitoring of service based systems
With the growing popularity of web services, the demand for highly reliable service-based systems (SBS) is increasing. Formal verification and testing are performed to ensure the correctness of a system before it is deployed in a real environment, but the high complexity of complete fielded systems puts their effectiveness into question. Runtime monitoring is a technique with the potential to cover what formal verification and testing cannot: it aims to assure the correctness of the current execution of a system. A substantial amount of research has been carried out on runtime monitoring to ensure the reliability of autonomous legacy software. In service-based systems, however, significant complications arise, because those approaches focus on systems with no autonomous components, which makes the techniques applied to monitor legacy software inadequate for service-based systems. In this thesis we present a framework for runtime monitoring of service-based systems. We establish the necessity of introducing new types of inconsistencies, beyond the classical inconsistencies that may occur during the execution of service-based systems, and develop reasoning mechanisms to detect them at run time.
In the proposed framework, the properties to be monitored include: (i) behavioural properties of the co-ordination process of the service-based system, (ii) functional properties that express functional requirements for the individual services of a service-based system or groups of such services, (iii) assumptions regarding the behaviour of the service-based system and its constituent services and their effects on the state of the system, and (iv) quality-of-service (QoS) properties of the service-based system and its constituent services. All types of properties are expressed in a property specification language based on event calculus [Sha99]. The behavioural properties to be monitored at run-time are extracted automatically from the specification of the co-ordination process of a service-based system in BPEL [Bpe03], while the other types of properties must be specified by the providers of the system. These properties must be specified in terms of: (i) events that can be observed at run-time and correspond to either operation invocation and response messages or the assignment of values to global variables used by the co-ordination process of the system, and (ii) conditions over the state of the co-ordination process of the system and/or the individual services deployed by it. These restrictions ensure that property monitoring can be based solely on events generated by virtue of the normal operation of the system, without the need to instrument the individual services deployed by it. The property specification language used by this framework is a first-order logic language that incorporates special predicates to express assertions about time and, to this end, provides a very expressive framework for specifying properties of service-based systems, which may include temporal characteristics.
At run-time, the framework deploys an event receiver that catches the events exchanged by the different services and the co-ordination process of the system and stores them in an event database. This database is accessed by a monitor that can detect different types of property violations: (i) violations of functional and quality-of-service properties by the recorded behaviour of the service-based system, (ii) violations and potential violations of behavioural, functional and quality-of-service properties by the expected system behaviour, and (iii) unjustified and potentially unjustified actions which the system has taken by wrongly assuming that certain pre-conditions associated with those actions were satisfied at run-time. The detection of these types of violations is fully automatic and is based on an algorithm developed as a variant of algorithms for integrity constraint checking in temporal deductive databases [Ple93, Cho95]. We have implemented a prototype of the proposed monitoring framework and demonstrated its effectiveness through several case studies.
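The receive-store-check loop described above can be sketched in a few lines: events accumulate in an event store, and a monitor re-evaluates properties over the history after each arrival. This is a heavily simplified illustration, not the thesis's event-calculus language; the single hard-coded property (every invocation must be answered within 5 time units) and the operation names are hypothetical.

```python
class Monitor:
    def __init__(self):
        self.events = []        # the event database
        self.violations = []

    def receive(self, event):
        """Event receiver: store the event, then re-check all properties."""
        self.events.append(event)
        self.check()

    def check(self):
        # Example QoS-style property: every 'invoke' must be matched by a
        # 'response' for the same operation within 5 time units.
        now = self.events[-1]["t"]
        for e in self.events:
            if e["type"] != "invoke":
                continue
            answered = any(r["type"] == "response" and r["op"] == e["op"]
                           and e["t"] <= r["t"] <= e["t"] + 5
                           for r in self.events)
            if not answered and now > e["t"] + 5:
                v = ("deadline", e["op"])
                if v not in self.violations:
                    self.violations.append(v)

m = Monitor()
m.receive({"type": "invoke",   "op": "getQuote", "t": 0})
m.receive({"type": "response", "op": "getQuote", "t": 3})
m.receive({"type": "invoke",   "op": "book",     "t": 4})
m.receive({"type": "invoke",   "op": "pay",      "t": 12})  # 'book' overdue
m.violations  # [('deadline', 'book')]
```

The thesis's monitor generalises this idea: properties are first-order event-calculus formulas, and violation detection is an integrity-constraint-checking algorithm rather than a hand-written check.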
Declarative techniques for modeling and mining business processes
Organisations today face an apparent contradiction. Although they have invested heavily in information systems that automate their business processes, these systems seem to leave them less able to gain good insight into how those processes actually unfold. Poor insight into business processes threatens their flexibility and compliance. Flexibility is important because continuously changing market conditions force organisations to adapt their business processes quickly and smoothly. In addition, organisations must be able to guarantee that their operations comply with the laws, guidelines and standards imposed on them. Scandals such as the recently uncovered fraud at the French bank Société Générale demonstrate the importance of compliance and flexibility. By producing false supporting documents and circumventing fixed control points, a single trader was able to turn a risk-free arbitrage trade on price differences in futures into a risky, speculative trade in these financial derivatives. The unhedged, unauthorised positions remained hidden for a long time because of inadequate internal controls and shortcomings in IT security and access control. To prevent such fraud in the future, it is first of all necessary to gain insight into the bank's operational processes and the related control processes. In this text we discuss two approaches that can be used to increase insight into business processes: process modeling and process mining. The research aimed to develop declarative techniques for process modeling and process mining. Process modeling is the manual construction of a formal model that describes a relevant aspect of a business process, based on information largely acquired from interviews.
Process models must provide adequate information about the business processes in order to be of use in their design, implementation, execution and analysis. The challenge is to develop new process modeling languages that provide adequate information to realise this goal. Declarative process languages make the information about business concerns explicit. We characterise and motivate declarative process languages and examine a number of existing techniques. Furthermore, we introduce a generalising framework for declarative process modeling within which existing process languages can be positioned. This framework, called the EM-BrA2CE framework, stands for 'Enterprise Modeling using Business Rules, Agents, Activities, Concepts and Events'. It consists of a formal ontology and a formal execution model, and it lays the ontological foundation for the languages and techniques developed later in the dissertation. Process mining is the automatic construction of a process model from the so-called event logs of information systems. Today, many processes are recorded by information systems in event logs, which register, in chronological order, who performed which activity and when. Analysing event logs can yield an accurate picture of what actually happens within an organisation. To be useful, the mined process models must satisfy criteria such as accuracy, comprehensibility and justifiability. Existing process mining techniques focus mainly on the first criterion: accuracy. Declarative process mining techniques also target the comprehensibility and justifiability of the mined models.
Declarative process mining techniques are more comprehensible because they aim to represent process models using declarative representations. Moreover, declarative techniques increase the justifiability of the mined models, because they allow the prior knowledge, inductive bias and language bias of a learning algorithm to be configured. Inductive logic programming (ILP) is a learning technique that is inherently declarative. In the text we show how process mining can be represented as an ILP classification problem that learns the logical conditions under which an event takes place (a positive event) or does not take place (a negative event). Many event logs naturally contain no negative events indicating that a particular activity could not take place. To address this problem, we describe a technique for generating artificial negative events, called AGNEs (process discovery by Artificially Generated Negative Events). The generation of artificial negative events amounts to a configurable inductive bias. The AGNEs technique has been implemented as a mining plugin in the ProM framework. By representing process discovery as a first-order classification problem over event logs with artificial negative events, the traditional metrics for quantifying precision and recall can be applied to quantify the precision and recall of a process model with respect to an event log. In the text we propose two new metrics; these, in combination with existing metrics, were used for an extensive evaluation of the AGNEs technique for process discovery in both an experimental and a real-life setting.
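The idea of artificial negative events can be sketched very simply: at each position in a trace, any activity that never follows the same prefix anywhere in the log is recorded as a negative event. This is a naive illustration of the principle only; the actual AGNEs technique uses a configurable and considerably more refined inductive bias, and the activity names below are hypothetical.

```python
def negative_events(log):
    """For each trace position, return (prefix, activity) pairs where the
    activity never follows that prefix anywhere in the log."""
    activities = {a for trace in log for a in trace}
    observed_after = {}   # prefix -> set of activities seen next
    for trace in log:
        for i in range(len(trace)):
            observed_after.setdefault(tuple(trace[:i]), set()).add(trace[i])
    negatives = []
    for trace in log:
        for i in range(len(trace)):
            prefix = tuple(trace[:i])
            for a in sorted(activities - observed_after[prefix]):
                negatives.append((prefix, a))   # a could not occur here
    return negatives

log = [["register", "check", "pay"],
       ["register", "pay", "check"]]
negs = negative_events(log)
# No trace ever starts with 'check' or 'pay', so both are negative
# events at the empty prefix; 'register' never repeats after itself.
```

Casting these pairs as negative examples is what lets a first-order classifier, and standard precision/recall metrics, be applied to process discovery.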