
    A Linear Logic Based Approach to Timed Petri Nets

    1.1 Relationship between Petri net and linear logic: Petri nets were first introduced by Petri in his seminal Ph.D. thesis, and both the theory and the applications of his model have flourished in concurrency theory (Reisig & Rozenberg, 1998a; Reisig & Rozenberg, 1998b).
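    The relationship referred to above is usually illustrated with the standard encoding of Petri nets into linear logic (a textbook correspondence, not a claim about the indexed paper): a marking is a tensor of atoms, one per token, and a transition is a linear implication that consumes its input tokens and produces its output tokens. For a transition t with input places p, q and output place r:

        \[ t :\; p \otimes q \multimap r \qquad\qquad p \otimes p \otimes q \;\vdash\; p \otimes r \]

    Firing t consumes one p and the q and produces r; the second p persists, mirroring the flow of tokens in the net.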

    Proceedings of the Workshop on Linear Logic and Logic Programming

    Declarative programming languages often fail to effectively address many aspects of control and resource management. Linear logic provides a framework for increasing the strength of declarative programming languages to embrace these aspects. Linear logic has been used to provide new analyses of Prolog's operational semantics, including left-to-right/depth-first search and negation-as-failure. It has also been used to design new logic programming languages for handling concurrency and for viewing program clauses as (possibly) limited resources. Such logic programming languages have proved useful in areas such as databases, object-oriented programming, theorem proving, and natural language parsing. This workshop is intended to bring together researchers involved in all aspects of relating linear logic and logic programming. The proceedings include two high-level overviews of linear logic and six contributed papers. Workshop organizers: Jean-Yves Girard (CNRS and University of Paris VII), Dale Miller (chair, University of Pennsylvania, Philadelphia), and Remo Pareschi (ECRC, Munich).
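    The resource reading of clauses mentioned above can be illustrated with a standard linear-logic example (an illustration, not taken from the proceedings). A clause without the exponential ! can be applied only once, whereas a clause under ! behaves like an ordinary, reusable program clause:

        \[ p,\; p,\; p \multimap q \;\vdash\; q \otimes p \qquad\text{but}\qquad p,\; p,\; p \multimap q \;\nvdash\; q \otimes q \]
        \[ p,\; p,\; !(p \multimap q) \;\vdash\; q \otimes q \]

    In the first case the single copy of the clause p \multimap q is used up after producing one q; under ! it may be reused as often as the available resources allow.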

    Testing Self-Adaptive Systems

    Autonomy is the most demanded yet hard-to-achieve feature of recent and future software systems. Self-driving cars, mail-delivering drones, automated guided vehicles in production sites, and housekeeping robots need to make decisions autonomously during most of their operation time. As soon as human intervention becomes necessary, the cost of ownership increases, and this must be avoided. Although the algorithms controlling autonomous systems are becoming more and more intelligent, their hardest opponent is their inflexibility. The more environmental situations such a system is confronted with, the more complexity its control will have to master. To cope with this challenge, engineers have adopted a system design that borrows feedback loops from nature. The resulting architectural principle, which they call self-adaptive systems, follows the idea of iteratively gathering sensor data, analyzing it, planning new adaptations of the system, and finally executing the plan. Often, adaptation means altering the system setup, re-wiring components, or even exchanging control algorithms to keep meeting goals and requirements in the newly arisen situation. Although self-adaptivity helps engineers to organize the vast amount of information in a self-deciding system, it remains hard to deal with the variety of contexts, which involve both environmental influences and knowledge about the system's internals. This challenge holds not only for the construction phase but also for verification and validation, including software testing. To assure sufficient quality, a system would have to be tested under an enormous and thus unmanageable number of different contextual situations and manual test cases. This thesis proposes a novel set of methods and model types that help test engineers specify precisely what they expect from a self-adaptive system under test. The formal nature of the introduced artifacts allows for automatically generating test suites or running simulations in the loop so that a qualitative verdict on the system's correctness can be gained. In addition to these conceptual contributions, the thesis describes a model-based adaptivity test environment, which test engineers can use for testing actual self-adaptive systems. The implementation includes comprehensive tooling for creating the introduced types of models, generating test cases, simulating them in the loop, automating tests, and reporting. Composing all enabling components for these tasks constitutes a reference architecture of integrated test environments for self-adaptive systems. We demonstrate the completeness and accuracy of the technical approach, together with the underlying concepts, by evaluating them in an experimental case study in which an autonomous robot interacts with human co-workers. In summary, this thesis proposes concepts for automatically and, thus, efficiently testing self-adaptive systems. The quality fostered by this novel approach is resilience: the ability of a system to maintain its promises while facing changing environments.
    Contents:
    1 Introduction
        1.1 Problem Description
        1.2 Overview of Adopted Methods
        1.3 Hypothesis and Main Contributions
        1.4 Organization of This Thesis
    I Foundations
    2 Background
        2.1 Self-adaptive Software and Autonomic Computing
            2.1.1 Common Principles and Components of SAS
            2.1.2 Concrete Implementations and Applications of SAS
        2.2 Model-based Testing
            2.2.1 Testing for Dependability
            2.2.2 The Basics of Testing
            2.2.3 Automated Test Design
        2.3 Dynamic Variability Management
            2.3.1 Software Product Lines
            2.3.2 Dynamic Software Product Lines
    3 Related Work: Existing Research on Testing Self-Adaptive Systems
        3.1 Testing Context-Aware Applications
        3.2 The SimSOTA Project
        3.3 Dynamic Variability in Complex Adaptive Systems (DiVA)
        3.4 Other Early-Stage Research
        3.5 Taxonomy of Requirements of Model-based SAS Testing
    II Methods
    4 Model-driven SAS Testing
        4.1 Problem/Solution Fit
        4.2 Example: Surveillance Drone
        4.3 Concepts and Models for Testing Self-Adaptive Systems
            4.3.1 Test Case Generation vs. Simulation in the Loop
            4.3.2 Incremental Modeling Process
            4.3.3 Basic Representation Format: Petri Nets
            4.3.4 Context Variation
            4.3.5 Modeling Adaptive Behavior
            4.3.6 Dynamic Context Change
            4.3.7 Interfacing Context from Behavioral Representation
            4.3.8 Adaptation Mode Variation
            4.3.9 Context-Dependent Reconfiguration
        4.4 Adequacy Criteria for SAS Test Models
        4.5 Discussion on the Viability of the Employed Models
        4.6 Comparison to Related Work
        4.7 Summary and Discussion
    5 Model-based Adaptivity Test Environment
        5.1 Technological Foundation
        5.2 MATE Base Components
        5.3 Metamodel Implementation
            5.3.1 Feature-based Variability Model
            5.3.2 Abstract and Concrete Syntax for Textual Notations
            5.3.3 Adaptive Petri Nets
            5.3.4 Stimulus and Reconfiguration Automata
            5.3.5 Test Suite and Report Model
        5.4 Test Generation Framework
        5.5 Test Automation Framework
        5.6 MATE Tooling and the SAS Test Process
            5.6.1 Test Modeling
            5.6.2 Test Case Generation
            5.6.3 Test Case Execution and Test Reporting
            5.6.4 Interactive Simulation Frontend
        5.7 Summary and Discussion
    III Evaluation
    6 Experimental Study: Self-Adaptive Co-Working Robots
        6.1 Robot Teaching and Co-Working with WEIR
            6.1.1 WEIR Hardware Components
            6.1.2 WEIR Software Infrastructure
            6.1.3 KUKA LBR iiwa as WEIR Manipulator
            6.1.4 Self-Adaptation Capabilities of WEIR
        6.2 Cinderella as Testable Co-Working Application
            6.2.1 Cinderella Setup and Basic Functionality
            6.2.2 Co-Working with Cinderella
        6.3 Testing Cinderella with MATE
            6.3.1 Automating Test Execution
            6.3.2 Modeling Cinderella in MATE
            6.3.3 Testing Cinderella in the Loop
        6.4 Evaluation Verdict and Summary
    7 Summary and Discussion
        7.1 Summary of Contributions
        7.2 Open Research Questions
    Bibliography
    Appendices
    Appendix: Cinderella Definitions
        1 Cinderella Adaptation Bounds
        2 Cinderella Self-adaptive Workflow
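    The feedback-loop principle described in the abstract (iteratively gather sensor data, analyze it, plan adaptations, execute the plan) is commonly structured as a MAPE-style loop. The following Python sketch is purely illustrative and is not the thesis's MATE tooling; all names, thresholds, and adaptations are invented for the example.

        # Minimal, illustrative MAPE-style feedback loop (monitor, analyze, plan, execute).
        # Real self-adaptive systems add knowledge models, asynchronous sensing,
        # and far richer planning; this only shows the shape of the loop.
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Adaptation:
            description: str
            apply: Callable[[], None]

        def monitor(read_sensor: Callable[[], float]) -> float:
            return read_sensor()                      # gather sensor data

        def analyze(value: float, threshold: float) -> bool:
            return value > threshold                  # detect a goal-violating situation

        def plan(value: float) -> List[Adaptation]:
            # plan new adaptations, e.g. reduce speed, then switch the controller
            return [Adaptation("reduce speed", lambda: print("speed reduced")),
                    Adaptation("switch to safe controller", lambda: print("controller switched"))]

        def execute(adaptations: List[Adaptation]) -> None:
            for a in adaptations:
                a.apply()                             # finally execute the plan

        def mape_loop(read_sensor: Callable[[], float], threshold: float, iterations: int) -> None:
            for _ in range(iterations):
                value = monitor(read_sensor)
                if analyze(value, threshold):
                    execute(plan(value))

        if __name__ == "__main__":
            readings = iter([0.2, 0.9, 0.4])          # fake sensor trace for the example
            mape_loop(lambda: next(readings), threshold=0.8, iterations=3)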

    Declarative techniques for modeling and mining business processes

    Organisations today face an apparent contradiction. Although they have invested heavily in information systems that automate their business processes, these investments seem to leave them less able to gain a good insight into how those processes actually unfold. Poor insight into business processes threatens their flexibility and compliance. Flexibility is important because continuously changing market conditions force organisations to adapt their business processes quickly and smoothly. In addition, organisations must be able to guarantee that their operations comply with the laws, guidelines, and standards imposed on them. Scandals such as the recently uncovered fraud at the French bank Société Générale illustrate the importance of compliance and flexibility. By producing falsified evidence and circumventing fixed control points, a single trader was able to turn a risk-free arbitrage trade on price differences in futures into a risky, speculative trade in these financial derivatives. The unhedged, unauthorised positions remained hidden for a long time because of deficient internal controls and shortcomings in IT security and access control. To prevent such fraud in the future, it is first of all necessary to gain insight into the operational processes of the bank and the related control processes. In this text we discuss two approaches that can be used to increase insight into business processes: process modelling and process mining. The research aimed to develop declarative techniques for process modelling and process mining. Process modelling is the manual construction of a formal model that describes a relevant aspect of a business process, based on information largely obtained from interviews. Process models must provide adequate information about the business processes in order to be useful for their design, implementation, execution, and analysis. The challenge is to develop new process modelling languages that provide adequate information to achieve this goal. Declarative process languages make the information about business concerns explicit. We characterise and motivate declarative process languages and examine a number of existing techniques. Furthermore, we introduce a generalising framework for declarative process modelling within which existing process languages can be positioned. This framework is called the EM-BrA2CE framework, which stands for 'Enterprise Modeling using Business Rules, Agents, Activities, Concepts and Events'. It consists of a formal ontology and a formal execution model. This framework lays the ontological foundation for the languages and techniques developed later in the dissertation. Process mining is the automatic construction of a process model from the so-called event logs of information systems. Today, many processes are recorded by information systems in event logs. Event logs record, in chronological order, who performed which activity and when. The analysis of event logs can yield an accurate picture of what actually happens within an organisation.
    To be useful, the mined process models must satisfy criteria such as accuracy, comprehensibility, and justifiability. Existing process mining techniques focus mainly on the first criterion: accuracy. Declarative process mining techniques also address the comprehensibility and justifiability of the mined models. Declarative process mining techniques are more comprehensible because they attempt to represent process models using declarative representations. Moreover, declarative techniques increase the justifiability of the mined models, because they allow the prior knowledge, inductive bias, and language bias of a learning algorithm to be configured. Inductive logic programming (ILP) is a learning technique that is inherently declarative. In the text we show how process mining can be formulated as an ILP classification problem that learns the logical conditions under which an event occurs (a positive event) or does not occur (a negative event). Many event logs naturally contain no negative events indicating that a particular activity could not take place. To address this problem, we describe a technique for generating artificial negative events, called AGNEs (process discovery by Artificially Generated Negative Events). The generation of artificial negative events amounts to a configurable inductive bias. The AGNEs technique has been implemented as a mining plugin in the ProM framework. By formulating process discovery as a first-order classification problem on event logs with artificial negative events, the traditional metrics for quantifying precision and recall can be applied to quantify the precision and recall of a process model with respect to an event log. In the text we propose two new metrics. These new metrics, in combination with existing metrics, were used for an extensive evaluation of the AGNEs process discovery technique in both an experimental and a real-world setting.
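    The idea of artificial negative events can be illustrated with a small Python sketch. This is a deliberate simplification based on exact trace prefixes, not the actual AGNEs algorithm, which uses a configurable, first-order notion of similar behaviour; all names in the code are invented for the example.

        # Illustrative sketch of "artificial negative events": for every position in a trace,
        # record as negative any activity that never follows the same prefix anywhere in the log.
        from typing import Dict, List, Set, Tuple

        def negative_events(log: List[List[str]]) -> Dict[Tuple[str, ...], Set[str]]:
            alphabet = {a for trace in log for a in trace}
            observed: Dict[Tuple[str, ...], Set[str]] = {}     # prefix -> activities seen next
            for trace in log:
                for i, activity in enumerate(trace):
                    observed.setdefault(tuple(trace[:i]), set()).add(activity)
            # a negative event is an activity that was never observed after a given prefix
            return {prefix: alphabet - nexts for prefix, nexts in observed.items()}

        if __name__ == "__main__":
            log = [["register", "check", "pay"], ["register", "pay"]]
            for prefix, negatives in negative_events(log).items():
                print(prefix, "->", sorted(negatives))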

    Proceedings of The Multi-Agent Logics, Languages, and Organisations Federated Workshops (MALLOW 2010)

    http://ceur-ws.org/Vol-627/allproceedings.pdf
    MALLOW-2010 is the third edition of a series initiated in 2007 in Durham and continued in 2009 in Turin. The objective, as initially stated, is to "provide a venue where: the cost of participation was minimum; participants were able to attend various workshops, so fostering collaboration and cross-fertilization; there was a friendly atmosphere and plenty of time for networking, by maximizing the time participants spent together".

    What's next? Operational support for business process execution

    In the last decade flexibility has become increasingly important in the area of business process management. Information systems that support the execution of a process are required to work in a dynamic environment that imposes changing demands on process execution. In academia and industry a variety of paradigms and implementations have been developed to support flexibility. While these approaches address the industry's demand for flexibility, they also confront the user with many choices between different alternatives. As a consequence, methods to support users in selecting the best alternative during execution have become essential. In this thesis we introduce a formal framework for providing support to users based on historical evidence available in the execution log of the process. This thesis focuses on support by means of (1) recommendations that provide the user an ordered list of execution alternatives based on estimated utilities and (2) predictions that provide the user with general statistics for each execution alternative. Typically, estimations are not an average over all observations, but are based on observations for "similar" situations. The main question is what similarity means in the context of business process execution. We introduce abstractions on execution traces to capture similarity between execution traces in the log. A trace abstraction considers some trace characteristics rather than the exact trace. Traces that have identical abstraction values are said to be similar. The challenge is to determine those abstractions (characteristics) that are good predictors for the parameter to be estimated in the recommendation or prediction. We analyse the dependency between the values of an abstraction and the mean of the parameter to be estimated by means of regression analysis. With regression we obtain a set of abstractions that explain the parameter to be estimated. Dependencies not only play a role in providing predictions and recommendations to instances at run-time, but are also essential for simulating the effect of changes in the environment on the processes, both locally and globally. We use stochastic simulation models to simulate the effect of changes in the environment, in particular changed probability distributions caused by recommendations. The novelty of these models is that they include dependencies between abstraction values and simulation parameters, which are estimated from log data. We demonstrate that these models give better approximations of reality than traditional models. A framework for offering operational support has been implemented in the context of the process mining framework ProM.
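    The notion of a trace abstraction can be made concrete with a small sketch (illustrative only, and not the thesis's framework or its ProM implementation). Here the abstraction of a partial trace is simply the set of activities executed so far; historical prefixes with the same abstraction are treated as similar, and candidate next activities are ranked by the average remaining time observed after them.

        # Illustrative recommendation based on a simple "set" trace abstraction.
        from collections import defaultdict
        from statistics import mean
        from typing import Dict, List, Tuple

        # hypothetical history: each entry is (trace, remaining_time_after_each_position)
        History = List[Tuple[List[str], List[float]]]

        def abstraction(prefix: List[str]) -> frozenset:
            return frozenset(prefix)                      # "set" abstraction of a partial trace

        def recommend(history: History, partial: List[str]) -> List[Tuple[str, float]]:
            remaining: Dict[str, List[float]] = defaultdict(list)
            for trace, rem in history:
                for i, activity in enumerate(trace):
                    if abstraction(trace[:i]) == abstraction(partial):
                        remaining[activity].append(rem[i])    # time still needed if `activity` is done next
            ranked = [(act, mean(times)) for act, times in remaining.items()]
            return sorted(ranked, key=lambda x: x[1])         # lower expected remaining time first

        if __name__ == "__main__":
            history = [(["a", "b", "c"], [10.0, 6.0, 1.0]),
                       (["a", "c", "b"], [12.0, 9.0, 2.0])]
            print(recommend(history, ["a"]))                  # rank "b" vs. "c" after observing "a"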

    Reversible Computation: Extending Horizons of Computing

    This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems, as well as revolutionary reversible logic gates and circuits, but these can only be realized and have a lasting effect if firm conceptual and theoretical foundations are established first.
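    The forwards-and-backwards execution model can be illustrated with a toy sketch (not taken from the surveyed work): a program built from invertible steps can be undone by applying the inverse of each step in reverse order.

        # Toy illustration of reversible execution: every step is an invertible pair
        # (forward, inverse), so a run can be played backwards by undoing steps in reverse order.
        from typing import Callable, List, Tuple

        Step = Tuple[Callable[[int], int], Callable[[int], int]]   # (forward, inverse)

        def run_forward(x: int, steps: List[Step]) -> int:
            for do, _ in steps:
                x = do(x)
            return x

        def run_backward(x: int, steps: List[Step]) -> int:
            for _, undo in reversed(steps):
                x = undo(x)
            return x

        if __name__ == "__main__":
            program: List[Step] = [
                (lambda x: x + 3, lambda x: x - 3),            # addition is invertible
                (lambda x: x ^ 0b1010, lambda x: x ^ 0b1010),  # XOR with a constant is its own inverse
            ]
            y = run_forward(5, program)
            assert run_backward(y, program) == 5               # running in reverse restores the input
            print(y)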

    A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures

    This deliverable is a survey of the IT techniques that are relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Even though the deliverable's authors have tried to avoid overly technical language and to explain every concept referred to, the deliverable may still seem rather technical to readers who are not yet familiar with the techniques it describes.