
    Dynamic variability support in context-aware workflow-based systems

    Workflow-based systems are becoming increasingly complex and dynamic. Besides the large sets of process variants to be managed, process variants need to be context sensitive in order to accommodate new user requirements and intrinsic complexity. This paradigm shift forces us to defer decisions to run time, where process variants must be customized and executed based on a recognized context. However, few efforts have focused on dynamic variability for process families. This dissertation proposes an approach for variant-rich workflow-based systems that can incorporate context data while deferring process configuration to run time. Whereas early process variability approaches such as Worklets, VxBPEL, or Provop handle run-time reconfiguration, ours resolves variants at execution time and supports the multiple binding required in dynamic environments. Finally, unlike specialized reconfiguration solutions for particular workflow-based systems, our approach enables automated decision making, supporting different run-time resolution strategies that intermix constraint solving and feature models. We achieve these results through a simple extension to BPMN that adds primitives for process variability constructs. We show that this is enough to efficiently model process variability while preserving separation of concerns. We implemented our approach in the LateVa framework and evaluated it using both synthetic and real-world scenarios. LateVa achieves reasonable performance for run-time resolution, which can facilitate practical adoption in context-aware, variant-rich workflow-based systems.
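The core idea of deferring variant selection to run time can be sketched as follows. This is a minimal illustration, not LateVa's actual API: the data model (`variants` as feature sets, `constraints` as predicates over enabled features) and the function name are hypothetical.

```python
# Illustrative sketch: resolve a process variant at run time from context data.
# A variant is chosen when its required features are all enabled by the current
# context and the enabled feature set satisfies every cross-tree constraint.

def resolve_variant(variants, context, constraints):
    """Return the first variant whose required features are enabled by the
    context and whose feature set satisfies every constraint, else None."""
    for name, required in variants.items():
        enabled = {f for f in required if context.get(f, False)}
        if enabled == required and all(c(enabled) for c in constraints):
            return name
    return None

variants = {
    "express_shipping": {"priority_customer", "in_stock"},
    "standard_shipping": {"in_stock"},
}
# Cross-tree constraint: priority handling requires stock on hand.
constraints = [lambda fs: "priority_customer" not in fs or "in_stock" in fs]

context = {"priority_customer": True, "in_stock": True}
print(resolve_variant(variants, context, constraints))  # express_shipping
```

In a real feature-model setting the constraint check would be delegated to a constraint solver rather than plain predicates; the sketch only shows why resolution can happen late, once the context is known.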

    Dagstuhl News January - December 2000

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News gives a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented in a short abstract describing the contents and scientific highlights of the seminar as well as the perspectives and challenges of the research topic.

    Engineering coordination: a methodology for the coordination of planning systems

    Planning problems, such as real-world planning and scheduling problems, are complex tasks. The 'divide and conquer' strategy has been identified as an efficient approach to handling such problems: the problem is decomposed into sub-problems, each of which is then solved independently, typically in a linear sequence. This approach enables the generation of sub-optimal plans for a number of real-world problems. Today it is widely accepted and has been institutionalized, e.g., in the organizational structure of companies. However, interdependencies between the sub-problems are not sufficiently taken into account, as each sub-problem is solved sequentially and no feedback information is given. The field of coordination has been covered by a number of academic disciplines, such as distributed artificial intelligence, economics, and game theory. An important result is that no method exists that leads to optimal results for every coordination problem. Consequently, a suitable coordination mechanism has to be identified for each individual coordination problem. Up to now, there has been no process for the selection of a coordination mechanism, neither in the engineering of distributed systems nor in agent-oriented software engineering. Within the scope of this work the ECo process is presented, which addresses exactly this selection problem. The ECo process comprises the following five steps:
    • Modeling the coordination problem
    • Defining the coordination requirements
    • Selecting/designing the coordination mechanism
    • Implementation
    • Evaluation
    Each of these steps is detailed in the thesis. The modeling is necessary to enable a systematic analysis of the coordination problem. Coordination mechanisms have to respect the given situation and the context in which the coordination takes place. The requirements imposed by the context of the coordination problem are formalized as coordination requirements, and the selection process is driven by these requirements.
    Using the requirements as the basis for selecting a coordination mechanism is a central aspect of this thesis. Additionally, these requirements can be used to document design decisions. It is therefore reasonable to annotate coordination mechanisms with the coordination requirements they fulfill and fail, to ease the selection process for a given situation. For that reason, this thesis presents a new classification scheme for coordination methods that classifies existing methods according to a set of criteria identified as important for distinguishing between different coordination methods. The implementation phase of the ECo process is supported by the CoPS process and the CoPS framework, which have also been developed within this thesis. The CoPS process structures the decision making that has to be done during the implementation phase. The CoPS framework provides a set of basic features software agents need for realizing the selected coordination method. Within the CoPS process, techniques are presented for the design and implementation of conversations between agents; these go well beyond questions of message formatting, as regulated for instance by the FIPA standards, and can be applied not only in the context of coordinating planning systems but to multiagent systems in general. The ECo-CoPS approach has been successfully validated in two case studies from the logistics domain.
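The requirement-driven selection step can be illustrated with a small sketch. The mechanism names and requirement labels below are invented for the example; the thesis's actual classification criteria are richer.

```python
# Illustrative sketch of the ECo selection step: each known coordination
# mechanism is annotated with the coordination requirements it fulfills, and
# the candidates for a given problem are exactly those mechanisms whose
# annotation covers every requirement the problem imposes.

def select_mechanisms(annotated, required):
    """Return the mechanisms whose fulfilled-requirement set covers `required`."""
    return [name for name, fulfilled in annotated.items()
            if required <= fulfilled]

annotated = {
    "auction":         {"decentral", "self-interested", "scalable"},
    "central_planner": {"optimal", "global_view"},
    "negotiation":     {"decentral", "self-interested"},
}

required = {"decentral", "self-interested"}
print(select_mechanisms(annotated, required))  # ['auction', 'negotiation']
```

The annotation doubles as documentation of the design decision: a rejected mechanism's missing requirements explain why it was rejected.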

    Critical Success Factors to Improve the Game Development Process from a Developer's Perspective

    The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important choice is to consider the developer's perspective in producing good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer's dimension as a factor in software game success. It focuses mainly on an empirical investigation of the effect of key developer factors on the software game development process and, eventually, on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer factors for an enhanced game development process, and was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle the high pressure in the software game industry. The main contribution of this paper is an empirical investigation of the influence of key developer factors on the game development process.

    Extension of a behavior composition framework in the presence of failures using recovery techniques and Akka

    Abstract: Fault tolerance is an essential property to be satisfied in the composition of services, but reaching a high level of fault tolerance remains a challenge. In the area of ubiquitous computing, the composition of services is inevitable when a request cannot be carried out by a single service but only by a combination of several services. This thesis studies fault tolerance in the context of a general behavior composition framework. This approach raises, first, the problem of synthesizing controllers (or compositions) in order to coordinate a set of available services to realize a new service, the target service, and, second, the problem of exploiting the set of all compositions to make the new service fault tolerant. Although a solution has been proposed by the authors of the behavior composition framework, it is incomplete and has not been evaluated experimentally or in situ. This thesis brings two contributions to this problem. On one hand, it considers the case in which the service selected by the controller is temporarily or permanently unavailable, exploiting recovery techniques to identify a consistent state of the system from which it may progress using other services, or to leave the system in a coherent state when none of the available services allows further progress. On the other hand, it evaluates several recovery solutions, each useful in particular failure situations, using a case study implemented with Akka, a toolkit that facilitates the development of reactive, concurrent, and distributed systems.
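The recovery idea described above can be sketched in a few lines. This is a simplified illustration, not the thesis's actual implementation (which uses Akka actors); the function and service names are hypothetical.

```python
# Minimal sketch of recovery in service composition: if the service selected
# by the controller fails, restore the last consistent state (checkpoint) and
# retry with an alternative service; if every service fails, the system is
# left in the coherent checkpointed state.

def execute_with_recovery(action, services, state):
    """Try each available service in turn; on failure, fall back to the
    checkpoint before trying the next. Returns (new_state, service_used)."""
    checkpoint = dict(state)              # consistent state to recover to
    for service in services:
        try:
            return service(action, dict(checkpoint)), service.__name__
        except RuntimeError:
            continue                      # checkpoint is untouched; recover
    return checkpoint, None               # coherent state, no progress made

def flaky(action, state):
    raise RuntimeError("service unavailable")

def reliable(action, state):
    state[action] = "done"
    return state

state, used = execute_with_recovery("ship_order", [flaky, reliable], {})
print(used, state)  # reliable {'ship_order': 'done'}
```

Passing each service a copy of the checkpoint is the key design choice: a failing service can never corrupt the state that later alternatives start from.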

    ERP implementation methodologies and frameworks: a literature review

    Enterprise Resource Planning (ERP) implementation is a complex and dynamic process, one that involves a combination of technological and organizational interactions. Often an ERP implementation project is the single largest IT project an organization has ever launched, and it requires a mutual fit of system and organization. Moreover, the concept of an ERP implementation supporting business processes across many different departments is not a generic, rigid, and uniform one; it depends on a variety of factors. As a result, the issues surrounding the ERP implementation process have been a major concern in industry. ERP implementation therefore receives attention from both practitioners and scholars, and both the business and the academic literature are abundant, yet not always conclusive or coherent. However, research on ERP systems so far has focused mainly on diffusion, use, and impact issues. Less attention has been given to the methods used during the configuration and implementation of ERP systems: although they are commonly used in practice, they remain largely unexplored and undocumented in Information Systems research. The academic relevance of this research is thus its contribution to the existing body of scientific knowledge. An annotated brief literature review is conducted in order to evaluate the current state of the academic literature. The purpose is to present a systematic overview of relevant ERP implementation methodologies and frameworks, with the aim of achieving a better taxonomy of ERP implementation methodologies. This paper is useful to researchers who are interested in ERP implementation methodologies and frameworks, and its results will serve as input for a classification of the existing ERP implementation methodologies and frameworks. The paper also addresses the professional ERP community involved in the process of ERP implementation by promoting a better understanding of ERP implementation methodologies and frameworks, their variety, and their history.

    Data and Process Mining Applications on a Multi-Cell Factory Automation Testbed

    This paper presents applications of both data mining and process mining in a factory automation testbed. It concentrates mainly on the Manufacturing Execution System (MES) level of the production hierarchy. Unexpected failures might lead to vast losses in investment or irrecoverable damage. Predictive maintenance techniques, active or passive, have shown high potential for preventing such losses. Condition monitoring of target pieces of equipment, together with defined thresholds, forms the basis of the prediction. However, the monitored parameters must be independent of environmental changes; e.g., the vibration of transportation equipment such as conveyor systems varies with the workload. This work aims to propose and demonstrate an approach to identifying incipient faults of transportation systems in discrete manufacturing settings. The method correlates the energy consumption of the described devices with their workloads. At runtime, machine learning is used to classify the input energy data into two pattern descriptions. Consecutive mismatches between the output of the classifier and the workloads observed in real time indicate the possibility of an incipient failure at the device level. Currently, as a result of the high interaction between information systems and operational processes, and due to the increase in the number of embedded heterogeneous resources, information systems generate massive amounts of unstructured events. Organizations have difficulties dealing with such an unstructured and huge amount of data. Process mining, as a new research area, has shown strong capabilities to overcome such problems. It applies both process modelling and data mining techniques to extract knowledge from data by discovering models from event logs. Although process mining is recognized mostly as a business-oriented technique, complementary to Business Process Management (BPM) systems, in this paper the capabilities of process mining are exploited on a factory automation testbed. Multiple perspectives of process mining are employed on the event logs produced by deploying a Service Oriented Architecture through Web Services in a real multi-robot factory automation industrial testbed, originally used for the assembly of mobile phones.
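The fault-detection scheme described above (classify energy readings into two workload patterns, flag a run of consecutive classifier/workload mismatches) can be sketched as follows. The threshold classifier and all numeric values are made up for the example; the paper's classifier is learned from data.

```python
# Illustrative sketch: a two-pattern energy classifier plus a mismatch counter.
# A streak of consecutive disagreements between the classified energy pattern
# and the observed workload (e.g. high energy draw under a low workload)
# signals a possible incipient fault at the device level.

def classify_energy(reading, threshold=50.0):
    """Assign an energy reading to a 'high' or 'low' workload pattern."""
    return "high" if reading >= threshold else "low"

def detect_incipient_fault(readings, workloads, max_mismatches=3):
    """Flag a fault after `max_mismatches` consecutive mismatches between
    the classifier's output and the workload observed in real time."""
    streak = 0
    for reading, workload in zip(readings, workloads):
        streak = streak + 1 if classify_energy(reading) != workload else 0
        if streak >= max_mismatches:
            return True
    return False

readings  = [20.0, 22.0, 61.0, 64.0, 66.0]       # energy draw rises...
workloads = ["low", "low", "low", "low", "low"]  # ...while workload stays low
print(detect_incipient_fault(readings, workloads))  # True
```

Requiring a streak rather than a single mismatch makes the detector robust to isolated classification errors, at the cost of slightly later detection.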

    Business process modelling in ERP implementation literature review

    Business processes are the backbone of any Enterprise Resource Planning (ERP) implementation. Business process modelling (BPM) has become essential for modern, process-driven enterprises operating in vibrant business environments. As a consequence, enterprises are dealing with a substantial rate of organizational and business process change. Business process modelling enables a common understanding and analysis of the business processes, which is the first step in every ERP implementation methodology (the blueprint phase). In order to represent enterprise process models accurately, it is paramount to choose the right business process modelling technique and tool. The problem of many ERP projects being rated as unsuccessful is directly connected to the lack of use of business process models and notations during the blueprint phase. The blueprint phase is also crucial for fitting the processes planned in an organization to the processes implemented in the solution. However, business analysts and ERP implementation professionals have substantial difficulties navigating the large number of theoretical models and representational notations that have been proposed for business process modelling. As the number of available business process modelling references is huge, reviewing and classifying all modelling techniques is time consuming. Therefore, in reality the majority of ERP implementation blueprint documents include no business process modelling at all. Choosing the right model requires understanding the purpose of the analysis and being acquainted with the available process modelling techniques and tools. Since the number of references on business modelling is quite large, it is very hard to decide which modelling notation or technique to use. The main purpose of this paper is to review the business process modelling literature and describe the key process modelling techniques. The focus is on business process modelling techniques that could be used in ERP implementations, specifically during the blueprint phase of the implementation process. A detailed review of BPM theoretical models and representational notations should assist decision makers and ERP integrators in comparatively evaluating and selecting suitable modelling approaches.

    A conceptual framework for capability sourcing modeling

    Companies need to acquire the right capabilities, from the right source and the right shore, at the right cost, to improve their competitive position. Capability sourcing is an organizing process to gain access to best-in-class capabilities for all activities in a firm's value chain to ensure long-term competitive advantage. Capability sourcing modelling is a technique that helps investigate alternative sourcing solutions to facilitate strategic sourcing decision making. Our position is to apply conceptual models as intermediate artifacts: schematic descriptions of sourcing alternatives based on an organization's capabilities. The contribution of this paper is the introduction of a conceptual framework in the form of five views (to organize all perspectives) and a conceptualisation (to formulate a language) for capability sourcing modelling.