32 research outputs found

    Multi-Agent Approach to Modeling and Implementing Fault-Tolerance in Reactive Autonomic Systems

    Recently, autonomic computing has been proposed as a promising solution to software complexity in the IT industry. As an autonomic approach, the Reactive Autonomic Systems Framework (RASF) proposes a formal model based on mathematical category theory, which addresses the self-* properties of reactive autonomic systems at a more abstract level. This thesis addresses the specification and implementation of reactive autonomic systems (RAS) through a multi-agent approach, with emphasis on the fault-tolerance property of RAS. Furthermore, it proposes a model-driven approach that transforms the RAS model into agent templates of a multi-agent model using Extensible Stylesheet Language Transformations (XSLT). The multi-agent approach in this research is implemented with Jadex, a high-level Java-based agent programming language. The intelligent agents are created in Jadex according to the Belief-Desire-Intention (BDI) agent architecture. The approach is illustrated on a case study.
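
    The thesis implements its agents in Jadex; as a language-neutral illustration of the BDI structure the abstract describes (beliefs, goals, plans, and the deliberation between them), the following is a minimal Scala sketch under assumptions of my own: the FaultMonitorAgent name, the belief/goal/plan types and the recovery steps are hypothetical and are not the Jadex API.

```scala
// Minimal BDI-style agent sketch (hypothetical types, not the Jadex API).
// Beliefs describe observed component health; a goal is raised when a fault
// is believed to exist; a plan is the concrete recovery action for that goal.
object BdiSketch {

  case class Belief(component: String, healthy: Boolean)
  case class Goal(name: String, component: String)
  case class Plan(goal: Goal, steps: List[String])

  class FaultMonitorAgent {
    private var beliefs: Map[String, Belief] = Map.empty

    // Belief revision: incorporate a new observation.
    def perceive(b: Belief): Unit = beliefs += (b.component -> b)

    // Deliberation: derive goals from the current beliefs.
    def deliberate(): List[Goal] =
      beliefs.values.collect { case Belief(c, false) => Goal("restore-service", c) }.toList

    // Means-ends reasoning: select a plan for a goal.
    def plan(goal: Goal): Plan =
      Plan(goal, List(s"isolate ${goal.component}", s"restart ${goal.component}", "verify heartbeat"))
  }

  def main(args: Array[String]): Unit = {
    val agent = new FaultMonitorAgent
    agent.perceive(Belief("sensor-node-3", healthy = false))
    agent.deliberate().map(agent.plan).foreach(println)
  }
}
```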

    Towards a Formal Reactive Autonomic Systems Framework using Category Theory

    Software complexity is the main obstacle to further progress in the IT industry, as the difficulty of managing complex and massive computing systems goes well beyond IT administrators’ capabilities. One of the remaining options is autonomic computing, which addresses complexity by using technology to manage technology, hiding and removing low-level complexities from end users. Real-time reactive systems are among the most complex systems, and they have become increasingly heterogeneous and intelligent. We therefore want to add autonomic features to real-time reactive systems by building a formal framework, the Reactive Autonomic Systems Framework (RASF), which supports the specification, modeling and development of Reactive Autonomic Systems (RAS). With autonomic behavior, real-time reactive systems become more self-managing and more adaptive to their environment. Formal methods are proven approaches for ensuring the correct operation of complex interacting systems. However, many current formal approaches lack appropriate mechanisms to specify RAS and do not adequately address the verification of self-management behavior, one of the most important features of RAS. Managing evolving specifications and analyzing changes require a specification structure that can isolate those changes in a small number of components and analyze the impact of a change on interconnected components. Category theory has been proposed as a framework offering that structure; it has a rich body of theory for reasoning about objects and their relations. Furthermore, category theory adopts a correct-by-construction approach by which components can be specified, proved and composed in a way that preserves their properties. In the multi-agent community, the agent-based approach is considered a natural way to model and implement autonomic systems, as the abilities of an autonomous agent map readily to the self-management behaviors of autonomic systems. Thus, many ideas from the Multi-Agent Systems (MAS) community can be adapted to implement autonomic systems, such as self-management behavior, automatic group formation, interfacing and evolution. Therefore, to achieve our research goal, we need to i) build an architecture and a corresponding communication mechanism for modeling both the reactive and the autonomic behavior of RAS, ii) formally specify this architecture, communication and behavior using category theory, iii) design and implement the architecture, communication and behavior of the RAS model with the MAS approach, and iv) illustrate our RASF methodology and approach with case studies.
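
    The abstract relies on category theory's correct-by-construction composition of specifications. As a generic illustration of what such structuring typically looks like (a sketch, not a formula taken from the thesis), component specifications can be treated as objects, specification morphisms as property-preserving translations, and a composed system as the colimit (here, a pushout) of a diagram sharing a common interface specification:

    \[
    S_1 \xleftarrow{\ \sigma_1\ } S_0 \xrightarrow{\ \sigma_2\ } S_2,
    \qquad
    S \;=\; S_1 +_{S_0} S_2 \;=\; \operatorname{colim}\bigl(S_1 \leftarrow S_0 \rightarrow S_2\bigr),
    \]

    where each morphism \(\sigma_i\) embeds the shared interface \(S_0\) into a component specification; in such frameworks, sentences established over \(S_1\) or \(S_2\) translate along the canonical morphisms into the composite \(S\), which is what makes the composition property-preserving.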

    Modeling Multi-Agent Systems with Category Theory

    The rapidly growing complexity of integrating and monitoring computing systems is beyond the capabilities of even the most expert systems and software developers. The solution is that systems must learn to monitor their own behavior and conform to their requirements, a vision referred to as Autonomic Computing. The Reactive Autonomic Systems Framework (RASF) is introduced for real-time reactive systems that possess autonomic, self-managing properties and are adaptive to their environments. The goal of this thesis is to model Multi-Agent Systems (MAS) with Category Theory (CAT). MAS is introduced as the realization of reactive autonomic systems, and Jadex is used as a representative MAS approach. The thesis adopts the Belief-Desire-Intention (BDI) agent architecture, models the Multi-Agent System as a whole, zooms into the individual intelligent agent, analyzes the relationships among agent plans, goals and beliefs, and provides a fully formal CAT representation of the MAS structure. Furthermore, the thesis proposes a formalization of the fault-tolerance property of MAS using CAT.
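
    As a sketch of what a categorical account of a single BDI agent might look like (an illustrative construction of mine, not the thesis's actual model), take the agent's beliefs, goals and plans as objects of a small category and let morphisms record that a belief triggers a goal and that a goal is achieved by a plan; composition then expresses that a belief ultimately leads to an executable plan:

    \[
    b \xrightarrow{\ \mathrm{triggers}\ } g \xrightarrow{\ \mathrm{achievedBy}\ } p
    \quad\Longrightarrow\quad
    b \xrightarrow{\ \mathrm{achievedBy}\,\circ\,\mathrm{triggers}\ } p .
    \]

    Fault tolerance might then be phrased as the existence of an alternative composite \(b \to p'\) into a redundant plan whenever the original plan becomes unavailable.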

    Design and Implementation of Resilient Systems through a Component-Based Approach

    Evolution during service life is mandatory, particularly for long-lived systems. Dependable systems, which continuously deliver trustworthy services, must evolve to accommodate changes, e.g., new fault tolerance requirements or variations in available resources. The addition of this evolutionary dimension to dependability leads to the notion of resilient computing. Among the various aspects of resilience, we focus on adaptivity. Dependability relies on fault-tolerant computing at runtime, applications being augmented with fault tolerance mechanisms (FTMs). As such, on-line adaptation of FTMs is a key challenge towards resilience. In related work, on-line adaptation of FTMs is most often performed in a preprogrammed manner or consists in tuning some parameters; besides, FTMs are replaced monolithically, so all the envisaged FTMs must be known at design time and deployed from the beginning. However, dynamics occurs along multiple dimensions, and developing a system for the worst-case scenario is impossible. Based on runtime observations, new FTMs can be developed off-line but integrated on-line. We denote this ability as agile adaptation, as opposed to preprogrammed adaptation. In this thesis, we present an approach for developing flexible fault-tolerant systems in which FTMs can be adapted at runtime in an agile manner, through fine-grained modifications that minimize the impact on the initial architecture. We first propose a classification of a set of existing FTMs based on criteria such as fault model, application characteristics and necessary resources. Next, we analyze these FTMs and extract a generic execution scheme that pinpoints their common parts and their variable features. Then, we demonstrate the use of state-of-the-art tools and concepts from software engineering, such as component-based software engineering and reflective component-based middleware, to develop a library of fine-grained adaptive FTMs. We evaluate the agility of the approach and illustrate its usability through two examples of integration of the library: first, in a design-driven development process for applications in pervasive computing and, second, in a toolkit for developing applications for wireless sensor networks (WSNs).
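
    The abstract describes a generic execution scheme whose common parts are shared by all FTMs and whose variable features are the points where individual mechanisms differ. The Scala sketch below illustrates that idea under assumptions of my own (the trait name, the before/proceed/after split and the two example mechanisms are illustrative, not the thesis's actual design):

```scala
// Illustrative generic execution scheme for fault tolerance mechanisms (FTMs):
// the request-handling skeleton is common, while the hooks are variation points
// that each concrete FTM fills in differently.
object FtmSketch {

  trait GenericFtm {
    // Variation points.
    def before(request: String): Unit    // e.g., checkpoint, log, forward to a replica
    def proceed(request: String): String // the actual service call
    def after(reply: String): String     // e.g., vote, acknowledge, synchronize a replica

    // Common part, identical for every mechanism.
    final def handle(request: String): String = {
      before(request)
      after(proceed(request))
    }
  }

  class PrimaryBackup(service: String => String) extends GenericFtm {
    private var checkpoint: Option[String] = None
    def before(request: String): Unit = checkpoint = Some(request) // capture state to ship to the backup
    def proceed(request: String): String = service(request)
    def after(reply: String): String = reply
  }

  class TimeRedundancy(service: String => String) extends GenericFtm {
    def before(request: String): Unit = ()
    def proceed(request: String): String = {
      val first  = service(request)
      val second = service(request)                 // execute twice to detect transient faults
      if (first == second) first else sys.error("transient fault detected")
    }
    def after(reply: String): String = reply
  }

  def main(args: Array[String]): Unit = {
    val service = (req: String) => s"reply-to-$req"
    println(new PrimaryBackup(service).handle("read"))
    println(new TimeRedundancy(service).handle("read"))
  }
}
```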

    How To Touch a Running System

    The increasing importance of distributed and decentralized software architectures draws more and more attention to adaptive software. Obtaining adaptiveness, however, is a difficult task, as the software design needs to foresee and cope with a variety of situations. Using reconfiguration of components facilitates this task, as the adaptivity is handled at the architecture level instead of directly in the code. This results in a separation of concerns: the appropriate reconfiguration can be devised on a coarse level, while the implementation of the components can remain largely unaware of reconfiguration scenarios. We study reconfiguration in component frameworks based on formal theory. We first discuss programming with components, exemplified with the development of the cmc model checker. This highly efficient model checker is built from C++ components and serves as an example of component-based software development practice in general, while also providing insights into the principles of adaptivity. However, its component model focuses on high performance and is not geared towards using the structuring principle of components for controlled reconfiguration. We thus complement this highly optimized model with a message-passing-based component model that takes reconfigurability as its central principle. Supporting reconfiguration in a framework means relieving the programmer, as far as possible, of having to deal with its peculiarities. We use the formal description of the component model to provide a reconfiguration algorithm that retains as much flexibility as possible while avoiding most problems that arise due to concurrency. This algorithm is embedded in a general four-stage adaptivity model inspired by physical control loops. The reconfiguration is devised to work with stateful components, retaining their data and unprocessed messages. Reconfiguration plans, which are given a formal semantics, form the input of the reconfiguration algorithm. We show that the algorithm achieves perceived atomicity of the reconfiguration process for an important class of plans, i.e., the whole process of reconfiguration is perceived as one atomic step, while minimizing the blocking of components. We illustrate the applicability of our approach to reconfiguration with several examples, such as fault tolerance and automated resource control.
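
    The abstract mentions reconfiguration plans with a formal semantics that the algorithm applies so that the whole reconfiguration is perceived as one atomic step while blocking as few components as possible. The sketch below illustrates that idea with assumed, simplified plan primitives (add, remove, rewire); it is not the thesis's plan language or algorithm:

```scala
// Simplified reconfiguration-plan sketch: a plan is a sequence of primitive steps
// applied to an architecture (components plus connections). Only the components
// actually touched by the plan are blocked while it is applied, and the resulting
// architecture is published in a single swap, giving perceived atomicity.
object ReconfigSketch {

  sealed trait Step
  case class Add(component: String)           extends Step
  case class Remove(component: String)        extends Step
  case class Rewire(from: String, to: String) extends Step

  case class Architecture(components: Set[String], wires: Set[(String, String)])

  def affected(plan: List[Step]): Set[String] = plan.flatMap {
    case Add(c)       => Set(c)
    case Remove(c)    => Set(c)
    case Rewire(f, t) => Set(f, t)
  }.toSet

  def applyPlan(arch: Architecture, plan: List[Step]): Architecture = {
    val blocked = affected(plan) // only these components stop processing messages
    val next = plan.foldLeft(arch) {
      case (a, Add(c))       => a.copy(components = a.components + c)
      case (a, Remove(c))    => a.copy(components = a.components - c,
                                       wires = a.wires.filterNot { case (f, t) => f == c || t == c })
      case (a, Rewire(f, t)) => a.copy(wires = a.wires + (f -> t))
    }
    println(s"blocked during reconfiguration: $blocked")
    next // published as one atomic swap
  }

  def main(args: Array[String]): Unit = {
    val arch = Architecture(Set("a", "b", "c"), Set("a" -> "b", "b" -> "c"))
    println(applyPlan(arch, List(Remove("c"), Add("c2"), Rewire("b", "c2"))))
  }
}
```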

    Robusta (an approach for building dynamic applications)

    Current areas of research, such as ubiquitous and cloud computing, consider execution environments to be in a constant state of change. Dynamic applications, where components can be added, removed and substituted during execution, allow software to adapt and adjust to changing environments and to accommodate evolving features. Unfortunately, dynamic applications raise design and development issues that have yet to be fully addressed. In this dissertation we show that dynamism is a crosscutting concern that breaks many of the assumptions that developers are otherwise allowed to make in classic applications. Dynamism deeply impacts software design and development. If not handled correctly, dynamism can silently corrupt the application. Furthermore, writing dynamic applications is complex and error-prone, and given the level of complexity and the impact dynamism has on the development process, software cannot become dynamic without (extensive) modification, and dynamism cannot be entirely transparent (although much of it may often be externalized or automated). This work focuses on giving the software architect control over the level, the nature and the granularity of dynamism that is required in dynamic applications. This allows architects and developers to choose where the efforts of programming dynamic components are best spent, avoiding the cost and complexity of making all components dynamic. The idea is to allow architects to determine the balance between the efforts spent and the level of dynamism required for the application's needs. At design time we perform an impact analysis using the architect's requirements for dynamism. This serves to identify components that can be corrupted by dynamism and, at the architect's discretion, to render selected components resilient to dynamism. The application becomes a well-defined mix of dynamic areas, where components are expected to change at runtime, and static areas that are protected from dynamism and where programming is simpler and less restrictive. At runtime, our framework ensures that the application remains consistent even after unexpected dynamic events by computing and removing potentially corrupt components. The framework attempts to recover quickly from dynamism and to minimize the impact of dynamism on the application. Our work builds on recent software engineering and middleware technologies, namely OSGi, iPOJO and APAM, which provide basic mechanisms for handling dynamism, such as dependency injection, late binding, service-availability notifications, deployment, lifecycle and dependency management. Our approach, implemented in the Robusta prototype, extends and complements these technologies by providing design- and development-time support and by enforcing application execution consistency in the face of dynamism.
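
    The abstract states that the framework keeps the application consistent after unexpected dynamic events by computing and removing potentially corrupt components. One plausible reading is a reachability computation over the dependency graph, sketched below with assumed data structures (this is not Robusta's actual implementation):

```scala
// Sketch: when a component disappears at runtime, every component that transitively
// depends on it and is not marked resilient to dynamism is treated as potentially
// corrupt and removed as well. (Assumes an acyclic dependency graph.)
object CorruptionSketch {

  // dependsOn(c) = the set of components that c requires.
  def potentiallyCorrupt(removed: String,
                         dependsOn: Map[String, Set[String]],
                         resilient: Set[String]): Set[String] = {
    val direct = dependsOn.collect {
      case (c, deps) if deps.contains(removed) && !resilient.contains(c) => c
    }.toSet
    direct ++ direct.flatMap(potentiallyCorrupt(_, dependsOn, resilient))
  }

  def main(args: Array[String]): Unit = {
    val deps = Map("gui" -> Set("logic"), "logic" -> Set("store"), "logger" -> Set("store"))
    // 'logger' is written to tolerate the loss of its dependency, so it survives.
    println(potentiallyCorrupt("store", deps, resilient = Set("logger"))) // Set(logic, gui)
  }
}
```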

    Engineering Self-Adaptive Collective Processes for Cyber-Physical Ecosystems

    The pervasiveness of computing and networking is creating significant opportunities for building valuable socio-technical systems. However, the scale, density, heterogeneity, interdependence, and QoS constraints of many target systems pose severe operational and engineering challenges. Beyond individual smart devices, cyber-physical collectives can provide services or solve complex problems by leveraging a “system effect” while coordinating and adapting to context or environment change. Understanding and building systems exhibiting collective intelligence and autonomic capabilities represent a prominent research goal, partly covered, e.g., by the field of collective adaptive systems. Therefore, drawing inspiration from and building on the long-standing research activity on coordination, multi-agent systems, autonomic/self-* systems, spatial computing, and especially on the recent aggregate computing paradigm, this thesis investigates concepts, methods, and tools for the engineering of possibly large-scale, heterogeneous ensembles of situated components that should be able to operate, adapt and self-organise in a decentralised fashion. The primary contribution of this thesis consists of four main parts. First, we define and implement an aggregate programming language (ScaFi), internal to the mainstream Scala programming language, for describing collective adaptive behaviour, based on field calculi. Second, we conceive of a “dynamic collective computation” abstraction, also called aggregate process, formalised by an extension to the field calculus, and implemented in ScaFi. Third, we characterise and provide a proof-of-concept implementation of a middleware for aggregate computing that enables the development of aggregate systems according to multiple architectural styles. Fourth, we apply and evaluate aggregate computing techniques in edge computing scenarios, and characterise a design pattern, called Self-organising Coordination Regions (SCR), that supports adjustable, decentralised decision-making and activity in dynamic environments.
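
    The Self-organising Coordination Regions (SCR) pattern named in the abstract combines sparse leader election, a gradient that assigns every device to its nearest leader (region formation), and information flows within each region. The self-contained Scala sketch below reproduces only the region-formation part on a static device graph; it deliberately does not use the ScaFi API, and the graph, identifiers and hop-count logic are my own simplifications (leader election and in-region collection are omitted for brevity):

```scala
// Sketch of SCR region formation: given elected leaders, each device joins the
// leader reachable in the fewest hops (a multi-source breadth-first search, i.e.
// a discrete stand-in for the hop-count gradient used in aggregate computing).
object ScrSketch {

  type Node = Int

  def regions(neighbours: Map[Node, Set[Node]], leaders: Set[Node]): Map[Node, Node] = {
    var assigned = leaders.map(l => l -> l).toMap // node -> leader of its region
    var frontier = leaders
    while (frontier.nonEmpty) {
      val next = for {
        n <- frontier
        m <- neighbours.getOrElse(n, Set.empty[Node]) if !assigned.contains(m)
      } yield m -> assigned(n)
      assigned ++= next
      frontier = next.map(_._1)
    }
    assigned
  }

  def main(args: Array[String]): Unit = {
    val g = Map(1 -> Set(2), 2 -> Set(1, 3), 3 -> Set(2, 4), 4 -> Set(3))
    val byLeader = regions(g, leaders = Set(1, 4)).groupBy(_._2).view.mapValues(_.keySet).toMap
    println(byLeader) // e.g. Map(1 -> Set(1, 2), 4 -> Set(3, 4))
  }
}
```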

    Self-Adaptive Performance Monitoring for Component-Based Software Systems

    Effective monitoring of a software system’s runtime behavior is necessary to evaluate compliance with performance objectives. This thesis emerged in the context of the Kieker framework for application performance monitoring. The contribution includes a self-adaptive performance monitoring approach that allows dynamic adaptation of the monitoring coverage at runtime. The monitoring data includes performance measures such as throughput and response-time statistics, the utilization of system resources, as well as the inter- and intra-component control flow. Based on this data, performance anomaly scores are computed using time-series analysis and clustering methods. The self-adaptive performance monitoring approach reduces the business-critical failure diagnosis time, as it saves time-consuming manual debugging activities. The approach and its underlying anomaly scores are extensively evaluated in lab experiments.
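
    The abstract says anomaly scores are computed from the monitored measurements using time-series analysis. As a greatly simplified illustration (my own toy scoring, not Kieker's actual method), each new response-time observation can be scored against a moving-average forecast of a recent window:

```scala
// Toy anomaly score: forecast the next response time as the mean of a sliding
// window and score each observation by its relative deviation from the forecast,
// normalised into [0, 1).
object AnomalyScoreSketch {

  def scores(responseTimesMs: Seq[Double], window: Int = 5): Seq[Double] =
    responseTimesMs.indices.map { i =>
      val history = responseTimesMs.slice(math.max(0, i - window), i)
      if (history.isEmpty) 0.0
      else {
        val forecast  = history.sum / history.size
        val deviation = math.abs(responseTimesMs(i) - forecast)
        deviation / (deviation + forecast) // 0 = as predicted, approaching 1 = far off
      }
    }

  def main(args: Array[String]): Unit = {
    val rt = Seq(10.0, 11.0, 9.0, 10.0, 55.0, 10.0) // spike at index 4
    println(scores(rt).map(s => f"$s%.2f").mkString(", "))
  }
}
```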

    n-Dimensional Prediction of RT-SOA QoS

    Service-Orientation has long provided an effective mechanism to integrate heterogeneous systems in a loosely coupled fashion as services. However, with the emergence of the Internet of Things (IoT), there is a growing need to facilitate the integration of real-time services executing in non-controlled, non-real-time environments such as the Cloud. As such, there has been a drive in recent years to develop mechanisms for deriving reliable Quality of Service (QoS) definitions based on the observed performance of services, specifically in order to facilitate a Real-Time Quality of Service (RT-QoS) definition. Because the overriding challenge in achieving this is the lack of control over the hosting Cloud system, many approaches either resort to alternative methods that ignore the underlying infrastructure or assume some level of control over interference, such as the provision of a Real-Time Operating System (RTOS). There is therefore a major research challenge to find methods that facilitate RT-QoS in environments that do not provide the level of control over interference traditionally required for real-time systems. This thesis presents a comprehensive review and analysis of existing QoS and RT-QoS techniques. The techniques are classified into seven categories, and the most significant approaches are tested for their ability to provide QoS definitions that are not susceptible to dynamically changing levels of interference. This work then proposes a new n-dimensional framework that models the relationship between resource utilisation, resource availability on host servers, and the response times of services. The framework is combined with real-time schedulability tests to dynamically provide guarantees on response times for ranges of resource availability and to identify when those conditions no longer hold. The proposed framework is compared against the existing techniques using simulation and then evaluated in the domain of Cloud computing, where the approach demonstrates an average over-allocation of 12% and raises alerts for 94% of QoS violations within the first 14% of execution progress.
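
    The abstract describes modelling the relationship between resource availability and service response times and combining it with schedulability-style tests, so that guarantees hold over a range of availability and an alert is raised once that range is left. The sketch below illustrates the idea with an assumed linear model; the coefficients and the deadline check are illustrative and are not the thesis's framework:

```scala
// Toy model: predicted response time grows as the available CPU share shrinks.
// A response-time guarantee is given only for the availability range in which the
// prediction stays within the deadline; leaving that range triggers an alert.
object QosPredictionSketch {

  // Assumed model: responseTimeMs = baseMs / cpuAvailability, with availability in (0, 1].
  def predictMs(baseMs: Double, cpuAvailability: Double): Double = baseMs / cpuAvailability

  // Smallest availability for which the deadline is still met under the model above.
  def minGuaranteedAvailability(baseMs: Double, deadlineMs: Double): Double = baseMs / deadlineMs

  def main(args: Array[String]): Unit = {
    val baseMs     = 20.0 // response time measured on an unloaded host
    val deadlineMs = 80.0
    val threshold  = minGuaranteedAvailability(baseMs, deadlineMs)
    println(f"guaranteed while CPU availability >= $threshold%.2f")

    for (a <- Seq(0.9, 0.6, 0.3, 0.2)) {
      val rt = predictMs(baseMs, a)
      val ok = a >= threshold
      println(f"availability $a%.2f -> predicted $rt%.0f ms, within deadline: $ok")
    }
  }
}
```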