
    A deliberative model for self-adaptation middleware using architectural dependency

    A crucial prerequisite to externalized adaptation is an understanding of how components are interconnected, or more particularly how and why they depend on one another. Such dependencies can be used to provide an architectural model, which serves as a reference point for externalized adaptation. This paper describes how dependencies are used as a basis for a system's self-understanding and subsequent architectural reconfigurations. The approach is based on the combination of instrumentation services, a dependency meta-model and a system controller. In particular, the latter uses self-healing repair rules (or conflict resolution strategies), based on an extensible beliefs, desires and intentions (EBDI) model, to reflect reconfiguration changes back to a target application under examination.
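The controller described above can be pictured as a rule-based loop over monitored beliefs. The following is a minimal sketch of that idea only; the belief names, rules and actions are invented for illustration and do not reproduce the paper's actual EBDI model or middleware API.

```python
# Hypothetical sketch: repair rules fire over beliefs gathered by
# instrumentation services, yielding reconfiguration actions.

def apply_repair_rules(beliefs, rules):
    """Return the repair actions whose belief conditions currently hold."""
    return [action for condition, action in rules if condition(beliefs)]

# Beliefs about the target application (illustrative values).
beliefs = {"db.reachable": False, "cache.latency_ms": 12}

# Repair rules: (condition over beliefs) -> reconfiguration action.
rules = [
    (lambda b: not b["db.reachable"], "reconnect-db-component"),
    (lambda b: b["cache.latency_ms"] > 100, "restart-cache-component"),
]

print(apply_repair_rules(beliefs, rules))  # ['reconnect-db-component']
```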

    Dependencies Management in Dynamically Updateable Component-Based System.

    Component-based software systems achieve their functionality through interaction between their components. Analyzing the dependencies between a system's components is an essential task in system reconfiguration. This paper discusses the significance of dependency analysis when updating component-based systems dynamically. It presents a service-based matrix model and a nested graph as approaches to capturing components' dependencies, and discusses using dependency analysis for safe dynamic updating in component-based software systems.
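One common use of captured dependencies during dynamic updating is to order updates so that a provider is never updated after a component that consumes its services. A minimal sketch under that assumption (the component names and graph encoding are invented, not the paper's matrix model):

```python
# Hypothetical sketch: derive a safe update order from a component
# dependency graph via depth-first topological sorting.

def safe_update_order(deps):
    """deps maps component -> set of components it depends on.
    Returns an order in which every provider precedes its consumers."""
    order, visited = [], set()

    def visit(component):
        if component in visited:
            return
        visited.add(component)
        for provider in deps.get(component, ()):
            visit(provider)        # update providers first
        order.append(component)

    for component in deps:
        visit(component)
    return order

deps = {"ui": {"auth", "catalog"}, "catalog": {"db"}, "auth": {"db"}, "db": set()}
print(safe_update_order(deps))  # 'db' comes before 'auth' and 'catalog', 'ui' last
```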

    Towards Semantic KPI Measurement

    Linked Data (LD) represents a powerful mechanism for integrating information across disparate sources. The respective technology can also be exploited to perform inferencing for deriving added-value knowledge. As such, LD technology can greatly assist in performing various analysis tasks over information related to business process execution. In the context of Business Process as a Service (BPaaS), the first real challenge is to collect and link information originating from different systems by following a certain structure. To this end, this paper proposes two main ontologies that serve this purpose: a KPI ontology and a Dependency ontology. Based on these well-connected ontologies, an innovative Key Performance Indicator (KPI) analysis system is then built which exhibits two main analysis capabilities: KPI assessment and drill-down, where the latter can be exploited to find root causes of KPI violations. Compared to other KPI analysis systems, the use of LD enables the flexible construction and assessment of any kind of KPI, allowing experts to better explore the possible KPI space.
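The assessment/drill-down pair can be illustrated without the ontology machinery: assess a KPI over all linked execution records, then re-assess it per linked entity to localise a violation. Plain dicts stand in for the RDF data below; all record fields, the KPI target, and the values are invented for illustration.

```python
# Hypothetical sketch: KPI assessment plus drill-down to a root cause.

executions = [
    {"process": "invoice", "duration_s": 30, "service": "payment-v1"},
    {"process": "invoice", "duration_s": 95, "service": "payment-v2"},
    {"process": "invoice", "duration_s": 90, "service": "payment-v2"},
]

def kpi_mean_duration(rows):
    return sum(r["duration_s"] for r in rows) / len(rows)

def drill_down(rows, key):
    """Re-assess the KPI per linked entity to locate likely root causes."""
    groups = {}
    for r in rows:
        groups.setdefault(r[key], []).append(r)
    return {k: kpi_mean_duration(v) for k, v in groups.items()}

violated = kpi_mean_duration(executions) > 60   # KPI target: mean <= 60 s
print(violated, drill_down(executions, "service"))
```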

    Multi-Objective Service Composition in Ubiquitous Environments with Service Dependencies

    Service composition is a widely used method in ubiquitous computing that enables accomplishing complex tasks required by users, based on elementary (hardware and software) services available in ubiquitous environments. To ensure that users experience the best Quality of Service (QoS) with respect to their quality needs, service composition has to be QoS-aware. Establishing QoS-aware service compositions entails efficient service selection that takes into account the QoS requirements of users. A challenging issue towards this purpose is to consider service selection under global QoS requirements (i.e., requirements imposed by the user on the whole task), which is of high computational cost. This challenge is even more relevant when we consider the dynamics, limited computational resources and timeliness constraints of ubiquitous environments. To cope with this challenge, we present QASSA, an efficient service selection algorithm that provides the appropriate ground for QoS-aware service composition in ubiquitous environments. QASSA formulates service selection under global QoS requirements as a set-based optimisation problem, and solves this problem by combining local and global selection techniques. In particular, it introduces a novel way of using clustering techniques to enable fine-grained management of trade-offs between QoS objectives. QASSA further considers: (i) dependencies between services, (ii) adaptation at run-time, and (iii) both centralised and distributed designs. Results of experimental studies performed using real QoS data are presented to illustrate the timeliness and optimality of QASSA.
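What "selection under a global QoS requirement" means can be shown with a toy instance: pick one candidate per abstract task so that the end-to-end latency (a constraint on the whole task) stays within budget while summed quality is maximised. This exhaustive sketch is only the problem statement in code; it does not reproduce QASSA's local/global decomposition or clustering step, and all candidates and numbers are invented.

```python
# Hypothetical sketch: one service per task, global latency budget,
# maximise total quality (brute force over the small candidate space).
from itertools import product

tasks = {
    "locate":  [{"name": "gps",  "latency": 50, "quality": 0.90},
                {"name": "wifi", "latency": 20, "quality": 0.60}],
    "display": [{"name": "tv",    "latency": 40, "quality": 0.95},
                {"name": "phone", "latency": 10, "quality": 0.70}],
}

def select(tasks, latency_budget):
    best, best_quality = None, -1.0
    for combo in product(*tasks.values()):
        if sum(c["latency"] for c in combo) > latency_budget:
            continue  # violates the global QoS requirement
        quality = sum(c["quality"] for c in combo)
        if quality > best_quality:
            best, best_quality = combo, quality
    return {t: c["name"] for t, c in zip(tasks, best)} if best else None

print(select(tasks, latency_budget=70))  # {'locate': 'gps', 'display': 'phone'}
```

The brute force is exponential in the number of tasks, which is exactly why algorithms like QASSA combine local pruning with global selection instead.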

    Automated IT Service Fault Diagnosis Based on Event Correlation Techniques

    In recent years, a paradigm shift in the area of IT service management could be witnessed: IT management no longer deals only with the network, end systems, or applications, but is more and more concerned with IT services. This is caused by the need of organizations to monitor the efficiency of internal IT departments and to have the possibility to subscribe to IT services from external providers. This trend has raised new challenges in the area of IT service management, especially with respect to service level agreements laying down the quality of service to be guaranteed by a service provider. Fault management also faces new challenges related to ensuring compliance with these service level agreements. For example, a high utilization of network links in the infrastructure can imply a delay increase in the delivery of services with respect to agreed time constraints. Such relationships have to be detected and treated in a service-oriented fault diagnosis, which therefore does not deal with faults in a narrow sense, but with service quality degradations. This thesis aims at providing a concept for service fault diagnosis, which is an important part of IT service fault management. At first, a motivation of the need for further examination of this issue is given, based on the analysis of services offered by a large IT service provider. A generalization of the scenario forms the basis for the specification of requirements, which are used for a review of related research work and commercial products. Even though some solutions for particular challenges have already been provided, a general approach to service fault diagnosis is still missing. To address this issue, a framework is presented in the main part of this thesis, with an event correlation component as its central part. Event correlation techniques which have been successfully applied to fault management in the area of network and systems management are adapted and extended accordingly. Guidelines for the application of the framework to a given scenario are provided afterwards. To show their feasibility in a real-world scenario, they are applied to both example services referenced earlier.
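The core of service-oriented event correlation, as described above, is mapping resource-level events to the services whose quality they may degrade. A minimal sketch of that mapping, assuming a simple service-to-resource dependency model (the services, resources and event shape are invented):

```python
# Hypothetical sketch: correlate resource events to potentially
# degraded services through a service dependency model.

service_deps = {              # service -> resources it relies on
    "web-shop": {"app-server", "link-42"},
    "mail":     {"mail-server", "link-42"},
}

def correlate(events, service_deps):
    """Return the set of services potentially affected by the events."""
    affected = set()
    for event in events:
        for service, resources in service_deps.items():
            if event["resource"] in resources:
                affected.add(service)
    return affected

events = [{"resource": "link-42", "type": "high-utilisation"}]
print(sorted(correlate(events, service_deps)))  # ['mail', 'web-shop']
```

Real correlation engines additionally weigh event timing, suppress duplicates, and rank candidate causes; this sketch only shows the dependency-driven mapping step.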

    Component-based Adaptation Methods for Service-Oriented Peer-to-Peer Software Architectures

    Service-oriented peer-to-peer architectures aim at supporting application scenarios of dispersed collaborating groups in which the participating users are capable of providing and consuming local resources in terms of peer services. From a conceptual perspective, service-oriented peer-to-peer architectures adopt relevant concepts of two well-established state-of-the-art software architectural styles, namely service-oriented architectures (also known as SOA) and peer-to-peer architectures (P2P). One major argument of this thesis is that the adoption of end-user adaptability (or tailorability) concepts is of major importance for the successful deployment of service-oriented peer-to-peer architectures that support user collaboration. Since tailorability concepts have so far not been analyzed for either peer-to-peer or service-oriented architectures, no relevant models exist that could serve as a tailorability model for service-oriented peer-to-peer architectures. In order to master the adaptation of peer services, as well as peer service compositions within service-oriented peer-to-peer architectures, this dissertation proposes the adoption of component-oriented development methods. These so-called component-based adaptation methods enable service providers to adapt their provided services during runtime. Here, a model for analyzing existing dependencies on subscribed service consumers ensures that a service provider is able to adapt his peer services without violating any dependencies. In doing so, an adaptation policy that can be pre-arranged within a peer group regulates the procedures of how to cope with existing dependencies in the scope of a group. The same methods also serve as a way to handle exceptional cases, in particular the failure of a dependent service provider peer and, hence, of a service that is part of a local service composition. Here, the hosting runtime environment is responsible for detecting exceptions and for initiating the process of exception resolution. During the resolution phase, a user can be actively involved at selected decision points in order to resolve the exception that occurred in unpredictable contexts. An exception could also be the reason for the violation of an integrity constraint that serves as a contract between various peers that interact within a given collaboration. The notion of integrity constraints and the model of handling constraint violations aim at improving the reliability of target-oriented peer collaborations. This dissertation is composed of three major parts, each of which makes a significant contribution to the state of the art. First of all, a formal architectural style (SOP2PA) is introduced to define the fundamental elements that are necessary to build service-oriented peer-to-peer architectures, as well as their relationships, constraints, and operational semantics. This architectural style also formalizes the above-mentioned adaptation methods, the exception handling model that embraces these methods, the analysis model for managing consumer dependencies, as well as the integrity constraints model. Subsequently, on this formal basis, a concrete (specific) service-oriented peer-to-peer architecture (DEEVOLVE) is conceptualized that serves as the default implementation of that style. Here, the notions described above are materialized based on state-of-the-art software engineering methods and models. Finally, the third contribution of this work outlines an application scenario stemming from the area of construction informatics, in which the default implementation DEEVOLVE is deployed in order to support dispersed planning activities of structural engineers.
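The consumer-dependency analysis described above can be reduced to a simple check: before a provider applies an adaptation, verify whether any subscribed consumer still depends on something the change would remove, and let a group policy decide how conflicts are treated. This sketch is only that idea under invented names; it is not the dissertation's actual analysis model or policy vocabulary.

```python
# Hypothetical sketch: check subscribed consumers before adapting a
# peer service; a pre-arranged policy governs conflict handling.

def can_adapt(removed_ops, subscriptions, policy="strict"):
    """subscriptions maps consumer peer -> operations it uses.
    'strict': forbid the change if conflicts exist;
    'notify': allow it, but report conflicts to the group."""
    conflicts = [(consumer, op)
                 for consumer, ops in subscriptions.items()
                 for op in sorted(ops) if op in removed_ops]
    if policy == "strict":
        return (not conflicts, conflicts)
    return (True, conflicts)

subscriptions = {"peer-A": {"search", "download"}, "peer-B": {"download"}}
print(can_adapt({"search"}, subscriptions))  # (False, [('peer-A', 'search')])
```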

    Automated hierarchical service level agreements

    The present dissertation concerns the area of Service Computing. More specifically, it contributes to the topic of equipping IT service stacks with dependability, such that they can be used more widely in pragmatic business environments and applications. The instrument used for this purpose is a Service Level Agreement (SLA). The main focus is on SLA hierarchies, which reflect corresponding service hierarchies. SLAs may be established manually, or automatically among software agents; it is mainly the latter case that is considered here. The thesis contributes a formal problem definition for the construction of SLA hierarchies using a translation process, a management architecture, a formal model for defining penalties, and a representation that facilitates the processing of SLAs. Using these tools, it is shown that automated SLA management in hierarchical setups is possible, through an application to Multi-Domain Infrastructure-as-a-Service. Within this specific technical area, different SLA-based resource capacity planning approaches are examined via simulation -- both for online and offline planning. The former case concerns normal runtime operations, and the thesis examines two greedy algorithms with regard to their energy-savings efficiency and their performance. In the latter case, a resource-scarce environment is simulated with the purpose of minimizing penalties from already established SLAs. This is achieved via formally-defined combinatorial models, which are solved and compared to two greedy algorithms.
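A formal penalty model of the kind mentioned above typically maps measured QoS against agreed bounds and charges per unit of excess, up to a cap. The following sketch illustrates that general shape only; the objective names, rates and cap are invented and do not reproduce the thesis's actual model.

```python
# Hypothetical sketch: a capped, per-objective SLA penalty function.

def penalty(measured, objectives, cap):
    """objectives: name -> (agreed upper bound, penalty per unit of excess).
    Returns the total penalty owed, capped at the agreed maximum."""
    total = 0.0
    for name, (bound, rate) in objectives.items():
        excess = measured.get(name, 0.0) - bound
        if excess > 0:
            total += excess * rate
    return min(total, cap)

objectives = {"response_ms": (200.0, 0.5), "downtime_min": (10.0, 2.0)}
measured = {"response_ms": 260.0, "downtime_min": 4.0}
print(penalty(measured, objectives, cap=25.0))  # 25.0 (30.0 capped)
```

In a hierarchy, a sub-provider's penalty terms would be derived from the top-level SLA during the translation process, so that penalties propagate consistently down the service stack.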

    Supporting IT Service Fault Recovery with an Automated Planning Method

    Despite advances in software and hardware technologies, faults are still inevitable in a highly dependent, human-engineered and administrated IT environment. Given the critical role of IT services today, it is imperative that faults, once they have occurred, are dealt with efficiently and effectively to avoid or reduce the actual losses. Nevertheless, the complexities of current IT services, e.g., with regard to their scale, heterogeneity and highly dynamic infrastructures, make the recovery operation a challenging task for operators. Such complexities will eventually outgrow the human capability to manage them. This difficulty is aggravated by the fact that there are few well-devised methods available to support fault recovery. To tackle this issue, this thesis aims at providing a computer-aided approach to assist operators with fault recovery planning and, consequently, to increase the efficiency of recovery activities. We propose a generic framework based on automated planning theory to generate plans for the recovery of IT services. At the heart of the framework is a planning component. Assisted by the other participants in the framework, the planning component aggregates the relevant information and computes recovery steps accordingly. The main idea behind the planning component is to sustain the planning operations with automated planning techniques, one of the research fields of artificial intelligence. Provided with a general planning model, we show theoretically that the service fault recovery problem can indeed be solved by automated planning techniques. The relationship between a planning problem and a fault recovery problem is shown by means of reduction between these problems. After an extensive investigation, we choose a planning paradigm based on Hierarchical Task Networks (HTN) as the guideline for the design of our main planning algorithm, called H2MAP. To sustain the operation of the planner, a set of components revolving around the planning component is provided. These components are responsible for tasks such as translation between different knowledge formats, persistent storage of planning knowledge, and communication with external systems. To ensure extensibility in our design, we apply different design patterns for the components. We sketch and discuss the technical aspects of the implementation of the core components. Finally, as proof of concept, the framework is instantiated for two distinct application scenarios.
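The HTN paradigm underlying such a planner decomposes compound tasks, via methods, into ordered primitive actions. A minimal decomposition sketch follows; the recovery tasks are invented for illustration, and real HTN planners (H2MAP included) additionally track state, preconditions and alternative methods, none of which is modelled here.

```python
# Hypothetical sketch: HTN-style decomposition of a compound recovery
# task into a sequence of primitive recovery actions.

methods = {  # compound task -> ordered subtasks
    "recover-web-service": ["diagnose-host", "restore-backend", "restart-web"],
    "restore-backend":     ["restart-db", "verify-db"],
}

def plan(task, methods):
    """Depth-first decomposition; a task without a method is primitive."""
    if task not in methods:
        return [task]
    steps = []
    for subtask in methods[task]:
        steps.extend(plan(subtask, methods))
    return steps

print(plan("recover-web-service", methods))
# ['diagnose-host', 'restart-db', 'verify-db', 'restart-web']
```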