Constraint-based runtime prediction of SLA violations in service orchestrations
Service compositions put together loosely coupled component services to perform more complex, higher-level, or cross-organizational tasks in a platform-independent manner. Quality-of-Service (QoS) properties, such as execution time, availability, or cost, are critical for their usability, and permissible boundaries for their values are defined in Service Level Agreements (SLAs). We propose a method whereby constraints that model SLA conformance and violation are derived at any given point of the execution of a service composition. These constraints are generated using the structure of the composition and properties of the component services, which can be either known or empirically measured. Violation of these constraints means that the corresponding scenario is infeasible, while satisfaction gives values for the constrained variables (start/end times for activities, or number of loop iterations) which make the scenario possible. These results can be used to perform optimized service matching or to trigger preventive adaptation or healing.
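The feasibility check behind this kind of prediction can be illustrated with a minimal sketch (not the authors' implementation; the activity duration bounds and deadline below are invented): for a sequential composition, summing the [min, max] duration intervals of the activities still pending and comparing against the SLA deadline classifies the running instance as a guaranteed violation, guaranteed conformance, or still uncertain.

```python
# Minimal sketch (not the paper's constraint solver): interval-based
# feasibility check of an SLA deadline over a sequential composition.
# Duration bounds stand in for known or empirically measured QoS data.

def remaining_bounds(pending):
    """Sum per-activity duration intervals for the activities not yet run."""
    lo = sum(a[0] for a in pending)
    hi = sum(a[1] for a in pending)
    return lo, hi

def classify(elapsed, pending, deadline):
    """Return 'violation', 'safe', or 'uncertain' for an SLA deadline."""
    lo, hi = remaining_bounds(pending)
    if elapsed + lo > deadline:   # even the best case misses the deadline
        return "violation"
    if elapsed + hi <= deadline:  # even the worst case meets the deadline
        return "safe"
    return "uncertain"

# Two activities left, each with measured duration bounds in seconds.
print(classify(elapsed=3.0, pending=[(1.0, 2.0), (2.0, 4.0)], deadline=10.0))  # safe
```

The "violation" outcome corresponds to an infeasible constraint system, which is exactly the point at which preventive adaptation or healing could be triggered.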
Evaluator services for optimised service placement in distributed heterogeneous cloud infrastructures
Optimal placement of demanding real-time interactive applications in a distributed heterogeneous cloud very quickly results in a complex tradeoff between application constraints and resource capabilities. This requires very detailed information about the various requirements and capabilities of the applications and available resources. In this paper, we present a mathematical model for the service optimization problem and study the concept of evaluator services as a flexible and efficient solution to this complex problem. An evaluator service is a service probe that is deployed in particular runtime environments to assess the feasibility and cost-effectiveness of deploying a specific application in such an environment. We discuss how this concept can be incorporated into a general framework such as the FUSION architecture and discuss the key benefits and tradeoffs of evaluator-based optimal service placement in widely distributed heterogeneous cloud environments.
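The evaluator-service idea can be sketched as follows (a simplified illustration, not the FUSION implementation; the environment attributes, cost model, and names are all invented): each candidate runtime environment runs a probe that reports feasibility and an estimated cost, and the placement logic picks the cheapest feasible environment.

```python
# Hedged sketch of evaluator-based placement. The attribute names
# ("cpu", "latency_ms", ...) are hypothetical, not FUSION APIs.

def evaluator(env, app):
    """Probe one environment: a feasibility check plus a cost estimate."""
    feasible = (env["cpu"] >= app["cpu"]
                and env["latency_ms"] <= app["max_latency_ms"])
    cost = app["cpu"] * env["price_per_cpu"] if feasible else None
    return feasible, cost

def place(envs, app):
    """Return the name of the cheapest environment whose probe reports feasible."""
    candidates = []
    for env in envs:
        ok, cost = evaluator(env, app)
        if ok:
            candidates.append((cost, env["name"]))
    return min(candidates)[1] if candidates else None

envs = [
    {"name": "edge-1",  "cpu": 2,  "latency_ms": 5,  "price_per_cpu": 4.0},
    {"name": "cloud-1", "cpu": 16, "latency_ms": 40, "price_per_cpu": 1.0},
]
app = {"cpu": 2, "max_latency_ms": 20}
print(place(envs, app))  # edge-1: the only environment meeting the latency bound
```

The key design point is that the probe runs *inside* each environment, so it can use local knowledge (actual load, hardware features) that a centralized optimizer would not see.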
Identifying Test Requirements by Analyzing SLA Guarantee Terms
Service Level Agreements (SLAs) are used to specify the negotiated conditions between the provider and the consumer of services. In this paper we present a stepwise method to identify and categorize a set of test requirements that represent the potential situations that can be exercised regarding the specification of each isolated guarantee term of an SLA. This identification is carried out by devising a set of coverage levels that allow grading the thoroughness of the tests. These test requirements serve two objectives: (1) the generation of a test suite that exercises the situations described in the test requirements, and (2) support for the derivation of a monitoring plan that checks compliance with these requirements at runtime. The approach is illustrated on an eHealth case study.
A constraint-based approach to quality assurance in service choreographies
Knowledge about the quality characteristics (QoS) of service compositions is crucial for determining their usability and economic value. Service quality is usually regulated using Service Level Agreements (SLA). While end-to-end SLAs are well suited for request-reply interactions, more complex, decentralized, multiparticipant compositions (service choreographies) typically involve multiple message exchanges between stateful parties, and the corresponding SLAs thus encompass several cooperating parties with interdependent QoS. The usual approaches to determining QoS ranges structurally (which are by construction easily composable) are not applicable in this scenario. Additionally, the intervening SLAs may depend on the exchanged data. We present an approach to data-aware QoS assurance in choreographies through the automatic derivation of composable QoS models from participant descriptions. Such models are based on a message typing system with size constraints and are derived using abstract interpretation. The models obtained have multiple uses, including run-time prediction, adaptive participant selection, and design-time compliance checking. We also present an experimental evaluation and discuss the benefits of the proposed approach.
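The idea of data-aware, composable QoS bounds can be illustrated with a small sketch (a simplification, not the paper's abstract-interpretation-based derivation; the per-item cost and intervals are invented): an interaction's execution-time interval may depend on the size bounds of the message it carries, sequences add intervals, and choices take the interval hull.

```python
# Simplified sketch of composing data-aware QoS intervals.
# Message size bounds come from a (hypothetical) size-constrained typing.

def msg_cost(per_item, size_lo, size_hi):
    """Time interval for processing a message with size in [size_lo, size_hi]."""
    return (per_item * size_lo, per_item * size_hi)

def seq(a, b):
    """Sequential composition: intervals add component-wise."""
    return (a[0] + b[0], a[1] + b[1])

def choice(a, b):
    """Exclusive choice: the interval hull covers both branches."""
    return (min(a[0], b[0]), max(a[1], b[1]))

# A choreography step: receive a list of 10..100 items (0.02 s each),
# then take either a fast or a slow reply path.
step = seq(msg_cost(0.02, 10, 100), choice((0.1, 0.2), (0.5, 1.0)))
print(step)
```

Because every operator returns another interval, the resulting bounds compose structurally, which is what makes them usable for run-time prediction and design-time compliance checking alike.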
Automatic test case generation for WS-Agreements using combinatorial testing
In the scope of applications developed under the service-based paradigm, Service Level Agreements (SLAs) are a standard mechanism for flexibly specifying the Quality of Service (QoS) that must be delivered. These agreements contain the conditions negotiated between the service provider and consumers, as well as the potential penalties derived from the violation of those conditions. In this context, it is important to ensure that the service-based application (SBA) behaves as expected in order to avoid consequences such as penalties or dissatisfaction among the stakeholders that have negotiated and signed the SLA. In this article we address the testing of SLAs specified using the WS-Agreement standard by applying testing techniques such as the Classification Tree Method and Combinatorial Testing to generate test cases. From the content of the individual terms of the SLA, we identify situations that need to be tested. We also obtain a set of constraints, based on the SLA specification and the behavior of the SBA, in order to guarantee the testability of the test cases. Furthermore, we define three different coverage strategies with the aim of grading the intensity of the tests. Finally, we have developed a tool named SLACT (SLA Combinatorial Testing) to automate the process, and we have applied the whole approach to an eHealth case study.
Run-time Architecture Models for Dynamic Adaptation and Evolution of Cloud Applications
Cloud applications are subject to continuous change due to modifications of the software application itself and, in particular, its environment. To manage changes, cloud-based systems provide diverse self-adaptation mechanisms based on run-time models. Observed run-time models are a means for leveraging self-adaptation; however, they are hard to apply during software evolution, as they are usually too detailed for comprehension by humans. In this paper, we propose iObserve, an approach to cloud-based system adaptation and evolution through run-time observation and continuous quality analysis. With iObserve, run-time adaptation and evolution are two mutually interwoven activities that influence each other. Central to iObserve are (a) the specification of the correspondence between observation results and design models, and (b) their use in both adaptation and evolution. Run-time observation data is promoted to meaningful values mapped to design models, thereby continuously updating and calibrating those design models at run-time while keeping them comprehensible by humans. This engineering approach allows for automated adaptation at run-time and simultaneously supports software evolution. Model-driven software engineering is employed for various purposes, such as monitoring instrumentation and model transformation. We report on the experimental evaluation of this approach in lab experiments using the CoCoME benchmark deployed on an OpenStack cloud.
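The promotion of raw observations into design-model values can be pictured with a minimal sketch (purely illustrative, not iObserve's implementation; the component name, record shape, and aggregation are invented): low-level monitoring records are aggregated per modelled component, so the design model stays at a level of detail humans can still read.

```python
# Hedged sketch: calibrate a component-level design model from raw
# monitoring records by aggregating observed latencies.

from statistics import mean

design_model = {"CoCoME.Checkout": {"avg_response_ms": None}}

observations = [
    {"component": "CoCoME.Checkout", "response_ms": 120},
    {"component": "CoCoME.Checkout", "response_ms": 180},
]

def calibrate(model, records):
    """Update each modelled component with the mean of its observed latencies."""
    for name, attrs in model.items():
        samples = [r["response_ms"] for r in records if r["component"] == name]
        if samples:
            attrs["avg_response_ms"] = mean(samples)
    return model

print(calibrate(design_model, observations))
```

The aggregation step is where "too detailed for humans" raw data becomes a small number of model attributes that both an adaptation engine and a human evolving the architecture can work with.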
CloudOps: Towards the Operationalization of the Cloud Continuum: Concepts, Challenges and a Reference Framework
The current trend of developing highly distributed, context-aware, compute-intensive, and data-sensitive applications is changing the boundaries of cloud computing. Encouraged by the growing IoT paradigm and with flexible edge devices available, an ecosystem of resources, ranging from high-density compute and storage to very lightweight embedded computers running on batteries or solar power, is available to DevOps teams in what is known as the Cloud Continuum. In this dynamic context, manageability is key, as are controlled operations and resource monitoring for handling anomalies. Unfortunately, the operation and management of such heterogeneous computing environments (including edge, cloud, and network services) is complex, and operators face challenges such as the continuous optimization and autonomous (re-)deployment of context-aware stateless and stateful applications, where they must nevertheless ensure service continuity while anticipating potential failures in the underlying infrastructure. In this paper, we propose a novel CloudOps workflow (extending the traditional DevOps pipeline), with techniques and methods for application operators to fully embrace the possibilities of the Cloud Continuum. Our approach will support DevOps teams in the operationalization of the Cloud Continuum. We also provide an extensive explanation of the scope, possibilities, and future of CloudOps. This research was funded by the European project PIACERE (Horizon 2020 Research and Innovation Programme, under grant agreement No. 101000162).
Architecture-based Evolution of Dependable Software-intensive Systems
This cumulative habilitation thesis proposes concepts for (i) modelling and analysing dependability based on architectural models of software-intensive systems early in development, (ii) decomposition and composition of modelling languages and analysis techniques to enable more flexibility in evolution, and (iii) bridging the divergent levels of abstraction between data from the operation phase, architectural models, and source code of the development phase.
Constraint-based runtime prediction of SLA violations in service orchestrations (extended abstract)
Quality of Service (QoS) attributes, such as execution time, availability, or cost, are critical for the usability of Web services. This in particular applies to service compositions, which are commonly used for implementing more complex, higher-level, and/or cross-organizational tasks by assembling loosely coupled individual service components (often provided and controlled by third parties). The QoS attributes of service compositions depend on the QoS attributes of the service components, as well as on environmental factors and the actual data being handled, and are usually regulated by means of Service-Level Agreements (SLAs), which define the permissible boundaries for the values of the related properties. Predicting whether an SLA will be violated for a given executing instance of a service composition is therefore very important. Such a prediction can be used for preventing or mitigating the consequences of SLA violations ahead of time.