EVEREST+: Run-time SLA violations prediction
Monitoring the preservation of QoS properties during the operation of service-based systems at run-time is an important verification measure for checking whether current service usage complies with agreed SLAs. Monitoring alone, however, does not always provide sufficient scope for taking control actions against violations, as it only detects violations after they occur. In this paper we describe a model-based prediction framework, EVEREST+, for both the development and the execution of QoS predictors. EVEREST+ was designed to let developers build QoS predictors quickly and easily, focusing only on the implementation of their prediction algorithms, without having to care about how historical data is collected or retrieved, or how models are inferred from the collected data. It also provides a run-time environment for executing QoS predictors and storing their predictions.
Detection of Security and Dependability Threats: A Belief Based Reasoning Approach
Monitoring the preservation of security and dependability (S&D) properties during the operation of systems at runtime is an important verification measure that can increase system resilience. However, it does not always provide sufficient scope for taking control actions against violations, as it only detects problems after they occur. In this paper, we describe a proactive monitoring approach that detects potential violations of S&D properties, called “threats”, and discuss the results of an initial evaluation of it.
Coverage Based Testing for Service Level Agreements
Service level agreements (SLAs) are typically used to specify rules regarding the consumption of services agreed between the providers of service-based applications (SBAs) and their consumers. An SLA includes a list of terms containing the guarantees that must be fulfilled during the provisioning and consumption of the services. Since the violation of such guarantees may lead to the application of penalties, it is important to ensure that the SBA behaves as expected. In this paper, we propose a proactive approach to testing SLA-aware SBAs by identifying test requirements, which represent situations that are relevant to test. To address this issue, we define a four-valued logic that allows evaluating both the individual guarantee terms and their logical relationships. Grounded in this logic, we devise a test criterion based on modified condition/decision coverage (MC/DC) in order to obtain a cost-effective set of test requirements from the structure of the SLA. Furthermore, by analyzing the syntax and semantics of the agreement, we define specific rules to avoid infeasible test requirements. The whole approach has been automated and applied to an eHealth case study.
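The MC/DC criterion underlying this approach can be illustrated with a small sketch. The guarantee terms and the decision below are hypothetical illustrations, not the paper's four-valued logic or its eHealth case study: for each condition, MC/DC requires a pair of evaluations that differ only in that condition and that flip the overall decision.

```python
from itertools import product

def mcdc_requirements(decision, n):
    """For each of n boolean conditions, find a pair of condition vectors
    that differ only in that condition and flip the decision (MC/DC)."""
    pairs = {}
    vectors = list(product([False, True], repeat=n))
    for i in range(n):
        for v in vectors:
            w = list(v)
            w[i] = not w[i]
            w = tuple(w)
            if decision(*v) != decision(*w):
                pairs[i] = (v, w)  # this pair shows condition i's independent effect
                break
    return pairs

# Hypothetical SLA decision: the guarantee holds if both latency and uptime
# terms are satisfied, or if the guarantee has been explicitly waived.
decision = lambda latency_ok, uptime_ok, waived: (latency_ok and uptime_ok) or waived
reqs = mcdc_requirements(decision, 3)  # one test-requirement pair per condition
```

Each pair in `reqs` is a test requirement: two situations to exercise so that the term's independent effect on the overall guarantee is observed.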
Automatic test case generation for WS-Agreements using combinatorial testing
In the scope of applications developed under the service-based paradigm, Service Level Agreements (SLAs) are a standard mechanism used to flexibly specify the Quality of Service (QoS) that must be delivered. These agreements contain the conditions negotiated between the service provider and consumers, as well as the potential penalties derived from the violation of such conditions. In this context, it is important to ensure that the service-based application (SBA) behaves as expected, in order to avoid consequences such as penalties or dissatisfaction among the stakeholders that have negotiated and signed the SLA. In this article we address the testing of SLAs specified using the WS-Agreement standard by applying testing techniques such as the Classification Tree Method and combinatorial testing to generate test cases. From the content of the individual terms of the SLA, we identify situations that need to be tested. We also obtain a set of constraints, based on the SLA specification and the behavior of the SBA, in order to guarantee the testability of the test cases. Furthermore, we define three different coverage strategies with the aim of grading the intensity of the tests. Finally, we have developed a tool named SLACT (SLA Combinatorial Testing) to automate the process, and we have applied the whole approach to an eHealth case study.
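The core idea — deriving test cases from the value domains of SLA terms while filtering out combinations the SBA cannot produce — can be sketched as follows. The term domains and the constraint are invented for illustration; they are not SLACT's actual model, and SLACT additionally uses classification trees and graded coverage strategies rather than this brute-force enumeration.

```python
from itertools import product

# Hypothetical WS-Agreement term domains (illustrative, not from SLACT)
domains = {
    "response_time": ["within_limit", "violated"],
    "availability": ["met", "violated"],
    "penalty": ["none", "applied"],
}

def feasible(tc):
    """Constraint: a penalty can only be applied if some guarantee is violated."""
    if tc["penalty"] == "applied":
        return tc["response_time"] == "violated" or tc["availability"] == "violated"
    return True

keys = list(domains)
# Enumerate every combination of term values, keeping only feasible ones.
test_cases = [dict(zip(keys, vals))
              for vals in product(*domains.values())
              if feasible(dict(zip(keys, vals)))]
```

Out of the 2 × 2 × 2 = 8 raw combinations, the one where a penalty is applied although no guarantee is violated is discarded, leaving 7 test cases.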
Runtime monitoring of security SLAs for big data pipelines: design, implementation and evaluation of a framework for monitoring security SLAs in big data pipelines with the assistance of run-time code instrumentation
The Big Data processing ecosystem has been growing constantly in recent years. This has been significantly reinforced by the advent of cloud computing platforms, where Big Data analytics can be offered on an as-a-service basis. The ease with which users can leverage the capabilities of Big Data processing frameworks in the cloud has made them a popular solution, with low up-front expenditure and a flexible deployment model. In spite of their cost benefits and flexibility of use, Big Data services in cloud platforms present an array of new challenges compared to traditional web services, especially in the domain of data security and privacy. Their distributed nature makes them more dynamic with regard to deployment and execution, but at the same time it exacerbates challenges related to data and operation security, since both data and operations are shared across multiple nodes. Inevitably, distributing data and operations across multiple nodes increases the attack surface. Given the need for systems that react fast and produce results as quickly as possible, more emphasis has been placed on performance and less on security. Having said that, as the use of cloud computing becomes more widespread, concerns about non-functional properties such as data security are becoming more pronounced for users. Runtime security monitoring is a mechanism that can alleviate some of the issues that emerge in monitoring the security of Big Data analytics services outsourced to the cloud. In this thesis we make the case for a monitoring framework in which monitoring events are collected and evaluated against a set of monitoring rules that describe monitorable security properties of the system. The framework that we put forward can be used to assess the level of security of Big Data analytics pipelines at runtime.
For our proof of concept we examine three security properties: the service response time, the location of execution of service operations, and the integrity of the intermediate data produced during service execution.
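Evaluating monitoring events against rules for those three properties can be sketched in miniature. The thresholds, allowed locations and event structure below are assumptions for illustration, not the thesis's actual rule language or instrumentation:

```python
import hashlib

ALLOWED_LOCATIONS = {"eu-west-1", "eu-central-1"}  # assumed SLA location clause
MAX_RESPONSE_MS = 500                               # assumed response-time SLO

def check_event(event, expected_digest):
    """Evaluate one monitoring event against three security-SLA rules,
    returning the list of violated properties (empty if compliant)."""
    violations = []
    if event["response_ms"] > MAX_RESPONSE_MS:
        violations.append("response_time")
    if event["location"] not in ALLOWED_LOCATIONS:
        violations.append("execution_location")
    # Integrity: hash the intermediate data and compare with the expected digest.
    digest = hashlib.sha256(event["payload"]).hexdigest()
    if digest != expected_digest:
        violations.append("data_integrity")
    return violations

payload = b"intermediate-results"
ok_digest = hashlib.sha256(payload).hexdigest()
event = {"response_ms": 620, "location": "us-east-1", "payload": payload}
violations = check_event(event, ok_digest)  # slow and in a disallowed region
```

A real deployment would emit such events from instrumented pipeline stages and stream them to the rule evaluator; here a single event is checked in-process for clarity.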
Model-driven certification of Cloud service security based on continuous monitoring
Cloud computing technology offers an advanced approach to provisioning infrastructure, platform and software services without the extensive cost of owning, operating or maintaining the required computational infrastructures. However, despite being cost-effective, this technology has raised concerns regarding the security, privacy and compliance of data and services offered through cloud systems. This is mainly due to the lack of transparency of services to consumers, or to the fact that service providers are unwilling to take full responsibility for the security of the services that they offer through cloud systems and to accept liability for security breaches [18]. In such circumstances, there is a trust deficiency that needs to be addressed.
The potential of certification as a means of addressing this lack of trust in the security of different types of services, including cloud services, has been widely recognised [149]. However, this recognition has not led to wide adoption, as was expected. The reason may be that certification has traditionally been carried out through standards and certification schemes (e.g., ISO 27001 [149], ISO 27002 [149] and the Common Criteria [65]), which rely predominantly on manual security auditing, testing and inspection processes. Such processes tend to be lengthy and carry a significant financial cost, which often prevents small technology vendors from adopting them [87].
In this thesis, we present an automated approach to cloud service certification, in which the evidence is gathered through continuous monitoring. This approach can be used to: (a) automatically define and execute certification models, in order to continuously acquire and analyse evidence regarding the provision of services on cloud infrastructures; (b) use this evidence to assess whether the provision is compliant with required security properties; and (c) generate and manage digital certificates confirming the compliance of services with specific security properties.
Infrastructure-as-a-Service Usage Determinants in Enterprises
The thesis focuses on the research question of what the determinants of Infrastructure-as-a-Service (IaaS) usage in enterprises are. A wide range of IaaS determinants is collected into an IaaS adoption model for enterprises, which is evaluated in a Web survey. As the economic determinants are especially important, they are investigated separately using a cost-optimizing decision support model. This decision support model is then applied to a potential IaaS use case of a large automobile manufacturer.
A methodology for automated service level agreement compliance prediction
Service Level Agreement (SLA) specification languages express monitorable contracts between service providers and consumers. It is of interest to determine whether predictive models can be derived for SLAs expressed in such languages, in a fashion that is as automated as possible. Assuming that the service developer or user uses some SLA specification language during the service development or deployment process, the Service level agreement Compliance Prediction (SlaCP) methodology is proposed as a general engineering methodology for predicting SLA compliance. This methodology helps contractual parties to assess the probability of SLA compliance, as automatically as is feasible, by mapping an existing SLA onto a stochastic model of the service and using existing numerical solution algorithms or discrete event simulation to solve the model. The SlaCP methodology is generic, but in this thesis it is mostly described assuming the use of the Web Service Level Agreement (WSLA) language and the Stochastic Discrete Event Systems (SDES) formalism. The approach taken in this methodology is first to associate formal semantics with WSLA elements so that they can be understood in a mathematically precise way. Then, a five-step mapping process between the source and the target formalisms is conducted. These steps map the SLA into model primitives, reward metrics, expressions for functions of these metrics, the time at which the prediction occurs, and the ultimate probability of SLA compliance. The proposed methodology is implemented in a software tool that automates most of its steps using Mobius and SPNP. The methodology is evaluated using a case study which shows the methodology's feasibility and limitations in both theoretical and practical terms.
PhD thesis. Tishreen University, Ministry of Higher Education in Syria.
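The prediction step — computing a probability of SLA compliance from a stochastic service model — can be sketched in miniature with a Monte Carlo simulation. The exponential response-time model, the SLO threshold and the window size below are illustrative assumptions, not the thesis's WSLA-to-SDES mapping or its Mobius/SPNP tooling:

```python
import random

def estimate_compliance(slo_ms, mean_ms, n_requests, trials, seed=1):
    """Monte Carlo estimate of P(all requests in a window meet the SLO),
    assuming i.i.d. exponentially distributed response times
    (a modelling assumption, not a property of any real service)."""
    rng = random.Random(seed)
    compliant = 0
    for _ in range(trials):
        # One simulated window: the SLA holds if every request meets the SLO.
        if all(rng.expovariate(1.0 / mean_ms) <= slo_ms for _ in range(n_requests)):
            compliant += 1
    return compliant / trials

# SLO of 1 s, mean response time 200 ms, windows of 10 requests.
p = estimate_compliance(slo_ms=1000, mean_ms=200, n_requests=10, trials=5000)
```

For these numbers the analytic value is (1 - e^{-5})^{10} ≈ 0.93, so the estimate should land close to that; numerical solution of the stochastic model, as in the thesis, would give such probabilities exactly rather than by sampling.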
Development and validation of a conceptual framework for IT offshoring engagement success
“A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy.”
The study presented in this thesis investigates Offshore Information Technology Outsourcing (IT offshoring) relationships from the clients' perspective. With more client companies outsourcing their IT operations offshore, issues associated with the establishment and management of IT offshoring relationships have become very important. With the growing volume of offshore outsourcing, the number of failures is also increasing. Therefore, both clients (service receivers) and suppliers (service providers) face increasing pressure to meet the objectives of IT offshoring initiatives. Improving the quality of the relationship between client and supplier has frequently been suggested in the literature as a probable solution area; however, little literature and empirical evidence is available in this respect.
The aim of the study is to make a theoretical and practical contribution by studying the interplay between the critical factors influencing the relationship intensity level of the exchange partners, and to suggest measures that can potentially increase the success rate of IT offshoring engagements.
The objectives of this study are:
1. To identify the relevant critical factors and explore their causes and effects (antecedents and consequences) on the relationship intensity level.
2. To develop an integrated conceptual framework combining the hypothesised relationships among these identified critical factors.
3. To empirically validate the conceptual framework.
To accomplish the first objective, and to build the theoretical platform for the second, three research questions are identified and answered through an empirical study backed by literature evidence. The second objective is addressed through an integrative conceptual framework, developed by analysing related studies across other disciplines and the gaps in the existing theories and models in the outsourcing literature. Coupled with this literature gap analysis, the researcher adopted relevant features from various disciplines of management and social sciences. For the third objective, the research hypotheses are validated through an empirical examination conducted in Europe. Seven research hypotheses are developed based on a literature analysis of the relationships among the key constructs in the conceptual framework. This study is explanatory and deductive in nature. It is underpinned mainly by a quantitative research design, with structured questionnaire surveys conducted on a stratified sample of 136 client organisations in Europe. The individual client firm is the unit of analysis for this study. Data analysis was conducted using partial least squares (PLS) structural equation modelling techniques.
In this research, empirical support was found for most of the research hypotheses, and the conclusions of the study are derived from these results. An investigation into trust is used as the concept denoting relationship intensity, the central construct of the framework. The validated conceptual framework and the tested hypothesis results are the main contributions of this study.
The results of this study will also be useful for adopting the conceptual framework, together with its hypotheses, as a point of reference from which to build a healthy exchange relationship. However, a deeper examination and fine-tuning of the sub-units and composition characteristics of each critical factor may be needed for individual outsourcing initiatives.
This study is particularly relevant to client-supplier firms already engaged in a relationship, but it can also be useful, as a preparatory platform, to clients who are planning to begin their journey in IT offshoring in the near future.