A2THOS: Availability Analysis and Optimisation in SLAs
IT service availability is at the core of customer satisfaction and business success for today's organisations. Many medium-to-large organisations outsource part of their IT services to external providers, with Service Level Agreements (SLAs) describing the agreed availability of outsourced service components. Availability management of partially outsourced IT services is a non-trivial task, since classic approaches for calculating availability are not applicable and IT managers can only rely on their expertise. This often leads to the adoption of non-optimal solutions. In this paper we present A2THOS, a framework to calculate the availability of partially outsourced IT services in the presence of SLAs and to achieve a cost-optimal choice of availability levels for outsourced IT components while guaranteeing a target availability level for the service.
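The classic availability arithmetic that the abstract says is not directly applicable to SLA-constrained services can be sketched as follows. This is an illustrative baseline only, not the A2THOS algorithm; the component values are made up.

```python
# Classic availability of composed components: series (all must be up)
# and parallel (redundant replicas). Illustrative sketch only.

def series_availability(avails):
    """All components must be up: A = product of A_i."""
    result = 1.0
    for a in avails:
        result *= a
    return result

def parallel_availability(avails):
    """Redundant components: A = 1 - product of (1 - A_i)."""
    downtime = 1.0
    for a in avails:
        downtime *= (1.0 - a)
    return 1.0 - downtime

# Two 99.9% components in series, plus a redundant pair at 99% each:
a_core = series_availability([0.999, 0.999])      # 0.998001
a_backup = parallel_availability([0.99, 0.99])    # 0.9999
```

The point the paper makes is that once some components sit behind an SLA with a third party, these closed-form compositions no longer hold directly, which is where a dedicated framework is needed.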
Supporting Quality of Service in Scientific Workflows
While workflow management systems have been utilized in enterprises to support
businesses for almost two decades, the use of workflows in scientific environments
was fairly uncommon until recently. Nowadays, scientists use workflow systems to
conduct scientific experiments, simulations, and distributed computations. However,
most scientific workflow management systems have not been built using existing
workflow technology; rather they have been designed and developed from
scratch. Due to the lack of generality of early scientific workflow systems, many
domain-specific workflow systems have been developed. Generally speaking, those
domain-specific approaches lack common acceptance and tool support and offer
lower robustness compared to business workflow systems.
In this thesis, the use of the industry standard BPEL, a workflow language
for modeling business processes, is proposed for the modeling and the execution of
scientific workflows. Due to the widespread use of BPEL in enterprises, a number
of stable and mature software products exist. The language is expressive (Turing-complete)
and not restricted to specific applications. BPEL is well suited for the
modeling of scientific workflows, but existing implementations of the standard lack
important features that are necessary for the execution of scientific workflows.
This work presents components that extend an existing implementation of the
BPEL standard and eliminate the identified weaknesses. The components thus provide
the technical basis for use of BPEL in academia. The particular focus is on
so-called non-functional (Quality of Service) requirements. These requirements include
scalability, reliability (fault tolerance), data security, and cost (of executing a
workflow). From a technical perspective, the workflow system must be able to interface
with the middleware systems that are commonly used by the scientific workflow
community to allow access to heterogeneous, distributed resources (especially Grid
and Cloud resources).
The major components cover exactly these requirements:
Cloud Resource Provisioner Scalability of the workflow system is achieved by
automatically adding additional (Cloud) resources to the workflow system’s
resource pool when the workflow system is heavily loaded.
Fault Tolerance Module High reliability is achieved via continuous monitoring
of workflow execution and corrective interventions, such as re-execution of a
failed workflow step or replacement of the faulty resource.
Cost-Aware, Data-Flow-Aware Scheduler The majority of scientific workflow
systems only take the performance and utilization of resources for the execution
of workflow steps into account when making scheduling decisions. The
presented workflow system goes beyond that. By defining preference values
for the weighting of costs and the anticipated workflow execution time,
workflow users may influence the resource selection process. The developed multiobjective
scheduling algorithm respects the defined weighting and makes both
efficient and advantageous decisions using a heuristic approach.
Security Extensions Because it supports various encryption, signature and authentication
mechanisms (e.g., Grid Security Infrastructure), the workflow
system guarantees data security in the transfer of workflow data.
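The preference-weighted resource selection described for the scheduler above can be sketched as a simple normalised weighted score. This is a hypothetical illustration of the idea, not the thesis's multi-objective heuristic; resource names and attributes are invented.

```python
# Score each candidate resource by a user-chosen weighting of monetary
# cost and estimated execution time (both normalised to [0, 1]).
# Lower score is better.

def score(resource, w_cost, w_time, max_cost, max_time):
    cost_part = resource["cost"] / max_cost if max_cost > 0 else 0.0
    time_part = resource["time"] / max_time if max_time > 0 else 0.0
    return w_cost * cost_part + w_time * time_part

def select_resource(resources, w_cost=0.5, w_time=0.5):
    max_cost = max(r["cost"] for r in resources)
    max_time = max(r["time"] for r in resources)
    return min(resources,
               key=lambda r: score(r, w_cost, w_time, max_cost, max_time))

candidates = [
    {"name": "grid-node", "cost": 0.0, "time": 120.0},  # free but slow
    {"name": "cloud-vm", "cost": 0.4, "time": 30.0},    # paid but fast
]
```

A cost-focused user (`w_cost=0.9`) ends up on the free grid node, while a time-focused user (`w_time=0.9`) is steered to the faster cloud VM, which is exactly the trade-off the preference values are meant to express.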
Furthermore, this work identifies the need to equip workflow developers with
workflow modeling tools that can be used intuitively. This dissertation presents
two modeling tools that support users with different needs. The first tool, DAVO
(Domain-Adaptable Visual BPEL Orchestrator), operates at a low level of abstraction
and allows users with knowledge of BPEL to use the full extent of the language.
DAVO is a tool that offers extensibility and customizability for different application
domains. These features are used in the implementation of the second tool,
SimpleBPEL Composer. SimpleBPEL is aimed at users with little or no background
in computer science and allows for quick and intuitive development of BPEL workflows based on predefined components.
A Proactive Adaptation Framework for Composite Web Services
Service orientation is a design paradigm consisting of a set of principles governed by a service-oriented architecture (SOA) to support the creation of software systems as a composition of interoperable services. The ability to effectively compose services is not a trivial task due to the dynamic nature of the execution environment of service compositions. In this context, dynamic service selection and composition is a critical requirement and one of the major research challenges for service-based systems.
This research investigates the identification, detection and prediction of the need for adaptation as well as ways to autonomously reconfigure the service composition during its execution time in order to improve service reliability and conformance with systems requirements and policies. We propose a framework for proactive adaptation of service compositions that extends current approaches for dynamic service composition by proactively and individually identifying the need for adaptation for each parallel running instance of service composition while avoiding unnecessary changes and distributing load request among different service operations when necessary.
Our framework has been tested and validated using different prototypes implemented in both simulated and real environments. The results were consistent with the research objectives and indicate a major gain from the use of the proposed proactive techniques in the execution and adaptation of web service compositions.
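The core idea of proactively detecting the need for adaptation, rather than reacting after an SLA violation, can be sketched with a sliding-window monitor. All names and the trend heuristic here are hypothetical, not the framework's actual predictor.

```python
# Watch a sliding window of recent response times for one running
# composition instance and flag it for reconfiguration *before* the
# SLA threshold is actually breached.

from collections import deque

class ProactiveMonitor:
    def __init__(self, sla_ms, window=5):
        self.sla_ms = sla_ms
        self.samples = deque(maxlen=window)

    def observe(self, response_ms):
        self.samples.append(response_ms)

    def needs_adaptation(self):
        # Simple trend heuristic: if the window average plus the observed
        # upward drift would exceed the SLA, adapt now rather than on failure.
        if len(self.samples) < 2:
            return False
        drift = self.samples[-1] - self.samples[0]
        avg = sum(self.samples) / len(self.samples)
        return avg + max(drift, 0) > self.sla_ms

monitor = ProactiveMonitor(sla_ms=200)
for t in [100, 120, 150, 180]:   # degrading, but still under 200 ms
    monitor.observe(t)
```

Here the instance has never violated its 200 ms SLA, yet the upward trend triggers adaptation early; a per-instance monitor like this also avoids reconfiguring parallel instances that remain healthy.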
Context-Aware and Secure Workflow Systems
Businesses do evolve. Their evolution necessitates the re-engineering of their existing "business processes", with the objectives of reducing costs, delivering services on time, and enhancing profitability in a competitive market. This is generally true, and particularly so in domains such as manufacturing, pharmaceuticals and education. The central objective of workflow technologies is to separate business policies (which are normally encoded in business logic) from the underlying business applications. Such a separation is desirable as it improves the evolution of business processes and, more often than not, facilitates re-engineering at the organisation level without the need for detailed knowledge or analysis of the applications themselves. Workflow systems are currently used by many organisations with a wide range of interests and specialisations in many domains. These include, but are not limited to, office automation, the finance and banking sector, health care, art, telecommunications, manufacturing and education. We take the view that a workflow is a set of "activities", each performing a piece of functionality within a given "context" and possibly constrained by some security requirements. These activities are coordinated to collectively achieve a required business objective. The specification of such coordination is presented as a set of "execution constraints", which include parallelisation (concurrency/distribution), serialisation, restriction, alternation, compensation and so on. Activities within workflows may be carried out by humans, various software-based application programs, or processing entities, according to organisational rules such as meeting deadlines or improving performance. Workflow execution can involve a large number of different participants, services and devices, which may cross the boundaries of various organisations and access a variety of data.
This raises the importance of
_ context variations and context-awareness and
_ security (e.g. access control and privacy).
The specification of precise rules, which prevent unauthorised participants from executing sensitive tasks and prevent tasks from accessing unauthorised services or (commercially) sensitive information, is crucially important. For example, medical scenarios will require that:
_ only authorised doctors are permitted to perform certain tasks,
_ a patient's medical records are not allowed to be accessed by anyone without
the patient's consent, and
_ only specific machines are used to perform given tasks at given times.
If a workflow execution cannot guarantee these requirements, then the flow will
be rejected. Furthermore, security requirements are often temporal- and/or
event-related. However, most existing models are static in nature: for
example, it is hard, if not impossible, to express security requirements which are:
_ time-dependent (e.g. a customer is allowed to be overdrawn by 100 pounds
only up to the first week of every month), and
_ event-dependent (e.g. a bank account can only be manipulated by its owner, unless there is a change in the law or six months have passed since his/her death).
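The overdraft example above shows why static access-control models fall short: the permission itself carries a temporal condition. A minimal sketch of such a time-dependent rule (function name and calendar convention are hypothetical, not CS-Flow syntax):

```python
# A time-dependent access rule: an overdraft of up to 100 pounds is
# permitted, but only during the first week of each month. A static
# allow/deny matrix cannot express the date condition.

from datetime import date

def overdraft_allowed(amount_gbp, on_day):
    within_limit = amount_gbp <= 100
    first_week = on_day.day <= 7
    return within_limit and first_week

# A 50-pound overdraft on the 3rd passes; the same request on the
# 15th, or a 150-pound request any day, is denied.
```

CS-Flow's contribution, as described below, is making such temporal and event conditions expressible in the workflow language itself rather than bolted on.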
Currently, there is no commonly accepted model for secure and context-aware workflows or even a common agreement on which features a workflow security model should support. We have developed a novel approach to design, analyse and validate workflows. The approach has the following components:
= A modelling/design language (known as CS-Flow).
The language has the following features:
– supports concurrency;
– context and context awareness are first-class citizens;
– supports mobility, as activities can move from one context to another;
– has the ability to express timing constraints: delays, deadlines, priority and schedulability;
– allows the expression of security policies (e.g. access control and privacy) without the need for extra linguistic complexity; and
– enjoys sound formal semantics that allow us to animate designs and compare various designs.
= An approach known as communication-closed layers is developed, which allows us to serialise a highly distributed workflow to produce a semantically equivalent quasi-sequential flow that is easier to understand and analyse. Such re-structuring gives us a mechanism for designing fault-tolerant workflows, as layers are atomic activities and various existing forward and backward error recovery techniques can be deployed.
= A reduction semantics is provided for CS-Flow, allowing us to build tool support to animate specifications and designs. This has been evaluated on a health-care scenario, namely the Context-Aware Ward (CAW) system. Health care provides huge numbers of business workflows, which will benefit from workflow adaptation and support through pervasive computing systems. The evaluation takes two complementary strands:
– provide CS-Flow’s models and specifications and
– formal verification of time-critical components of a workflow.
Distribution pattern-driven development of service architectures
Distributed systems are being constructed by composing a number of discrete components. This practice is particularly prevalent within the Web service domain in the form of service process orchestration and choreography. Often, enterprise systems are built from many existing discrete applications such as legacy applications exposed using Web service interfaces. There are a number of architectural configurations or distribution patterns, which express how a composed system is to be deployed in a distributed environment. However, the amount of code
required to realise these distribution patterns is considerable. In this paper, we propose a distribution
pattern-driven approach to service composition and architecting. We develop, based on a catalog of patterns, a UML-compliant framework, which takes existing Web service interfaces as its input and generates executable Web service compositions based on a distribution pattern chosen by the software architect.
An Investigation into Dynamic Web Service Composition Using a Simulation Framework
[Motivation] Web Services technology has emerged as a promising solution for creating distributed systems, with the potential to overcome the limitations of former distributed system technologies. Web services provide a platform-independent framework that enables companies to run their business services over the internet. Therefore, many techniques and tools are being developed to create business-to-business/business-to-customer applications. In particular, researchers are exploring ways to build new services from existing services by dynamically composing services from a range of resources. [Aim] This thesis aims to identify the technologies and strategies currently being explored for organising the dynamic composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. In addition, the thesis studies the matchmaking and selection processes, which are essential for Web service composition. [Research Method] We undertook a mapping study of empirical papers published over the period 2000 to 2009. The aim of the mapping study was to identify the technologies and strategies currently being explored for organising the composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. We then built a simulation framework to carry out experiments on composition strategies. The first experiment compared the results of a close replication of an existing study with the original results, in order to evaluate our replication. The simulation framework was then used to investigate the use of a QoS model for supporting the selection process, comparing it with the ranking technique in terms of performance. [Results] The mapping study found 1172 papers that matched our search terms, of which 94 were classified as providing practical demonstration of ideas related to dynamic composition. We analysed 68 of these in more detail.
Only 29 provided a `formal' empirical evaluation. From these, we selected a `baseline' study to test our simulation model. Running the experiments using simulated datasets showed that, in the first experiment, the results of the close replication study and the original study were similar in terms of their profile. In the second experiment, the results demonstrated that the QoS model was better than the ranking mechanism at selecting the composite plan with the highest quality score. [Conclusions] No one approach to service composition seemed to meet all needs, but a number have been investigated more extensively than others. The similarity between the results of the close replication and the original study showed the validity of our simulation framework and that the results of the original study can be replicated. Using the simulation, it was demonstrated that the performance of the QoS model was better than the ranking mechanism in terms of the overall quality of the selected plan. The overall objective of this research was to develop a generic life-cycle model for Web service composition from a mapping study of the literature. This was then used to run simulations to replicate studies on matchmaking and compare selection methods.
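The difference between ranking on a single attribute and a QoS model that aggregates several attributes can be sketched as a weighted score over composite plans. The attributes, weights and plan data below are invented for illustration, not the thesis's actual model.

```python
# Aggregate each composite plan's quality attributes into one score.
# Reliability and availability are "higher is better"; response time
# (normalised to [0, 1]) is "lower is better", so it enters inverted.

def qos_score(plan, weights):
    return (weights["reliability"] * plan["reliability"]
            + weights["availability"] * plan["availability"]
            + weights["response"] * (1.0 - plan["response_norm"]))

def best_plan(plans, weights):
    return max(plans, key=lambda p: qos_score(p, weights))

plans = [
    {"id": "A", "reliability": 0.99, "availability": 0.95, "response_norm": 0.8},
    {"id": "B", "reliability": 0.90, "availability": 0.99, "response_norm": 0.2},
]
weights = {"reliability": 0.4, "availability": 0.3, "response": 0.3}
```

A ranking mechanism sorting purely on reliability would pick plan A; the aggregate QoS score prefers plan B, whose overall quality is higher once response time counts, which mirrors the comparison the thesis evaluates.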
Partitioning workflow applications over federated clouds to meet non-functional requirements
PhD Thesis
With cloud computing, users can acquire computer resources when they need them on a pay-as-you-go business model. Because of this, many applications are now being deployed in the cloud, and there are many different cloud providers worldwide. Importantly, all these various infrastructure providers offer services with different levels of quality. For example, cloud data centres are governed by the privacy and security policies of the country where the centre is located, while many organisations have created their own internal "private cloud" to meet security needs.
With all these varieties and uncertainties, application developers who decide to host their system in the cloud face the issue of which cloud to choose to get the best operational conditions in terms of price, reliability and security. And the decision becomes even more complicated if their application consists of a number of distributed components, each with slightly different requirements.
Rather than trying to identify the single best cloud for an application, this thesis considers an alternative approach, that is, combining different clouds to meet users' non-functional requirements. Cloud federation offers the ability to distribute a single application across two or more clouds, so that the application can benefit from the advantages of each one of them. The key challenge for this approach is how to find the distribution (or deployment) of application components which can yield the greatest benefits. In this thesis, we tackle this problem and propose a set of algorithms, and a framework, to partition a workflow-based application over federated clouds in order to exploit the strengths of each cloud. The specific goal is to split a distributed application structured as a workflow such that the security and reliability requirements of each component are met, whilst the overall cost of execution is minimised.
To achieve this, we propose and evaluate a cloud broker for partitioning a workflow application over federated clouds. The broker integrates with the e-Science Central cloud platform to automatically deploy a workflow over public and private clouds. We developed a deployment planning algorithm to partition a large workflow application across federated clouds so as to meet security requirements and minimise the monetary cost.
A more generic framework is then proposed to model, quantify and guide the partitioning and deployment of workflows over federated clouds. This framework considers the situation where changes in cloud availability (including cloud failure) arise during workflow execution.
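The partitioning problem described here (meet each component's security requirement while minimising monetary cost) can be illustrated with a tiny brute-force search. This is a sketch of the problem, not the thesis's planning algorithm; the clouds, security levels and loads are invented.

```python
# Assign each workflow component to a cloud whose security level meets
# the component's requirement, minimising total cost. Brute force is
# fine for a toy instance; real planners need smarter search.

from itertools import product

clouds = {
    "public":  {"security": 1, "cost_per_unit": 1.0},
    "private": {"security": 3, "cost_per_unit": 4.0},
}

components = [
    {"name": "ingest",    "security_req": 1, "load": 2.0},
    {"name": "anonymise", "security_req": 3, "load": 1.0},  # sensitive data
    {"name": "analyse",   "security_req": 1, "load": 5.0},
]

def plan(components, clouds):
    best, best_cost = None, float("inf")
    for assignment in product(clouds, repeat=len(components)):
        if any(clouds[c]["security"] < comp["security_req"]
               for c, comp in zip(assignment, components)):
            continue  # violates a security requirement
        cost = sum(clouds[c]["cost_per_unit"] * comp["load"]
                   for c, comp in zip(assignment, components))
        if cost < best_cost:
            best, best_cost = assignment, cost
    return dict(zip((c["name"] for c in components), best)), best_cost
```

The optimal partition keeps only the sensitive `anonymise` step on the expensive private cloud and pushes the bulky but non-sensitive steps to the cheap public cloud, which is exactly the kind of split a federated deployment enables.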
UML-Based Modeling of Robustness Testing
The aim of robustness testing is to characterize the behavior of a system in the presence of erroneous or stressful input conditions. It is a well-established approach in the dependability community, which has a long tradition of testing based on fault injection. However, a recurring problem is the insufficient documentation of experiments, which may prevent their replication. Our work investigates whether UML-based documentation could be used. It proposes an extension of the UML Testing Profile that accounts for the specificities of robustness testing experiments. The extension also reuses some elements of the QoSFT profile targeting measurements. Its ability to model realistic experiments is demonstrated on a case study from dependability research.
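The kind of experiment the profile extension is meant to document, injecting erroneous inputs and recording the system's reaction, can be sketched in a few lines. The component and outcome categories below are a toy illustration, not the paper's case study.

```python
# Robustness testing by input fault injection: drive a component with
# invalid inputs and classify each reaction (the raw material that the
# experiment documentation must capture for replication).

def component_under_test(value):
    # Toy system: expects a non-negative integer.
    if not isinstance(value, int) or value < 0:
        raise ValueError("invalid input")
    return value * 2

def inject_and_observe(faulty_inputs):
    outcomes = {}
    for fault in faulty_inputs:
        try:
            component_under_test(fault)
            outcomes[repr(fault)] = "accepted"
        except ValueError:
            outcomes[repr(fault)] = "rejected"   # graceful rejection
        except Exception:
            outcomes[repr(fault)] = "crashed"    # robustness failure
    return outcomes

report = inject_and_observe([-1, "abc", None, 3])
```

A replicable experiment needs exactly the information this loop leaves implicit, which inputs were injected, when, and how each outcome was classified, and that is what a UML-based experiment model would pin down.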