8 research outputs found
Dynamic deployment of web services on the internet or grid
PhD Thesis
This thesis focuses on the area of dynamic Web Service deployment for grid and
Internet applications. It presents a new Dynamic Service Oriented Architecture
(DynaSOAr) that enables the deployment of Web Services at run-time in response to
consumer requests.
The service-oriented approach to grid and Internet computing is centred on two
parties: the service provider and the service consumer. This thesis investigates the
introduction of mobility into this service-oriented approach allowing for better use of
resources and improved quality of service. To this end, it examines the role of the
service provider and makes the case for a clear separation of its concerns into two
distinct roles: that of a Web Service Provider, whose responsibility is to receive and
direct consumer requests and supply service implementations, and a Host Provider,
whose role is to deploy services and process consumers' requests on available
resources. This separation of concerns breaks the implicit bond between a published
Web Service endpoint (network address) and the resource upon which the service is
deployed. It also allows the architecture to respond dynamically to changes in service
demand and quality-of-service requirements. Clearly defined interfaces for each
role are presented, which form the infrastructure of DynaSOAr. The approach taken
is wholly based on Web Services.
The dynamic deployment of service code between separate roles, potentially running
in different administrative domains, raises a number of security issues which are
addressed. A DynaSOAr service invocation involves three parties: the requesting
Consumer, a Web Service Provider and a Host Provider; this tripartite relationship
requires a security model that allows the concerns of each party to be enforced for a
given invocation. This thesis, therefore, presents a Tripartite Security Model and an
architecture that allows the representation, propagation and enforcement of three
separate sets of constraints.
A prototype implementation of DynaSOAr is used to evaluate the claims made, and
the results show a significant reduction in round-trip execution time for
data-intensive applications. Additional benefits from parallel deployments that
satisfy multiple concurrent requests are also shown.
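The separation of concerns described in the abstract can be sketched roughly as follows. This is a minimal illustration, not DynaSOAr's actual interfaces: all class and method names here are invented for the example, and the thesis's real infrastructure is built on Web Service interfaces rather than in-process calls.

```python
# Illustrative sketch (hypothetical names): a Web Service Provider owns the
# published endpoint and the service implementations, while Host Providers
# own the resources and receive code to deploy at run-time.

class HostProvider:
    """Deploys service code and processes requests on its own resources."""
    def __init__(self, name):
        self.name = name
        self.deployed = {}          # service name -> implementation

    def deploy(self, service_name, implementation):
        # Dynamic deployment: the code arrives at run-time, not install-time.
        self.deployed[service_name] = implementation

    def invoke(self, service_name, payload):
        return self.deployed[service_name](payload)


class WebServiceProvider:
    """Receives and directs consumer requests; supplies implementations."""
    def __init__(self, hosts):
        self.hosts = hosts
        self.registry = {}          # service name -> implementation

    def publish(self, service_name, implementation):
        self.registry[service_name] = implementation

    def handle_request(self, service_name, payload):
        # The published endpoint is decoupled from the resource: pick a host
        # that already has the service, or deploy it to one on demand.
        host = next((h for h in self.hosts if service_name in h.deployed),
                    self.hosts[0])
        if service_name not in host.deployed:
            host.deploy(service_name, self.registry[service_name])
        return host.invoke(service_name, payload)


hosts = [HostProvider("host-a"), HostProvider("host-b")]
wsp = WebServiceProvider(hosts)
wsp.publish("double", lambda x: 2 * x)
print(wsp.handle_request("double", 21))   # deployed on demand, then invoked
```

Because the consumer only ever sees the Web Service Provider's endpoint, the choice of host can change between invocations without republishing the service, which is what breaks the bond between endpoint and resource.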
IOME, A Toolkit for Distributed and Collaborative Computational Science and Engineering
The internet provides a media-rich communications platform enabling communities to share content. Alongside the increased activity in collaborative work, recent developments in workflow tools are now enabling researchers from different disciplines to collaborate by feeding data and results between large multi-disciplinary optimization problems. Researchers developing computational models require development kits and tools enabling them to provide simulations with a range of methods that facilitate collaboration. This paper presents a unique, multi-purpose toolkit, enabling researchers to easily develop simulations which may be run as web services and accessed interactively. The development kit is based on a protocol that uses an XML markup called IOME ML, the "Interactive Object Management Environment Markup Language". The paper describes IOME ML and its development kit. We illustrate the capabilities of IOME with two case studies. The first is a medical image processing application which is wrapped as a web service and accessed through a web browser, offering medical professionals image analysis tools. The second is a method of collaborative visualisation and computational steering of a tsunami simulation based on a shallow-water wave model. The paper concludes with a review of further developments, including refinements to the markup language and the development of a service factory enabling dynamic invocation of published simulations as IOME web service applications.
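The general pattern the abstract describes, wrapping an analysis routine as a web service reachable from a browser, can be sketched with plain WSGI. This is not IOME's actual development kit or its IOME ML protocol, whose details the abstract does not give; the routine, parameter names, and URL scheme below are all illustrative.

```python
# Minimal sketch (hypothetical API): expose a toy analysis routine over HTTP
# so that a browser can invoke it, mimicking the wrap-as-a-web-service idea.

from urllib.parse import parse_qs

def threshold_pixels(values, cutoff):
    """Toy stand-in for an image-analysis routine: binarise by threshold."""
    return [1 if v >= cutoff else 0 for v in values]

def app(environ, start_response):
    # Parse query parameters, e.g. ?values=10,200,37&cutoff=100
    params = parse_qs(environ.get("QUERY_STRING", ""))
    values = [int(v) for v in params["values"][0].split(",")]
    cutoff = int(params["cutoff"][0])
    body = ",".join(str(b) for b in threshold_pixels(values, cutoff))
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body.encode()]
```

Served with `wsgiref.simple_server.make_server("", 8000, app).serve_forever()`, a browser request to `http://localhost:8000/?values=10,200,37&cutoff=100` would return `0,1,0`.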
Two ways to Grid: the contribution of Open Grid Services Architecture (OGSA) mechanisms to service-centric and resource-centric lifecycles
Service Oriented Architectures (SOAs) support service lifecycle tasks, including Development, Deployment, Discovery and Use. We observe that there are two disparate ways to use Grid SOAs such as the Open Grid Services Architecture (OGSA) as exemplified in the Globus Toolkit (GT3/4). One is a traditional enterprise SOA use where end-user services are developed, deployed and resourced behind firewalls, for use by external consumers: a service-centric (or ‘first-order’) approach. The other supports end-user development, deployment, and resourcing of applications across organizations via the use of execution and resource management services: a resource-centric (or ‘second-order’) approach. We analyze and compare the two approaches using a combination of empirical experiments and an architectural evaluation methodology (scenario, mechanism, and quality attributes) to reveal common and distinct strengths and weaknesses. The impact of potential improvements (which are likely to be manifested by GT4) is estimated, and opportunities for alternative architectures and technologies are explored. We conclude by investigating whether the two approaches can be converged or combined, and whether they are compatible on shared resources.
A Framework for Providing Research Applications as a Service Using the IOME Toolkit
This paper presents a unique, multi-purpose toolkit, enabling researchers to easily develop modelling and analysis applications, which can be run as web services and accessed interactively. The development kit is based on a protocol that uses an XML markup called the "Interactive Object Management Environment Markup Language" (IOME ML). The paper describes the IOME ML and its development kit.
We illustrate the capabilities of IOME with two case studies. The first is based on a medical image processing application (CAIMAN: CAncer IMage ANalysis), offering image analysis tools for life scientists. For the second, the Pi-Phi collaboration has developed an inverse imaging method for ‘lensless’ microscopy, and a demonstrator is introduced for the Pi-Phi project. For both case studies the application is wrapped as a web service and accessed through a web browser.
The paper concludes with a review of further developments, including refinements to the markup language and the development of a service factory, enabling a more scalable service provision model through the dynamic invocation of published simulations as IOME web service applications.
Scalable and responsive real time event processing using cloud computing
PhD Thesis
Cloud computing provides the potential for scalability and adaptability in a
cost-effective manner. However, achieving scalability for real-time applications
requires keeping response times low. Many applications demand good performance and
low response times, which must be matched with dynamic resource allocation. The
real-time processing requirements can also be characterized by unpredictable rates
of incoming data streams and dynamic bursts of data. This raises the issue of
processing the data streams across multiple cloud computing nodes. This research
analyzes possible methodologies for processing real-time data in which applications
can be structured as multiple event processing networks and partitioned over the
set of available cloud nodes. The approach applies queuing theory principles to
cloud computing. The transformation of raw data into useful outputs occurs in
various stages of processing networks which are distributed across multiple
computing nodes in a cloud. A set of valid options is created to understand the
response time requirements for each application. Under a given valid set of
conditions that meet the response time criteria, multiple instances of event
processing networks are distributed across the cloud nodes. A generic methodology
to scale the event processing networks up and down in accordance with the response
time criteria is defined. Real-time applications that support sophisticated
decision support mechanisms need to comply with response time criteria consisting
of interdependent data flow paradigms, making it harder to improve performance.
Consideration is given to ways to reduce latency and improve the response time and
throughput of real-time applications by distributing the event processing networks
across multiple computing nodes.
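The queuing-theory idea in the abstract, choosing how many instances of an event processing network to run so that the response time criterion is met, can be sketched with a deliberately simple model. Modelling each node as an M/M/1 queue with arrivals split evenly is an assumption made for this illustration, not the thesis's exact formulation, and the rates used are invented.

```python
# Hedged sketch: scale the number of event-processing instances until the
# mean response time of each node (modelled as an M/M/1 queue) meets a target.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time W = 1 / (mu - lambda) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        return float("inf")        # unstable: the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

def instances_needed(total_rate, service_rate, target_response, max_nodes=64):
    """Smallest node count whose per-node response time meets the target."""
    for n in range(1, max_nodes + 1):
        if mm1_response_time(total_rate / n, service_rate) <= target_response:
            return n
    raise ValueError("target unreachable within max_nodes")

# Example: 900 events/s arriving, each node serves 100 events/s, 50 ms target.
# W <= 0.05 requires mu - lambda/n >= 20, i.e. lambda/n <= 80, so n >= 11.25.
print(instances_needed(900, 100, 0.05))   # -> 12
```

Running the same check in reverse (scaling down when a smaller `n` still meets the target) gives the scale-up/scale-down methodology its trigger condition; a real system would also need stages for the interdependent data flows the abstract mentions.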