AUTOMATIC PROXY GENERATION AND LOAD-BALANCING-BASED DYNAMIC CHOICE OF SERVICES
The paper addresses the issue of invoking services from within workflows, which are becoming an increasingly popular paradigm of distributed programming. The main idea of our research is to develop a facility that enables load balancing between the available services and their instances. The system consists of three main modules: a proxy generator for a specific service according to its interface type, a proxy that redirects requests to a concrete instance of the service, and a load balancer (LB) that chooses the least loaded virtual machine (VM), each of which hosts a single service instance. The proxy generator is implemented as a bean (compliant with the EJB standard) that generates a proxy from the WSDL service interface description using an XSLT engine and then deploys it on a GlassFish application server via the GlassFish API; the proxy is a BPEL module, and the load balancer is a stateful Web Service.
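The least-loaded-VM selection at the heart of such a load balancer can be sketched as follows; this is a minimal illustration assuming each VM reports a scalar load value (class and field names are hypothetical, not the paper's API):

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: each service instance runs on one VM; the
// load balancer picks the VM reporting the lowest load.
class VmInstance {
    final String endpoint; // where the service instance is reachable
    final double load;     // e.g. CPU utilization reported by the VM

    VmInstance(String endpoint, double load) {
        this.endpoint = endpoint;
        this.load = load;
    }
}

class LoadBalancer {
    // Choose the least loaded VM; the proxy would forward the
    // client's request to the returned endpoint.
    static String chooseEndpoint(List<VmInstance> instances) {
        return instances.stream()
                .min(Comparator.comparingDouble(vm -> vm.load))
                .map(vm -> vm.endpoint)
                .orElseThrow(() -> new IllegalStateException("no instances"));
    }
}
```

In the architecture described above, this decision would sit behind the generated proxy, so clients never see which concrete instance served them.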
SWI-Prolog and the Web
Where Prolog is commonly seen as a component in a Web application that is
either embedded or communicates using a proprietary protocol, we propose an
architecture where Prolog communicates to other components in a Web application
using the standard HTTP protocol. By avoiding embedding in external Web servers
development and deployment become much easier. To support this architecture, in
addition to the transfer protocol, we must also support parsing, representing
and generating the key Web document types such as HTML, XML and RDF.
This paper motivates the design decisions in the libraries and extensions to
Prolog for handling Web documents and protocols. The design has been guided by
the requirement to handle large documents efficiently. The described libraries
support a wide range of Web applications ranging from HTML and XML documents to
Semantic Web RDF processing.
To appear in Theory and Practice of Logic Programming (TPLP). Comment: 31 pages, 24 figures and 2 tables.
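Because the Prolog side speaks standard HTTP rather than a proprietary protocol, any stock HTTP client can act as the other component in the architecture. A minimal sketch in Java (the host, port and path are assumptions, not from the paper):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch, assuming a SWI-Prolog HTTP server is listening on the given
// host and port; the path is hypothetical. Since the Prolog handler
// replies over plain HTTP (with HTML, XML, RDF, ...), a stock Java
// client needs no Prolog-specific glue.
class PrologHttpCall {
    static HttpRequest buildRequest(String host, int port, String path) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://" + host + ":" + port + path))
                .GET() // a plain GET; the Prolog side generates the reply
                .build();
    }
}
```

Sending the request with `HttpClient.newHttpClient().send(...)` would return whatever document the Prolog handler produced, which is exactly the interoperability the proposed architecture aims for.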
A Scalable Cluster-based Infrastructure for Edge-computing Services
In this paper we present a scalable and dynamic intermediary infrastructure, SEcS (an acronym of "Scalable Edge computing Services"), for developing and deploying advanced Edge computing services using a cluster of heterogeneous machines. Our goal is to address the challenges of next-generation Internet services: scalability, high availability, fault tolerance and robustness, as well as programmability and quick prototyping. The system is written in Java and is based on IBM's Web Based Intermediaries (WBI) [71], developed at the IBM Almaden Research Center.
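The WBI-style intermediary idea, a request flowing through a chain of pluggable processors that may each transform it, can be sketched as follows; class and method names are illustrative, not the SEcS or WBI API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Illustrative sketch of an intermediary: a message passes through a
// chain of pluggable stages, each of which may rewrite it before it
// reaches the origin server or the client. Real intermediaries would
// operate on HTTP requests/responses rather than plain strings.
class IntermediaryChain {
    private final List<UnaryOperator<String>> stages = new ArrayList<>();

    IntermediaryChain addStage(UnaryOperator<String> stage) {
        stages.add(stage);
        return this; // allow fluent chaining of stages
    }

    String process(String message) {
        for (UnaryOperator<String> stage : stages) {
            message = stage.apply(message); // each stage transforms in turn
        }
        return message;
    }
}
```

Deploying such stages across a cluster of heterogeneous machines is what gives an infrastructure like this its scalability and quick-prototyping appeal.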
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio and video, are produced, distributed, shared, managed and consumed on-line through various networks, like the Internet, fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted with a bewildering range of media, services and applications, and with technological innovations concerning media formats, wireless networks, and terminal types and capabilities. There is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near- to mid-term future, the Internet will provide the means to share and distribute (new) multimedia content and services with superior quality and striking flexibility, in a trusted and personalized way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and innovative applications "on the move", like virtual collaboration environments, personalised services and media, virtual sport groups, on-line gaming and edutainment. In this context, interaction with content, combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way ahead in the area of content-aware media delivery platforms.
Supporting Quality of Service in Scientific Workflows
While workflow management systems have been utilized in enterprises to support
businesses for almost two decades, the use of workflows in scientific environments
was fairly uncommon until recently. Nowadays, scientists use workflow systems to
conduct scientific experiments, simulations, and distributed computations. However,
most scientific workflow management systems have not been built using existing
workflow technology; rather they have been designed and developed from
scratch. Due to the lack of generality of early scientific workflow systems, many
domain-specific workflow systems have been developed. Generally speaking, those
domain-specific approaches lack common acceptance and tool support and offer
lower robustness compared to business workflow systems.
In this thesis, the use of the industry standard BPEL, a workflow language
for modeling business processes, is proposed for the modeling and the execution of
scientific workflows. Due to the widespread use of BPEL in enterprises, a number
of stable and mature software products exist. The language is expressive (Turing-complete)
and not restricted to specific applications. BPEL is well suited for the
modeling of scientific workflows, but existing implementations of the standard lack
important features that are necessary for the execution of scientific workflows.
This work presents components that extend an existing implementation of the
BPEL standard and eliminate the identified weaknesses. The components thus provide
the technical basis for use of BPEL in academia. The particular focus is on
so-called non-functional (Quality of Service) requirements. These requirements include
scalability, reliability (fault tolerance), data security, and cost (of executing a
workflow). From a technical perspective, the workflow system must be able to interface
with the middleware systems that are commonly used by the scientific workflow
community to allow access to heterogeneous, distributed resources (especially Grid
and Cloud resources).
The major components cover exactly these requirements:
Cloud Resource Provisioner: Scalability of the workflow system is achieved by automatically adding additional (Cloud) resources to the workflow system's resource pool when the workflow system is heavily loaded.
Fault Tolerance Module: High reliability is achieved via continuous monitoring of workflow execution and corrective interventions, such as re-execution of a failed workflow step or replacement of the faulty resource.
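The re-execution strategy can be sketched as a bounded retry loop; this is a hedged illustration (resource replacement is omitted, and the names are not from the thesis):

```java
import java.util.concurrent.Callable;

// Sketch of re-executing a failed workflow step: try the step up to
// maxAttempts times, rethrowing the last failure if none succeed.
// A real fault-tolerance module would also consider replacing the
// faulty resource before retrying.
class RetryExecutor {
    static <T> T runWithRetry(Callable<T> step, int maxAttempts) throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return step.call(); // re-execute the workflow step
            } catch (Exception e) {
                last = e;           // remember the failure and retry
            }
        }
        throw last;                 // all attempts failed
    }
}
```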
Cost-Aware, Data-Flow-Aware Scheduler: The majority of scientific workflow systems take only the performance and utilization of resources into account when making scheduling decisions for workflow steps. The presented workflow system goes beyond that. By defining preference values for the weighting of cost against the anticipated workflow execution time, workflow users may influence the resource selection process. The developed multi-objective scheduling algorithm respects the defined weighting and makes both efficient and advantageous decisions using a heuristic approach.
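The preference-weighted trade-off between cost and anticipated execution time can be illustrated with a simple weighted-sum score; this is a sketch under assumed normalization, not the thesis's actual heuristic:

```java
import java.util.Comparator;
import java.util.List;

// Illustrative weighted-sum score for multi-objective scheduling:
// a user-supplied weight trades off cost against expected runtime.
// Field names and the linear scoring are assumptions for this sketch.
class Candidate {
    final String resource;
    final double cost;    // price of running the step on this resource
    final double runtime; // anticipated execution time on this resource

    Candidate(String resource, double cost, double runtime) {
        this.resource = resource;
        this.cost = cost;
        this.runtime = runtime;
    }
}

class Scheduler {
    // costWeight in [0,1]: 1.0 means "cheapest wins", 0.0 "fastest wins".
    static String pick(List<Candidate> options, double costWeight) {
        return options.stream()
                .min(Comparator.comparingDouble(
                        c -> costWeight * c.cost + (1 - costWeight) * c.runtime))
                .map(c -> c.resource)
                .orElseThrow();
    }
}
```

In practice the two objectives would need to be normalized to comparable scales before weighting; the sketch assumes that has already been done.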
Security Extensions: Because it supports various encryption, signature and authentication mechanisms (e.g., the Grid Security Infrastructure), the workflow system guarantees data security in the transfer of workflow data.
Furthermore, this work identifies the need to equip workflow developers with
workflow modeling tools that can be used intuitively. This dissertation presents
two modeling tools that support users with different needs. The first tool, DAVO (domain-adaptable Visual BPEL Orchestrator), operates at a low level of abstraction and allows users with knowledge of BPEL to use the full extent of the language. DAVO offers extensibility and customizability for different application domains; these features are used in the implementation of the second tool, the SimpleBPEL Composer. SimpleBPEL is aimed at users with little or no background in computer science and allows for quick and intuitive development of BPEL workflows based on predefined components.
Web Content Delivery Optimization
Milliseconds matter, when they're counted. If we compress the life of the universe into a single year, then on 31 December at 11:59:59.5 PM "speed" was transportation's concern, and now, 500 milliseconds later, it is the web's; no one knows whose concern it will be in the coming milliseconds, but at this very moment this thesis proposes an optimization method, mainly for content delivery over slow connections. The method utilizes a proxy as a middle box to fetch the content requested by a client from a single or multiple web servers, and bundles all of the fetched image content types that fit the bundling policy inside a JavaScript file in Base64 format. This optimization method reduces the number of HTTP requests between the client and multiple web servers as a result of the proposed bundling solution, and at the same time improves HTTP compression efficiency as a result of the proposed aggregative textual content compression. Page loading time results for the test web pages, which were specially designed and developed to capture the optimum benefits of the proposed method, showed up to 81% faster page loading for all connection types. However, other tests in non-optimal situations, such as web pages that use "lazy loading" techniques, showed only 35% to 50% improvement, achievable only on 2G and 3G connections (0.2-15 Mbps downlink) and not on faster connections.
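The bundling step, several fetched images emitted as one Base64-encoded JavaScript file, can be sketched as follows; the variable name and data-URI layout are assumptions, not the thesis's exact format:

```java
import java.util.Base64;
import java.util.Map;

// Minimal sketch of image bundling: the proxy Base64-encodes each
// fetched image and emits them all as one JavaScript object literal,
// so the client issues a single HTTP request instead of one per image.
// The content type is assumed to be PNG for illustration.
class ImageBundler {
    static String bundle(Map<String, byte[]> images) {
        StringBuilder js = new StringBuilder("var bundledImages = {\n");
        for (Map.Entry<String, byte[]> e : images.entrySet()) {
            js.append("  \"").append(e.getKey())
              .append("\": \"data:image/png;base64,")
              .append(Base64.getEncoder().encodeToString(e.getValue()))
              .append("\",\n");
        }
        return js.append("};\n").toString();
    }
}
```

Serving this single file also gives the HTTP compressor one large body to work on, which is the aggregative-compression benefit the abstract describes.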
- …