340 research outputs found
A transformation-based approach to business process management in the cloud
Business Process Management (BPM) has gained a lot of popularity in the last two decades, since it allows organizations to manage and optimize their business processes. However, purchasing a BPM system can be an expensive investment for a company, since not only the software itself needs to be purchased, but also hardware is required on which the process engine should run, and personnel need to be hired or allocated for setting up and maintaining the hardware and the software. Cloud computing gives its users the opportunity to use computing resources in a pay-per-use manner and to perceive these resources as unlimited. Therefore, the application of cloud computing technologies to BPM can be extremely beneficial, especially for small and medium-sized companies. Nevertheless, the fear of losing or exposing sensitive data by placing these data in the cloud is one of the biggest obstacles to the deployment of cloud-based solutions in organizations nowadays. In this paper we introduce a transformation-based approach that allows companies to control which parts of their business processes are allocated to their own premises and which to the cloud, to avoid unwanted exposure of confidential data and to profit from the high performance of cloud environments. In our approach, the user annotates activities and data that should be placed in the cloud or on-premise, and an automated transformation generates the process fragments for cloud and on-premise deployment. The paper discusses the challenges of developing the transformation and presents a case study that demonstrates the applicability of the approach.
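The annotation-driven partitioning idea can be illustrated with a minimal sketch: each activity is tagged "cloud" or "on-premise", and the process is split into deployable fragments by grouping consecutive activities with the same annotation. All names and the grouping strategy are illustrative assumptions, not the paper's actual transformation.

```python
def partition(process):
    """Split an annotated activity list into (location, fragment) pairs."""
    fragments = []
    for name, location in process:
        if fragments and fragments[-1][0] == location:
            # Same annotation as the previous activity: extend the fragment.
            fragments[-1][1].append(name)
        else:
            # Annotation changed: start a new deployable fragment.
            fragments.append((location, [name]))
    return fragments

# A toy order process; the confidential credit check stays on-premise.
process = [
    ("receive_order", "cloud"),
    ("check_credit",  "on-premise"),
    ("approve_order", "on-premise"),
    ("ship_order",    "cloud"),
]

for location, fragment in partition(process):
    print(location, fragment)
```

A real transformation would additionally generate the messaging glue between fragments; this sketch only shows the split itself.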
Recommended from our members
Requirements-Driven Adaptation of Choreographed Interactions
Electronic services are emerging as the de-facto enabler of interaction interoperability across organization boundaries. Cross-organizational interactions are often “choreographed”, i.e. specified by a messaging protocol from a global point of view, independent of the local view of each interacting organization. Local requirements motivating an interaction, as well as the global contextual requirements governing the interaction, inevitably evolve over time, requiring adaptation of the corresponding interaction protocol. Adaptation of an interaction protocol must ensure the satisfaction of both sets of interaction requirements while maintaining consistency between the global view and the local views of an interaction specification. Such adaptation is not possible with the current state-of-the-art representations of choreographed interactions, as they capture only operational messaging specifications detached from both local organizational requirements and global contextual requirements.
This thesis presents three novel contributions that tackle adaptation of choreographed interaction protocols: an automated technique for deriving an interaction protocol from requirements, a formalization of consistency between local and global views, and a framework for guiding the adaptation of a choreographed interaction. A choreographed interaction is specified using models of organizational requirements motivating the interaction. We employ the formal semantics embedded in requirements models to automatically derive an interaction protocol. We propose a framework for relating the global and local views of an interaction specification and maintaining consistency between them. We develop a metamodel for interaction specification, from which we enumerate adaptation operations. We build a catalogue that provides guidance on performing each operation and propagating changes between the global and local views. These contributions are evaluated using examples from the literature as well as a real-world case study.
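The global/local relationship underlying such consistency checks can be sketched in a toy model: a global protocol is a sequence of (sender, receiver, message) triples, each organization's local view keeps only the messages it sends or receives, and consistency means that re-projecting the global view reproduces each stored local view. This is a simplified illustration under assumed data structures, not the thesis's formalization.

```python
def project(global_protocol, org):
    """Local view of `org`: its sends ('!') and receives ('?')."""
    view = []
    for sender, receiver, msg in global_protocol:
        if sender == org:
            view.append(("!", msg))
        elif receiver == org:
            view.append(("?", msg))
    return view

def consistent(global_protocol, local_views):
    """Every stored local view must match the projection of the global view."""
    return all(project(global_protocol, org) == view
               for org, view in local_views.items())

protocol = [("buyer", "seller", "order"),
            ("seller", "buyer", "invoice")]
local_views = {"buyer":  [("!", "order"), ("?", "invoice")],
               "seller": [("?", "order"), ("!", "invoice")]}
print(consistent(protocol, local_views))  # True
```

In this toy model, adapting the global protocol (e.g. inserting a message) immediately shows which local views must be updated to restore consistency.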
Programming and parallelising applications for distributed infrastructures
The last decade has witnessed unprecedented changes in parallel and distributed infrastructures. Due to the diminished gains in processor performance from increasing clock frequency, manufacturers have moved from uniprocessor architectures to multicores; as a result, clusters of computers have incorporated such new CPU designs. Furthermore, the ever-growing need of scientific applications for computing and storage capabilities has motivated the appearance of grids: geographically-distributed, multi-domain infrastructures based on the sharing of resources to accomplish large and complex tasks. More recently, clouds have emerged by combining virtualisation technologies, service-orientation and business models to deliver IT resources on demand over the Internet.
The size and complexity of these new infrastructures poses a challenge for programmers to exploit them. On the one hand, some of the difficulties are inherent to concurrent and distributed programming themselves, e.g. dealing with thread creation and synchronisation, messaging, data partitioning and transfer, etc. On the other hand, other issues are related to the singularities of each scenario, like the heterogeneity of Grid middleware and resources or the risk of vendor lock-in when writing an application for a particular Cloud provider.
In the face of such a challenge, programming productivity - understood as a trade-off between programmability and performance - has become crucial for software developers. There is a strong need for high-productivity programming models and languages, which should provide simple means for writing parallel and distributed applications that can run on current infrastructures without sacrificing performance.
In that sense, this thesis contributes with Java StarSs, a programming model and runtime system for developing and parallelising Java applications on distributed infrastructures. The model has two key features: first, the user programs in a fully-sequential standard-Java fashion - no parallel construct, API call or pragma must be included in the application code; second, it is completely infrastructure-unaware, i.e. programs do not contain any details about deployment or resource management, so that the same application can run on different infrastructures with no changes. The only requirement for the user is to select the application tasks, which are the model's unit of parallelism. Tasks can be either regular Java methods or web service operations, and they can handle any data type supported by the Java language, namely files, objects, arrays and primitives. For the sake of simplicity of the model, Java StarSs shifts the burden of parallelisation from the programmer to the runtime system. The runtime is responsible for modifying the original application to make it create asynchronous tasks and synchronise data accesses from the main program. Moreover, the implicit inter-task concurrency is automatically found as the application executes, thanks to a data dependency detection mechanism that integrates all the Java data types.
This thesis provides a fairly comprehensive evaluation of Java StarSs on three different distributed scenarios: Grid, Cluster and Cloud. For each of them, a runtime system was designed and implemented to exploit their particular characteristics as well as to address their issues, while keeping the infrastructure unawareness of the programming model. The evaluation compares Java StarSs against state-of-the-art solutions, both in terms of programmability and performance, and demonstrates how the model can bring remarkable productivity to programmers of parallel distributed applications.
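The data-dependency detection idea can be sketched independently of Java StarSs itself: each task declares which data it reads and writes, and a task depends on the last task that wrote any of its inputs. The representation below is an illustrative assumption (the real runtime discovers accesses automatically from the sequential program), written in Python only as a compact sketch.

```python
def build_dependency_graph(tasks):
    """tasks: list of (task_name, reads, writes); returns (pred, succ) edges."""
    last_writer = {}   # datum -> name of the task that last wrote it
    edges = []
    for name, reads, writes in tasks:
        for datum in reads:
            # Read-after-write: this task must wait for the last writer.
            if datum in last_writer:
                edges.append((last_writer[datum], name))
        for datum in writes:
            last_writer[datum] = name
    return edges

# A tiny sequential "program": t3 consumes what t1 and t2 produce,
# so t1 and t2 may run in parallel while t3 must wait for both.
tasks = [
    ("t1", [], ["a"]),
    ("t2", [], ["b"]),
    ("t3", ["a", "b"], ["c"]),
]
print(build_dependency_graph(tasks))  # [('t1', 't3'), ('t2', 't3')]
```

Tasks with no edge between them are implicitly concurrent, which is exactly the parallelism the runtime extracts from sequential code.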
Supporting Quality of Service in Scientific Workflows
While workflow management systems have been utilized in enterprises to support
businesses for almost two decades, the use of workflows in scientific environments
was fairly uncommon until recently. Nowadays, scientists use workflow systems to
conduct scientific experiments, simulations, and distributed computations. However,
most scientific workflow management systems have not been built using existing
workflow technology; rather they have been designed and developed from
scratch. Due to the lack of generality of early scientific workflow systems, many
domain-specific workflow systems have been developed. Generally speaking, those
domain-specific approaches lack common acceptance and tool support and offer
lower robustness compared to business workflow systems.
In this thesis, the use of the industry standard BPEL, a workflow language
for modeling business processes, is proposed for the modeling and the execution of
scientific workflows. Due to the widespread use of BPEL in enterprises, a number
of stable and mature software products exist. The language is expressive (Turing-complete)
and not restricted to specific applications. BPEL is well suited for the
modeling of scientific workflows, but existing implementations of the standard lack
important features that are necessary for the execution of scientific workflows.
This work presents components that extend an existing implementation of the
BPEL standard and eliminate the identified weaknesses. The components thus provide
the technical basis for use of BPEL in academia. The particular focus is on
so-called non-functional (Quality of Service) requirements. These requirements include
scalability, reliability (fault tolerance), data security, and cost (of executing a
workflow). From a technical perspective, the workflow system must be able to interface
with the middleware systems that are commonly used by the scientific workflow
community to allow access to heterogeneous, distributed resources (especially Grid
and Cloud resources).
The major components cover exactly these requirements:

Cloud Resource Provisioner: Scalability of the workflow system is achieved by automatically adding additional (Cloud) resources to the workflow system's resource pool when the workflow system is heavily loaded.

Fault Tolerance Module: High reliability is achieved via continuous monitoring of workflow execution and corrective interventions, such as re-execution of a failed workflow step or replacement of the faulty resource.

Cost- and Data-Flow-Aware Scheduler: The majority of scientific workflow systems take only the performance and utilization of resources into account when scheduling workflow steps. The presented workflow system goes beyond that: by defining preference values that weight cost against the anticipated workflow execution time, workflow users may influence the resource selection process. The developed multiobjective scheduling algorithm respects the defined weighting and makes both efficient and advantageous decisions using a heuristic approach.

Security Extensions: Because it supports various encryption, signature and authentication mechanisms (e.g., Grid Security Infrastructure), the workflow system guarantees data security in the transfer of workflow data.
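The preference-weighted resource selection described for the scheduler can be sketched as a simple normalized weighted score; the resource names, prices and the scoring formula below are illustrative assumptions, not the dissertation's actual heuristic.

```python
def pick_resource(resources, cost_weight):
    """resources: {name: (cost, expected_time)}; 0 <= cost_weight <= 1.
    Cost and time are normalized so the two objectives are comparable."""
    max_cost = max(c for c, _ in resources.values())
    max_time = max(t for _, t in resources.values())

    def score(item):
        _, (cost, time) = item
        return (cost_weight * cost / max_cost
                + (1 - cost_weight) * time / max_time)

    # Lower weighted score is better.
    return min(resources.items(), key=score)[0]

resources = {
    "cheap_grid_node": (1.0, 60.0),  # low cost, slow
    "fast_cloud_vm":   (5.0, 10.0),  # expensive, fast
}
print(pick_resource(resources, cost_weight=0.9))  # cheap_grid_node
print(pick_resource(resources, cost_weight=0.1))  # fast_cloud_vm
```

A cost-conscious user sets a high cost weight and gets the cheap grid node; a deadline-driven user sets a low cost weight and gets the fast cloud VM.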
Furthermore, this work identifies the need to equip workflow developers with workflow modeling tools that can be used intuitively. This dissertation presents two modeling tools that support users with different needs. The first tool, DAVO (Domain-Adaptable Visual BPEL Orchestrator), operates at a low level of abstraction and allows users with knowledge of BPEL to use the full extent of the language. DAVO is a software tool that offers extensibility and customizability for different application domains. These features are used in the implementation of the second tool, the SimpleBPEL Composer. SimpleBPEL is aimed at users with little or no background in computer science and allows for quick and intuitive development of BPEL workflows based on predefined components.
Achieving Coordination Through Dynamic Construction of Open Workflows ** PLEASE SEE WUCSE-2009-14 **
Workflows, widely used on the Internet today, typically consist of a graph-like structure that defines the orchestration rules for executing a set of tasks, each of which is matched at run-time to a corresponding service. The graph is static, specialized directories enable the discovery of services, and the wired infrastructure supports routing of results among tasks. In this paper we introduce a radically new paradigm for workflow construction and execution called open workflow. It is motivated by the growing reliance on wireless ad hoc networks in settings such as emergency response, field hospitals, and military operations. Open workflows facilitate goal-directed coordination among physically mobile agents (people and host devices) that form a transient community over an ad hoc wireless network. The quintessential feature of the open workflow paradigm is the ability to construct a custom context-specific workflow specification on the fly in response to unpredictable and evolving circumstances by exploiting the know-how and services available within a given spatiotemporal context. This paper introduces the open workflow approach and explores the technical challenges (algorithms and architecture) associated with its first practical realization.