18 research outputs found

    Grid service orchestration using the Business Process Execution Language (BPEL)

    Get PDF
    Modern scientific applications often need to be distributed across grids. Increasingly, applications rely on services such as job submission, data transfer or data portal services. We refer to such services as grid services. While the invocation of grid services could in theory be hard-coded, scientific users want to orchestrate service invocations more flexibly. In enterprise applications, the orchestration of web services is achieved using emerging orchestration standards, most notably the Business Process Execution Language (BPEL). We describe our experience in orchestrating scientific workflows with BPEL, gained during an extensive case study that orchestrates grid services to automate a polymorph prediction application.
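
    As a rough illustration of the kind of control flow such a BPEL process expresses declaratively in XML, the following Python sketch strings together hypothetical grid service calls; the service names, operations and gsiftp URLs are invented, not taken from the case study.

        # Minimal sketch (hypothetical endpoints) of the control flow a BPEL process
        # might express for a polymorph prediction run: stage data in, submit a job,
        # poll until it finishes, then stage results out.
        import time

        def invoke(service, operation, **params):
            # Stand-in for a SOAP/WSRF invocation of a grid service.
            print(f"invoke {service}.{operation}({params})")
            return {"status": "DONE", "job_id": "job-42"}

        def orchestrate():
            invoke("DataTransferService", "copy",
                   src="gsiftp://lab/input.cml", dst="gsiftp://cluster/scratch/")
            job = invoke("JobSubmissionService", "submit",
                         executable="predict_polymorphs", args=["input.cml"])
            while invoke("JobSubmissionService", "status", job_id=job["job_id"])["status"] != "DONE":
                time.sleep(30)      # BPEL would model this with a <while>/<wait> pair
            invoke("DataTransferService", "copy",
                   src="gsiftp://cluster/scratch/results/", dst="gsiftp://lab/results/")

        if __name__ == "__main__":
            orchestrate()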

    Semantically Resolving Type Mismatches in Scientific Workflows

    No full text
    Scientists are increasingly utilizing Grids to manage large data sets and execute scientific experiments on distributed resources. Scientific workflows are used as a means for modeling and enacting scientific experiments. Windows Workflow Foundation (WF) is a major component of Microsoft’s .NET technology that offers lightweight support for long-running workflows. It provides a comfortable graphical and programmatic environment for the development of extended BPEL-style workflows. WF’s visual features ease the syntactic composition of Web services into scientific workflows, but do nothing to assure that information passed between services has consistent semantic types or representations, or that deviant flows, errors and compensations are handled meaningfully. In this paper we introduce SAWSDL-compliant annotations for WF and use them with a semantic reasoner to guarantee semantic type correctness in scientific workflows. Examples from bioinformatics are presented.
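
    The Python sketch below illustrates the general idea of checking semantic type compatibility across a workflow link; the toy ontology, concept URIs and activity names are invented stand-ins for SAWSDL modelReference annotations and for a real reasoner, not the annotations used in the paper.

        # A connection is accepted only if the producer's concept is the same as,
        # or a subclass of, the concept the consumer expects.
        SUBCLASS_OF = {  # toy ontology: child concept -> parent concept
            "bio:FastaProteinSequence": "bio:ProteinSequence",
            "bio:ProteinSequence": "bio:Sequence",
        }

        def subsumes(expected, actual):
            # True if `actual` is `expected` or a (transitive) subclass of it.
            while actual is not None:
                if actual == expected:
                    return True
                actual = SUBCLASS_OF.get(actual)
            return False

        def check_link(producer_port, consumer_port):
            ok = subsumes(consumer_port["modelReference"], producer_port["modelReference"])
            print(f"{producer_port['name']} -> {consumer_port['name']}: "
                  f"{'ok' if ok else 'semantic type mismatch'}")
            return ok

        blast_output = {"name": "BLAST.hits", "modelReference": "bio:FastaProteinSequence"}
        align_input = {"name": "ClustalW.sequences", "modelReference": "bio:Sequence"}
        check_link(blast_output, align_input)  # ok: a FASTA protein sequence is a Sequence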

    Enabling quantitative data analysis through e-infrastructures

    Get PDF
    This paper discusses how quantitative data analysis in the social sciences can engage with and exploit an e-Infrastructure. We highlight how a number of activities which are central to quantitative data analysis, referred to as ‘data management’, can benefit from e-Infrastructure support. We conclude by discussing how these issues are relevant to the DAMES (Data Management through e-Social Science) research Node, an ongoing project that aims to develop e-Infrastructural resources for quantitative data analysis in the social sciences.

    Mashups: An Approach to Overcoming the Business/IT Gap in Service-Oriented Architectures

    Get PDF
    Great importance has long been attached to the concept of Service-Oriented Architectures for future IT architectures. However, a major challenge in implementing this concept lies in the gap between business departments and IT departments. Mashups, an architectural style that is also based on services, try to avoid this gap by letting users integrate services themselves. This article analyzes similarities and differences between the two architectural approaches and explains to what extent, and in which cases, Mashups can complement a Service-Oriented Architecture.

    Extending BPEL for Interoperable Pervasive Computing

    Get PDF
    The widespread deployment of mobile devices like PDAs and mobile phones has created a vast computation and communication platform for pervasive computing applications. However, these devices feature an array of incompatible hardware and software architectures, discouraging ad-hoc interactions among devices. The Business Process Execution Language (BPEL) allows users in wired computing settings to model applications of significant complexity, leveraging Web standards to guarantee interoperability. However, BPEL’s inflexible communication model effectively prohibits its deployment on the kinds of dynamic wireless networks used by most pervasive computing devices. This paper presents extensions to BPEL that address these restrictions, transforming BPEL into a versatile platform for interoperable pervasive computing applications. We discuss our implementation of these extensions in Sliver, a lightweight BPEL execution engine that we have developed for mobile devices. We also evaluate a pervasive computing application prototype implemented in BPEL and running on Sliver.
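
    The paper's actual BPEL extensions are not reproduced here; the Python sketch below only illustrates the underlying idea of decoupling an invocation from a fixed HTTP endpoint, so that a partner can be reached over whatever channel a mobile peer happens to offer. Class names and endpoints are hypothetical.

        class Transport:
            def send(self, endpoint, message):
                raise NotImplementedError

        class HttpTransport(Transport):
            # The fixed, wired binding a conventional BPEL engine assumes.
            def send(self, endpoint, message):
                print(f"HTTP POST {endpoint}: {message}")

        class AdHocTransport(Transport):
            # Stand-in for, e.g., a Bluetooth or MANET channel on a mobile device.
            def send(self, endpoint, message):
                print(f"ad-hoc send to {endpoint}: {message}")

        class PartnerLink:
            # The same process definition can be bound to either transport at run time.
            def __init__(self, endpoint, transport):
                self.endpoint, self.transport = endpoint, transport

            def invoke(self, operation, payload):
                self.transport.send(self.endpoint, {"operation": operation, "payload": payload})

        PartnerLink("http://server/ws", HttpTransport()).invoke("getReading", {"sensor": 7})
        PartnerLink("btspp://phone-123", AdHocTransport()).invoke("getReading", {"sensor": 7})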

    A Distributed Workflow Platform for High-Performance Simulation

    Get PDF
    This paper presents an approach to design, implement and deploy a simulation platform based on distributed workflows. It supports the smooth integration of existing software, e.g., Matlab, Scilab, Python, OpenFOAM, Paraview and user-defined programs. Additional features include support for application-level fault tolerance and exception handling, i.e., resilience, and the orchestrated execution of distributed codes on remote high-performance clusters.
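
    A minimal Python sketch of the resilience idea, assuming a task is a call to an external code that may fail: failures are caught and retried, and an exception handler can fall back to an alternative implementation. The commands and script names are hypothetical and do not reflect the platform's API.

        import subprocess

        def run_task(cmd, retries=2, fallback=None):
            for attempt in range(retries + 1):
                try:
                    subprocess.run(cmd, check=True, timeout=3600)
                    return True
                except (OSError, subprocess.SubprocessError) as exc:
                    print(f"attempt {attempt + 1} of {' '.join(cmd)} failed: {exc}")
            if fallback is not None:  # application-level exception handling
                print("switching to fallback implementation")
                return run_task(fallback, retries=0)
            return False

        # Hypothetical pipeline: mesh with an OpenFOAM utility, post-process with Python.
        run_task(["blockMesh", "-case", "cavity"],
                 fallback=["python", "generate_mesh.py", "cavity"])
        run_task(["python", "postprocess.py", "cavity/results"])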

    Workflow Systems for Science: Concepts and Tools

    Get PDF
    The wide availability of high-performance computing systems, Grids and Clouds has allowed scientists and engineers to implement increasingly complex applications that access and process large data repositories and run scientific experiments in silico on distributed computing platforms. Most of these applications are designed as workflows that include data analysis, scientific computation methods, and complex simulation techniques. Scientific applications require tools and high-level mechanisms for designing and executing complex workflows. For this reason, in the past years, many efforts have been devoted to the development of distributed workflow management systems for scientific applications. This paper discusses basic concepts of scientific workflows and presents workflow system tools and frameworks used today for the implementation of applications in science and engineering on high-performance computers and distributed systems. In particular, the paper reports on a selection of workflow systems largely used for solving scientific problems and discusses some open issues and research challenges in the area.
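
    As a toy illustration of the core abstraction most of the surveyed systems share, the Python sketch below models a workflow as a directed acyclic graph of tasks executed in dependency order; the task names and dependencies are invented.

        from graphlib import TopologicalSorter  # Python 3.9+

        tasks = {
            "fetch_data": lambda: print("download input data set"),
            "clean_data": lambda: print("filter and normalise records"),
            "simulate":   lambda: print("run the in-silico experiment"),
            "analyse":    lambda: print("statistical analysis of results"),
            "visualise":  lambda: print("render plots"),
        }
        deps = {  # task -> set of tasks it depends on
            "clean_data": {"fetch_data"},
            "simulate":   {"clean_data"},
            "analyse":    {"simulate"},
            "visualise":  {"analyse"},
        }

        for name in TopologicalSorter(deps).static_order():
            tasks[name]()  # a real system would dispatch tasks to Grid or Cloud resources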

    Research in Business Process Management: A bibliometric analysis

    Get PDF
    Business Process Management (BPM) comprises several growing subtopics such as process mining, process flexibility and process compliance. BPM is also highly relevant for numerous related fields, such as Business Intelligence, ERP systems or Knowledge Management. The growing number of publications and the variety of topics in BPM make it useful to apply bibliometric methods to this scientific field. With bibliometric methods, topical clusters, essential authors and the relationships between them can be discovered. In this work, the BibTechMon software from the Austrian Institute of Technology is used to perform the bibliometric analyses. As a novelty for work with BibTechMon, data from Google Scholar is used as the basis of the analyses. The nature of Google Scholar data differs significantly from the data of other scientific databases, and these differences affect how the bibliometric analyses can be performed. After assessing these differences, several bibliometric analyses of the BPM field and related fields are performed. As a result, diverse topical clusters in BPM and its related fields were discovered, and important authors for each cluster and for the BPM field as a whole were identified. In order to evaluate the results, I conducted an interview on BPM with Professor Reichert, an active researcher in the field, and compared his statements with the results of the bibliometric analyses to assess how well they agree.
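
    As a toy illustration of the kind of co-occurrence analysis such bibliometric maps rest on (not the BibTechMon method itself), the Python sketch below counts how often keywords appear together in a handful of invented publication records; strong pairs would then be treated as links between topical clusters.

        from collections import Counter
        from itertools import combinations

        records = [  # invented keyword sets, not Google Scholar data
            {"process mining", "event logs", "BPM"},
            {"process mining", "conformance checking", "BPM"},
            {"process flexibility", "BPM", "workflow"},
            {"process compliance", "BPM", "business rules"},
        ]

        cooccurrence = Counter()
        for keywords in records:
            for pair in combinations(sorted(keywords), 2):
                cooccurrence[pair] += 1

        for pair, count in cooccurrence.most_common(5):
            print(pair, count)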

    Ensuring Service Level Agreements for Composite Services by Means of Request Scheduling

    Get PDF
    Building distributed systems according to the Service-Oriented Architecture (SOA) simplifies the integration process, reduces development costs and increases scalability, interoperability and openness. SOA endorses the reuse of existing services and their aggregation into new service layers for future recycling. At the same time, the complexity of large service-oriented systems is reflected negatively in their behavior in terms of the exhibited Quality of Service. To address this problem, this thesis focuses on using request scheduling to meet Service Level Agreements (SLAs). Special focus is given to composite services specified by means of workflow languages. The proposed solution uses two-level scheduling: global and local. The global policies assign response-time requirements to component service invocations; the local scheduling policies perform request scheduling so that these requirements are met. The proposed scheduling approach can be deployed without altering the code of the scheduled services, does not require a central point of control, and is platform independent. Simulation experiments were used to study the effectiveness and feasibility of the proposed scheduling schemes with respect to various deployment requirements. The validity of the simulation was confirmed by comparing its results with those obtained in experiments with a real-world service. The proposed approach was shown to work well under different traffic conditions and with different types of SLAs.
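
    A minimal Python sketch of the two-level idea, with illustrative numbers and policies (proportional budget splitting at the global level, earliest-deadline-first queueing at the local level); the policies evaluated in the thesis may differ.

        import heapq

        def assign_budgets(sla_ms, mean_service_ms):
            # Global level: share the end-to-end SLA across component invocations
            # in proportion to their mean service times.
            total = sum(mean_service_ms.values())
            return {svc: sla_ms * t / total for svc, t in mean_service_ms.items()}

        class EdfQueue:
            # Local level: order pending requests by their assigned deadlines.
            def __init__(self):
                self._heap = []

            def submit(self, deadline_ms, request_id):
                heapq.heappush(self._heap, (deadline_ms, request_id))

            def next_request(self):
                return heapq.heappop(self._heap) if self._heap else None

        budgets = assign_budgets(1000, {"inventory": 120, "payment": 280, "shipping": 100})
        print(budgets)  # per-invocation response-time targets: 240, 560 and 200 ms

        queue = EdfQueue()
        queue.submit(budgets["payment"], "req-1")    # relative deadline 560 ms
        queue.submit(budgets["inventory"], "req-2")  # relative deadline 240 ms
        print(queue.next_request())  # req-2 is served first: tighter deadline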