A grid-based infrastructure for distributed retrieval
In large-scale distributed retrieval, challenges of latency, heterogeneity, and dynamicity emphasise the importance of infrastructural support in reducing the development costs of state-of-the-art solutions. We present a service-based infrastructure for distributed retrieval which blends middleware facilities and a design framework to ‘lift’ the resource sharing approach and the computational services of a European Grid platform into the domain of e-Science applications. In this paper, we give an overview of the DILIGENT Search Framework and illustrate its exploitation in the field of Earth Science
BPM News - Folge 3
The BPM column of the EMISA forum reports on current topics, projects, and events in the BPM field. The focus of this column is the standardization of process description languages and notations in general, and BPEL4WS (Business Process Execution Language for Web Services) in particular. Jan Mendling of the Wirtschaftsuniversität Wien contributes a current-keyword piece on this topic. Readers also receive a summary of two workshops held in the first half of 2006 on the topics "Flexibility of Process-Oriented Information Systems" and "Collaborative Processes", as well as a BPM event calendar for the second half of 2006.
The Signal Data Explorer: A high performance Grid based signal search tool for use in distributed diagnostic applications
We describe a high performance Grid based signal search tool for distributed diagnostic applications developed in conjunction with Rolls-Royce plc for civil aero engine condition monitoring applications. With the introduction of advanced monitoring technology into engineering systems, healthcare, etc., the associated diagnostic processes are increasingly required to handle and consider vast amounts of data. An exemplar of such a diagnosis process was developed during the DAME project, which built a proof of concept demonstrator to assist in the enhanced diagnosis and prognosis of aero-engine conditions. In particular it has shown the utility of an interactive viewing and high performance distributed search tool (the Signal Data Explorer) in the aero-engine diagnostic process. The viewing and search techniques are equally applicable to other domains. The Signal Data Explorer and search services have been demonstrated on the Worldwide Universities Network to search distributed databases of electrocardiograph data
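The abstract does not detail the matching algorithm behind the Signal Data Explorer. As a rough illustration of the underlying idea, a sliding-window distance search over a collection of signals might look like the sketch below; all function names, the distance measure, and the coordinator-merge strategy are assumptions for illustration, not the tool's actual design:

```python
def sliding_distance(signal, pattern):
    """Euclidean distance of `pattern` against every window of `signal`."""
    m = len(pattern)
    return [
        sum((signal[i + j] - pattern[j]) ** 2 for j in range(m)) ** 0.5
        for i in range(len(signal) - m + 1)
    ]

def search_archive(archive, pattern):
    """Return (signal_id, offset, distance) of the best match across a
    dict of {signal_id: samples}. In a distributed setting each node
    would scan its local database and a coordinator would merge the
    per-node best matches, exactly as the final loop does here."""
    best = None
    for sid, samples in archive.items():
        dists = sliding_distance(samples, pattern)
        i = min(range(len(dists)), key=dists.__getitem__)
        if best is None or dists[i] < best[2]:
            best = (sid, i, dists[i])
    return best
```

Searching two toy "electrocardiograph" traces for the pattern `[1, 2, 1]` returns an exact hit at offset 2 of the first trace.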
Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud
With the advent of cloud computing, organizations are nowadays able to react
rapidly to changing demands for computational resources. Not only individual
applications can be hosted on virtual cloud infrastructures, but also complete
business processes. This allows the realization of so-called elastic processes,
i.e., processes which are carried out using elastic cloud resources. Despite
the manifold benefits of elastic processes, there is still a lack of solutions
supporting them.
In this paper, we identify the state of the art of elastic Business Process
Management with a focus on infrastructural challenges. We conceptualize an
architecture for an elastic Business Process Management System and discuss
existing work on scheduling, resource allocation, monitoring, decentralized
coordination, and state management for elastic processes. Furthermore, we
present two representative elastic Business Process Management Systems which
are intended to counter these challenges. Based on our findings, we identify
open issues and outline possible research directions for the realization of
elastic processes and elastic Business Process Management.

Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and
P. Hoenisch (2015). Elastic Business Process Management: State of the Art and
Open Challenges for BPM in the Cloud. Future Generation Computer Systems,
Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
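The scheduling and resource-allocation concerns surveyed in this abstract can be illustrated with a deliberately simple reactive scaling rule; this is a hypothetical sketch, not the policy of any system discussed in the paper:

```python
import math

def target_vm_count(queued_tasks, tasks_per_vm, min_vms=1, max_vms=20):
    """Reactive scaling rule for an elastic process engine: lease just
    enough VMs to drain the current queue within one scheduling
    interval, clamped to a configured range. Real elastic BPM systems
    additionally consider SLAs, lease costs, and startup delays."""
    need = math.ceil(queued_tasks / tasks_per_vm) if queued_tasks else 0
    return max(min_vms, min(max_vms, need))
```

With a capacity of 10 tasks per VM, 95 queued tasks scale out to 10 VMs, while an empty queue scales in to the configured minimum of one.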
Cloudbus Toolkit for Market-Oriented Cloud Computing
This keynote paper: (1) presents the 21st century vision of computing and
identifies various IT paradigms promising to deliver computing as a utility;
(2) defines the architecture for creating market-oriented Clouds and computing
atmosphere by leveraging technologies such as virtual machines; (3) provides
thoughts on market-based resource management strategies that encompass both
customer-driven service management and computational risk management to sustain
SLA-oriented resource allocation; (4) presents the work carried out as part of
our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a
Service software system containing SDK (Software Development Kit) for
construction of Cloud applications and deployment on private or public Clouds,
in addition to supporting market-oriented resource management; (ii)
internetworking of Clouds for dynamic creation of federated computing
environments for scaling of elastic applications; (iii) creation of 3rd party
Cloud brokering services for building content delivery networks and e-Science
applications and their deployment on capabilities of IaaS providers such as
Amazon along with Grid mashups; (iv) CloudSim supporting modelling and
simulation of Clouds for performance studies; (v) Energy Efficient Resource
Allocation Mechanisms and Techniques for creation and management of Green
Clouds; and (vi) pathways for future research.

Comment: 21 pages, 6 figures, 2 tables, Conference paper
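CloudSim itself is a Java toolkit; the following Python toy captures, in miniature, the kind of question such a simulator answers, namely the makespan of a workload under an earliest-free-VM assignment. It is a minimal sketch under strong simplifying assumptions (no network, no contention), not CloudSim's API:

```python
import heapq

def simulate_makespan(task_lengths, vm_mips):
    """Assign each task (length in million instructions) to the VM that
    becomes free earliest and return the overall completion time.
    A toy analogue of what a cloud simulator models in far more detail."""
    ready = [(0.0, i) for i in range(len(vm_mips))]  # (free_at, vm index)
    heapq.heapify(ready)
    finish = 0.0
    for mi in task_lengths:
        free_at, vm = heapq.heappop(ready)
        done = free_at + mi / vm_mips[vm]
        finish = max(finish, done)
        heapq.heappush(ready, (done, vm))
    return finish
```

Four 100-MI tasks on two 100-MIPS VMs finish in 2.0 time units; skewed task lengths expose load imbalance, which is precisely the kind of effect performance studies use such simulations to measure.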
A Workflow for Fast Evaluation of Mapping Heuristics Targeting Cloud Infrastructures
Resource allocation is today an integral part of cloud infrastructure
management, needed to exploit resources efficiently. Cloud infrastructure
centers generally use custom-built heuristics to define resource allocations. It is
an immediate requirement for the management tools of these centers to have a
fast yet reasonably accurate simulation and evaluation platform to define the
resource allocation for cloud applications. This work proposes a framework
allowing users to easily specify mappings for cloud applications described in
the AMALTHEA format used in the context of the DreamCloud European project and
to assess the quality of these mappings. The two quality metrics provided by
the framework are execution time and energy consumption.

Comment: 2nd International Workshop on Dynamic Resource Allocation and
Management in Embedded, High Performance and Cloud Computing DREAMCloud 2016
(arXiv:cs/1601.04675)
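The two quality metrics named above, execution time and energy consumption, could be estimated for a candidate mapping roughly as follows. This is a hypothetical model with invented names, assuming sequential execution per core and a static per-core power draw; the DreamCloud framework's actual cost models are more elaborate:

```python
def evaluate_mapping(mapping, task_cycles, core_freq_hz, core_power_w):
    """Estimate the two quality metrics for a task-to-core mapping:
    execution time (the busiest core, since tasks on one core run
    sequentially) and energy (busy time times per-core power)."""
    busy = {core: 0.0 for core in core_freq_hz}
    for task, core in mapping.items():
        busy[core] += task_cycles[task] / core_freq_hz[core]
    exec_time = max(busy.values())
    energy = sum(busy[c] * core_power_w[c] for c in busy)
    return exec_time, energy
```

Mapping two 1-Gcycle tasks onto a slow core and one onto a fast core yields an execution time dominated by the slow core, showing how such a score lets heuristics be compared quickly without a full system simulation.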
Portability of Scientific Workflows in NGS Data Analysis: A Case Study
The analysis of next-generation sequencing (NGS) data requires complex
computational workflows consisting of dozens of autonomously developed yet
interdependent processing steps. Whenever large amounts of data need to be
processed, these workflows must be executed on parallel and/or distributed
systems to ensure reasonable runtime. Porting a workflow developed for a
particular system on a particular hardware infrastructure to another system or
to another infrastructure is non-trivial, which poses a major impediment to the
scientific necessities of workflow reproducibility and workflow reusability. In
this work, we describe our efforts to port a state-of-the-art workflow for the
detection of specific variants in whole-exome sequencing of mice. The workflow
originally was developed in the scientific workflow system snakemake for
execution on a high-performance cluster controlled by Sun Grid Engine. In the
project, we ported it to the scientific workflow system SaasFee that can
execute workflows on (multi-core) stand-alone servers or on clusters of
arbitrary sizes using Hadoop. The purpose of this port was to enable owners
of low-cost hardware infrastructures, for which Hadoop was designed, to use
the workflow as well. Although both the source and the target system are
called scientific workflow systems, they differ in numerous aspects, ranging
from the workflow languages to the scheduling mechanisms and the file access
interfaces. These differences resulted in various problems, some expected and
others unexpected, that had to be resolved before the workflow could be run with
equal semantics. As a side-effect, we also report cost/runtime ratios for a
state-of-the-art NGS workflow on very different hardware platforms: A
comparably cheap stand-alone server (80 threads), a mid-cost, mid-sized cluster
(552 threads), and a high-end HPC system (3784 threads).