
    A Novel Approach for Triggering the Serverless Function in Serverless Environment

    Serverless computing has gained significant popularity in recent years due to its scalability, cost efficiency, and simplified development process. In a serverless environment, functions are the basic units of computation that are executed on demand, without the need for provisioning and managing servers. However, efficiently triggering serverless functions remains a challenge: traditional methodologies often suffer from latency, time-limit, and scalability issues, and the efficient execution and management of serverless functions rely heavily on effective triggering mechanisms. This research paper explores various design considerations and proposes a novel approach for designing efficient triggering mechanisms in serverless environments. By leveraging our proposed methodology, developers can efficiently trigger serverless functions in a variety of scenarios, including event-driven architectures, data processing pipelines, and web application backends.
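
    The abstract does not detail the proposed triggering mechanism, but a minimal sketch of the conventional event-driven triggering it sets out to improve, written as an AWS-Lambda-style Python handler consuming queue messages (the handler name, event shape, and process() helper are illustrative assumptions, not the paper's design), could look like this:

        import json

        def handler(event, context):
            # Conventional trigger: the platform invokes this function once per
            # batch of queue records; no servers are provisioned or managed.
            processed = 0
            for record in event.get("Records", []):
                payload = json.loads(record["body"])  # message produced upstream
                process(payload)                      # user-defined business logic
                processed += 1
            return {"processed": processed}

        def process(payload):
            # Hypothetical placeholder for the actual data-processing step.
            return payload

    The same handler shape applies to the scenarios listed above: an object-storage event, a pipeline message, or an HTTP request from a web backend merely changes the structure of the incoming event.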

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only individual applications, but also complete business processes can be hosted on virtual cloud infrastructures. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
    Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
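
    As a purely illustrative aside (the paper surveys these mechanisms rather than prescribing one), a threshold-based resource allocator for elastic processes could be sketched as follows; the Pool class, the capacity figure, and the ceiling rule are assumptions made for this example, not the architecture proposed by the authors:

        from dataclasses import dataclass

        @dataclass
        class Pool:
            vms: int = 1                # currently leased virtual machines
            capacity_per_vm: int = 10   # process steps one VM can run concurrently

        def rebalance(pool: Pool, queued_steps: int) -> Pool:
            # Naive rule: lease just enough VMs to cover the queued process
            # steps, release idle ones, and never drop below a single VM.
            pool.vms = max(1, -(-queued_steps // pool.capacity_per_vm))  # ceiling division
            return pool

        # Example: 37 queued steps with 10 steps per VM -> 4 VMs leased.
        print(rebalance(Pool(), 37).vms)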

    An object-oriented model for adaptive high-performance computing on the computational GRID

    The dissertation presents a new parallel programming paradigm for developing high-performance computing (HPC) applications on the Grid. We address the question "How to tailor HPC applications to the Grid?", where the heterogeneity and the large scale of resources are the two main issues. We respond to the question at two different levels: the programming tool level and the parallelization concept level. At the programming tool level, the adaptation of applications to the Grid environment takes two forms: either the application components decompose dynamically based on the available resources, or the components ask the infrastructure to select suitable resources automatically by providing descriptive information about their resource requirements. These two forms of adaptation lead to the parallel object model, in which resource requirements are integrated into shareable distributed objects in the form of object descriptions. We develop a tool called ParoC++ that implements the parallel object model. ParoC++ provides a comprehensive object-oriented infrastructure for developing and integrating HPC applications, for managing the Grid environment, and for executing applications on the Grid. At the parallelization concept level, we investigate a parallelization scheme which provides the user with a method to express parallelism that satisfies user-specified time constraints for a class of problems with known (or well-estimated) complexities on the Grid. The parallelization scheme is constructed on two principal elements: the decomposition tree, which represents the multi-level decomposition, and the decomposition dependency graph, which defines the partial order of execution within each decomposition. Through the scheme, the grain of parallelism is chosen automatically based on the resources available at run time. The parallelization scheme framework has been implemented using ParoC++. This framework provides a high-level abstraction which hides the complexities of the Grid environment so that users can focus on the "logic" of their problems. The dissertation is accompanied by a series of benchmarks and two real-life applications, one from image analysis for real-time textile manufacturing and one from snow simulation and avalanche warning. The results show the effectiveness of ParoC++ for developing high-performance computing applications, and in particular for solving time-constrained problems on the Grid.
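
    For illustration only, the interplay of the two elements can be sketched in Python; the class and method names below are assumptions made for this example and are not the ParoC++ API. Each node of the decomposition tree holds its sub-tasks, and a per-node dependency graph yields the partial order in which they may execute:

        class Decomposition:
            """One node of a decomposition tree: a task split into sub-tasks,
            with a dependency graph imposing a partial order on them."""

            def __init__(self, name, children=None, deps=None):
                self.name = name
                self.children = children or []   # sub-decompositions (next level of the tree)
                self.deps = deps or {}           # child name -> set of prerequisite child names

            def schedule(self):
                # Order the children so the partial order defined by the
                # dependency graph is respected (a simple topological sort).
                done, order, pending = set(), [], list(self.children)
                while pending:
                    ready = [c for c in pending if self.deps.get(c.name, set()) <= done]
                    if not ready:
                        raise ValueError("cyclic dependency among sub-tasks")
                    for c in ready:
                        order.append(c)
                        done.add(c.name)
                        pending.remove(c)
                return order

        # A task split into three sub-tasks; "merge" must wait for both halves.
        root = Decomposition("task", children=[Decomposition("left_half"),
                                               Decomposition("right_half"),
                                               Decomposition("merge")],
                             deps={"merge": {"left_half", "right_half"}})
        print([c.name for c in root.schedule()])  # ['left_half', 'right_half', 'merge']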

    A Service-Oriented Approach for Network-Centric Data Integration and Its Application to Maritime Surveillance

    Maritime-surveillance operators still demand an integrated maritime picture that better supports international coordination of their operations, as sought in the European area. In this area, many data-integration efforts have in the past been interpreted as the problem of designing, building and maintaining huge centralized repositories. Current research activities are instead leveraging service-oriented principles to achieve more flexible and network-centric solutions to systems and data integration. In this direction, this article reports on the design of an SOA platform, the Service and Application Integration (SAI) system, targeting novel approaches for legacy data and systems integration in the maritime surveillance domain. We have developed a proof-of-concept of the main system capabilities to assess the feasibility of our approach and to evaluate how the SAI middleware architecture can fit application requirements for dynamic data search, aggregation and delivery in the distributed maritime domain.
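
    Purely as an illustration of the service-oriented integration style described above (the interface and field names are assumptions, not the SAI system's actual contracts), a network-centric aggregation step could wrap each legacy repository behind a common search service and merge the answers into one picture:

        from typing import Iterable, Protocol

        class TrackSource(Protocol):
            # Common service interface wrapped around each legacy system.
            def search(self, area: str) -> Iterable[dict]: ...

        def aggregate(sources: Iterable[TrackSource], area: str) -> list:
            # Fan the query out to every registered source service and merge
            # the results, keeping the latest track per vessel identifier.
            merged = {}
            for source in sources:
                for track in source.search(area):
                    merged[track["vessel_id"]] = track
            return list(merged.values())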

    MapReduce to couple a bio-mechanical and a systems-biological simulation

    Recently, workflow technology has raised hopes in the scientific community that it could make complex scientific simulations easier to implement and maintain. The subject of this thesis is an existing workflow for a multi-scale simulation which calculates the flux of mass in porous human bone. The simulation consists of separate systems-biological and bio-mechanical simulation steps coupled through additional data processing steps. The workflow exhibits a high potential for parallelism which is used only to a marginal degree. We therefore investigate whether "Big Data" concepts such as MapReduce or NoSQL can be integrated into the workflow. A prototype of the workflow is developed using the Apache Hadoop ecosystem to parallelize the simulation, and this prototype is compared against a hand-parallelized baseline prototype in terms of performance and scalability. NoSQL concepts for storing inputs and results are utilized, with an emphasis on HDFS, the Hadoop Distributed File System, as a schemaless distributed file system, and MySQL Cluster as an intermediary between a classic database system and a NoSQL system. Lastly, the MapReduce-based prototype is implemented in the WS-BPEL workflow language using the SIMPL framework [RRS+11] and a custom Web Service to access Hadoop functionality. We show the simplicity of the resulting workflow model and argue that the approach greatly decreases implementation effort while at the same time enabling simulations to scale to very large data volumes with ease.
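
    A minimal sketch of the MapReduce idea applied here, assuming a Hadoop Streaming job whose map phase runs one systems-biological calculation per bone-mesh element and whose reduce phase merely collects the results for the bio-mechanical step (the input layout and the simulate_element stand-in are assumptions for the example, not the thesis's actual workflow):

        import sys

        def simulate_element(state: float) -> float:
            # Stand-in for the real systems-biological model of one mesh element.
            return state * 0.5

        def mapper():
            # Map phase: one independent calculation per "element_id<TAB>state"
            # line read from HDFS via Hadoop Streaming's stdin protocol.
            for line in sys.stdin:
                element_id, state = line.rstrip("\n").split("\t")
                print(f"{element_id}\t{simulate_element(float(state))}")

        def reducer():
            # Reduce phase: identity pass that gathers the per-element results
            # into one output file for the bio-mechanical simulation to read.
            for line in sys.stdin:
                sys.stdout.write(line)

        if __name__ == "__main__":
            mapper() if sys.argv[1:] == ["map"] else reducer()

    Such a script would be submitted with the standard Hadoop Streaming jar, passed as both -mapper and -reducer, which is what lets the per-element calculations run in parallel across the cluster.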