
    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only can individual applications be hosted on virtual cloud infrastructures, but also complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
    Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00

    Employing the Powered Hybridized Darts Game with BWO Optimization for Effective Job Scheduling and Distributing Load in the Cloud-Based Environment

    One of the most frequent issues in cloud computing systems is job scheduling, which aims to reduce implementation time and cost while simultaneously enhancing resource utilisation. Constraints such as implementation cost, resource utilisation, make-span, and scheduling response time make this a Nondeterministic Polynomial (NP)-hard optimisation problem: as the number of possible combinations and the available processing power increase, job allocation becomes intractable. This study employs a hybrid heuristic optimisation technique that incorporates load balancing to achieve optimal job scheduling and boost service-provider performance within the cloud architecture, greatly reducing problems in the scheduling process. The suggested scheduling approach resolves the load balancing issue: the Hybridised Darts Game-Based Beluga Whale Optimisation Algorithm (HDG-BWOA) assigns jobs to machines according to workload. When assigning jobs to virtual machines, factors such as reduced energy usage, minimised mean reaction time, an enhanced job assurance ratio, and higher Cloud Data Centre (CDC) resource consumption are taken into account. By ensuring flexibility among virtual machines, this job scheduling strategy keeps them from overloading or underloading; it also allows more jobs to finish before their deadlines. The effectiveness of the proposed configuration is validated against traditional heuristic-based job scheduling techniques on multiple assessment metrics.

    Hybridized Darts Game with Beluga Whale Optimization Strategy for Efficient Task Scheduling with Optimal Load Balancing in Cloud Computing

    Cloud computing permits clients to use hardware and software virtually on a subscription basis. Task scheduling aims to minimize implementation time and cost while simultaneously increasing resource utilization, and it is one of the most common problems in cloud computing systems. The problem is Nondeterministic Polynomial (NP)-hard: constraints such as make-span, resource utilization, implementation cost, and scheduling response time, combined with the growing number of task combinations and computing resources, rule out exhaustive search. In this work, a hybrid heuristic optimization technique with load balancing is implemented for optimal task scheduling to increase the performance of service providers in the cloud infrastructure. Thus, the issues that occur in the scheduling process are greatly reduced. The load balancing problem is effectively solved with the help of the proposed task scheduling scheme. Tasks are allocated to machines based on workload by the proposed Hybridized Darts Game-Based Beluga Whale Optimization Algorithm (HDG-BWOA). Objective functions such as higher Cloud Data Center (CDC) resource consumption, an increased task assurance ratio, minimized mean reaction time, and reduced energy utilization are considered while allocating tasks to the virtual machines. This task scheduling approach ensures flexibility among virtual machines, preventing them from overloading or underloading. It also allows more tasks to be completed efficiently within the deadline. The efficacy of the offered arrangement is validated against conventional heuristic-based task scheduling approaches on various evaluation measures.
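    The workload-aware task-to-VM assignment described in the two abstracts above can be illustrated with a small fitness function. This is a minimal sketch, not the authors' HDG-BWOA implementation: the function name, the weights, and the specific makespan-plus-imbalance objective are illustrative assumptions; a metaheuristic such as HDG-BWOA would minimise a score of this kind over candidate assignments.

```python
def fitness(assignment, task_lengths, vm_speeds, w_makespan=0.5, w_balance=0.5):
    """Score a task-to-VM assignment (lower is better).

    assignment[i] = index of the VM that runs task i.
    Combines makespan with load imbalance, the kind of multi-objective
    score a scheduling metaheuristic would minimise (illustrative only).
    """
    loads = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        # execution time of this task on its assigned VM
        loads[vm] += task_lengths[task] / vm_speeds[vm]
    makespan = max(loads)               # finish time of the busiest VM
    imbalance = max(loads) - min(loads)  # spread between busiest and idlest VM
    return w_makespan * makespan + w_balance * imbalance

# Example: tasks of length 4, 2, 2 on two equal-speed VMs.
# Putting tasks 0 and 2 on VM 0 gives loads [6, 2]:
# fitness = 0.5 * 6 + 0.5 * (6 - 2) = 5.0
score = fitness([0, 1, 0], [4, 2, 2], [1, 1])
```

    A search algorithm would generate many candidate assignments, score each with such a function, and keep the best, which is how load balancing and makespan reduction are traded off in one objective.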

    A review on job scheduling technique in cloud computing and priority rule based intelligent framework

    In recent years, the concept of cloud computing has been gaining traction to provide dynamically increasing access to shared computing resources (software and hardware) via the internet. It's no secret that cloud computing's ability to supply mission-critical services has made job scheduling a hot subject in the industry right now. Cloud resources may be wasted, or in-service performance may suffer, because of under-utilization or over-utilization, respectively, due to poor scheduling. Various strategies from the literature are examined in this research in order to give procedures for the planning and performance of Job Scheduling Techniques (JST) in cloud computing. To begin, we look at and tabulate the existing JST linked to cloud and grid computing. The present achievements are then thoroughly reviewed, difficulties and flaws are identified, and intelligent solutions are devised to take advantage of the proposed taxonomy. To bridge the gaps between present investigations, this paper also seeks to provide readers with a conceptual framework, in which we propose an effective job scheduling technique in cloud computing. These findings are intended to provide academics and policymakers with information about the advantages of a more efficient cloud computing setup. Fair job scheduling is especially important in cloud computing, so we propose a priority-based scheduling technique to ensure it. Finally, the open research questions raised in this article will create a path for the implementation of an effective job scheduling strategy.
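    The priority-based fair scheduling idea in this abstract can be illustrated with a toy dispatcher. This is a hypothetical sketch, not the paper's algorithm: the aging mechanism, function name, and tie-breaking rule are assumptions chosen to show one common way priority ordering is kept starvation-free (waiting jobs gradually gain priority).

```python
def fair_dispatch_order(jobs, aging_step=1):
    """Return a dispatch order for jobs given as (job_id, base_priority)
    pairs listed in arrival order. Each round, every job still waiting
    gains `aging_step` priority, so low-priority jobs are never starved.
    Ties are broken by earliest arrival (here: the smaller job_id,
    assuming ids follow arrival order -- an illustrative assumption).
    """
    waiting = {jid: prio for jid, prio in jobs}
    order = []
    while waiting:
        # Pick the job with the highest current priority.
        best = max(waiting, key=lambda jid: (waiting[jid], -jid))
        order.append(best)
        del waiting[best]
        for jid in waiting:
            waiting[jid] += aging_step  # age the jobs left behind
    return order

# Job 2 starts with low priority but is dispatched second-to-last
# rather than being starved indefinitely behind high-priority arrivals.
order = fair_dispatch_order([(1, 5), (2, 1), (3, 5)])
```

    The aging step is the fairness knob: with `aging_step=0` this degenerates to a plain static-priority queue, which is exactly the starvation-prone behaviour a fair scheduler is meant to avoid.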

    Work flows in life science

    The introduction of computer science technology in the life science domain has resulted in a new life science discipline called bioinformatics. Bioinformaticians are biologists who know how to apply computer science technology to perform computer-based experiments, also known as in-silico or dry-lab experiments. Various tools, such as databases, web applications and scripting languages, are used to design and run in-silico experiments. As the size and complexity of these experiments grow, new types of tools are required to design and execute the experiments and to analyse the results. Workflow systems promise to fulfill this role. The bioinformatician composes an experiment by using tools and web services as building blocks and connecting them, often through a graphical user interface. Workflow systems, such as Taverna, provide access to up to a few thousand resources in a uniform way. Although workflow systems are intended to make the bioinformaticians' work easier, bioinformaticians experience difficulties in using them. This thesis is devoted to finding out which problems bioinformaticians experience when using workflow systems and to providing solutions for these problems.

    A new priority rule cloud scheduling technique that utilizes gaps to increase the efficiency of jobs distribution

    In recent years, the concept of cloud computing has been gaining traction to provide dynamically increasing access to shared computing resources (software and hardware) via the internet. It's no secret that cloud computing's ability to supply mission-critical services has made job scheduling a hot subject in the industry right now. However, the efficient utilization of these cloud resources has been a challenge, often resulting in wastage or degraded service performance due to poor scheduling. Existing research has focused on queue-based job scheduling techniques, where jobs are scheduled based on specific deadlines or job lengths, and numerous researchers have tried to improve existing Priority Rule (PR) cloud schedulers by developing dynamic scheduling algorithms; these have nevertheless fallen short on user-satisfaction metrics such as flowtime, makespan, and total tardiness. The limitations of current PR schedulers are mainly caused by blocking from jobs at the head of the queue, which leads to poor performance of cloud-based mobile applications and other cloud services. To address this issue, the main objective of this research is to improve existing PR cloud schedulers by developing a new dynamic scheduling algorithm that manipulates the gaps in the cloud job schedule. In this thesis, first a Priority-Based Fair Scheduling (PBFS) algorithm is introduced to schedule jobs so that they get access to the required resources at optimal times. Then, a backfilling strategy called Shortest Gap Priority-Based Fair Scheduling (SG-PBFS) is proposed that manipulates the gaps in the schedule of cloud jobs.
    Finally, the performance evaluation demonstrates that the proposed SG-PBFS algorithm outperforms SG-SJF, SG-LJF, SG-FCFS, SG-EDF, and SG-(MAX-MIN) in terms of flow time, makespan, and total tardiness, which conclusively demonstrates its effectiveness. The experimental results show that for 500 jobs, SG-PBFS's flow time, makespan, and tardiness are 9%, 4%, and 7% lower than those of PBFS, respectively.
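    The shortest-gap backfilling idea behind SG-PBFS can be sketched as follows. This is an illustrative reconstruction, not the thesis's implementation: the data structures, the priority ordering, and the choice of the smallest fitting gap are assumptions based only on the description above.

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    job_id: int
    length: int    # required processing time units
    priority: int  # higher value = scheduled earlier

@dataclass
class Machine:
    # busy intervals as (start, end) tuples
    busy: list = field(default_factory=list)

    def gaps(self, horizon):
        """Yield idle (start, end) intervals up to the horizon."""
        cursor = 0
        for start, end in sorted(self.busy):
            if start > cursor:
                yield (cursor, start)
            cursor = max(cursor, end)
        if cursor < horizon:
            yield (cursor, horizon)

def shortest_gap_backfill(machine, jobs, horizon):
    """Place jobs, in priority order, into the smallest idle gap that
    fits each job, so short jobs fill holes instead of blocking the queue."""
    schedule = {}
    for job in sorted(jobs, key=lambda j: -j.priority):
        fitting = [g for g in machine.gaps(horizon) if g[1] - g[0] >= job.length]
        if not fitting:
            continue  # job stays queued; no gap can hold it yet
        start, _ = min(fitting, key=lambda g: g[1] - g[0])  # shortest fitting gap
        machine.busy.append((start, start + job.length))
        schedule[job.job_id] = (start, start + job.length)
    return schedule

# A machine busy during [0,3), [5,6), [10,12) has gaps (3,5), (6,10), (12,20).
# The length-2 job slots exactly into (3,5); the length-1 job then takes (6,10).
m = Machine(busy=[(0, 3), (5, 6), (10, 12)])
sched = shortest_gap_backfill(m, [Job(1, 2, 5), Job(2, 1, 3)], horizon=20)
```

    Filling the smallest gap that fits preserves the large gaps for large jobs, which is how a backfilling scheduler reduces the head-of-queue blocking described in the abstract.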

    Programming Languages for Data-Intensive HPC Applications: a Systematic Mapping Study

    A major challenge in modelling and simulation is the need to combine expertise in both software technologies and a given scientific domain. When High-Performance Computing (HPC) is required to solve a scientific problem, software development becomes a problematic issue. Considering the complexity of the software for HPC, it is useful to identify programming languages that can be used to alleviate this issue. Because the existing literature on the topic of HPC is very dispersed, we performed a Systematic Mapping Study (SMS) in the context of the European COST Action cHiPSet. This literature study maps characteristics of various programming languages for data-intensive HPC applications, including category, typical user profiles, effectiveness, and type of articles. We organised the SMS in two phases. In the first phase, relevant articles are identified employing an automated keyword-based search in eight digital libraries. This led to an initial sample of 420 papers, which was then narrowed down in a second phase by human inspection of article abstracts, titles and keywords to 152 relevant articles published in the period 2006–2018. The analysis of these articles enabled us to identify 26 programming languages referred to in 33 of the relevant articles. We compared the outcome of the mapping study with the results of our questionnaire-based survey that involved 57 HPC experts. The mapping study and the survey revealed that the desired features of programming languages for data-intensive HPC applications are portability, performance and usability. Furthermore, we observed that the majority of the programming languages used in the context of data-intensive HPC applications are text-based general-purpose programming languages. Typically these have a steep learning curve, which makes them difficult to adopt.
We believe that the outcome of this study will inspire future research and development in programming languages for data-intensive HPC applications.
    Additional co-authors: Sabri Pllana, Ana Respício, José Simão, Luís Veiga, Ari Vis