    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    Full text link
    With the advent of cloud computing, organizations can now react rapidly to changing demands for computational resources. Not only individual applications but also complete business processes can be hosted on virtual cloud infrastructures. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
    Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
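
    The paper describes its elastic BPMS architecture only at the conceptual level; as a rough, hypothetical sketch of what the core loop of such a scheduler might look like, consider the following. All names (ProcessStep, ElasticScheduler), cost units, and scaling thresholds are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of an elastic BPMS scheduling loop: process steps are
# queued, and leased cloud capacity is grown or shrunk to match demand.
# Names and thresholds are illustrative only.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class ProcessStep:
    process_id: str
    service: str          # which service type executes this step
    cost_units: int = 1   # rough resource demand of the step

@dataclass
class ElasticScheduler:
    capacity: int = 0                       # currently leased resource units
    queue: deque = field(default_factory=deque)

    def submit(self, step: ProcessStep) -> None:
        self.queue.append(step)

    def rebalance(self) -> None:
        """Scale leased resources to the queued demand (elasticity)."""
        demand = sum(s.cost_units for s in self.queue)
        if demand > self.capacity:
            self.capacity = demand          # acquire cloud resources
        elif demand < self.capacity // 2:
            self.capacity = max(demand, 1)  # release surplus resources

    def dispatch(self) -> list[ProcessStep]:
        """Start as many queued steps as the current capacity allows."""
        budget, started = self.capacity, []
        while self.queue and self.queue[0].cost_units <= budget:
            step = self.queue.popleft()
            budget -= step.cost_units
            started.append(step)
        return started
```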

    The Strategic Balance of Centralized Control and Localized Flexibility in Two-Tier ERP Systems

    Get PDF
    Two-tier ERP systems are an increasingly popular technology strategy for large, multinational enterprises. This paper examines how two-tier ERP enables organizations to balance centralized control and coordination at the corporate level with localized flexibility and responsiveness at the division/subsidiary level. The tier 1 ERP system handles core tasks like HR, finance, and IT using highly customized solutions tailored to the large corporate entity's needs, scale, and sophistication. This promotes enterprise-wide process standardization and centralized control. Meanwhile, the tier 2 ERP systems utilized by smaller subsidiaries and regional offices are less resource intensive and more configurable to address localized requirements. Tier 2 gives local divisions more control over their ERP to enable flexibility and responsiveness. This research analyzes the key drivers pushing large multinationals towards two-tier ERP, including managing complexity across global operations, enabling centralized coordination while allowing localization, integrating dispersed IT infrastructures, and controlling implementation costs. The paper explores the unique characteristics and benefits of tier 1 and tier 2 ERP systems in depth, providing concrete examples. Critical considerations for successfully deploying two-tier ERP are also examined, such as integration, change management, and striking the right balance between standardization and localization. The conclusion reached is that two-tier ERP delivers important synergistic benefits for large enterprises through its centralized/decentralized dual structure. The tier 1/tier 2 approach balances the key needs for coordination and control at the center with flexibility at the edges. However, careful planning is required for effective two-tier ERP implementation. The optimal balance between standardization and localization must be struck to fully realize the strategic potential. This research provides important insights for both academic study and real-world application of two-tier ERP systems

    Decentralized workflow management on software defined infrastructures

    Get PDF

    DFlow: Efficient Dataflow-based Invocation Workflow Execution for Function-as-a-Service

    Full text link
    Serverless computing is becoming increasingly popular due to its ease of use and fine-grained billing. These features make it appealing for stateful applications and serverless workflows. However, current serverless workflow systems use a controlflow-based invocation pattern: a function's invocation depends on the completion state of its precursor functions, so a function can only begin executing once all of its precursors have finished. As a result, this pattern can lead to longer end-to-end execution times. We design and implement DFlow, a novel dataflow-based serverless workflow system that achieves high performance for serverless workflows. DFlow introduces a distributed scheduler (DScheduler) that uses a dataflow-based invocation pattern: a function's invocation depends only on its data dependencies, so a function can start executing even while its precursor functions are still running. DFlow further features a distributed store (DStore) that applies fine-grained optimization techniques to eliminate function interaction, thereby enabling efficient data exchange. With the support of DScheduler and DStore, DFlow achieves average improvements in 99%-ile latency of 60% over CFlow, 40% over FaaSFlow, 25% over FaaSFlowRedis, and 40% over KNIX, respectively. Furthermore, it improves network bandwidth utilization by 2x-4x over CFlow and 1.5x-3x over FaaSFlow, FaaSFlowRedis, and KNIX, respectively. DFlow also effectively reduces cold startup latency, achieving an average improvement of 5.6x over CFlow and 1.1x over FaaSFlow.
    Comment: 22 pages, 13 figures
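
    As a minimal illustration of the dataflow-based invocation idea described above (not DFlow's actual implementation; the DataflowTrigger class and the workflow encoding are assumptions made for the example), a function fires as soon as all of its inputs exist, regardless of whether the producing functions have finished everything else:

```python
# Minimal sketch contrasting dataflow- with controlflow-based invocation:
# here a function is invoked once all of its *inputs* have been produced,
# even if the producing functions are still running.
class DataflowTrigger:
    def __init__(self, workflow):
        # workflow: {function_name: [names of the inputs it consumes]}
        self.needs = {fn: set(inputs) for fn, inputs in workflow.items()}

    def on_data(self, name: str) -> list[str]:
        """Called whenever an intermediate value becomes available."""
        fired = []
        for fn, pending in self.needs.items():
            pending.discard(name)
            if not pending:
                fired.append(fn)      # all inputs present: invoke now
        for fn in fired:
            del self.needs[fn]
        return fired

# Example: 'render' needs outputs a and b; 'archive' needs only b.
trigger = DataflowTrigger({"render": ["a", "b"], "archive": ["b"]})
print(trigger.on_data("b"))   # ['archive']
print(trigger.on_data("a"))   # ['render']
```

    A controlflow-based scheduler would instead wait for each producing function to report completion before invoking any successor, which is the extra serialization the dataflow pattern avoids.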

    ArchCloudChain Dapp: the efficient workflow for interior designers

    Get PDF
    The interior design and construction industry involves various stakeholders who must collaborate and coordinate effectively to ensure the successful realization of projects. However, the existing workflow often suffers from fragmentation and inefficiency, leading to delays, errors, and increased costs. To address these challenges, this paper introduces the Arch Cloud Chain Dapp project, a decentralized software application that leverages blockchain technology and Building Information Modeling (BIM) to establish a transparent, secure, and efficient platform for stakeholder collaboration in interior design projects. The primary objective of this project is to reduce interior design costs while upholding high standards of quality and transparency. By integrating BIM and blockchain technology, the Arch Cloud Chain Dapp enables stakeholders to collaborate in real-time, significantly mitigating the risk of errors and miscommunication. Smart contracts play a crucial role in ensuring the enforceability and transparency of agreements, while the blockchain serves as an immutable ledger, providing an auditable record of all project transactions. These innovative features present a novel solution to the challenges faced by the interior design and construction industry. The Arch Cloud Chain Dapp project holds significant potential to revolutionize the industry by streamlining processes, enhancing collaboration, and reducing costs. Through its adoption, stakeholders can benefit from improved project outcomes, streamlined communication, and enhanced efficiency, ultimately leading to a more sustainable and prosperous interior design and construction sector
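
    As an illustration of the "immutable ledger of project transactions" role the abstract assigns to the blockchain, the following hash-chained, append-only record is a minimal sketch only; the ProjectLedger class and its fields are hypothetical and not part of the Arch Cloud Chain Dapp.

```python
# Illustrative sketch: an append-only, hash-chained record of project
# transactions, approximating an immutable audit trail. Field names are
# hypothetical.
import hashlib, json, time

class ProjectLedger:
    def __init__(self):
        self.blocks = []

    def record(self, stakeholder: str, action: str, payload: dict) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {
            "stakeholder": stakeholder,   # e.g. designer, contractor, client
            "action": action,             # e.g. "approve_bim_revision"
            "payload": payload,           # e.g. BIM model hash, cost item
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(body)
        return body

    def verify(self) -> bool:
        """Detect any retroactive edit by re-checking the hash chain."""
        prev = "0" * 64
        for b in self.blocks:
            content = {k: v for k, v in b.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(content, sort_keys=True).encode()
            ).hexdigest()
            if b["prev_hash"] != prev or recomputed != b["hash"]:
                return False
            prev = b["hash"]
        return True
```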

    Decentralized Resource Scheduling in Grid/Cloud Computing

    Get PDF
    In the Grid/Cloud environment, applications, services, and resources belong to different organizations with different objectives. Entities in the Grid/Cloud are autonomous and self-interested; however, they are willing to share their resources and services to achieve their individual and collective goals. In such an open environment, making scheduling decisions is a challenge given its decentralized nature, and each entity has specific requirements and objectives it needs to achieve. In this thesis, we review Grid/Cloud computing technologies, environment characteristics, and structure, and identify the challenges of resource scheduling. We capture a Grid/Cloud scheduling model based on the complete requirements of the environment. We further map the Grid/Cloud scheduling problem to the combinatorial allocation problem and propose an adequate economic-based optimization model grounded in the characteristics and structure of the Grid/Cloud; by adequacy, we mean that a comprehensive view of the required properties of the Grid/Cloud is captured. We use the captured properties to propose a bidding language that is both expressive, in that entities can specify any set of preferences over Grid/Cloud resources, and simple, in that entities can express structured preferences directly. We propose a winner determination model and mechanism that uses the proposed bidding language to find a scheduling solution; our approach integrates concepts and principles of mechanism design and classical scheduling theory. Furthermore, we argue that in such an open environment, privacy concerns are by nature part of the requirements, and hence any scheduling decision in the Grid/Cloud must take into account the feasibility of protecting an entity's privacy. Each entity has specific requirements in terms of scheduling and privacy preferences. We analyze the privacy problem in the Grid/Cloud computing environment and propose an economic-based model and solution architecture that produces a scheduling solution under privacy concerns. Finally, to demonstrate the applicability of the approach, we apply our solution by integrating it with the Globus Toolkit (a widely adopted tool for enabling Grid/Cloud computing environments), and we present simulation results that capture the economic and time efficiency of the proposed solution
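
    As a toy illustration of the combinatorial-allocation view of scheduling described above (not the thesis's actual winner determination mechanism; the bid encoding is an assumption made for the example), winner determination can be read as choosing non-overlapping resource bundles of maximum total value:

```python
# Rough sketch of winner determination for a combinatorial allocation:
# each bid names a bundle of resources and a price, and the winners are a
# set of non-overlapping bids of maximum total value. This brute-force
# version is for illustration only (exponential in the number of bids).
from itertools import combinations

def winner_determination(bids):
    """bids: list of (bidder, frozenset_of_resources, price)."""
    best_value, best_alloc = 0.0, ()
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            used, feasible = set(), True
            for _, bundle, _ in subset:
                if used & bundle:        # a resource is already allocated
                    feasible = False
                    break
                used |= bundle
            value = sum(price for _, _, price in subset)
            if feasible and value > best_value:
                best_value, best_alloc = value, subset
    return best_value, best_alloc

# Example: self-interested entities bid on overlapping bundles of nodes.
bids = [
    ("org_a", frozenset({"node1", "node2"}), 8.0),
    ("org_b", frozenset({"node2", "node3"}), 6.0),
    ("org_c", frozenset({"node3"}), 5.0),
]
print(winner_determination(bids))   # org_a + org_c win, total value 13.0
```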