Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud
With the advent of cloud computing, organizations are nowadays able to react
rapidly to changing demands for computational resources. Not only individual
applications but also complete business processes can be hosted on virtual
cloud infrastructures. This allows the realization of so-called elastic processes,
i.e., processes which are carried out using elastic cloud resources. Despite
the manifold benefits of elastic processes, there is still a lack of solutions
supporting them.
In this paper, we identify the state of the art of elastic Business Process
Management with a focus on infrastructural challenges. We conceptualize an
architecture for an elastic Business Process Management System and discuss
existing work on scheduling, resource allocation, monitoring, decentralized
coordination, and state management for elastic processes. Furthermore, we
present two representative elastic Business Process Management Systems which
are intended to counter these challenges. Based on our findings, we identify
open issues and outline possible research directions for the realization of
elastic processes and elastic Business Process Management.
Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and
P. Hoenisch (2015). Elastic Business Process Management: State of the Art and
Open Challenges for BPM in the Cloud. Future Generation Computer Systems,
Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
Dynamic Resource Allocation Model for Distribution Operations using SDN
In vehicular ad-hoc networks, autonomous vehicles generate large amounts of data to support in-vehicle applications, so a platform with large storage and high computational capacity is needed. At the same time, computation for vehicular networks on a cloud platform must meet low-latency requirements. Edge computing (EC), as a new computing paradigm, has the potential to provide computation services while reducing latency and improving total utility. We propose a three-tier EC framework that sets elastic computing capacity and dynamically routes computation to suitable edge servers for real-time vehicle monitoring. This framework comprises a cloud computing layer, an EC layer, and a device layer. The resource allocation approach is formulated as an optimization problem, and we design a new reinforcement learning (RL) algorithm, assisted by cloud computation, to solve it. By integrating EC with software-defined networking (SDN), this study provides a new software-defined networking edge (SDNE) framework for resource assignment in vehicular networks. The novelty of this work is a multi-agent RL-based approach using experience replay. The proposed algorithm stores the users' communication information and the network state in real time. Simulation results with various system factors are presented to demonstrate the efficiency of the suggested framework, and we also present results from a real-world case study.
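The multi-agent RL idea with experience replay described above can be sketched roughly as follows. The server count, toy latency model, and hyperparameters are illustrative assumptions, not the paper's implementation: each vehicle agent learns, via tabular Q-learning over a replay buffer, which edge server to offload to.

```python
import random
from collections import deque, defaultdict

# Sketch only: each vehicle agent picks an edge server; reward is the
# negative latency of its chosen server, which grows with server load.
N_AGENTS, N_SERVERS = 4, 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

class AgentQ:
    def __init__(self):
        self.q = defaultdict(float)        # (state, server) -> value estimate
        self.replay = deque(maxlen=500)    # experience-replay buffer

    def act(self, state):
        if random.random() < EPS:          # epsilon-greedy exploration
            return random.randrange(N_SERVERS)
        return max(range(N_SERVERS), key=lambda a: self.q[(state, a)])

    def learn(self, batch=16):
        # Q-learning update over a random minibatch of stored transitions.
        for s, a, r, s2 in random.sample(self.replay,
                                         min(batch, len(self.replay))):
            best_next = max(self.q[(s2, b)] for b in range(N_SERVERS))
            self.q[(s, a)] += ALPHA * (r + GAMMA * best_next - self.q[(s, a)])

def latency(load):
    return 1.0 + 0.5 * load                # toy queueing-delay model

random.seed(0)
agents = [AgentQ() for _ in range(N_AGENTS)]
state = 0                                  # coarse observed congestion level
for step in range(1000):
    choices = [ag.act(state) for ag in agents]
    loads = [choices.count(s) for s in range(N_SERVERS)]
    next_state = max(loads)
    for ag, c in zip(agents, choices):
        ag.replay.append((state, c, -latency(loads[c]), next_state))
        ag.learn()
    state = next_state

final = [ag.act(state) for ag in agents]
print(final)
```

With a shared congestion observation and individual buffers, the agents tend to spread load across servers; a real SDNE controller would of course observe far richer network state.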
Internet of Vehicles and Real-Time Optimization Algorithms: Concepts for Vehicle Networking in Smart Cities
Achieving sustainable freight transport and citizens' mobility in modern cities is becoming a critical issue for many governments. By analyzing the big data streams generated by IoT devices, city planners now have the possibility to optimize traffic and mobility patterns. IoT, combined with innovative transport concepts and emerging mobility modes (e.g., ridesharing and carsharing), constitutes a new paradigm for sustainable and optimized traffic operations in smart cities. Still, these are highly dynamic scenarios that are also subject to a high degree of uncertainty. Hence, factors such as real-time optimization and re-optimization of routes, stochastic travel times, and evolving customer requirements and traffic conditions also have to be considered. This paper discusses the main challenges associated with the Internet of Vehicles (IoV) and vehicle networking scenarios, identifies the underlying optimization problems that need to be solved in real time, and proposes an approach that combines IoV with parallelization approaches. To this end, agile optimization and distributed machine learning are envisaged as the best candidate algorithms for developing efficient transport and mobility systems.
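Agile optimization, as referenced above, typically pairs a fast constructive heuristic with biased randomization and runs many replicas in parallel so that a fresh solution is always available for real-time re-optimization. A minimal sketch of that idea on a toy routing instance (the points, the bias parameter, and the nearest-neighbour rule are all invented for illustration; replicas run sequentially here):

```python
import math
import random

def tour_length(tour, pts):
    # Closed-tour length, including the return leg to the start.
    return sum(math.dist(pts[tour[i]], pts[tour[i + 1]])
               for i in range(len(tour) - 1)) + math.dist(pts[tour[-1]], pts[tour[0]])

def biased_nn_tour(pts, beta=0.3, rng=random):
    # Nearest-neighbour construction, but the next stop is drawn from the
    # distance-ranked candidates with a quasi-geometric bias instead of
    # always taking the single best -- the core of biased randomization.
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        ranked = sorted(unvisited,
                        key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        k = min(int(math.log(1 - rng.random()) / math.log(1 - beta)),
                len(ranked) - 1)
        nxt = ranked[k]
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

random.seed(1)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(12)]
# Many cheap biased replicas; keep the best. In agile optimization these
# replicas would run in parallel to meet real-time deadlines.
best = min((biased_nn_tour(pts) for _ in range(200)),
           key=lambda t: tour_length(t, pts))
print(round(tour_length(best, pts), 2))
```

Because each replica is independent, the scheme parallelizes trivially, which is what makes it attractive for the real-time IoV setting the paper targets.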
Modelling and condition-based control of a flexible and hybrid disassembly system with manual and autonomous workstations using reinforcement learning
Remanufacturing includes the disassembly and reassembly of used products to save natural resources and reduce emissions. While assembly is well understood in the field of operations management, disassembly is a rather new problem in production planning and control. The latter faces the challenge of high uncertainty in the type, quantity, and quality of returned products, leading to high volatility in remanufacturing production systems. Traditionally, disassembly is a manual, labor-intensive production step that, thanks to advances in robotics and artificial intelligence, is starting to be automated with autonomous workstations. Due to the diverging material flow, production systems with loosely linked stations are particularly suitable, and, owing to the risk of condition-induced operational failures, the rise of hybrid disassembly systems that combine manual and autonomous workstations can be expected. In contrast to traditional workstations, autonomous workstations can expand their capabilities but suffer from unknown failure rates. For such adverse conditions, this work presents a condition-based control for hybrid disassembly systems based on reinforcement learning (RL), alongside a comprehensive modeling approach. The method is applied to a real-world production system. By comparison with a heuristic control approach, the potential of the RL approach is demonstrated in simulation using two different test cases.
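The core dispatching decision described above can be illustrated with a tiny RL sketch: a learner routes each returned product, whose quality condition is observed, to either a manual or an autonomous workstation. The condition classes, failure probabilities, process times, and learning rate are invented for illustration, not taken from the paper.

```python
import random
from collections import defaultdict

CONDITIONS = ["good", "worn", "damaged"]
ACTIONS = ["autonomous", "manual"]
FAIL_P = {"good": 0.05, "worn": 0.2, "damaged": 0.7}  # autonomous failure risk
TIME = {"autonomous": 1.0, "manual": 2.5}             # nominal process times

def step(cond, action, rng):
    # Autonomous stations are fast but may fail on worn or damaged parts,
    # incurring a rework penalty; reward is negative processing time.
    t = TIME[action]
    if action == "autonomous" and rng.random() < FAIL_P[cond]:
        t += 5.0
    return -t

rng = random.Random(7)
q = defaultdict(float)
for episode in range(5000):
    cond = rng.choice(CONDITIONS)
    # Epsilon-greedy choice between station types for this condition.
    a = rng.choice(ACTIONS) if rng.random() < 0.1 else \
        max(ACTIONS, key=lambda x: q[(cond, x)])
    r = step(cond, a, rng)
    q[(cond, a)] += 0.1 * (r - q[(cond, a)])          # bandit-style update

policy = {c: max(ACTIONS, key=lambda x: q[(c, x)]) for c in CONDITIONS}
print(policy)
```

Under these toy numbers the learned policy sends good-condition parts to the fast autonomous station and badly damaged parts to the manual one, which is the condition-based behaviour the abstract describes.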
Adaptive Control of Resource Flow to Optimize Construction Work and Cash Flow via Online Deep Reinforcement Learning
Due to the complexity and dynamics of construction work, resource, and cash
flows, poor management of them usually leads to time and cost overruns,
bankruptcy, or even project failure. Existing approaches in construction fail
to achieve optimal control of resource flow in a dynamic environment with
uncertainty. Therefore, this paper introduces a model and method to
adaptively control resource flows so as to optimize the work and cash flows
of construction projects. First, a mathematical model based on a partially
observable Markov decision process is established to formulate the complex
interactions of construction work, resource, and cash flows as well as the
uncertainty and variability of diverse influencing factors. Meanwhile, to
efficiently find optimal solutions, a deep reinforcement learning (DRL)
based method is introduced to realize continuous adaptive optimal control of
labor and material flows, thereby optimizing the work and cash flows. To
assist the training of the DRL agent, a simulator based on discrete event
simulation is also developed to mimic the dynamic features and external
environments of a project. Experiments in simulated scenarios show that our
method outperforms the vanilla empirical method and a genetic algorithm,
generalizes well across diverse projects and external environments, and that
a hybrid agent combining DRL and the empirical method leads to the best
result. This paper contributes to the adaptive control and optimization of
coupled work, resource, and cash flows, and may serve as a stepping stone
for adopting DRL technology in construction project management.
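The coupling of work, resource, and cash flows described above can be illustrated with a toy period-based simulator: a controller sets a labor level, work progresses stochastically, wages drain cash, and the client pays for verified progress. All rates, prices, and the threshold policy are invented for illustration; the paper's simulator and DRL agent are far richer.

```python
import random

def simulate(policy, rng, periods=40):
    # One project run: each period the policy observes only cash (a nod to
    # partial observability) and chooses a labor level.
    cash = 15.0
    for _ in range(periods):
        labor = policy(cash)
        progress = 3.0 * labor * rng.uniform(0.6, 1.4)  # uncertain productivity
        cash += 4.0 * progress - 10.0 * labor           # revenue minus wages
        if cash < 0:
            return False                                # insolvency = failure
    return True

fixed    = lambda cash: 4                       # always run a full crew
adaptive = lambda cash: 4 if cash > 20 else 1   # throttle labor when cash-poor

rng = random.Random(3)
ok_fixed    = sum(simulate(fixed, rng)    for _ in range(200))
ok_adaptive = sum(simulate(adaptive, rng) for _ in range(200))
print(ok_fixed, ok_adaptive)
```

The fixed full-crew policy occasionally goes insolvent after a run of bad periods, while the adaptive policy trades speed for solvency; replacing the hand-made threshold with a learned DRL policy is exactly the step the paper takes.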
An Auction-based Coordination Strategy for Task-Constrained Multi-Agent Stochastic Planning with Submodular Rewards
In many domains such as transportation and logistics, search and rescue, or
cooperative surveillance, tasks are pending to be allocated with the
consideration of possible execution uncertainties. Existing task coordination
algorithms either ignore the stochastic process or suffer from the
computational intensity. Taking advantage of the weakly coupled feature of the
problem and the opportunity for coordination in advance, we propose a
decentralized auction-based coordination strategy using a newly formulated
score function which is generated by forming the problem into task-constrained
Markov decision processes (MDPs). The proposed method guarantees convergence
and at least 50% optimality in the premise of a submodular reward function.
Furthermore, for the implementation on large-scale applications, an approximate
variant of the proposed method, namely Deep Auction, is also suggested with the
use of neural networks, which is evasive of the troublesome for constructing
MDPs. Inspired by the well-known actor-critic architecture, two Transformers
are used to map observations to action probabilities and cumulative rewards
respectively. Finally, we demonstrate the performance of the two proposed
approaches in the context of drone deliveries, where the stochastic planning
for the drone league is cast into a stochastic price-collecting Vehicle Routing
Problem (VRP) with time windows. Simulation results are compared with
state-of-the-art methods in terms of solution quality, planning efficiency and
scalability.Comment: 17 pages, 5 figure
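The greedy auction mechanism behind guarantees of this kind can be sketched as follows: tasks are sold one round at a time, each agent bids its marginal value under a monotone submodular score (here a toy coverage function), and the highest bidder wins. Agents, tasks, capacities, and scores are invented for illustration and are not the paper's score function.

```python
# Toy coverage-based value: each task contributes a set of "features", and
# an agent's reward is how many distinct features its bundle covers. Such
# coverage functions are monotone submodular.
TASKS = {"t1": {"a", "b"}, "t2": {"b", "c"}, "t3": {"c", "d"}, "t4": {"d"}}

def value(bundle):
    return len(set().union(*[TASKS[t] for t in bundle]))

def auction(agents, capacity=2):
    bundles = {ag: [] for ag in agents}
    unsold = set(TASKS)
    while unsold:
        # Each agent below capacity bids its marginal gain for every
        # remaining task; the single best (gain, agent, task) bid wins.
        bids = []
        for ag in agents:
            if len(bundles[ag]) >= capacity:
                continue
            for t in unsold:
                gain = value(bundles[ag] + [t]) - value(bundles[ag])
                bids.append((gain, ag, t))
        if not bids:
            break
        gain, winner, task = max(bids)
        bundles[winner].append(task)
        unsold.remove(task)
    return bundles

result = auction(["drone1", "drone2"])
print(result, sum(value(b) for b in result.values()))
```

For monotone submodular rewards, this sequential greedy allocation is the classic 1/2-approximation, which is the flavour of guarantee the abstract states; the paper's contribution is scoring bids with task-constrained MDPs rather than a static coverage function.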