790 research outputs found

    Performance Evaluation of Stochastic Multi-Echelon Inventory Systems: A Survey

    Globalization, product proliferation, and fast product innovation have significantly increased the complexity of supply chains in many industries. One of the most important advances in supply chain management in recent years is the development of models and methodologies for controlling inventory in general supply networks under uncertainty, and their widespread application in industry. These developments are based on three generic methods: the queueing-inventory method, the lead-time demand method, and the flow-unit method. In this paper, we compare and contrast these methods by discussing their strengths and weaknesses, their differences and connections, and by showing how to apply them systematically to characterize and evaluate various supply networks with different supply processes, inventory policies, and demand processes. Our objective is to forge links among research strands on different methods and various network topologies so as to develop unified methodologies.
    Funding: Masdar Institute of Science and Technology; National Science Foundation (U.S.) (NSF Contract CMMI-0758069); National Science Foundation (U.S.) (Career Award CMMI-0747779); Bayer Business Services; SAP A
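    The lead-time demand method mentioned above can be illustrated with a minimal sketch: for a single-stage base-stock system facing Poisson demand, the performance of a base-stock level is determined by the distribution of demand over the replenishment lead time. The function names and parameters below are illustrative, not taken from the survey; the sketch assumes a fixed lead time and finds the smallest base-stock level meeting a target fill probability.

    ```python
    import math

    def poisson_cdf(k, mu):
        """P(D <= k) for D ~ Poisson(mu), via the recursive pmf terms."""
        term = math.exp(-mu)   # pmf at 0
        total = term
        for i in range(1, k + 1):
            term *= mu / i     # pmf(i) = pmf(i-1) * mu / i
            total += term
        return total

    def base_stock_level(rate, lead_time, service_level):
        """Smallest base-stock S with P(lead-time demand <= S) >= service_level.

        Lead-time demand is Poisson(rate * lead_time), the classic
        lead-time demand construction for a base-stock policy.
        """
        mu = rate * lead_time
        s = 0
        while poisson_cdf(s, mu) < service_level:
            s += 1
        return s
    ```

    For example, with a demand rate of 5 per period, a lead time of 2 periods, and a 95% in-stock target, the sketch searches the Poisson(10) distribution for its 95th percentile.
    
    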

    Extensions to the dynamic requirements planning model

    "April 1998." Includes bibliographical references (p. 32). By John Ruark.

    Optimizing Strategic Planning With Long-term Sequential Decision Making Under Uncertainty: A Decomposition Approach

    The operations research literature has seen decision-making methods at both strategic and operational levels, where high-level strategic plans are first devised, followed by long-term policies that guide future day-to-day operations under uncertainty. The current literature studies such problems on a case-by-case basis, without a unified approach. In this study, we investigate the joint optimization of strategic and operational decisions from a methodological perspective by proposing a generic two-stage long-term strategic stochastic decision-making (LSSD) framework, in which the first stage models strategic decisions with linear programming (LP), and the second stage models operational decisions with Markov decision processes (MDP). The joint optimization model is formulated as a nonlinear programming (NLP) model, which is then reduced to an integer model through discretization. As expected, the LSSD framework is computationally expensive. Thus, we develop a novel solution algorithm for MDP, which exploits Benders decomposition with a "divide-and-conquer" strategy. We further prove mathematical properties to show that the proposed multi-cut L-shaped (MCLD) algorithm is an exact algorithm for MDP. We extend the MCLD algorithm to solve the LSSD framework by developing a two-step backward decomposition (TSBD) method. To evaluate algorithm performance, we adopt four benchmarking problems from the literature. Numerical experiments show that the MCLD algorithm and the TSBD method outperform conventional benchmarks by over 90% and 80% in algorithm runtime, respectively. The practicality of the LSSD framework is further validated on a real-world critical infrastructure systems (CISs) defense problem. In past decades, "attacks" on CIS facilities, whether deliberate attempts or natural disasters, have caused disastrous consequences all over the globe.
    In this study, we strategically design CIS interconnections and allocate defense resources to protect the CIS network from sequential, stochastic attacks. The LSSD framework is utilized to model the problem as an NLP model with an alternate integer formulation. We estimate model parameters using real-world CIS data collected from a mid-sized city in the U.S. The previously established algorithms are used to solve the problem with over 45% improvements in algorithm runtime. Sensitivity analyses are conducted to investigate model behavior and provide insights to practitioners.
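    The second-stage MDP in a framework like the one above is ordinarily evaluated by standard dynamic-programming machinery before any decomposition is applied. The sketch below is not the abstract's MCLD algorithm; it is the textbook value-iteration baseline for a discounted MDP, included only to make the operational-stage object concrete. The transition/reward layout (`P[a][s][s2]`, `R[a][s]`) is an assumed convention.

    ```python
    def value_iteration(P, R, gamma=0.9, tol=1e-8):
        """Solve a finite discounted MDP by value iteration.

        P[a][s][s2]: probability of moving s -> s2 under action a.
        R[a][s]:     immediate reward for taking action a in state s.
        Returns (optimal values, greedy policy).
        """
        n_actions, n_states = len(R), len(R[0])
        V = [0.0] * n_states
        while True:
            # Q-values under the current value estimate
            Q = [[R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n_states))
                  for a in range(n_actions)] for s in range(n_states)]
            V_new = [max(Q[s]) for s in range(n_states)]
            if max(abs(V_new[s] - V[s]) for s in range(n_states)) < tol:
                policy = [max(range(n_actions), key=lambda a: Q[s][a])
                          for s in range(n_states)]
                return V_new, policy
            V = V_new
    ```

    Decomposition methods such as Benders/L-shaped schemes replace this monolithic evaluation with master-and-subproblem iterations, which is where the runtime gains reported in the abstract come from.
    
    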

    Multistage decisions and risk in Markov decision processes: towards effective approximate dynamic programming architectures

    The scientific domain of this thesis is optimization under uncertainty for discrete event stochastic systems. In particular, this thesis focuses on the practical implementation of the Dynamic Programming (DP) methodology for discrete event stochastic systems. Unfortunately, DP in its crude form suffers from three severe computational obstacles that make its implementation for such systems an impossible task. This thesis addresses these obstacles by developing and executing practical Approximate Dynamic Programming (ADP) techniques. Specifically, for the purposes of this thesis we developed the following ADP techniques. The first is inspired by the Reinforcement Learning (RL) literature and is termed Real Time Approximate Dynamic Programming (RTADP). The RTADP algorithm is meant for active learning while operating the stochastic system. The basic idea is that the agent, while constantly interacting with the uncertain environment, accumulates experience, which enables it to react more optimally in future similar situations. The second is an off-line ADP procedure. These ADP techniques are demonstrated on a variety of discrete event stochastic systems, such as: i) a three-stage queuing manufacturing network with recycle, ii) a supply chain for the light aromatics of a typical refinery, iii) several stochastic shortest path instances with a single starting and terminal state, and iv) a general project portfolio management problem. Moreover, this work addresses, in a systematic way, the issue of multistage risk within the DP framework by exploring the use of intra-period and inter-period risk-sensitive utility functions. In this thesis we propose a special structure for an intra-period utility and compare the derived policies in several multistage instances. Ph.D. Committee Chair: Jay H. Lee; Committee Member: Martha Grover; Committee Member: Matthew J. Realff; Committee Member: Shabbir Ahmed; Committee Member: Stylianos Kavadia
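    The "learn while operating" idea behind RTADP can be sketched with the simplest RL-style update: tabular Q-learning with epsilon-greedy exploration. This is not the thesis's RTADP algorithm, only a generic stand-in for the family it draws from; the environment interface `step(s, a) -> (next_state, reward, done)` and all parameter names are assumptions for illustration.

    ```python
    import random

    def q_learning(n_states, n_actions, step, episodes=2000,
                   alpha=0.1, gamma=0.95, eps=0.1, seed=0):
        """Tabular Q-learning with epsilon-greedy exploration.

        step(s, a) simulates the system: returns (next_state, reward, done).
        Each episode starts in state 0 and runs until done.
        """
        rng = random.Random(seed)
        Q = [[0.0] * n_actions for _ in range(n_states)]
        for _ in range(episodes):
            s, done = 0, False
            while not done:
                # explore with probability eps, otherwise act greedily
                if rng.random() < eps:
                    a = rng.randrange(n_actions)
                else:
                    a = max(range(n_actions), key=lambda x: Q[s][x])
                s2, r, done = step(s, a)
                target = r if done else r + gamma * max(Q[s2])
                Q[s][a] += alpha * (target - Q[s][a])
                s = s2
        return Q
    ```

    The agent accumulates experience exactly as the abstract describes: every interaction nudges the stored value estimates, so later visits to similar states yield better actions.
    
    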

    Measuring the variability in supply chains with the peakedness

    This paper introduces a novel way to measure the variability of order flows in supply chains: the peakedness. The peakedness can be used to measure variability when the order flow is a general point process. We show basic properties of the peakedness and demonstrate its computation both from real-time continuous demand processes and from cumulative demand collected at fixed time intervals. We also show that the peakedness can be used to characterize demand, forecast, and inventory variables in order to effectively manage variability. Our results hold for both single-stage and multistage inventory systems, and can be further extended to a tree-structured supply chain with a single supplier and multiple retailers. Furthermore, the peakedness can be applied to traditional inventory problems such as quantifying bullwhip effects and determining safety stock levels. Finally, a numerical study based on real-life Belgian supermarket data verifies the effectiveness of the peakedness for measuring order flow variability as well as estimating bullwhip effects.
    Keywords: variability, peakedness, supply chain
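    A closely related (but simpler) dispersion statistic can be computed from cumulative demand collected at fixed intervals: the variance-to-mean ratio of the interval counts, which equals 1 for a Poisson flow and grows with burstiness. The paper's peakedness is defined for general point processes via an infinite-server construction, so the sketch below is only an illustrative surrogate, with assumed function and variable names.

    ```python
    def dispersion_index(counts):
        """Variance-to-mean ratio of per-interval order counts.

        Equals 0 for a perfectly smooth flow, ~1 for a Poisson flow,
        and > 1 for bursty (over-dispersed) order streams.
        Uses the unbiased sample variance (divide by n - 1).
        """
        n = len(counts)
        mean = sum(counts) / n
        var = sum((c - mean) ** 2 for c in counts) / (n - 1)
        return var / mean
    ```

    For instance, a flow arriving as alternating bursts (`[0, 6, 0, 6]`) has the same mean as a smooth flow (`[3, 3, 3, 3]`) but a much larger dispersion index, which is the kind of variability a peakedness-style measure is designed to expose.
    
    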