
    Integrated optimization of maintenance interventions and spare part selection for a partially observable multi-component system

    Advanced technical systems are typically composed of multiple critical components whose failure causes a system failure. Often, it is not technically or economically feasible to install sensors dedicated to each component, which means that the exact condition of each component cannot be monitored, although a system-level failure or defect can be observed. The service provider then needs to implement a condition-based maintenance policy based on partial information about the system's condition. Furthermore, when the service provider decides to service the system, (s)he also needs to decide which spare part(s) to bring along in order to avoid emergency shipments and part returns. We model this problem as an infinite-horizon partially observable Markov decision process. In a set of numerical experiments, we first compare the optimal policy with preventive and corrective maintenance policies: the optimal policy leads on average to a 28% and 15% cost decrease, respectively. Second, we investigate the value of having full information, i.e., sensors dedicated to each component: this leads on average to a 13% cost decrease compared to the case with partial information. Interestingly, having full information is more valuable for cheaper, less reliable components than for more expensive, more reliable components.
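A minimal sketch of the Bayesian belief update that underlies such a partially observable formulation, assuming a hypothetical two-state component with illustrative transition and observation probabilities (none of these numbers come from the paper):

```python
import numpy as np

# Hypothetical 2-state component: 0 = healthy, 1 = degraded.
# Transition matrix P and observation likelihoods O are illustrative only.
P = np.array([[0.9, 0.1],    # healthy -> healthy / degraded
              [0.0, 1.0]])   # degraded stays degraded until maintained
O = np.array([[0.95, 0.05],  # P(observation | healthy)
              [0.30, 0.70]]) # P(observation | degraded)

def belief_update(b, obs):
    """One step of the standard POMDP belief update:
    predict with P, correct with the observation likelihood, renormalize."""
    predicted = b @ P
    corrected = predicted * O[:, obs]
    return corrected / corrected.sum()

b = np.array([1.0, 0.0])     # start fully confident in 'healthy'
b = belief_update(b, obs=1)  # a defect-like signal shifts mass to 'degraded'
```

The maintenance decision is then taken as a function of the belief vector rather than of the (unobservable) true component states.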

    Energy-efficient production control of a make-to-stock system with buffer- and time-based policies

    Increasing energy efficiency in manufacturing has significant environmental and cost benefits. Dynamically turning a machine on or off while considering production rate requirements can offer substantial energy savings. In this work, we examine the optimal policies to control production and to turn a machine on and off when it operates in working, idle, off, and warmup modes, for the case where demand inter-arrival, production, and warm-up times have phase-type distributions. The optimal control problem, which minimises the expected costs associated with energy usage in the different energy modes together with the inventory and backlog costs, is solved using a linear program associated with the underlying Markov decision process. We also present a matrix-geometric method to evaluate the steady-state performance of the system under a given threshold control policy. We show that when the inter-arrival time distribution is not exponential, the optimal control policy depends on both the current phase of the inter-arrival time and the inventory position. The phase-dependent policy, implemented by estimating the current phase from the time elapsed since the last arrival, yields a buffer- and time-based policy for controlling the energy mode and production. We show that policies that use only the inventory position information can be effective if the control parameters are chosen appropriately. However, control policies that use both inventory and time information further improve the performance.
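The linear-programming route to an average-cost Markov decision process can be sketched on a toy two-state, two-action instance (transition probabilities and costs below are invented for illustration, not taken from the paper): the decision variables are long-run state-action frequencies, constrained by flow balance and normalization.

```python
import numpy as np
from scipy.optimize import linprog

# Toy average-cost MDP: P[s, a, s'] transition probs, c[s, a] one-step costs.
P = np.array([[[0.8, 0.2], [0.5, 0.5]],
              [[0.3, 0.7], [0.9, 0.1]]])
c = np.array([[1.0, 3.0],
              [4.0, 2.0]])
nS, nA = 2, 2

# Variables x[s, a] = stationary state-action frequencies.
# Minimize sum c*x subject to flow balance per state and normalization.
A_eq, b_eq = [], []
for j in range(nS):
    row = np.zeros(nS * nA)
    for s in range(nS):
        for a in range(nA):
            row[s * nA + a] = P[s, a, j] - (1.0 if s == j else 0.0)
    A_eq.append(row); b_eq.append(0.0)
A_eq.append(np.ones(nS * nA)); b_eq.append(1.0)  # frequencies sum to 1

res = linprog(c.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(nS, nA)
policy = x.argmax(axis=1)  # deterministic policy read off the support of x
avg_cost = res.fun         # long-run average cost of the optimal policy
```

Here the optimal policy takes action 0 in state 0 and action 1 in state 1, with average cost 13/11; the paper's LP is of this type but over a far larger state space.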

    Augmented Winter's method for forecasting under asynchronous seasonalities

    The method of Winters (1960) is one of the most well-known forecasting methodologies in practice. The main reason behind its popularity is that it is easy to implement and can give effective and efficient results for practical purposes. However, the method cannot capture a pattern that emerges from the simultaneous effects of two different asynchronous calendars, such as the Gregorian and Hijri calendars. We adapt the method so that it can deal with such patterns, and study its performance using a real dataset collected from a brewery in Turkey. Using the same dataset, we also provide a comparative performance analysis between our model and several forecasting models: Winters' method (Winters 1960), TBATS (De Livera et al. 2011), ETS (Hyndman et al. 2002), and ARIMA (Hyndman and Khandakar 2008). The results reveal that better forecasts can be achieved with the new method when two asynchronous calendars exert their effects on the time series.
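The core idea of combining two seasonal calendars can be sketched as follows, with purely hypothetical seasonal indices (not estimated from the paper's brewery data): a Winters-style level-plus-trend forecast is scaled by one multiplicative index per calendar.

```python
# Hypothetical multiplicative seasonal indices, one set per calendar.
greg_season = {m: s for m, s in enumerate([0.9, 0.95, 1.0, 1.05, 1.1, 1.2,
                                           1.15, 1.1, 1.0, 0.95, 0.85, 0.75],
                                          start=1)}
hijri_season = {m: 1.0 for m in range(1, 13)}
hijri_season[9] = 0.6  # e.g. a demand dip during Ramadan (Hijri month 9)

def forecast(level, trend, h, greg_month, hijri_month):
    """Base Winters-style forecast (level + h * trend), scaled by the
    seasonal index of each calendar independently."""
    return (level + h * trend) * greg_season[greg_month] * hijri_season[hijri_month]

f = forecast(level=100.0, trend=2.0, h=1, greg_month=6, hijri_month=9)
```

Because the two calendars drift relative to each other (the Hijri year is about 11 days shorter), the combined index pattern never repeats with a single fixed period, which is exactly what a standard single-seasonality Winters model misses.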

    Customer-to-customer returns logistics: Can it mitigate the negative impact of online returns?

    Customer returns are a major problem for online retailers due to their economic and environmental impact. This paper investigates a new concept for handling online returns: customer-to-customer (C2C) returns logistics. The idea behind the C2C concept is to deliver returned items directly to the next customer, bypassing the retailer's warehouse. To incentivize customers to purchase C2C return items, retailers can promote return items in their webshop with a discount. We build the mathematical models behind the C2C concept to determine how much discount to offer, both to ensure that enough customers are induced to purchase C2C return items and to maximize the retailer's expected total profit. Our first model, the base model (BM), is a customer-based formulation of the problem and provides an easy-to-implement constant-discount-level policy. Our second model formulates the real-world problem as a Markov decision process (MDP). Since our MDP suffers from the curse of dimensionality, we resort to simulation optimization (SO) and reinforcement learning (RL) methods to obtain reasonably good solutions. We apply our methods to data collected from a Dutch fashion retailer and provide extensive numerical experiments to demonstrate the generality of our results. Our results indicate that the constant-discount-level policy obtained with the BM performs well in terms of expected profit compared to SO and RL. With the C2C concept, significant benefits can be achieved in terms of both expected profit and return rate. Even in cases where the cost-effectiveness of the C2C returns program is not pronounced, the proportion of customer-to-warehouse returns to total demand becomes lower, so the system can be considered more environmentally friendly. The C2C concept can help retailers financially address the problem of online returns and meet the growing need to reduce their environmental impact.
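A constant-discount-level policy of the kind the BM produces can be evaluated with a small Monte Carlo sketch; the acceptance model and every parameter below are hypothetical stand-ins, not the paper's calibration.

```python
import random

def simulate_profit(discount, n_customers=100_000, price=50.0,
                    handling_cost=8.0, return_prob=0.2, seed=0):
    """Monte Carlo sketch of a constant-discount C2C policy. A returned item
    offered at price * (1 - discount) is accepted by the next customer with
    probability equal to the discount (a toy acceptance model); otherwise it
    goes back to the warehouse and incurs a handling cost."""
    rng = random.Random(seed)
    profit = 0.0
    for _ in range(n_customers):
        profit += price                    # primary sale
        if rng.random() < return_prob:     # the item comes back
            if rng.random() < discount:    # next customer takes it C2C
                profit += price * (1 - discount)
            else:
                profit -= handling_cost    # customer-to-warehouse return
    return profit / n_customers

# Sweep discount levels to locate the best constant-discount policy.
best = max((simulate_profit(d), d) for d in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
```

In this toy setup a positive discount beats no discount, mirroring the paper's finding that a well-chosen constant discount already captures much of the C2C benefit.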

    Integrated optimisation of pricing, manufacturing, and procurement decisions of a make-to-stock system operating in a fluctuating environment

    Manufacturers experience random environmental fluctuations that directly influence their supply and demand processes. To cope with these fluctuations, they typically employ operational hedging strategies in terms of pricing, manufacturing, and procurement decisions. We address this challenging problem by proposing an analytical model. Specifically, we study an integrated problem of procurement, manufacturing, and pricing strategies for a continuous-review make-to-stock system operating in a randomly fluctuating environment with exponentially distributed processing times. The environmental changes are driven by a continuous-time, discrete-state-space Markov chain, and they directly affect the system's procurement price, raw material flow rate, and price-sensitive demand rate. We formulate the system as an infinite-horizon Markov decision process with a long-run average profit criterion and show that the optimal procurement and manufacturing strategies are state-dependent threshold policies. In addition, we provide several analytical results on the optimal pricing strategies. We introduce a linear programming formulation to numerically obtain the system's optimal decisions. Through an extensive numerical study, we investigate how production rate, holding cost, procurement price and demand variabilities, customers' price sensitivity, and the interaction between supply and demand processes affect the system's performance measures. Furthermore, our numerical results demonstrate the potential benefits of dynamic pricing compared to static pricing. In particular, the profit enhancement achieved with dynamic pricing can reach up to 15%, depending on the problem parameters.
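The threshold structure of the optimal production decision can be illustrated with a much-simplified sketch that drops the fluctuating environment and the pricing decision, leaving a single-environment make-to-stock queue solved by value iteration (discounted rather than average-profit, with hypothetical parameters):

```python
import numpy as np

# Uniformized make-to-stock queue. State = inventory level 0..N.
lam, mu = 1.0, 1.2            # demand and production rates
h, p, r = 1.0, 20.0, 10.0     # holding cost, lost-sale penalty, unit revenue
N, beta = 20, 0.99            # inventory cap, discount factor
u = lam + mu                  # uniformization constant

V = np.zeros(N + 1)
for _ in range(3000):         # value iteration to (near) convergence
    Vn = np.empty_like(V)
    for x in range(N + 1):
        # a demand either consumes a unit (revenue r) or is lost (penalty p)
        sale = (r + V[x - 1]) if x > 0 else (-p + V[0])
        idle = -h * x + beta * (lam * sale + mu * V[x]) / u
        make = -h * x + beta * (lam * sale + mu * V[min(x + 1, N)]) / u
        Vn[x] = max(idle, make)
    V = Vn

# Produce exactly when one more unit of inventory raises the value function.
produce = np.array([V[min(x + 1, N)] > V[x] for x in range(N + 1)])
threshold = int(produce.sum())  # produce iff inventory < threshold
```

The produce/idle decision switches from "produce" to "idle" at a single inventory level, i.e. a base-stock threshold; in the paper this threshold additionally depends on the current environment state.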

    Production and energy mode control of a production-inventory system

    Energy efficiency in manufacturing can be improved by controlling energy modes and production dynamically. We examine a production-inventory system that can operate in Working, Idle, and Off energy modes with mode-dependent energy costs, where switching from one mode to another may incur a warm-up delay. With random inter-arrival, production, and warm-up times, we formulate the problem of determining, at a given time and depending on the state of the system, in which mode the production resource should operate. This is a stochastic control problem under the long-run average profit criterion, considering the sales revenue together with energy, inventory holding, and backlog costs. The optimal solution for exponential inter-arrival, production, and warm-up times is determined by solving the Markov decision process with a linear programming approach. For the exponential case, the optimal policy uses two thresholds to switch between the Working and Idle or Working and Off modes. We use this two-threshold policy as an approximate policy to control a system with correlated inter-event times with general distributions; this system is modelled as a quasi-birth-and-death process and analyzed using a matrix-geometric method. Our numerical experiments show that the joint production and energy control policy outperforms pure production and pure energy control policies, depending on the system parameters. In summary, we propose a joint energy and production control policy that improves energy efficiency by controlling the energy modes depending on the state of the system.
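A two-threshold policy of this kind can be evaluated with a small discrete-event simulation under exponential times; all thresholds, rates, and costs below are illustrative assumptions, not the paper's parameters.

```python
import random

def simulate(s_on, S_off, horizon=50_000.0, lam=1.0, mu=1.5, nu=2.0,
             e_work=5.0, h=0.5, b=4.0, seed=1):
    """Sketch of a two-threshold energy/production policy. The machine
    produces while Working, switches Off once inventory reaches S_off,
    and starts a warm-up back towards Working once inventory drops to
    s_on. Returns the long-run average cost estimate."""
    rng = random.Random(seed)
    t, inv, mode, cost = 0.0, 0, "working", 0.0
    while t < horizon:
        rates = {"arrival": lam}
        if mode == "working":
            rates["produce"] = mu
        elif mode == "warmup":
            rates["warm_done"] = nu
        dt = rng.expovariate(sum(rates.values()))
        # accumulate holding/backlog cost and energy cost over dt
        cost += dt * (h * max(inv, 0) + b * max(-inv, 0))
        cost += dt * (e_work if mode in ("working", "warmup") else 0.0)
        t += dt
        event = rng.choices(list(rates), weights=list(rates.values()))[0]
        if event == "arrival":
            inv -= 1
            if mode == "off" and inv <= s_on:
                mode = "warmup"
        elif event == "produce":
            inv += 1
            if inv >= S_off:
                mode = "off"
        else:                  # warm-up completes
            mode = "working"
    return cost / horizon

avg_cost = simulate(s_on=1, S_off=6)
```

Sweeping `s_on` and `S_off` over a grid then gives the best two-threshold policy for a given parameter set; the paper instead evaluates such policies exactly with a matrix-geometric method.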

    An efficient procedure for optimal maintenance intervention in partially observable multi-component systems

    With rapid advances in technology, many systems are becoming more complex, with ever-increasing numbers of components that are prone to failure. In most cases, it may not be feasible from a technical or economic standpoint to dedicate a sensor to each individual component to gauge its wear and tear. To ensure that these systems, which may require large capital investments, are maintained economically, maintenance should respond to the sensor observations that are captured. This gives rise to condition-based maintenance in partially observable multi-component systems. In this study, we propose a novel methodology to manage maintenance interventions as well as spare part quantity decisions for such systems. Our methodology is based on reducing the state space of the multi-component system and optimizing the resulting reduced-state Markov decision process via a linear programming approach. The methodology is highly scalable and capable of solving large problems that cannot be approached with previously existing solution procedures.
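The state-space-reduction idea can be illustrated with a hypothetical aggregation (not necessarily the authors' exact construction): grouping joint component states by the number of degraded components shrinks exponential state growth to linear.

```python
from itertools import product

# With n identical components, each healthy (0) or degraded (1), the 2**n
# joint states collapse to n + 1 aggregate states when only the count of
# degraded components matters; the reduced MDP can then be optimized,
# e.g. via linear programming.
n = 10
full_states = list(product([0, 1], repeat=n))    # 1024 joint states
aggregate = {s: sum(s) for s in full_states}     # joint state -> #degraded
reduced_states = sorted(set(aggregate.values())) # 0, 1, ..., n
```

Transition probabilities and costs of the reduced chain are obtained by aggregating those of the full model over each group, which is what makes the approach scale to component counts where the full state space is intractable.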
