    Scheduling Allocation and Inventory Replenishment Problems Under Uncertainty: Applications in Managing Electric Vehicle and Drone Battery Swap Stations

    In this dissertation, motivated by the growth of electric vehicle (EV) and drone applications, we propose novel optimization problems and solution techniques for managing the operations of EV and drone battery swap stations. In Chapter 2, we introduce a novel class of stochastic scheduling allocation and inventory replenishment problems (SAIRPs), which determine the recharging, discharging, and replacement decisions at a swap station over time to maximize the expected total profit. We use a Markov decision process (MDP) to model SAIRPs facing uncertain demands, varying costs, and battery degradation. Considering battery degradation is crucial, as it relaxes the assumption that charging and discharging do not deteriorate battery quality (capacity). Moreover, it ensures that customers receive high-quality batteries, as we prevent recharging, discharging, and swapping when the average capacity of the batteries falls below a predefined threshold. Our MDP has high complexity and dimensionality with respect to the state space, action space, and transition probabilities; therefore, we cannot provide optimal decision rules (exact solutions) for SAIRPs of increasing size. Thus, we propose high-quality approximate solution methods, namely heuristic and reinforcement learning (RL) methods, that provide near-optimal policies for the stations.

    In Chapter 3, we explore the structure of and theoretical findings related to the optimal solution of SAIRPs. Notably, we prove monotonicity properties in order to develop fast and intelligent algorithms that provide approximate solutions and overcome the curses of dimensionality. We show the existence of monotone optimal decision rules when there is an upper bound on the number of batteries replaced in each period. We demonstrate the monotone structure of the MDP value function when considering the first, the second, and both dimensions of the state. We utilize data analytics and regression techniques to provide an intelligent initialization for our monotone approximate dynamic programming (ADP) algorithm. Finally, we provide insights from solving realistic-sized SAIRPs.

    In Chapter 4, we consider the problem of optimizing the distribution operations of a hub that uses drones to deliver medical supplies to different geographic regions. Drones are an innovative delivery method with many benefits, including low-contact delivery, which reduces the spread of pandemic and vaccine-preventable diseases. While we focus on medical supply delivery in this work, the approach is applicable to drone delivery for many other applications, including food, postal items, and e-commerce. Our goal in this chapter is to address drone delivery challenges by optimizing the distribution operations at a drone hub that dispatches drones to different geographic locations generating stochastic demands for medical supplies. By considering different geographic locations, we account for different classes of demand that require different flight ranges, which are directly related to the amount of charge held in a drone battery. We classify the stochastic demands based on their distance from the drone hub, use a Markov decision process to model the problem, and perform computational tests using realistic data representing a prominent drone delivery company. We solve the problem using a reinforcement learning method and show its high performance compared with the exact solution found using dynamic programming. Finally, we analyze the results and provide insights for managing the drone hub operations.
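    The dissertation's full MDP is high-dimensional, but the core recursion can be illustrated on a drastically simplified version. The sketch below is not the SAIRP model itself: it assumes a single state dimension (the number of charged batteries), ignores battery degradation and price dynamics, and uses hypothetical demand, revenue, and charging-cost figures purely for illustration. It solves the finite-horizon problem by backward induction (dynamic programming), the exact method against which approximate policies are typically benchmarked.

```python
import numpy as np

# Hypothetical parameters (not taken from the dissertation).
M = 10             # battery slots at the station
T = 24             # planning periods (e.g., hours)
PRICE = 12.0       # revenue per battery swap
CHARGE_COST = 3.0  # cost of recharging one battery
demand_pmf = {0: 0.2, 1: 0.3, 2: 0.3, 3: 0.2}  # swap requests per period

# V[t, s] = optimal expected profit from period t onward with s charged batteries.
V = np.zeros((T + 1, M + 1))
policy = np.zeros((T, M + 1), dtype=int)

for t in range(T - 1, -1, -1):            # backward induction over periods
    for s in range(M + 1):                # state: number of charged batteries
        best_val, best_a = -np.inf, 0
        for a in range(M - s + 1):        # action: depleted batteries put on charge now
            val = -CHARGE_COST * a
            for d, p in demand_pmf.items():
                swaps = min(d, s)                 # serve demand with charged batteries only
                s_next = min(M, s - swaps + a)    # recharged batteries are ready next period
                val += p * (PRICE * swaps + V[t + 1, s_next])
            if val > best_val:
                best_val, best_a = val, a
        V[t, s], policy[t, s] = best_val, best_a

print("Expected profit with a fully charged station at t = 0:", round(V[0, M], 2))
print("Recharge decision at t = 0 by number of charged batteries:", policy[0].tolist())
```

    Even this toy version requires work proportional to the number of periods, states, actions, and demand outcomes; with additional state dimensions for battery capacity and degradation, the same recursion becomes intractable, which is the curse of dimensionality that the dissertation's monotone ADP and RL methods are designed to sidestep.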

    Selective maintenance optimisation for series-parallel systems alternating missions and scheduled breaks with stochastic durations

    This paper deals with the selective maintenance problem for a multi-component system performing consecutive missions separated by scheduled breaks. To increase the probability of successfully completing its next mission, the system components are maintained during the break. A list of potential imperfect maintenance actions on each component, ranging from minimal repair to replacement, is available. The general hybrid hazard rate approach is used to model the reliability improvement of the system components. The durations of the maintenance actions, the mission, and the breaks are stochastic with known probability distributions. The resulting optimisation problem is modelled as a non-linear stochastic programme. Its objective is to determine a cost-optimal subset of maintenance actions to be performed on the components, given the limited stochastic duration of the break and the minimum system reliability level required to complete the next mission. The fundamental concepts and relevant parameters of this decision-making problem are developed and discussed. Numerical experiments are provided to demonstrate the added value of solving this selective maintenance problem as a stochastic optimisation programme.
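    As a rough illustration of how such a selective maintenance decision can be posed and evaluated, the sketch below enumerates maintenance plans for a hypothetical three-component series system and checks each plan against a mission reliability requirement and a chance constraint on fitting into the stochastic break. It is a scenario-based toy, not the paper's non-linear stochastic programme: the Weibull reliability model, the simple age-reduction factors standing in for the general hybrid hazard rate approach, and all cost and duration figures are assumptions made for the example.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for a 3-component series system (illustrative only).
# Each action: (cost, mean duration, age-reduction factor applied to the component).
ACTIONS = {"none": (0.0, 0.0, 1.0), "imperfect": (4.0, 2.0, 0.5), "replace": (10.0, 5.0, 0.0)}
ages = np.array([6.0, 3.0, 8.0])          # effective component ages entering the break
WEIBULL_SHAPE, WEIBULL_SCALE = 2.0, 30.0
MISSION_LEN, R_MIN = 5.0, 0.80            # next-mission length and required reliability
BREAK_MEAN, BREAK_SD, FIT_PROB = 12.0, 2.0, 0.90
N_SCEN = 5000                             # Monte Carlo scenarios for stochastic durations

def mission_reliability(eff_ages):
    """Series-system probability of surviving the next mission (Weibull components)."""
    h = lambda t: (t / WEIBULL_SCALE) ** WEIBULL_SHAPE
    return float(np.exp(-np.sum(h(eff_ages + MISSION_LEN) - h(eff_ages))))

best = None
for plan in itertools.product(ACTIONS, repeat=len(ages)):
    cost = sum(ACTIONS[a][0] for a in plan)
    eff_ages = np.array([ages[i] * ACTIONS[a][2] for i, a in enumerate(plan)])
    if mission_reliability(eff_ages) < R_MIN:
        continue
    # Chance constraint: maintenance must fit inside the stochastic break often enough.
    durations = sum(rng.exponential(ACTIONS[a][1], N_SCEN) if ACTIONS[a][1] > 0
                    else np.zeros(N_SCEN) for a in plan)
    breaks = rng.normal(BREAK_MEAN, BREAK_SD, N_SCEN)
    if np.mean(durations <= breaks) < FIT_PROB:
        continue
    if best is None or cost < best[0]:
        best = (cost, plan)

print("Cheapest feasible plan (cost, actions):", best)
```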

    Exact and heuristic approaches to detect failures in failed k-out-of-n systems

    This paper considers a k-out-of-n system that has just failed. There is an associated cost of testing each component. In addition, we have a priori information regarding the probabilities that a certain set of components is the reason for the failure. The goal is to identify the subset of components that caused the failure at the minimum expected cost. In this work, we provide exact and approximate policies that detect the components' states in a failed k-out-of-n system. We propose two integer programming (IP) formulations, two novel Markov decision process (MDP) based approaches, and two heuristic algorithms. We show the limitations of the exact algorithms and the effectiveness of the proposed heuristic approaches on a set of randomly generated test instances. Despite longer CPU times, the IP formulations are flexible in incorporating further restrictions, such as test precedence relationships, if need be. Numerical results illustrate that dynamic programming for the proposed MDP model is the most effective exact method, solving instances with up to 12 components within one hour. The heuristic algorithms' performance is presented against the exact approaches for small- to medium-sized instances and against a lower bound for larger instances.
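    A simple way to see the trade-off the paper studies is to simulate one plausible heuristic: inspect components in decreasing order of their failure-probability-to-test-cost ratio and stop as soon as the observed failures explain the system failure. The sketch below does exactly that on a hypothetical 3-out-of-5 instance, assuming a k-out-of-n:G system that fails once n − k + 1 components are down and independent prior failure probabilities; it does not reproduce the paper's IP or MDP formulations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical instance (not from the paper): a 3-out-of-5:G system that has failed,
# with independent prior failure probabilities and per-component test costs.
K, N = 3, 5
p_fail = np.array([0.6, 0.3, 0.5, 0.2, 0.7])     # prior prob. each component is down
test_cost = np.array([1.0, 2.0, 1.5, 0.5, 3.0])
FAILS_TO_EXPLAIN = N - K + 1                     # the system fails once this many are down

# Greedy inspection order: most "information per dollar" first (p_fail / cost ratio).
order = np.argsort(-p_fail / test_cost)

def diagnose_once():
    """Simulate one failed system consistent with the priors and run the greedy policy."""
    while True:  # rejection-sample a component state in which the system has actually failed
        state = rng.random(N) < p_fail           # True = component is down
        if state.sum() >= FAILS_TO_EXPLAIN:
            break
    cost, found = 0.0, 0
    for i in order:
        cost += test_cost[i]
        found += state[i]
        if found >= FAILS_TO_EXPLAIN:            # the failure is now fully explained
            return cost
    return cost

avg = np.mean([diagnose_once() for _ in range(20000)])
print(f"Greedy ratio policy, estimated expected diagnosis cost: {avg:.2f}")
```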

    A Multi-Objective Approach to Optimize a Periodic Maintenance Policy

    The present paper proposes a multi-objective approach to find an optimal periodic maintenance policy for a repairable and stochastically deteriorating multi-component system over a finite time horizon. The problem concerns determining which system elements to replace at each scheduled, periodic system inspection while simultaneously minimizing both the expected total maintenance cost and the expected global system unavailability time. It is assumed that failures of system elements are instantaneously detected and repaired by means of minimal repair actions in order to rapidly restore the system. A non-linear integer mathematical programming model is developed to solve the multi-objective problem, and the Pareto optimal frontier is described by the lexicographic goal programming and ε-constraint methods. To illustrate the whole procedure, a case study is solved and the related considerations are given.
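    The ε-constraint idea used to trace the Pareto frontier can be shown on a tiny instance by brute force: fix an upper bound ε on expected unavailability, minimise expected cost among the plans that respect it, and sweep ε. The sketch below does this for a hypothetical three-component system with power-law failure intensities and minimal repairs between inspections; the data and the plain enumeration stand in for the paper's non-linear integer programming model and are not taken from it.

```python
import itertools
import numpy as np

# Hypothetical 3-component system inspected periodically (values are illustrative only).
N_COMP, N_INSP, TAU = 3, 3, 10.0            # components, scheduled inspections, period length
beta = np.array([2.2, 1.8, 2.5])            # power-law (Weibull) intensity shape per component
eta = np.array([12.0, 15.0, 9.0])           # intensity scale per component
repl_cost = np.array([20.0, 30.0, 25.0])    # replacement cost at an inspection
rep_cost, rep_time = 5.0, 0.5               # minimal-repair cost and downtime per failure

def expected_failures(age0, age1, b, e):
    """Expected number of minimal repairs over an age interval for a power-law intensity."""
    return (age1 / e) ** b - (age0 / e) ** b

def evaluate(plan):
    """plan[j][i] = 1 if component i is replaced at inspection j. Returns (cost, downtime)."""
    age = np.zeros(N_COMP)
    cost = downtime = 0.0
    for j in range(N_INSP):
        for i in range(N_COMP):
            if plan[j][i]:
                age[i], cost = 0.0, cost + repl_cost[i]
        failures = sum(expected_failures(age[i], age[i] + TAU, beta[i], eta[i])
                       for i in range(N_COMP))
        cost += rep_cost * failures
        downtime += rep_time * failures
        age += TAU
    return cost, downtime

plans = list(itertools.product(itertools.product((0, 1), repeat=N_COMP), repeat=N_INSP))
points = [(evaluate(p), p) for p in plans]

# epsilon-constraint sweep: minimise cost subject to downtime <= eps.
for eps in np.linspace(min(d for (_, d), _ in points), max(d for (_, d), _ in points), 5):
    feasible = [(c, d) for (c, d), _ in points if d <= eps]
    c, d = min(feasible)
    print(f"eps = {eps:5.2f}  ->  cost {c:7.2f}, downtime {d:5.2f}")
```

    In the paper the inner minimisation is a non-linear integer programme rather than an enumeration, but the ε-sweep itself works the same way, and the lexicographic goal programming variant can be seen as prioritising one objective before bounding the other.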

    A Dynamic Policy for Grouping Maintenance Activities

    A maintenance activity carried out on a technical system often involves a system-dependent set-up cost that is the same for all maintenance activities carried out on that system. Grouping activities thus saves costs, since the execution of a group of activities requires only one set-up. Many maintenance models consider the grouping of maintenance activities on a long-term basis with an infinite horizon. This makes it very difficult to incorporate short-term circumstances, such as opportunities or varying use of components, because these are either not known beforehand or make the problem intractable. In this paper we propose a rolling-horizon approach that takes a long-term tentative plan as a basis for subsequent adaptation according to information that becomes available in the short term. This yields a dynamic grouping policy that assists the maintenance manager in the planning job. We present a fast approach that allows interactive planning by showing how shifts from the tentative plan work out. We illustrate our approach with examples.
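    One common way to make such a grouping decision concrete is a dynamic programming recursion over activities sorted by their tentative execution times, where each maintenance occasion pays one shared set-up and each activity pays a penalty for being shifted away from its tentative time. The sketch below implements that idea under assumptions the paper does not necessarily make (quadratic shift penalties, grouping restricted to consecutive activities); the set-up cost, tentative times, and penalty weights are invented.

```python
import numpy as np

# Hypothetical tentative plan (not the paper's example): each activity has a preferred
# execution time and a quadratic penalty rate for shifting it away from that time.
SETUP = 10.0                                    # shared set-up cost per maintenance occasion
t_pref = np.array([2.0, 3.0, 7.5, 8.0, 14.0])   # tentative execution times (sorted)
w = np.array([1.0, 0.5, 2.0, 1.0, 0.8])         # penalty weight: w_i * (shift)^2

def group_cost(i, j):
    """Cost of executing activities i..j together: one set-up plus shift penalties,
    evaluated at the penalty-minimising common time (the weighted mean for quadratic penalties)."""
    tt, ww = t_pref[i:j + 1], w[i:j + 1]
    t_star = np.average(tt, weights=ww)
    return SETUP + float(np.sum(ww * (tt - t_star) ** 2))

n = len(t_pref)
best = np.full(n + 1, np.inf)    # best[j] = minimal cost of scheduling the first j activities
best[0], split = 0.0, [0] * (n + 1)
for j in range(1, n + 1):
    for i in range(j):           # the last group consists of activities i..j-1
        c = best[i] + group_cost(i, j - 1)
        if c < best[j]:
            best[j], split[j] = c, i

groups, j = [], n                # recover the consecutive groups from the DP
while j > 0:
    groups.append(list(range(split[j], j)))
    j = split[j]
print("Best consecutive grouping:", groups[::-1], "total cost:", round(best[n], 2))
```

    In a rolling-horizon setting this recursion would be re-run whenever short-term information (an opportunity, a changed usage forecast) shifts the tentative times, which is what makes the resulting grouping policy dynamic.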

    Prognostics-Based Two-Operator Competition for Maintenance and Service Part Logistics

    Prognostics and timely maintenance of components are critical to the continuing operation of a system. By implementing prognostics, it is possible for the operator to maintain the system in the right place at the right time. However, real-world complexity makes near-zero downtime difficult to achieve, partly because of possible shortages of required service parts; this is a realistic and quite important concern in maintenance practice. To coordinate with a prognostics-based maintenance schedule, the operator must decide when to order service parts and how to compete with other operators who need the same parts. This research develops a joint decision-making approach that assists two operators in making proactive maintenance decisions and strategically competing for a service part that both rely on for their individual operations. To this end, a maintenance policy involving competition in service part procurement is developed based on a Stackelberg game-theoretic model. Variations of the policy are formulated for three different scenarios and solved via either backward induction or genetic algorithm methods. Unlike the first two scenarios, the third scenario considers the possibility of either operator being the leader in the competition. A numerical study on wind turbine operation is provided to demonstrate the use of the joint decision-making approach in maintenance and service part logistics.
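    The backward-induction logic of a Stackelberg competition for a shared service part can be illustrated with a deliberately small toy game: the leader commits to an ordering decision, the follower best-responds, and the leader anticipates that response. The sketch below is not the paper's model or its wind-turbine case study; the two actions and the holding and stock-out cost figures are invented for illustration only.

```python
# Hypothetical two-operator, one-part game (illustrative only): each operator chooses
# when to order the shared service part; ordering early adds a holding cost but
# secures the part, and whoever misses the single part pays a stock-out penalty.
ACTIONS = ("order_early", "order_late")
HOLDING, STOCKOUT, BASE = 2.0, 15.0, 5.0

def payoffs(leader_a, follower_a):
    """Return (leader_cost, follower_cost) for one action profile.
    Exactly one part is available: the earlier order secures it, the leader wins ties,
    and the operator who misses the part pays the stock-out penalty."""
    lead_cost = BASE + (HOLDING if leader_a == "order_early" else 0.0)
    foll_cost = BASE + (HOLDING if follower_a == "order_early" else 0.0)
    leader_gets_part = (leader_a == "order_early") or (follower_a == "order_late")
    if leader_gets_part:
        foll_cost += STOCKOUT
    else:
        lead_cost += STOCKOUT
    return lead_cost, foll_cost

# Backward induction: the follower best-responds to each leader action,
# then the leader picks the action with the lowest resulting cost.
def follower_best_response(leader_a):
    return min(ACTIONS, key=lambda fa: payoffs(leader_a, fa)[1])

leader_action = min(ACTIONS, key=lambda la: payoffs(la, follower_best_response(la))[0])
follower_action = follower_best_response(leader_action)
print("Stackelberg equilibrium:", leader_action, "/", follower_action,
      "with costs", payoffs(leader_action, follower_action))
```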