
    Optimal staffing under an annualized hours regime using Cross-Entropy optimization

    This paper discusses staffing under annualized hours. Staffing is the selection of the most cost-efficient workforce to cover workforce demand. Annualized hours measure working time per year instead of per week, relaxing the requirement that employees work the same number of hours every week. To solve the underlying combinatorial optimization problem, this paper develops a Cross-Entropy optimization implementation that includes a penalty function and a repair function to guarantee feasible solutions. Our experimental results show that Cross-Entropy optimization is efficient across a broad range of instances: real-life-sized instances are solved in seconds, significantly outperforming an MILP formulation solved with CPLEX. In addition, the solution quality of Cross-Entropy closely approaches the optimal solutions obtained by CPLEX. Our Cross-Entropy implementation offers an outstanding method for real-time decision making, for example in response to unexpected staff illnesses, and for scenario analysis.
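As an illustration of the technique this abstract names, here is a minimal cross-entropy optimization sketch for a toy 0-1 staffing problem with a penalty for uncovered demand. All problem data, parameter values, and names are illustrative assumptions, not the paper's model (which also uses a repair function and annualized-hours constraints).

```python
import numpy as np

# Minimal cross-entropy (CE) sketch for a toy 0-1 staffing problem:
# choose which of E employees work in each of W weeks so that weekly
# demand is covered at minimum cost. The penalty handling mirrors the
# idea in the abstract; the paper's actual model is richer.

rng = np.random.default_rng(0)
E, W = 10, 8                              # employees, weeks (toy sizes)
cost = rng.uniform(1.0, 2.0, (E, W))      # cost of employee e working week w
demand = rng.integers(3, 7, W)            # staff required per week
PENALTY = 50.0                            # cost per unit of uncovered demand

def score(x):
    """Total staffing cost plus penalty for uncovered weekly demand."""
    shortfall = np.maximum(demand - x.sum(axis=0), 0)
    return (cost * x).sum() + PENALTY * shortfall.sum()

p = np.full((E, W), 0.5)                  # Bernoulli sampling probabilities
n_samples, elite_frac, alpha = 200, 0.1, 0.7
n_elite = int(n_samples * elite_frac)

for it in range(50):
    X = (rng.random((n_samples, E, W)) < p).astype(int)  # sample population
    scores = np.array([score(x) for x in X])
    elite = X[np.argsort(scores)[:n_elite]]              # lowest-cost samples
    p = alpha * elite.mean(axis=0) + (1 - alpha) * p     # smoothed update

best = (p > 0.5).astype(int)
print("approx. best cost:", score(best))
```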

    A Literature Survey of Cooperative Caching in Content Distribution Networks

    Content distribution networks (CDNs), which serve to deliver web objects (e.g., documents, applications, music, video, etc.), have seen tremendous growth since their emergence. To minimize the retrieval delay experienced by a user requesting a web object, caching strategies are often applied: content is replicated at edges of the network closer to the user, so that the network distance between the user and the object is reduced. In this literature survey, the evolution of caching is studied. A recent research paper [15] in the field of large-scale caching for CDNs was chosen as the anchor paper, which serves as a guide to the topic. Research studies after and relevant to the anchor paper are also analyzed to better evaluate the statements and results of the anchor paper and, more importantly, to obtain an unbiased view of large-scale collaborative caching systems as a whole.
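To make the edge-replication idea concrete, here is a toy sketch of a cooperative lookup path: local cache first, then peer caches, then the origin server. The class and its methods are invented for illustration; the surveyed systems use far more elaborate admission and replication policies.

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU edge cache that cooperates with peer caches.

    Lookup order: local cache -> peer caches -> origin server.
    All names are illustrative, not from any surveyed system.
    """

    def __init__(self, capacity, peers=()):
        self.store = OrderedDict()         # key -> object, in LRU order
        self.capacity = capacity
        self.peers = list(peers)

    def _local_get(self, key):
        if key in self.store:
            self.store.move_to_end(key)    # refresh LRU position
            return self.store[key]
        return None

    def get(self, key, origin):
        obj = self._local_get(key)
        if obj is not None:
            return obj, "local"
        for peer in self.peers:            # cooperative step
            obj = peer._local_get(key)
            if obj is not None:
                self.put(key, obj)
                return obj, "peer"
        obj = origin(key)                  # last resort: fetch from origin
        self.put(key, obj)
        return obj, "origin"

    def put(self, key, obj):
        self.store[key] = obj
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False) # evict least recently used

# Usage: two edge caches sharing content
origin = lambda k: f"object-{k}"
a, b = EdgeCache(2), EdgeCache(2)
a.peers, b.peers = [b], [a]
print(a.get("x", origin))   # ('object-x', 'origin')
print(b.get("x", origin))   # ('object-x', 'peer')
```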

    Robust long-term production planning


    Stochastic programming for City Logistics: new models and methods

    The need for mobility that emerged in the last decades has led to an impressive increase in the number of vehicles as well as to a saturation of transportation infrastructures. Consequently, traffic congestion, accidents, transportation delays, and polluting emissions are some of the most recurrent concerns transportation and city managers have to deal with. However, just building new infrastructures may not be sustainable because of their cost, the land usage, which is usually scarce in metropolitan regions, and their negative impact on the environment. Therefore, a different way of improving the performance of transportation systems while enhancing travel safety has to be found, in order to make passenger and goods transportation operations more efficient and support their key role in the economic development of a city or a whole country. The concept of City Logistics (CL) is being developed to answer this need. CL focuses on reducing the number of vehicles operating in the city, controlling their dimensions and characteristics. CL solutions improve not only the transportation system but the whole logistics system within an urban area, trying to integrate the interests of the several stakeholders involved.

    This global view challenges researchers to develop planning models, methods, and decision support tools for the optimization of the structures and activities of the transportation system. In particular, it leads researchers to the definition of strategic and tactical problems belonging to well-known problem classes, including the network design problem, the vehicle routing problem (VRP), the traveling salesman problem (TSP), and the bin packing problem (BPP), which typically act as sub-problems of the overall CL system optimization. When long planning horizons are involved, these problems become stochastic and thus must explicitly take into account the different sources of uncertainty that can affect the transportation system. For these reasons, and because of the large scale of CL systems, the optimization problems arising in the urban context are very challenging. Their solution requires investigations in mathematical and combinatorial optimization methods as well as the implementation of efficient exact and heuristic algorithms. However, contributions answering these challenges are still limited in number. This work contributes to filling this gap in the literature, both by providing modeling frameworks for new planning problems in the CL context and by developing new, effective heuristic methods for the two-stage formulations of these problems.

    Three stochastic problems are proposed in the context of CL: the stochastic variable cost and size bin packing problem (SVCSBPP), the multi-handler knapsack problem under uncertainty (MHKPu), and the multi-path traveling salesman problem with stochastic travel times (mpTSPs). The SVCSBPP arises in supply-chain management, in which companies outsource their logistics activities to a third-party logistics firm (3PL). The procurement of sufficient capacity, expressed in terms of vehicles, containers, or warehouse space for varying periods of time, plays a crucial role in satisfying demand. The SVCSBPP focuses on the relation between a company and its logistics capacity provider and on the tactical-planning problem of determining the quantity of capacity units to secure for the next period of activity.
    The SVCSBPP is the first attempt to introduce a stochastic variant of the variable cost and size bin packing problem (VCSBPP), considering uncertainty not only in the demand to be delivered but also in the renting costs of the different bins and their availability.

    A large number of real-life situations can be satisfactorily modeled as an MHKPu, in particular in last-mile delivery. Last-mile delivery may involve different sequences of consolidation operations, each handled by different workers with different skill levels and reliability. Improper management of consolidation operations can delay operations, reducing the overall profit of the deliveries. Thus, given a set of potential logistics handlers and a set of items to deliver, characterized by volume and random profit, the MHKPu consists of finding a subset of items that maximizes the expected total profit. The profit is given by the sum of a deterministic profit and a stochastic profit oscillation, with unknown probability distribution, due to the random handling costs of the handlers.

    The mpTSPs arises mainly in City Logistics applications. Cities offer several services, such as garbage collection, periodic delivery of goods in urban grocery distribution, and bike sharing. These services require the planning of fixed, periodic tours that will be used for one to several weeks. However, the enlarged time horizon, as well as strong dynamic changes in travel times due to traffic congestion and other nuisances typical of urban transportation, induces the presence of multiple paths with stochastic travel times. Given a graph characterized by a set of nodes connected by arcs, the mpTSPs considers that, for every pair of nodes, multiple paths between the two nodes are present. Each path is characterized by a random travel time. As in the standard TSP, the aim is to find the Hamiltonian cycle minimizing the expected total cost.

    These planning problems have been formulated as two-stage integer stochastic programs with recourse. Discretization methods are usually applied to approximate the probability distribution of the random parameters. The resulting approximated program becomes a deterministic linear program with integer decision variables of generally very large dimensions, beyond the reach of exact methods. Therefore, heuristics are required. For the MHKPu, we apply extreme value theory and derive a deterministic approximation, while for the SVCSBPP and the mpTSPs we introduce effective and accurate heuristics based on progressive hedging (PH) ideas. PH mitigates the computational difficulty associated with large problem instances by decomposing the stochastic program by scenario. When effective heuristic techniques exist for solving individual scenarios, as is the case for the SVCSBPP and the mpTSPs, PH further reduces the computational effort of solving the scenario subproblems by means of a commercial solver. In particular, we propose a series of specific strategies to accelerate the search and efficiently address the symmetry of solutions, including an aggregated consensual solution, heuristic penalty adjustments, and a bundle-fixing technique. Yet, even as solution methods become more powerful, combinatorial problems in the CL context remain very large and difficult to solve. Thus, to significantly enhance computational efficiency, these heuristics implement parallel schemes.
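To illustrate the progressive hedging idea described above, here is a minimal sketch on a toy two-stage capacity problem: scenario subproblems are solved independently, with a penalty pulling all scenario copies of the first-stage decision toward consensus. The problem data, penalty parameter, and grid search are illustrative assumptions, not the dissertation's SVCSBPP/mpTSPs implementation.

```python
import numpy as np

# Toy progressive hedging (PH): choose capacity x now (first stage);
# each demand scenario s then incurs a shortfall cost (second stage).
rng = np.random.default_rng(1)
demands = rng.integers(20, 60, 5)          # equiprobable demand scenarios
prob = np.full(len(demands), 1 / len(demands))
c, q = 1.0, 3.0                            # capacity cost, shortfall cost
grid = np.arange(0, 81)                    # candidate capacity levels
rho = 0.5                                  # PH penalty parameter

def sub_cost(x, d):
    """Scenario cost: capacity cost plus shortfall penalty."""
    return c * x + q * np.maximum(d - x, 0)

w = np.zeros(len(demands))                 # scenario multipliers
x = np.array([grid[np.argmin(sub_cost(grid, d))] for d in demands], float)

for it in range(100):
    x_bar = prob @ x                       # consensus (expected) decision
    w += rho * (x - x_bar)                 # multiplier update
    for s, d in enumerate(demands):        # augmented scenario subproblems
        obj = sub_cost(grid, d) + w[s] * grid + (rho / 2) * (grid - x_bar) ** 2
        x[s] = grid[np.argmin(obj)]
    if np.allclose(x, x_bar, atol=1e-6):   # all scenarios agree
        break

print("consensus capacity:", round(prob @ x))
```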
    To provide a complete analysis of the proposed problems, we perform extensive numerical experiments on a large set of instances of various dimensions, including realistic settings derived from real applications in urban areas, and combinations of different levels of variability and correlation in the stochastic parameters. The campaign includes an assessment of the efficiency of the meta-heuristics, an evaluation of the value of explicitly considering uncertainty, an analysis of the impact of problem characteristics and of the structure of solutions, and an evaluation of the robustness of the solutions when used as a decision tool. The numerical analysis indicates that the stochastic programs have significant effects in terms of both economic impact (e.g., cost reduction) and operations management (e.g., prediction of the capacity needed by the firm). The proposed methodologies outperform commercial solvers, even when small instances are considered: they find good solutions in manageable computing time. This makes these heuristics a strategic tool that can be incorporated into larger decision support systems for CL.

    Approximate Dynamic Programming for Military Resource Allocation

    This research considers the optimal allocation of weapons to a collection of targets with the objective of maximizing the value of destroyed targets. The weapon-target assignment (WTA) problem is a classic nonlinear combinatorial optimization problem with an extensive history in the operations research literature. The dynamic weapon-target assignment (DWTA) problem aims to assign weapons optimally over time, using the information gained to improve the outcome of their engagements. This research investigates various formulations of the DWTA problem and develops algorithms for their solution. Finally, an embedded optimization problem is introduced in which optimization of the multi-stage DWTA is used to determine optimal weaponeering of aircraft. Approximate dynamic programming is applied to the various formulations of the WTA problem. Like many problems in the field of combinatorial optimization, the DWTA problem suffers from the curses of dimensionality, and exact solutions are often computationally intractable. As such, approximations are developed that exploit the special structure of the problem and allow for efficient convergence to high-quality local optima. Finally, a genetic algorithm solution framework is developed to test the embedded optimization problem for aircraft weaponeering.
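For context, the classic static WTA problem can be stated as follows; the notation is the standard one from the literature, not necessarily this thesis's.

```latex
% Static WTA: m weapons, n targets, V_j = value of target j,
% p_{ij} = probability that weapon i kills target j,
% x_{ij} = 1 if weapon i is assigned to target j.
% Minimizing expected surviving value is equivalent to
% maximizing the expected value destroyed.
\min_{x} \; \sum_{j=1}^{n} V_j \prod_{i=1}^{m} \left(1 - p_{ij}\right)^{x_{ij}}
\quad \text{s.t.} \quad
\sum_{j=1}^{n} x_{ij} = 1 \;\; (i = 1, \dots, m), \qquad
x_{ij} \in \{0, 1\}.
```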

    Multi-stage stochastic optimization and reinforcement learning for forestry epidemic and COVID-19 control planning

    This dissertation focuses on developing new modeling and solution approaches based on multi-stage stochastic programming and reinforcement learning for tackling biological invasions in forests and human populations. The Emerald Ash Borer (EAB) is the nemesis of ash trees. This research introduces a multi-stage stochastic mixed-integer programming model to assist forest agencies in managing emerald ash borer insects throughout the U.S. and maximize the public benefits of preserving healthy ash trees. This work is then extended to present the first risk-averse multi-stage stochastic mixed-integer program in the invasive species management literature to account for extreme events. Significant computational achievements are obtained using a scenario dominance decomposition and cutting plane algorithm. The results of this work provide crucial insights and decision strategies for optimal resource allocation among surveillance, treatment, and removal of ash trees, leading to a better and healthier environment for future generations.

    This dissertation also addresses the computational difficulty of solving one of the most difficult classes of combinatorial optimization problems, the Multi-Dimensional Knapsack Problem (MKP). A novel 2-Dimensional (2D) deep reinforcement learning (DRL) framework is developed to represent and solve combinatorial optimization problems, focusing on the MKP. The DRL framework trains different agents to make sequential decisions and find the optimal solution while still satisfying the resource constraints of the problem. To our knowledge, this is the first DRL model of its kind in which a 2D environment is formulated and an element of the DRL solution matrix represents an item of the MKP. Our DRL framework solves medium-sized and large-sized instances at least 45 and 10 times faster in CPU solution time, respectively, with a maximum solution gap of 0.28% compared to the solution performance of CPLEX.

    Applying this methodology, yet another recent epidemic problem is tackled: COVID-19. This research investigates a reinforcement learning approach, paired with an agent-based simulation model, to simulate disease growth and optimize decision-making during an epidemic. The framework is validated using COVID-19 data from the Centers for Disease Control and Prevention (CDC). The results provide important insights into government responses to COVID-19 and vaccination strategies.
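For reference, the MKP mentioned in the preceding abstract has the following standard form (textbook notation, not necessarily the dissertation's):

```latex
% MKP: n items with profits p_i and weights w_{ij} in each of d
% resource dimensions with capacities c_j; x_i = 1 if item i is packed.
\max_{x} \; \sum_{i=1}^{n} p_i x_i
\quad \text{s.t.} \quad
\sum_{i=1}^{n} w_{ij} x_i \le c_j \;\; (j = 1, \dots, d), \qquad
x_i \in \{0, 1\}.
```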

    Neur2RO: Neural Two-Stage Robust Optimization

    Robust optimization provides a mathematical framework for modeling and solving decision-making problems under worst-case uncertainty. This work addresses two-stage robust optimization (2RO) problems (also called adjustable robust optimization), wherein first-stage and second-stage decisions are made before and after the uncertainty is realized, respectively. This results in a nested min-max-min optimization problem which is extremely challenging computationally, especially when the decisions are discrete. We propose Neur2RO, an efficient machine-learning-driven instantiation of column-and-constraint generation (CCG), a classical iterative algorithm for 2RO. Specifically, we learn to estimate the value function of the second-stage problem via a novel neural network architecture that is easy to optimize over by design. Embedding our neural network into CCG yields high-quality solutions quickly, as evidenced by experiments on two 2RO benchmarks, knapsack and capital budgeting. For knapsack, Neur2RO finds solutions that are within roughly 2% of the best-known values in a few seconds, compared to the three hours of the state-of-the-art exact branch-and-price algorithm; for larger and more complex instances, Neur2RO finds even better solutions. For capital budgeting, Neur2RO outperforms three variants of the k-adaptability algorithm, particularly on the largest instances, with a 5- to 10-fold reduction in solution time. Our code and data are available at https://github.com/khalil-research/Neur2RO.
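The nested min-max-min structure mentioned in the abstract can be written compactly; the notation below is a common generic form of 2RO, not copied from the paper:

```latex
% 2RO: first-stage decision x, uncertainty xi drawn adversarially
% from the set Xi, second-stage recourse y chosen after xi is revealed.
\min_{x \in \mathcal{X}} \; \max_{\xi \in \Xi} \;
\min_{y \in \mathcal{Y}(x, \xi)} \; c^{\top} x + q(\xi)^{\top} y.
```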