37 research outputs found

    Algorithms for the Bin Packing Problem with Scenarios

    This paper presents theoretical and practical results for the bin packing problem with scenarios, a generalization of the classical bin packing problem that considers a set of uncertain scenarios, of which only one is realized. For this problem, we propose an absolute approximation algorithm whose ratio is bounded by the square root of the number of scenarios times the approximation ratio of an algorithm for the vector bin packing problem. We also show how an asymptotic polynomial-time approximation scheme is derived when the number of scenarios is constant. As a practical study of the problem, we present a branch-and-price algorithm to solve an exponential-size model and a variable neighborhood search heuristic. To speed up the convergence of the exact algorithm, we also consider lower bounds based on dual feasible functions. Computational results show that the branch-and-price algorithm obtains optimal solutions for about 59% of the instances considered, while the combination of the heuristic and branch-and-price optimally solves 62% of the instances.
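
    As a concrete illustration of the scenario constraint, the sketch below applies a plain first-fit rule in which a bin keeps one load counter per scenario and an item fits only if the capacity holds in every scenario it belongs to; this mirrors the vector bin packing view mentioned above and is not the paper's algorithm (the function name and data layout are assumptions).

```python
# Illustrative first-fit heuristic for bin packing with scenarios (a sketch, not the
# paper's method). Each item occurs in a subset of scenarios; a bin is feasible if,
# in every single scenario, the items of that scenario assigned to it fit the capacity.

def first_fit_with_scenarios(items, capacity, num_scenarios):
    """items: list of (size, scenario_set); returns bins as lists of item indices."""
    bins, loads = [], []              # loads[b][s] = load of bin b in scenario s
    for idx, (size, scenarios) in enumerate(items):
        for b, load in enumerate(loads):
            # the item fits only if the capacity holds in every scenario it belongs to
            if all(load[s] + size <= capacity for s in scenarios):
                bins[b].append(idx)
                for s in scenarios:
                    load[s] += size
                break
        else:                         # no open bin can take the item: open a new one
            new_load = [0] * num_scenarios
            for s in scenarios:
                new_load[s] = size
            bins.append([idx])
            loads.append(new_load)
    return bins

# Example: 2 scenarios, capacity 10; items given as (size, {scenarios containing the item}).
print(first_fit_with_scenarios([(6, {0}), (6, {1}), (5, {0, 1})], capacity=10, num_scenarios=2))
```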

    A branch-and-price algorithm for the temporal bin packing problem

    We study an extension of the classical Bin Packing Problem in which each item consumes the bin capacity only during a time window that depends on the item itself. The problem asks for the minimum number of bins needed to pack all the items while respecting the bin capacity at every time instant. A polynomial-size formulation, an exponential-size formulation, and a number of lower and upper bounds are studied. A branch-and-price algorithm for solving the exponential-size formulation is introduced. An overall algorithm combining the different methods is then proposed and tested through extensive computational experiments.
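
    For intuition, a minimal first-fit sketch of the temporal variant follows: each item occupies capacity only inside its time window, and a bin accepts an item only if the load at every relevant instant stays within the capacity. This illustrates the feasibility condition, not the branch-and-price method of the paper; the names and the greedy rule are assumptions.

```python
# A minimal first-fit sketch for temporal bin packing (illustration only). Every item
# consumes capacity only during its window [start, end); it is enough to check the load
# at item start times, since the load can only increase when some item starts.

def fits(bin_items, new_item, capacity):
    size, start, end = new_item
    # candidate instants: the new item's start and every existing start inside its window
    checkpoints = [start] + [s for (_, s, _) in bin_items if start <= s < end]
    for t in checkpoints:
        load = size + sum(sz for (sz, s, e) in bin_items if s <= t < e)
        if load > capacity:
            return False
    return True

def temporal_first_fit(items, capacity):
    """items: list of (size, start, end); returns bins as lists of items."""
    bins = []
    for item in sorted(items, key=lambda it: it[1]):   # sweep items by start time
        for b in bins:
            if fits(b, item, capacity):
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

# Example: capacity 10; items given as (size, start, end).
print(temporal_first_fit([(6, 0, 5), (6, 3, 8), (4, 6, 9)], capacity=10))
```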

    Mathematical Models and Decomposition Algorithms for Cutting and Packing Problems

    In this thesis, we provide (or review) new and effective algorithms based on Mixed-Integer Linear Programming (MILP) models and/or decomposition approaches to solve exactly various cutting and packing problems. The first three contributions deal with the classical bin packing and cutting stock problems. First, we propose a survey of the problems, in which we review more than 150 references, implement and computationally test the most common solution methods (including branch-and-price, constraint programming (CP), and MILP), and propose new instances that are difficult to solve in practice. Then, we introduce the BPPLIB, a collection of codes, benchmarks, and links for the two problems. Finally, we study in detail the main MILP formulations that have been proposed for the problems, provide a clear picture of the dominance and equivalence relations that exist among them, and introduce reflect, a new pseudo-polynomial formulation that achieves state-of-the-art results for both problems and some variants. The following three contributions deal with two-dimensional packing problems. First, we propose a method based on logic-based Benders decomposition for the orthogonal stock cutting problem and some extensions. We solve the master problem through an MILP model, while CP is used to solve the slave problem. Computational experiments on classical benchmarks from the literature show the effectiveness of the proposed approach. Then, we introduce TwoBinGame, a visual application we developed for students to interactively solve two-dimensional packing problems, and analyze the results obtained by 200 students. Finally, we study a complex optimization problem originating from the packaging industry, which combines cutting and scheduling decisions. For its solution, we propose mathematical models and heuristic algorithms that involve a non-trivial decomposition method. In the last contribution, we study and strengthen various MILP and CP approaches for three project scheduling problems.
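
    For reference, the pattern-based (Gilmore-Gomory) set-covering formulation that underlies branch-and-price for cutting stock can be written as below; this is the textbook model, shown only to fix notation, and it is not the reflect formulation introduced in the thesis.

```latex
\min \sum_{p \in P} x_p
\quad \text{s.t.} \quad \sum_{p \in P} a_{ip}\, x_p \ge d_i \quad \forall i \in I,
\qquad x_p \in \mathbb{Z}_{\ge 0} \quad \forall p \in P
```

    Here P is the set of feasible cutting patterns, a_{ip} is the number of copies of item i in pattern p, and d_i is the demand of item i; in branch-and-price, the column-generation pricing problem that produces new patterns is a knapsack problem.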

    Two-Dimensional Bin Packing Problem with Guillotine Restrictions

    This thesis, after presenting recent advances on the two-dimensional bin packing problem, focuses on the case where guillotine restrictions are imposed. A mathematical characterization of non-guillotine patterns is provided, and the relation between the solution value of the guillotine-restricted problem and that of the unrestricted problem is studied from a worst-case perspective. Finally, it presents a new heuristic algorithm for the two-dimensional problem with guillotine restrictions, based on partial enumeration, and computationally evaluates its performance on a large set of instances from the literature. Computational experiments show that the algorithm is able to produce proven optimal solutions for a large number of instances and gives a tight approximation of the optimum in the remaining cases.
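
    To make the guillotine restriction concrete, the toy sketch below represents a pattern as a binary tree of edge-to-edge cuts and lists the rectangles it produces; it only illustrates the restriction itself and is unrelated to the characterization of non-guillotine patterns or the partial-enumeration heuristic studied in the thesis (all names are hypothetical).

```python
# A toy representation of a guillotine pattern: every cut runs edge to edge, so a pattern
# is a binary tree whose leaves are the final pieces.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    width: float
    height: float
    cut: Optional[str] = None       # 'H' (horizontal), 'V' (vertical), or None for a piece
    position: float = 0.0           # cut offset from the bottom (H) or the left edge (V)
    first: Optional["Node"] = None  # part below / left of the cut
    second: Optional["Node"] = None # part above / right of the cut

def pieces(node):
    """Return the rectangles (width, height) produced by recursively applying the cuts."""
    if node.cut is None:
        return [(node.width, node.height)]
    return pieces(node.first) + pieces(node.second)

# A 10 x 6 bin: one vertical cut at x = 4, then a horizontal cut of the right part at y = 3.
pattern = Node(10, 6, 'V', 4,
               Node(4, 6),
               Node(6, 6, 'H', 3, Node(6, 3), Node(6, 3)))
print(pieces(pattern))   # [(4, 6), (6, 3), (6, 3)]
```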

    Resource Optimized Scheduling For Enhanced Power Efficiency And Throughput On Chip Multi Processor Platforms

    The parallel nature of process execution on Chip Multi-Processors (CMPs) has boosted levels of application performance far beyond the capabilities of erstwhile single-core designs. Generally, CMPs offer improved performance by integrating multiple simpler cores onto a single die that share certain computing resources, such as last-level caches, data buses, and main memory. This ensures architectural simplicity while also boosting performance for multi-threaded applications. However, a major trade-off associated with this approach is that concurrently executing applications incur performance degradation if their collective resource requirements exceed the total amount of resources available to the system. If dynamic resource allocation is not carefully considered, the potential performance gain from having multiple cores may be outweighed by the losses due to contention for shared resources. Additionally, CMPs with built-in dynamic voltage-frequency scaling (DVFS) mechanisms may try to compensate for the performance bottleneck by scaling to higher clock frequencies. When the degradation is caused by shared-resource contention, this does not necessarily improve performance, but it does incur a significant power penalty because dynamic power grows quadratically with voltage (P_dynamic ∝ V^2 * f). This dissertation presents novel methodologies for balancing the competing requirements of high performance, fairness of execution, and enforcement of priority, while also ensuring the overall power efficiency of CMPs. Specifically, we (1) analyze the problem of resource interference during concurrent process execution and propose two fine-grained scheduling methodologies for improving overall performance and fairness, (2) develop an approach for enforcing priority (i.e., minimum performance) for specific processes while avoiding resource starvation for others, and (3) present a machine-learning approach for maximizing the power efficiency (performance-per-Watt) of CMPs through estimation of a workload's performance and power consumption limits at different clock frequencies. As modern computing workloads become increasingly dynamic, and computers themselves become increasingly ubiquitous, the problem of finding the ideal balance between performance and power consumption of CMPs is particularly relevant today, especially given the unprecedented proliferation of embedded devices for Internet-of-Things, edge computing, smart wearables, and even exotic experiments such as space probes comprised entirely of a CMP, sensors, and an antenna ("space chips"). Additionally, reducing power consumption while maintaining constant performance can contribute to addressing the growing problem of dark silicon.
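
    The power argument can be made concrete with a back-of-the-envelope calculation, shown below with entirely hypothetical numbers: dynamic power is modeled as P_dynamic ∝ V^2 * f while performance is capped by a memory-bound ceiling, so performance-per-Watt drops once frequency scaling stops paying off. This is only a sketch of the relation quoted above, not a result from the dissertation.

```python
# Hypothetical DVFS operating points and a crude performance model: the workload scales
# with frequency until it hits a memory-bound ceiling caused by shared-resource contention.

operating_points = [(0.8, 1.0), (0.9, 1.5), (1.0, 2.0), (1.1, 2.5)]  # (voltage V, frequency GHz)
C = 10.0                      # arbitrary effective-capacitance constant
MEMORY_BOUND_LIMIT = 1.6      # performance ceiling imposed by shared-resource contention

for volt, freq in operating_points:
    power = C * volt**2 * freq                     # P_dynamic ∝ V^2 * f
    perf = min(freq, MEMORY_BOUND_LIMIT)           # compute-bound until the ceiling
    print(f"{freq:.1f} GHz: perf={perf:.2f}, power={power:5.1f}, perf/Watt={perf/power:.3f}")
```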

    Multi objective particle swarm optimization: algorithms and applications

    Ph.D. thesis (Doctor of Philosophy).

    Operational Research and Machine Learning Applied to Transport Systems

    The New Economy, environmental sustainability and global competitiveness drive innovations in supply chain management and transport systems. The New Economy increases the amount and types of products that can be delivered directly to homes, challenging the organisation of last-mile delivery companies. To keep up with these challenges, delivery companies are continuously seeking new innovations that allow them to pack goods faster and more efficiently. Thus, the packing problem has become a crucial factor, and solving it effectively is essential for the success of goods deliveries and logistics. On land, rail is known to be the most eco-friendly transport system in terms of emissions, energy consumption, land use, noise levels, and the quantities of people and goods that can be moved. It is difficult to apply innovations in the rail industry for a number of reasons: its risk-averse nature, the high level of regulation, the very high cost of infrastructure upgrades, and the natural monopoly of resources in many countries. In the UK, however, the Department for Transport published the Joint Rail Data Action Plan in 2018, opening some rail industry datasets for research purposes. In line with the above developments, this thesis focuses on machine learning and operational research techniques in two main areas: improving packing operations for logistics and improving various operations for passenger rail. In total, the research in this thesis makes six contributions, detailed below. The first contribution is a new mathematical model and a new heuristic to solve the Multiple Heterogeneous Knapsack Problem, giving priority to smaller bins and considering some important container loading constraints. This problem is interesting because many companies prefer to deal with smaller bins, as they are less expensive. Moreover, giving priority to filling small bins (rather than large bins) is very important in some industries, e.g. fast-moving consumer goods. The second contribution is a novel strategy to hybridize operational research with machine learning to estimate whether a particular packing solution is feasible in constant O(1) computational time. Given that traditional feasibility checking for packing solutions is an NP-hard problem, this strategy is expected to save significant time and computational effort. The third contribution is an extended mathematical model and an algorithm applying the packing problem to improve the seat reservation system in passenger rail. The problem is formulated as the Group Seat Reservation Knapsack Problem with Price on Seat, an extension of the Offline Group Seat Reservation Knapsack Problem. This extension introduces a profit evaluation that depends not only on the space occupied, but also on the individual profit brought by each reserved seat. The fourth contribution is a data-driven method to infer feasible train routing strategies from open data on the United Kingdom rail network. Briefly, most of the UK network is divided into sections called berths, and the transition point from one berth to another is called a berth step. Sensors at berth steps detect the movement of a passing train. The result of the method is a directed graph, the berth graph, where each node represents a berth and each arc represents a berth step (a toy sketch of this construction is given below). The arcs represent the feasible routing strategies, i.e. where a train can move from a given berth, and a connected path between two berths represents a connected section of the network. The fifth contribution is a novel method to estimate the amount of time that a train is going to spend on a berth. The corresponding chapter compares two approaches, AutoRegressive Moving Average and Recurrent Neural Networks, and analyses the pros and cons of each choice with statistical analyses. The method is tested on a real-world case study, a berth that represents a busy junction in the Merseyside region. The sixth contribution is an adaptive method to forecast the running time of a train journey using the Gated Recurrent Units method. The method exploits the train describer (TD) berth information and the berth graph. The case study adopted in the experimental tests is the train network in the Merseyside region.
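
    A minimal sketch of the berth-graph idea described in the fourth contribution: berth-step events are folded into a directed adjacency structure whose nodes are berths and whose arcs are the observed steps. The event fields and values are hypothetical; the thesis derives the graph from the UK open rail data rather than from such a toy list.

```python
# Build a berth graph from train-describer berth-step events (hypothetical field names).
# Each event records a train moving from one berth to the next; nodes are berths and
# directed arcs are the observed berth steps, i.e. the feasible routing moves.

from collections import defaultdict

def build_berth_graph(step_events):
    """step_events: iterable of (from_berth, to_berth, timestamp). Returns adjacency dict."""
    graph = defaultdict(set)
    for from_berth, to_berth, _ts in step_events:
        graph[from_berth].add(to_berth)
    return dict(graph)

# Hypothetical event stream: berth-to-berth moves with timestamps.
events = [("A1", "A2", 100), ("A2", "B1", 160), ("A1", "A3", 210), ("A3", "B1", 305)]
print(build_berth_graph(events))   # {'A1': {'A2', 'A3'}, 'A2': {'B1'}, 'A3': {'B1'}}
```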

    Moldable Items Packing Optimization

    This research has led to the development of two mathematical models that optimize the packing of a hybrid mix of rigid and moldable items within a three-dimensional volume. The two models characterize moldable items from two perspectives: (1) when a limited set of discrete configurations represents the moldable items and (2) when all continuous configurations are available to the model. This optimization scheme is a component of a lean effort that attempts to reduce the lead time associated with implementing dynamic product modifications that imply packing changes. To test the developed models, they are applied to the dynamic packing changes of Meals, Ready-to-Eat (MREs) at two different levels: packing MRE food items in the menu bags and packing menu bags in the boxes. These models optimize packing volume utilization and provide information for MRE assemblers, enabling them to plan for packing changes within a short lead time. The optimization results are validated by running the solutions multiple times to assess their consistency. Autodesk Inventor is used to visualize the solutions and communicate the optimized packing arrangements to the MRE assemblers for training purposes.
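
    As a toy version of the first, discrete-configuration perspective, the sketch below enumerates one candidate shape per moldable item and keeps the combination that leaves the least unused volume, using total volume as a crude stand-in for the full three-dimensional packing models developed in the research (all numbers and names are illustrative assumptions).

```python
# Each moldable item offers a few alternative (length, width, height) shapes; enumerate
# one choice per item and keep the feasible combination with the least leftover volume.

from itertools import product

def best_configuration(items, container_volume):
    """items: list of lists of (l, w, h) options; returns (chosen shapes, leftover volume)."""
    best = None
    for choice in product(*items):                      # one configuration per moldable item
        used = sum(l * w * h for (l, w, h) in choice)
        if used <= container_volume:
            leftover = container_volume - used
            if best is None or leftover < best[1]:
                best = (choice, leftover)
    return best

# Two moldable items, each with two candidate shapes, packed into a 100-unit volume.
items = [[(4, 5, 2), (2, 5, 5)], [(3, 3, 4), (6, 2, 3)]]
print(best_configuration(items, container_volume=100))
```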

    Machine learning for improving heuristic optimisation

    Heuristics, metaheuristics and hyper-heuristics are search methodologies which have been preferred by many researchers and practitioners for solving computationally hard combinatorial optimisation problems whenever exact methods fail to produce high-quality solutions in a reasonable amount of time. In this thesis, we introduce an advanced machine learning technique, namely tensor analysis, into the field of heuristic optimisation. We show how the relevant data should be collected in tensorial form, analysed and used during the search process. Four case studies are presented to illustrate the capability of single and multi-episode tensor analysis, processing data at high and low abstraction levels, for improving heuristic optimisation. A single-episode tensor analysis using data at a high abstraction level is employed to improve an iterated multi-stage hyper-heuristic for cross-domain heuristic search. The empirical results across six different problem domains from a hyper-heuristic benchmark show that significant overall performance improvement is possible. A similar approach embedding a multi-episode tensor analysis is applied to the nurse rostering problem and evaluated on a benchmark of a diverse collection of instances obtained from different hospitals across the world. The empirical results indicate the success of the tensor-based hyper-heuristic, which improves upon the best-known solutions for four particular instances. The genetic algorithm is a nature-inspired metaheuristic which uses a population of multiple interacting solutions during the search. Mutation is the key variation operator in a genetic algorithm and adjusts the diversity in a population throughout the evolutionary process. Often, a fixed mutation probability is used to perturb the value at each locus, representing a unique component of a given solution. A single-episode tensor analysis using data at a low abstraction level is applied to an online bin packing problem, generating locus-dependent mutation probabilities. The tensor approach significantly improves the performance of a standard genetic algorithm on almost all instances. A multi-episode tensor analysis using data at a low abstraction level is embedded into a multi-agent cooperative search approach. The empirical results once again show the success of the proposed approach on a benchmark of flow shop problem instances compared to the approach which does not make use of tensor analysis. Tensor analysis can handle data at different levels of abstraction, leading to a learning approach which can be used within different types of heuristic optimisation methods based on different underlying design philosophies, improving their overall performance.
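
    To illustrate only the data layout behind the locus-dependent mutation probabilities, the sketch below stores mutation outcomes in a three-dimensional array and reduces it to per-locus probabilities by simple averaging; the thesis applies tensor factorization to such data, which is not reproduced here, and every name and number below is an assumption.

```python
# Reduce a (episodes x iterations x loci) tensor of mutation outcomes to per-locus
# mutation probabilities (averaging only; the thesis uses tensor factorization instead).

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log: 1 when mutating that locus in that iteration of that episode improved
# the solution, 0 otherwise.
episodes, iterations, loci = 5, 200, 12
outcome_tensor = rng.integers(0, 2, size=(episodes, iterations, loci))

# Reduce to a per-locus success score and rescale into mutation probabilities, bounded
# away from 0 so that every locus keeps a chance to be perturbed.
success = outcome_tensor.mean(axis=(0, 1))             # shape (loci,)
spread = np.ptp(success) or 1.0                        # guard against a flat score vector
p_min, p_max = 0.01, 0.30
mutation_prob = p_min + (p_max - p_min) * (success - success.min()) / spread
print(np.round(mutation_prob, 3))
```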

    Integrated machine learning and optimization approaches

    This dissertation focuses on the integration of machine learning and optimization. Specifically, novel machine learning-based frameworks are proposed to help solve a broad range of well-known operations research problems and to reduce their solution times. The first study presents a bidirectional Long Short-Term Memory framework to learn optimal solutions to sequential decision-making problems (a simplified sketch of this idea is given below). Computational results show that the framework significantly reduces the solution time of benchmark capacitated lot-sizing problems without much loss in feasibility and optimality. Also, models trained using shorter planning horizons can successfully predict the optimal solution of instances with longer planning horizons. For the hardest data set, the predictions at the 25% level reduce a solution time of 70 CPU hours to less than 2 CPU minutes, with an optimality gap of 0.8% and without infeasibility. In the second study, an extendable prediction-optimization framework is presented for multi-stage decision-making problems to address the key issues of sequential dependence, infeasibility, and generalization. Specifically, an attention-based encoder-decoder neural network architecture is integrated with an infeasibility-elimination and generalization framework to learn high-quality feasible solutions. The proposed framework is demonstrated on two well-known dynamic NP-hard optimization problems: multi-item capacitated lot-sizing and the multi-dimensional knapsack. The results show that models trained on shorter and smaller-dimension instances can successfully predict longer and larger-dimension problems with the presented item-wise expansion algorithm. The solution time can be reduced by three orders of magnitude with an average optimality gap below 0.1%. The proposed framework can be advantageous for solving dynamic mixed-integer programming problems that need to be solved instantly and repetitively. In the third study, a deep reinforcement learning-based framework is presented for solving scenario-based two-stage stochastic programming problems, which are computationally challenging to solve. A general two-stage deep reinforcement learning framework is proposed in which two learning agents sequentially learn to solve each stage of a general two-stage stochastic multi-dimensional knapsack problem. The results show that the solution time can be reduced significantly with a relatively small gap. Additionally, decision-making agents can be trained with a few scenarios and solve problems with a large number of scenarios. In the fourth study, a learning-based prediction-optimization framework is proposed for solving scenario-based multi-stage stochastic programs. The issue of non-anticipativity is addressed with a novel neural network architecture based on a neural machine translation system. Furthermore, training the models on deterministic problems is suggested instead of solving hard and time-consuming stochastic programs. In this framework, the level of variables used for the solution is iteratively reduced to eliminate infeasibility, and a heuristic based on a linear relaxation is performed to reduce the solution time. An improved item-wise expansion strategy is introduced to generalize the algorithm to instances of different sizes. Results are presented for stochastic multi-item capacitated lot-sizing and stochastic multi-stage multi-dimensional knapsack problems. The results show that the solution time can be reduced by a factor of 599 with an optimality gap of only 0.08%. Moreover, the results demonstrate that the models can be used to predict similarly structured stochastic programming problems with a varying number of periods, items, and scenarios. The frameworks presented in this dissertation can be utilized to achieve high-quality and fast solutions to repeatedly solved problems in various industrial and business settings, such as production and inventory management, capacity planning, scheduling, airline logistics, dynamic pricing, and emergency management.
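
    As a simplified stand-in for the sequence models described in the first study, the sketch below uses a small bidirectional LSTM (in PyTorch) that reads per-period instance features and outputs, for every period, a probability that the optimal lot-sizing solution opens a setup; the feature set, layer sizes, and names are assumptions and do not reproduce the dissertation's architecture.

```python
# A bidirectional LSTM that maps per-period features (e.g. demand and capacity) to
# per-period setup probabilities for a lot-sizing instance (illustrative sketch only).

import torch
import torch.nn as nn

class SetupPredictor(nn.Module):
    def __init__(self, num_features=2, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)        # per-period setup logit

    def forward(self, x):                                # x: (batch, periods, num_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out)).squeeze(-1) # (batch, periods)

# Toy usage: 4 instances with a 12-period horizon and 2 features per period.
model = SetupPredictor()
features = torch.rand(4, 12, 2)
setup_probabilities = model(features)
print(setup_probabilities.shape)                         # torch.Size([4, 12])
```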