    A parallel solver for the design of oil filters

    Nowadays, it is widely recognized that computer simulation plays a crucial role in designing the oil filters used in the automotive industry. However, even a single direct simulation of the flow usually requires significant computational resources, so solving the associated optimization problems is only feasible with parallel computers and algorithms. In this paper, we present a general master-slave parallel template, which was specially designed for the easy integration of direct parallel solvers into a parallel optimization tool. We show how an existing direct solver for the 3D simulation of flow through the oil filter is integrated into our template to obtain a parallel optimization solver. Some capabilities and the performance of this solver are demonstrated by solving a geometry optimization problem for a filter element.
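
    A minimal sketch of the kind of master-slave optimization loop the abstract describes, using Python's multiprocessing as a stand-in for the MPI-based parallel template; the objective function, parameter names, and random-search strategy are illustrative assumptions, not the paper's actual solver interface.

```python
# Illustrative master-slave optimization loop (not the paper's actual code).
# The "direct solver" here is a placeholder objective; in the paper it would be
# a parallel 3D flow simulation of the filter element.
import random
from multiprocessing import Pool

def direct_solver(params):
    """Placeholder for an expensive direct flow simulation.
    Returns a scalar cost (e.g. pressure drop) for a candidate geometry."""
    x, y = params
    return (x - 0.3) ** 2 + (y - 0.7) ** 2  # dummy cost surface

def master(n_generations=20, population=8, n_workers=4, seed=0):
    rng = random.Random(seed)
    best = (float("inf"), None)
    with Pool(n_workers) as slaves:            # slave processes evaluate candidates
        for _ in range(n_generations):
            candidates = [(rng.random(), rng.random()) for _ in range(population)]
            costs = slaves.map(direct_solver, candidates)   # parallel evaluations
            gen_best = min(zip(costs, candidates))
            best = min(best, gen_best)
    return best

if __name__ == "__main__":
    cost, params = master()
    print(f"best cost {cost:.4f} at geometry parameters {params}")
```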

    An Expandable Machine Learning-Optimization Framework to Sequential Decision-Making

    We present an integrated prediction-optimization (PredOpt) framework to efficiently solve sequential decision-making problems by predicting the values of binary decision variables in an optimal solution. We address the key issues of sequential dependence, infeasibility, and generalization in machine learning (ML) to make predictions for optimal solutions to combinatorial problems. The sequential nature of the combinatorial optimization problems considered is captured with recurrent neural networks and a sliding-attention window. We integrate an attention-based encoder-decoder neural network architecture with an infeasibility-elimination and generalization framework to learn high-quality feasible solutions to time-dependent optimization problems. In this framework, the proportion of decision variables that are predicted is tuned to eliminate the infeasibility of the ML predictions. The predicted values are then fixed in the mixed-integer programming (MIP) problems, which are solved quickly with the aid of a commercial solver. We demonstrate our approach on two well-known dynamic NP-hard optimization problems: multi-item capacitated lot-sizing (MCLSP) and multi-dimensional knapsack (MSMK). Our results show that models trained on shorter and smaller-dimensional instances can be successfully used to predict longer and larger-dimensional problems. The solution time can be reduced by three orders of magnitude with an average optimality gap below 0.1%. We compare PredOpt with various specially designed heuristics and show that our framework outperforms them. PredOpt can be advantageous for solving dynamic MIP problems that need to be solved instantly and repetitively.
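
    A toy illustration of the variable-fixing step described above: some binary variables of a small knapsack MIP are fixed to hypothetical "predicted" values before the remaining problem is handed to a solver. The data and the `predicted` dictionary are invented for the example, and PuLP/CBC stands in for the commercial solver and the trained encoder-decoder mentioned in the abstract.

```python
# Toy illustration: fix ML-predicted binary variables in a MIP, then solve the rest.
# Data and "predictions" are invented; PuLP/CBC stands in for a commercial solver.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, PULP_CBC_CMD, value

values   = [10, 13, 7, 8, 12, 6]
weights  = [ 5,  6, 3, 4,  6, 2]
capacity = 14
n = len(values)

prob = LpProblem("knapsack", LpMaximize)
x = [LpVariable(f"x{i}", cat=LpBinary) for i in range(n)]
prob += lpSum(values[i] * x[i] for i in range(n))
prob += lpSum(weights[i] * x[i] for i in range(n)) <= capacity

# Hypothetical ML predictions for a subset of variables (item index -> 0/1).
# In PredOpt these would come from the trained encoder-decoder model.
predicted = {0: 1, 3: 0}
for i, val in predicted.items():
    x[i].lowBound = val      # fixing a binary variable is just tightening
    x[i].upBound = val       # both bounds to the predicted value

prob.solve(PULP_CBC_CMD(msg=False))
print("objective:", value(prob.objective))
print("selection:", [int(v.value()) for v in x])
```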

    Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information

    Recent works in learning-integrated optimization have shown promise in settings where the optimization problem is only partially observed or where general-purpose optimizers perform poorly without expert tuning. By learning an optimizer $\mathbf{g}$ to tackle these challenging problems with $f$ as the objective, the optimization process can be substantially accelerated by leveraging past experience. The optimizer can be trained with supervision from known optimal solutions or implicitly by optimizing the compound function $f \circ \mathbf{g}$. The implicit approach may not require optimal solutions as labels and is capable of handling problem uncertainty; however, it is slow to train and deploy due to frequent calls to the optimizer $\mathbf{g}$ during both training and testing. The training is further challenged by sparse gradients of $\mathbf{g}$, especially for combinatorial solvers. To address these challenges, we propose using a smooth and learnable Landscape Surrogate $M$ as a replacement for $f \circ \mathbf{g}$. This surrogate, learnable by neural networks, can be computed faster than the solver $\mathbf{g}$, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization. We test our approach on both synthetic problems, including shortest path and multidimensional knapsack, and real-world problems such as portfolio optimization, achieving comparable or superior objective values compared to state-of-the-art baselines while reducing the number of calls to $\mathbf{g}$. Notably, our approach outperforms existing methods for computationally expensive high-dimensional problems.
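
    A minimal sketch of the surrogate idea under simplifying assumptions: a non-differentiable solver $\mathbf{g}$ (here a toy top-k selection), a true decision loss $f$, and a small neural surrogate $M$ fitted to $f(\mathbf{g}(\cdot))$ and then descended on in alternation. The solver, objective, and dimensions are made up for illustration and are not the paper's benchmarks.

```python
# Minimal sketch of a "landscape surrogate": replace the non-differentiable
# composition f(g(y)) with a smooth learned surrogate M(y), then alternate between
# fitting M and descending on it. All components are toy stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_items, k = 20, 5
true_value = torch.rand(n_items)          # hidden item values defining f

def g(y):                                  # combinatorial solver: pick top-k by score y
    idx = torch.topk(y, k).indices         # non-differentiable w.r.t. y
    sol = torch.zeros_like(y)
    sol[idx] = 1.0
    return sol

def f(sol):                                # true decision loss (lower is better)
    return -(sol * true_value).sum()

M = nn.Sequential(nn.Linear(n_items, 64), nn.ReLU(), nn.Linear(64, 1))
opt_M = torch.optim.Adam(M.parameters(), lr=1e-2)

y = torch.randn(n_items, requires_grad=True)   # decision parameters to optimize
opt_y = torch.optim.Adam([y], lr=1e-1)

for step in range(200):
    # (a) fit the surrogate on samples around the current iterate
    ys = y.detach() + 0.3 * torch.randn(32, n_items)
    targets = torch.stack([f(g(yi)) for yi in ys]).unsqueeze(1)
    opt_M.zero_grad()
    nn.functional.mse_loss(M(ys), targets).backward()
    opt_M.step()
    # (b) descend on the smooth surrogate instead of the piecewise-constant f∘g
    opt_y.zero_grad()
    M(y.unsqueeze(0)).mean().backward()
    opt_y.step()

print("f(g(y)) after surrogate-guided search:", f(g(y.detach())).item())
```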

    A Parallel General Purpose Multi-Objective Optimization Framework, with Application to Beam Dynamics

    Particle accelerators are invaluable tools for research in the basic and applied sciences, in fields such as materials science, chemistry, the biosciences, particle physics, nuclear physics and medicine. The design, commissioning, and operation of accelerator facilities are non-trivial tasks, due to the large number of control parameters and the complex interplay of several conflicting design goals. We propose to tackle this problem by means of multi-objective optimization algorithms, which also lend themselves to parallel deployment. In order to compute solutions in a meaningful time frame, a fast and scalable software framework is required. In this paper, we present the implementation of such a general-purpose framework for simulation-based multi-objective optimization methods that allows the automatic investigation of optimal sets of machine parameters. The implementation is based on a master/slave paradigm, employing several masters that govern a set of slaves executing simulations and performing optimization tasks. Using evolutionary algorithms as the optimizer and OPAL as the forward solver, validation experiments and results of multi-objective optimization problems in the domain of beam dynamics are presented. The high-charge beam line at the Argonne Wakefield Accelerator Facility was used as the beam dynamics model. The 3D beam size, transverse momentum, and energy spread were optimized.
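
    A compact sketch of one master/slave generation in a simulation-based multi-objective search: slave processes evaluate candidate machine settings, and the master keeps only the Pareto-optimal (non-dominated) candidates. The objective functions are toy stand-ins for quantities like beam size, transverse momentum, and energy spread; they are not the OPAL forward solver.

```python
# Illustrative master/slave generation of a multi-objective search.
# forward_solver is a placeholder for an expensive simulation returning
# several conflicting objectives to minimize.
import random
from multiprocessing import Pool

def forward_solver(params):
    a, b = params
    return (a ** 2 + b, (a - 1) ** 2 + 0.5 * b, abs(b - 0.3))   # three toy objectives

def dominates(u, v):
    """u dominates v if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(u, v)) and any(x < y for x, y in zip(u, v))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q[0], p[0]) for q in points if q is not p)]

if __name__ == "__main__":
    rng = random.Random(1)
    candidates = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(64)]
    with Pool(4) as slaves:                       # slaves run the forward solver
        objectives = slaves.map(forward_solver, candidates)
    front = pareto_front(list(zip(objectives, candidates)))
    print(f"{len(front)} non-dominated candidates out of {len(candidates)}")
```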

    Survey on Combinatorial Register Allocation and Instruction Scheduling

    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can deliver optimal solutions according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible, at the expense of increased compilation time. This paper provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques most commonly applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.
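
    As a concrete flavor of the integer-programming formulations surveyed, here is a toy graph-coloring-style register-allocation model: binary variables assign each program variable to a register or spill it, interfering live ranges may not share a register, and total spill cost is minimized. The interference graph, spill costs, and register count are invented for the example, and PuLP/CBC is just a convenient solver.

```python
# Toy integer-programming model of register allocation (graph-coloring style).
# All data is invented for illustration.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, PULP_CBC_CMD

variables = ["a", "b", "c", "d"]
registers = ["r0", "r1"]
interfere = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]   # overlapping live ranges
spill_cost = {"a": 4, "b": 3, "c": 5, "d": 1}

prob = LpProblem("register_allocation", LpMinimize)
assign = {(v, r): LpVariable(f"assign_{v}_{r}", cat=LpBinary)
          for v in variables for r in registers}
spill = {v: LpVariable(f"spill_{v}", cat=LpBinary) for v in variables}

# Each variable is either placed in exactly one register or spilled to memory.
for v in variables:
    prob += lpSum(assign[v, r] for r in registers) + spill[v] == 1
# Interfering (simultaneously live) variables may not share a register.
for u, v in interfere:
    for r in registers:
        prob += assign[u, r] + assign[v, r] <= 1
# Objective: minimize total spill cost.
prob += lpSum(spill_cost[v] * spill[v] for v in variables)

prob.solve(PULP_CBC_CMD(msg=False))
for v in variables:
    placed = [r for r in registers if assign[v, r].value() > 0.5]
    print(v, "->", placed[0] if placed else "spilled")
```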