Clustering search
This paper presents Clustering Search (CS), a hybrid metaheuristic that works in conjunction with other metaheuristics by managing the application of local search algorithms in optimization problems. Local search is usually costly and should be applied only in promising regions of the search space; CS assists in discovering these regions by dividing the search space into clusters. CS and its applications are reviewed, and a case study on a capacitated clustering problem is presented.
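The mechanism described above can be sketched in a few lines. The outline below is a toy illustration, not the authors' implementation: the random sampling, Hamming-distance clustering, activity threshold, and all parameter values are assumptions made for the example.

```python
import random

def local_search(solution, objective):
    """First-improvement local search over single bit-flip neighbours."""
    improved = True
    while improved:
        improved = False
        for i in range(len(solution)):
            neighbour = solution.copy()
            neighbour[i] = 1 - neighbour[i]
            if objective(neighbour) < objective(solution):
                solution, improved = neighbour, True
    return solution

def clustering_search(objective, n_bits=12, n_iters=300, activity_threshold=5):
    """Toy CS loop: sample solutions, cluster them, and spend the costly
    local search only on the centre of a cluster that becomes active."""
    clusters = []  # each cluster: {"center": solution, "activity": count}
    best = None
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    for _ in range(n_iters):
        # Outer "metaheuristic": plain random sampling, for illustration only.
        candidate = [random.randint(0, 1) for _ in range(n_bits)]
        if best is None or objective(candidate) < objective(best):
            best = candidate
        if clusters:
            nearest = min(clusters, key=lambda c: hamming(c["center"], candidate))
        if not clusters or hamming(nearest["center"], candidate) > n_bits // 3:
            clusters.append({"center": candidate, "activity": 1})
        else:
            nearest["activity"] += 1
            if nearest["activity"] >= activity_threshold:
                # Promising region detected: run the costly local search here.
                nearest["center"] = local_search(nearest["center"], objective)
                nearest["activity"] = 0
                if objective(nearest["center"]) < objective(best):
                    best = nearest["center"]
    return best
```

The key point is that `local_search` is invoked only when a cluster has accumulated enough samples, rather than on every candidate.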
Multi-objective decision analytics for short-notice bushfire evacuation: An Australian case study
This paper develops a multi-objective optimisation model to compute resource allocation, shelter assignment and routing options for evacuating late evacuees from affected areas to shelters. Three bushfire scenarios are analysed to incorporate constraints of a restricted time window and potential road disruptions. The capacity and number of rescue vehicles and shelters are further constraints, identical across all scenarios. The proposed mathematical model is solved by the ε-constraint approach. The objective functions are optimised simultaneously to maximise the total number of evacuees and of assigned rescue vehicles and shelters. We argue that this model provides a scenario-based decision-making platform that helps minimise resource utilisation and maximise coverage of late evacuees.
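The ε-constraint approach mentioned above keeps one objective and converts the other into a bound that is swept over a range, tracing out the Pareto front. A minimal, self-contained sketch with made-up coverage/vehicle data (not the paper's case-study model) follows; brute-force enumeration stands in for the solver.

```python
from itertools import chain, combinations

# Hypothetical (evacuees_covered, vehicles_required) per candidate assignment.
options = [(40, 2), (25, 1), (60, 4), (15, 1), (30, 2)]

def pareto_front_via_epsilon(options, vehicle_budget):
    """Maximise coverage subject to vehicles <= eps, sweeping eps upward."""
    front = []
    all_subsets = list(chain.from_iterable(
        combinations(options, r) for r in range(len(options) + 1)))
    for eps in range(vehicle_budget + 1):
        best = None
        for subset in all_subsets:
            vehicles = sum(v for _, v in subset)
            if vehicles <= eps:
                covered = sum(c for c, _ in subset)
                if best is None or covered > best[0]:
                    best = (covered, vehicles)
        if best is not None and best not in front:
            front.append(best)
    return front

front = pareto_front_via_epsilon(options, vehicle_budget=6)
```

Each point in `front` is a non-dominated (coverage, vehicles) trade-off, which is the kind of scenario-comparison output the paper's decision platform provides.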
Offshore Wind Farm Electrical Cable Layout Optimization
This is the author accepted manuscript; the final version is available from Taylor & Francis via the DOI in this record. This article explores an automated approach to the efficient placement of substations and the design of an inter-array electrical collection network for an offshore wind farm through the minimization of cost. To accomplish this, the problem is represented as a number of sub-problems that are solved in series using a combination of heuristic algorithms. The overall problem is first addressed by clustering the turbines to generate valid substation positions. From this, a navigational-mesh pathfinding algorithm based on Delaunay triangulation is applied to identify valid cable paths, which are then used in a mixed-integer linear programming problem to solve for a constrained capacitated minimum spanning tree under all realistic constraints. The resulting tree represents the solution to the inter-array cable problem. This method is applied to a planned wind farm to illustrate the suitability of the approach and the resulting layout that is generated.
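The final spanning-tree stage can be illustrated with its unconstrained analogue. The article solves a constrained capacitated minimum spanning tree by mixed-integer linear programming over Delaunay-derived paths; the sketch below only runs Prim's algorithm on straight-line distances between hypothetical turbine coordinates, with node 0 standing in for a substation.

```python
import math

# Hypothetical turbine coordinates; index 0 plays the role of a substation.
turbines = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0), (1.5, 1.5), (0.5, 2.0)]

def prim_mst(points):
    """Prim's algorithm on straight-line distances; returns (from, to, length) edges."""
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        # Greedily attach the cheapest node outside the current tree.
        d, u, v = min((dist(u, v), u, v)
                      for u in in_tree for v in range(len(points))
                      if v not in in_tree)
        edges.append((u, v, round(d, 3)))
        in_tree.add(v)
    return edges

cables = prim_mst(turbines)
```

A capacitated variant would additionally limit how many turbines any cable branch may carry, which is what makes the MILP formulation in the article necessary.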
Models and Matheuristics for Large-Scale Combinatorial Optimization Problems
Combinatorial optimization deals with efficiently determining an optimal (or at least a good) decision among a finite set of alternatives. In business administration, such combinatorial optimization problems arise in, e.g., portfolio selection, project management, data analysis, and logistics. These optimization problems have in common that the set of alternatives becomes very large as the problem size increases, and therefore an exhaustive search of all alternatives may require a prohibitively long computation time. Moreover, due to their combinatorial nature no closed-form solutions to these problems exist.
In practice, a common approach to tackle combinatorial optimization problems is to formulate them as mathematical models and to solve them using a mathematical programming solver (cf., e.g., Bixby et al. 1999, Achterberg et al. 2020). For small-scale problem instances, the mathematical models comprise a manageable number of variables and constraints such that mathematical programming solvers are able to devise optimal solutions within a reasonable computation time. For large-scale problem instances, the number of variables and constraints becomes very large, which extends the computation time required to find an optimal solution considerably. Therefore, despite the continuously improving performance of mathematical programming solvers and computing hardware, the availability of mathematical models that are efficient in terms of the number of variables and constraints used is of crucial importance. Matheuristics are another frequently used approach to address combinatorial optimization problems. Matheuristics decompose the considered optimization problem into subproblems, which are then formulated as mathematical models and solved with the help of a mathematical programming solver. Matheuristics are particularly suitable for situations where it is required to find a good, but not necessarily an optimal, solution within a short computation time, since the speed of the solution process can be controlled by choosing an appropriate size of the subproblems.
This thesis consists of three papers on large-scale combinatorial optimization problems. We consider a portfolio optimization problem in finance, a scheduling problem in project management, and a clustering problem in data analysis. For these problems, we present novel mathematical models that require a relatively small number of variables and constraints, and we develop matheuristics that are based on novel problem-decomposition strategies. In extensive computational experiments, the proposed models and matheuristics performed favorably compared to state-of-the-art models and solution approaches from the literature.
In the first paper, we consider the problem of determining a portfolio for an enhanced index-tracking fund. Enhanced index-tracking funds aim to replicate the returns of a particular financial stock-market index as closely as possible while outperforming that index by a small positive excess return. Additionally, we consider various real-life constraints that may be imposed by investors, stock exchanges, or investment guidelines. Since enhanced index-tracking funds are particularly attractive to investors if the index comprises a large number of stocks and thus is well diversified, it is of particular interest to tackle large-scale problem instances. For this problem, we present two matheuristics, each consisting of a novel construction matheuristic and one of two different improvement matheuristics that are based on the concepts of local branching (cf. Fischetti and Lodi 2003) and iterated greedy heuristics (cf., e.g., Ruiz and Stützle 2007). Moreover, both matheuristics are based on a novel mathematical model for which we provide insights that allow us to remove numerous redundant variables and constraints. We tested both matheuristics in a computational experiment on problem instances that are based on large stock-market indices with up to 9,427 constituents. It turns out that our matheuristics yield better portfolios than benchmark approaches in terms of out-of-sample risk-return characteristics.
In the second paper, we consider the problem of scheduling a set of precedence-related project activities, each of which requires time and scarce resources during its execution. For each activity, alternative execution modes are given, which differ in the duration and the resource requirements of the activity. Sought are a start time and an execution mode for each activity such that all precedence relationships are respected, the required amount of each resource does not exceed its prescribed capacity, and the project makespan is minimized. For this problem, we present two novel mathematical models in which the number of variables remains constant when the range of the activities' durations, and thus also the planning horizon, is increased. Moreover, we enhance the performance of the proposed models by eliminating some symmetric solutions from the search space and by adding redundant sequencing constraints for activities that cannot be processed in parallel. In a computational experiment based on instances with activity durations ranging from one up to 260 time units, the proposed models consistently outperformed all reference models from the literature.
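To make the problem statement concrete, here is a toy instance and a greedy serial schedule; this is an illustration of the problem, not the paper's mathematical models. Each activity picks the mode that finishes earliest subject to precedence and a single renewable resource; all activity data, names, and the capacity are invented.

```python
# (duration, resource_demand) alternatives per activity; preds = predecessors.
activities = {
    "A": {"preds": [],          "modes": [(3, 2), (5, 1)]},
    "B": {"preds": ["A"],       "modes": [(2, 2)]},
    "C": {"preds": ["A"],       "modes": [(4, 1), (2, 3)]},
    "D": {"preds": ["B", "C"],  "modes": [(1, 2)]},
}
CAPACITY = 3  # single renewable resource

def greedy_schedule(acts, capacity):
    """Serial greedy: per activity, pick the mode/start that finishes earliest."""
    start, finish = {}, {}
    usage = {}  # time period -> resource units in use
    for name in acts:  # note: dict order is assumed to be a topological order
        ready = max((finish[p] for p in acts[name]["preds"]), default=0)
        best = None  # (start, duration, demand)
        for dur, dem in acts[name]["modes"]:
            t = ready
            # Slide right until capacity holds over the whole duration.
            while any(usage.get(t + k, 0) + dem > capacity for k in range(dur)):
                t += 1
            if best is None or t + dur < best[0] + best[1]:
                best = (t, dur, dem)
        t, dur, dem = best
        start[name], finish[name] = t, t + dur
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + dem
    return start, finish

start, finish = greedy_schedule(activities, CAPACITY)
makespan = max(finish.values())
```

Such greedy schedules are generally suboptimal; the paper's models instead let a solver choose starts and modes jointly to minimize the makespan exactly.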
In the third paper, we consider the problem of grouping similar objects into clusters, where the similarity between a pair of objects is determined by a distance measure based on some features of the objects. In addition, we consider constraints that impose a maximum capacity for the clusters, since the size of the clusters is often restricted in practical clustering applications. Furthermore, practical clustering applications are often characterized by a very large number of objects to be clustered. For this reason, we present a matheuristic based on novel problem-decomposition strategies that are specifically designed for large-scale problem instances. The proposed matheuristic comprises two phases. In the first phase, we decompose the considered problem into a series of generalized assignment problems, and in the second phase, we decompose the problem into subproblems that comprise groups of clusters only. In a computational experiment, we tested the proposed matheuristic on problem instances with up to 498,378 objects. The proposed matheuristic consistently outperformed the state-of-the-art approach on medium- and large-scale instances, while matching the performance for small-scale instances.
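The two-phase decomposition can be caricatured as follows. In this hypothetical sketch, phase 1 is a greedy stand-in for the generalized assignment subproblems (objects assigned to tentative centres under a capacity), and the centre-update loop is a crude stand-in for the second-phase subproblems over groups of clusters; the matheuristic in the paper instead formulates and solves these subproblems with a mathematical programming solver.

```python
import math
import random

def assign_phase(objects, centres, capacity):
    """Phase 1 stand-in: capacity-respecting assignment, cheapest pairs first."""
    loads = [0] * len(centres)
    assignment = {}
    pairs = sorted(
        (math.dist(o, centres[c]), i, c)
        for i, o in enumerate(objects)
        for c in range(len(centres)))
    for _d, i, c in pairs:
        if i not in assignment and loads[c] < capacity:
            assignment[i] = c
            loads[c] += 1
    return assignment

def cluster(objects, k, capacity, rounds=5):
    centres = random.sample(objects, k)
    assignment = {}
    for _ in range(rounds):
        assignment = assign_phase(objects, centres, capacity)
        # Phase 2 stand-in: move each centre to the mean of its members.
        for c in range(k):
            members = [objects[i] for i, a in assignment.items() if a == c]
            if members:
                centres[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return assignment

random.seed(1)
objects = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
           (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
assignment = cluster(objects, k=2, capacity=3)
```

The decomposition principle carries over: each round only ever solves a subproblem (one assignment, or one group of clusters), never the full problem at once.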
Although we considered three specific optimization problems in this thesis, the proposed models and matheuristics can be adapted to related optimization problems with only minor modifications. Examples of such related problems are the UCITS-constrained index-tracking problem (cf., e.g., Strub and Trautmann 2019), which consists of determining the portfolio of an investment fund that must comply with regulatory restrictions imposed by the European Union; the multi-site resource-constrained project scheduling problem (cf., e.g., Laurent et al. 2017), which comprises the scheduling of a set of project activities that can be executed at alternative sites; and constrained clustering problems with must-link and cannot-link constraints (cf., e.g., González-Almagro et al. 2020).
An investigation of multilevel refinement in routing and location problems
Multilevel refinement is a collaborative hierarchical solution technique. The multilevel technique aims to enhance the solution process of optimisation problems by improving the asymptotic convergence in the quality of solutions produced by its underlying local search heuristics and/or improving the convergence rate of these heuristics. To these aims, the central methodologies of the multilevel technique are filtering solutions from the search space (via coarsening), reducing the amount of problem detail considered at each level of the solution process and providing a mechanism to the underlying local search heuristics for efficiently making large moves around the search space. The neighbourhoods accessible by these moves are typically inaccessible if the local search heuristics are applied to the un-coarsened problems. The methodologies combine to meet the multilevel technique's aims, because, as the multilevel technique iteratively coarsens, extends and refines a given problem, it reduces the possibility of the local search heuristic becoming trapped in local optima of poor quality.
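The coarsen / solve / extend-and-refine loop described above can be sketched for a small routing problem (a TSP). The matching-based coarsening and the 2-opt refinement below are generic stand-ins chosen for brevity, not the specific coarsening methods or heuristics developed in the thesis.

```python
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def coarsen(pts):
    """Merge each point with its nearest unmatched neighbour into a super-node."""
    unmatched = set(range(len(pts)))
    merged, members = [], []
    while unmatched:
        i = unmatched.pop()
        if unmatched:
            j = min(unmatched, key=lambda m: math.dist(pts[i], pts[m]))
            unmatched.remove(j)
            merged.append(((pts[i][0] + pts[j][0]) / 2,
                           (pts[i][1] + pts[j][1]) / 2))
            members.append([i, j])
        else:
            merged.append(pts[i])
            members.append([i])
    return merged, members

def two_opt(tour, pts):
    """Refinement: reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for a in range(len(tour) - 1):
            for b in range(a + 2, len(tour)):
                cand = tour[:a + 1] + tour[a + 1:b + 1][::-1] + tour[b + 1:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

def multilevel_tsp(pts):
    if len(pts) <= 3:
        return list(range(len(pts)))          # coarsest level: any order is optimal
    coarse_pts, members = coarsen(pts)        # coarsen
    coarse_tour = multilevel_tsp(coarse_pts)  # solve the coarser problem
    tour = [v for c in coarse_tour for v in members[c]]  # extend to this level
    return two_opt(tour, pts)                 # refine with local search
```

Because each coarse tour seeds the refinement at the next level, a segment reversal at a coarse level corresponds to a large move at the detailed level, which is exactly the large-move mechanism described above.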
The research presented in this thesis investigates the application of multilevel refinement to classes of location and routing problems and develops numerous multilevel algorithms. Some of these algorithms are collaborative techniques for metaheuristics, and others are collaborative techniques for local search heuristics. Additionally, new methods of coarsening for location and routing problems and enhancements for the multilevel technique are developed. It is demonstrated that the multilevel technique is suited to a wide array of problems. By extending the investigations of the multilevel technique across routing and location problems, the research was able to present generalisations regarding the multilevel technique's suitability for these and similar types of problems.
Finally, results on a number of well-known benchmarking suites for location and routing problems are presented, comparing equivalent single-level and multilevel algorithms. These results demonstrate that the multilevel technique provides significant gains over its single-level counterparts. In all cases, the multilevel algorithm was able to improve the asymptotic convergence in the quality of solutions produced by the standard (single-level) local search heuristics or metaheuristics. The multilevel technique did not improve the convergence rate of the single-level local search heuristics in all cases. However, for large-scale problems the multilevel variants scaled in a manner superior to the single-level techniques. The research also demonstrated that, for sufficiently large problems, the multilevel technique improved the asymptotic convergence in solution quality at a sufficiently fast rate that the multilevel algorithms produced superior results compared to the single-level versions, without refining the solution down to the most detailed level.