On the use of biased-randomized algorithms for solving non-smooth optimization problems
Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service, and some deliveries to customers can be postponed; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, some deadlines frequently cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., constraints that can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost will be nonlinear and even discontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus employing short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
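The abstract describes extending a constructive heuristic with a nonuniform randomization of its candidate list. A minimal sketch of that idea, using a geometrically decaying bias over a cost-sorted candidate list (the function names and the `beta` parameter are illustrative, not taken from the paper):

```python
import math
import random

def geometric_biased_index(n, beta=0.3, rng=random):
    """Pick an index in [0, n) with geometrically decaying probability,
    so better-ranked candidates are favoured but any can be chosen."""
    # Inverse-transform sampling of a geometric distribution,
    # wrapped into the valid index range with a modulo.
    u = 1.0 - rng.random()  # in (0, 1], avoids log(0)
    return int(math.log(u) / math.log(1.0 - beta)) % n

def biased_randomized_construction(candidates, cost, beta=0.3, rng=random):
    """Greedy construction where each step picks a near-best candidate
    from the cost-sorted list instead of always the very best one."""
    remaining = sorted(candidates, key=cost)
    solution = []
    while remaining:
        idx = geometric_biased_index(len(remaining), beta, rng)
        solution.append(remaining.pop(idx))
    return solution
```

With `beta` close to 1 this collapses to the deterministic greedy heuristic; smaller values spread the probability mass over more candidates, which is what allows repeated (and easily parallelized) runs to explore different promising regions.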
An investigation of multilevel refinement in routing and location problems
Multilevel refinement is a collaborative hierarchical solution technique. The multilevel technique aims to enhance the solution process of optimisation problems by improving the asymptotic convergence in the quality of solutions produced by its underlying local search heuristics and/or improving the convergence rate of these heuristics. To achieve these aims, the central methodologies of the multilevel technique are filtering solutions from the search space (via coarsening), reducing the amount of problem detail considered at each level of the solution process and providing a mechanism to the underlying local search heuristics for efficiently making large moves around the search space. The neighbourhoods accessible by these moves are typically inaccessible if the local search heuristics are applied to the un-coarsened problems. The methodologies combine to meet the multilevel technique's aims because, as the multilevel technique iteratively coarsens, extends and refines a given problem, it reduces the possibility of the local search heuristic becoming trapped in local optima of poor quality.
The research presented in this thesis investigates the application of multilevel refinement to classes of location and routing problems and develops numerous multilevel algorithms. Some of these algorithms are collaborative techniques for metaheuristics and others are collaborative techniques for local search heuristics. Additionally, new methods of coarsening for location and routing problems and enhancements for the multilevel technique are developed. It is demonstrated that the multilevel technique is suited to a wide array of problems. By extending the investigations of the multilevel technique across routing and location problems, the research was able to present generalisations regarding the multilevel technique's suitability for these and similar types of problems.
Finally, results on a number of well-known benchmarking suites for location and routing problems are presented, comparing equivalent single-level and multilevel algorithms. These results demonstrate that the multilevel technique provides significant gains over its single-level counterparts. In all cases, the multilevel algorithm was able to improve the asymptotic convergence in the quality of solutions produced by the standard (single-level) local search heuristics or metaheuristics. The multilevel technique did not improve the convergence rate of the single-level local search heuristics in all cases. However, for large-scale problems the multilevel variants scaled in a manner superior to the single-level techniques. The research also demonstrated that for sufficiently large problems, the multilevel technique was able to improve the asymptotic convergence in the quality of solutions at a sufficiently fast rate, such that the multilevel algorithms were able to produce superior results compared to the single-level versions, without refining the solution down to the most detailed level.
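The coarsen–solve–extend–refine cycle described above can be illustrated on a deliberately tiny problem: ordering points on a line so the path visiting them is short. This is only a toy sketch of the scheme's shape; the coarsening and refinement methods the thesis develops for routing and location problems are far more elaborate, and every name below is illustrative.

```python
def path_length(points, order):
    """Total length of the path visiting the points in the given order."""
    return sum(abs(points[order[i + 1]] - points[order[i]])
               for i in range(len(order) - 1))

def coarsen(points):
    """Merge consecutive pairs of points into their midpoints, recording
    which finer-level points each coarse point represents."""
    coarse, children = [], []
    for i in range(0, len(points) - 1, 2):
        coarse.append((points[i] + points[i + 1]) / 2.0)
        children.append([i, i + 1])
    if len(points) % 2:            # an odd leftover point is carried up unchanged
        coarse.append(points[-1])
        children.append([len(points) - 1])
    return coarse, children

def swap_descent(points, order):
    """Local search refiner: keep applying improving position swaps."""
    best = path_length(points, order)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            for j in range(i + 1, len(order)):
                order[i], order[j] = order[j], order[i]
                length = path_length(points, order)
                if length < best:
                    best, improved = length, True
                else:
                    order[i], order[j] = order[j], order[i]  # undo the swap
    return order

def multilevel_path(points, min_size=3):
    """Coarsen until the problem is tiny, solve it directly, then
    extend and locally refine the solution at each finer level."""
    levels, maps = [list(points)], []
    while len(levels[-1]) > min_size:
        coarse, children = coarsen(levels[-1])
        levels.append(coarse)
        maps.append(children)
    # Solve the coarsest level exactly: in 1-D, sorted order is optimal.
    order = sorted(range(len(levels[-1])), key=lambda i: levels[-1][i])
    for level in range(len(maps) - 1, -1, -1):
        order = [f for c in order for f in maps[level][c]]   # extend
        order = swap_descent(levels[level], order)           # refine
    return order
```

The key point the toy preserves is that the local search only ever refines a solution inherited from a coarser level, so large moves (relocating a merged group of points) happen cheaply at the coarse levels where they are a single swap.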
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e., they are not known to be solvable in polynomial time. In practice, this means that it is not possible to guarantee that an exact solution to the problem can be found, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find, "quickly" (in reasonable run-times) and with "high" probability, provably "good" solutions (with low error from the true optimum). In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged, which basically try to combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
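The two forces the report highlights, intensification and diversification, are made explicit in simulated annealing, one classic metaheuristic: a single temperature parameter trades them off, starting diversification-heavy (worse moves often accepted) and cooling into pure intensification. This generic sketch is not drawn from the report; all names and defaults are illustrative.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=10.0, cooling=0.95,
                        steps=500, rng=random):
    """Minimise `cost` starting from `x0`. High temperature t favours
    diversification (worse moves accepted with probability exp(-delta/t));
    as t decays, the search shifts toward intensification."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        fy = cost(y)
        # Always accept improving moves; accept worsening moves
        # with a probability that shrinks as the temperature falls.
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # gradual shift from diversification to intensification
    return best, fbest
```

Setting `cooling` close to 1 keeps the search diversifying for longer, while a fast cooling schedule degenerates toward plain hill-climbing; this is exactly the behavioural dial the report's two-forces view describes.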
Internet of Things in urban waste collection
Nowadays, waste collection management plays an important role in urban areas. This paper addresses this issue and proposes the application of a metaheuristic to the optimization of the weekly scheduling and routing of waste collection activities in an urban area. Unlike several contributions in the literature, fixed periodic routes are not imposed. The results significantly improve the performance of the company involved, both in terms of resources used and cost savings.
A Tabu Search algorithm for the vehicle routing problem with discrete split deliveries and pickups
The Vehicle Routing Problem with Discrete Split Deliveries and Pickups is a variant of the Vehicle Routing Problem with Split Deliveries and Pickups in which customers' demands are discrete in terms of batches (or orders). It arises in logistics distribution practice and consists of designing a least-cost set of routes to serve a given set of customers while respecting constraints on the vehicles' capacities. In this paper, its features are analyzed. A mathematical model and a Tabu Search algorithm with specially designed batch combination and item creation operations are proposed. The batch combination operation is designed to avoid unnecessary travel costs, while the item creation operation effectively speeds up the search and enhances the algorithm's search ability. Computational results are provided and compared with other methods in the literature, indicating that in most cases the proposed algorithm can find better solutions than those previously reported.
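The batch combination and item creation operators are specific to the paper, but the Tabu Search backbone they plug into follows a standard pattern: move to the best non-tabu neighbour each iteration, even if it worsens the solution, forbid recently used moves for a fixed tenure, and allow a tabu move only when it beats the best solution found so far (aspiration). A generic sketch on a toy bit-flip problem, with every name below illustrative rather than taken from the paper:

```python
from collections import deque

def tabu_search(cost, x0, neighbours, tenure=5, iters=100):
    """Generic tabu search: move to the best non-tabu neighbour each
    iteration, even if it worsens the current solution."""
    x = x0
    best, fbest = x0, cost(x0)
    tabu = deque(maxlen=tenure)   # recently used moves are forbidden
    for _ in range(iters):
        candidates = []
        for move, y in neighbours(x):
            fy = cost(y)
            # Aspiration criterion: a tabu move is allowed anyway
            # if it improves on the best solution found so far.
            if move not in tabu or fy < fbest:
                candidates.append((fy, move, y))
        if not candidates:
            break
        fy, move, x = min(candidates, key=lambda c: c[0])
        tabu.append(move)
        if fy < fbest:
            best, fbest = x, fy
    return best, fbest

# Toy demonstration: recover a target bit string under Hamming cost,
# with single-bit flips as moves.
target = (1, 0, 1, 1, 0, 1, 0, 0)

def hamming(x):
    return sum(a != b for a, b in zip(x, target))

def flip_moves(x):
    for i in range(len(x)):
        yield i, x[:i] + (1 - x[i],) + x[i + 1:]

best, fbest = tabu_search(hamming, (0,) * 8, flip_moves, tenure=3, iters=50)
```

Accepting the best neighbour even when it worsens the solution is what lets tabu search climb out of local optima; the tabu list (here just the indices of recently flipped bits) prevents it from immediately undoing that escape.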
- …