
    Meta-heuristic algorithms in car engine design: a literature survey

    Meta-heuristic algorithms are often inspired by natural phenomena, including the evolution of species under Darwinian natural selection, the foraging behavior of ants, the flocking behavior of birds, and annealing in metallurgy. Because of their great potential for solving difficult optimization problems, meta-heuristic algorithms have found their way into automobile engine design. Different optimization problems arise in different areas of car engine management, including calibration, control systems, fault diagnosis, and modeling. In this paper, we review state-of-the-art applications of different meta-heuristic algorithms in engine management systems. The review covers a wide range of research, including applications of meta-heuristic algorithms to engine calibration, engine control system optimization, engine fault diagnosis, and the optimization and modeling of engine components. The meta-heuristic algorithms reviewed here include evolutionary algorithms, evolution strategies, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithms, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune systems.
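    As a hedged illustration of the skeleton shared by several of the methods named above, the sketch below implements a minimal simulated-annealing loop in Python. The objective, neighbor move, and cooling schedule are placeholder choices for illustration, not anything taken from the survey.

```python
import math
import random

def anneal(f, x0, neighbor, t0=1.0, cooling=0.995, steps=10_000):
    """Minimize f starting from x0 via simulated annealing (illustrative sketch)."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = f(y)
        # Always accept improvements; accept worse moves with probability
        # exp(-(fy - fx) / t), mimicking annealing in metallurgy.
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # lower the temperature a little each step
    return best, fbest

# Toy usage: minimize a one-dimensional quadratic with Gaussian perturbation moves.
best, val = anneal(lambda x: (x - 3.0) ** 2,
                   x0=0.0,
                   neighbor=lambda x: x + random.gauss(0.0, 0.5))
```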

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to give the right answer nearly every time. How close can heuristic algorithms get to always giving the right answer without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be attained unless severe, highly unlikely complexity-class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice. Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News.

    The Fast Heuristic Algorithms and Post-Processing Techniques to Design Large and Low-Cost Communication Networks

    It is challenging to design large and low-cost communication networks. In this paper, we formulate this challenge as the prize-collecting Steiner tree problem (PCSTP). The objective is to minimize the combined cost of the transmission routes and the monetary or informational profits lost at disconnected nodes. First, we note that the PCSTP is MAX SNP-hard. We then propose post-processing techniques that improve suboptimal solutions to the PCSTP. Based on these techniques, we propose two fast heuristic algorithms: the first is a quasilinear-time heuristic that is faster and consumes less memory than other algorithms; the second is an improvement of a state-of-the-art polynomial-time heuristic that finds high-quality solutions at a speed inferior only to the first. We demonstrate the competitiveness of our heuristic algorithms by comparing them with state-of-the-art ones on the largest existing benchmark instances (169 800 vertices and 338 551 edges). Moreover, we generate new instances that are even larger (1 000 000 vertices and 10 000 000 edges) to further demonstrate their advantages in large networks. The state-of-the-art algorithms are too slow to find high-quality solutions for instances of this size, whereas our new heuristic algorithms can do so in roughly 6 to 45 seconds on a personal computer. Finally, we apply our post-processing techniques to update the best-known solution for a notoriously difficult benchmark instance, showing that they can improve near-optimal solutions to the PCSTP. In conclusion, we demonstrate the usefulness of our heuristic algorithms and post-processing techniques for designing large and low-cost communication networks.
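    For reference, the PCSTP objective described in the abstract can be written in its standard form (symbol names chosen here for illustration): find a tree T in the graph G minimizing

```latex
\min_{T \text{ tree in } G} \; \sum_{e \in E(T)} c_e \;+\; \sum_{v \in V(G) \setminus V(T)} p_v
```

    where c_e is the cost of edge e (a transmission route) and p_v is the prize (the monetary or informational profit) forfeited when vertex v is left disconnected.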

    Heuristic algorithms for the Longest Filled Common Subsequence Problem

    At CPM 2017, Castelli et al. define and study a new variant of the longest common subsequence problem, termed the Longest Filled Common Subsequence Problem (LFCS). For the LFCS problem, the input consists of two strings A and B and a multiset of characters M. The goal is to insert the characters from M into the string B, obtaining a new string B*, such that the longest common subsequence (LCS) between A and B* is maximized. Castelli et al. show that the problem is NP-hard and provide a 3/5-approximation algorithm for it. In this paper, we study the problem from an experimental point of view. We introduce, implement, and test new heuristic algorithms and compare them with the approximation algorithm of Castelli et al. Moreover, we introduce an integer linear programming (ILP) model for the problem and use the state-of-the-art ILP solver Gurobi to obtain exact solutions for moderate-sized instances. Comment: Accepted and presented as a proceedings paper at SYNASC 201
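    A tiny worked instance may help fix the definitions, together with the classic LCS dynamic program that any LFCS heuristic needs as a scoring subroutine. Both the instance and the Python sketch below are illustrative, not the authors' code.

```python
# Classic O(|a| * |b|) LCS dynamic program: the scoring subroutine an
# LFCS heuristic calls to evaluate a candidate filled string B*.
def lcs_length(a: str, b: str) -> int:
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# Tiny LFCS instance: A = "abcab", B = "ab", M = {'c', 'a'}.
# Inserting 'c' and then 'a' at the end of B gives B* = "abca",
# raising LCS(A, B*) from 2 to 4.
assert lcs_length("abcab", "ab") == 2
assert lcs_length("abcab", "abca") == 4
```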

    Significance Relations for the Benchmarking of Meta-Heuristic Algorithms

    The experimental analysis of meta-heuristic algorithm performance is usually based on comparing average values of a performance metric over a set of algorithm instances. As algorithms become increasingly close in performance, the significance of a metric improvement becomes an additional consideration. From that point on, however, the comparison changes from an absolute to a relative mode. Here the implications of this paradigm shift are investigated. Significance relations are formally established. Based on these, a trade-off between increasing cycle-freeness of the relation and small maximum sets can be identified, allowing a proper significance level to be selected and a resulting ranking of a set of algorithms to be derived. The procedure is exemplified on the CEC'05 benchmark of real-parameter single-objective optimization problems. The significance relation here is based on awarding ranking points for relative performance gains, similar to the Borda count voting method or the Wilcoxon signed-rank test. In the particular CEC'05 case, five ranks of algorithm performance can be clearly identified. Comment: 6 pages, 2 figures, 1 table.
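    To make the point-awarding idea concrete, here is a minimal Python sketch of a Borda-style tally: an algorithm earns a point for every pairwise comparison on a benchmark problem in which its metric is better by at least a margin eps. This is a generic illustration of the idea, not the paper's exact significance relation, and the function name and margin parameter are placeholders.

```python
from itertools import combinations

def borda_points(results: dict[str, list[float]], eps: float = 0.0) -> dict[str, int]:
    """results maps algorithm name -> per-problem metric values (lower is better).
    Each algorithm earns one point per pairwise win by more than eps."""
    points = {name: 0 for name in results}
    for a, b in combinations(results, 2):
        for va, vb in zip(results[a], results[b]):
            if va < vb - eps:        # a beats b by a significant margin
                points[a] += 1
            elif vb < va - eps:      # b beats a by a significant margin
                points[b] += 1
    return points

# Toy usage: alg1 wins problems 1 and 3, alg2 wins problem 2.
scores = borda_points({"alg1": [1.0, 2.0, 0.5],
                       "alg2": [1.1, 1.0, 0.9]}, eps=0.05)
```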

    Single-machine scheduling with stepwise tardiness costs and release times

    We study a scheduling problem that belongs to the yard operations component of railroad planning, namely the hump sequencing problem. The problem is characterized as a single-machine scheduling problem with a stepwise tardiness cost objective. This is a new scheduling criterion that is also relevant in the context of traditional machine scheduling problems. We present complexity results that characterize some cases of the problem as pseudo-polynomially solvable. For the difficult-to-solve cases, we develop mathematical programming formulations and propose heuristic algorithms. We test the formulations and heuristic algorithms on randomly generated single-machine scheduling instances and on real-life datasets for the hump sequencing problem. Our experiments show promising results for both sets of problems.
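    A stepwise tardiness criterion admits a compact general form (notation chosen here for illustration, not taken from the paper): job j has due dates d_{j1} < ... < d_{j m_j}, and its cost jumps by w_{jk} each time its completion time C_j passes the k-th due date,

```latex
f_j(C_j) \;=\; \sum_{k=1}^{m_j} w_{jk}\, \mathbf{1}\!\left[\, C_j > d_{jk} \,\right],
\qquad d_{j1} < d_{j2} < \dots < d_{j m_j}.
```

    In the hump-sequencing reading, each step plausibly corresponds to a railcar missing one more scheduled outbound departure, which is what makes the cost piecewise constant rather than linear in tardiness.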