
    Comparative Analysis of Selection Hyper-Heuristics for Real-World Multi-Objective Optimization Problems

    Because exact algorithms are computationally infeasible for real optimization problems, meta-heuristics are usually used to solve them. However, choosing a meta-heuristic for a particular optimization problem is a non-trivial task that often requires a time-consuming trial-and-error process. Hyper-heuristics, which are heuristics for choosing heuristics, have been proposed as a means to both simplify and improve algorithm selection or configuration for optimization problems. This paper presents a novel cross-domain evaluation for multi-objective optimization: we investigate how four state-of-the-art online hyper-heuristics with different characteristics perform when finding solutions for eighteen real-world multi-objective optimization problems. These hyper-heuristics were designed in previous studies and tackle the algorithm selection problem from different perspectives: election-based, based on reinforcement learning, and based on a mathematical function. All studied hyper-heuristics control a set of five Multi-Objective Evolutionary Algorithms (MOEAs) as Low-Level (meta-)Heuristics (LLHs) while finding solutions for the optimization problem. To our knowledge, this work is the first to deal conjointly with the following issues: (i) selection of meta-heuristics instead of simple operators, (ii) focus on multi-objective optimization problems, and (iii) experiments on real-world problems rather than function benchmarks alone. In our experiments, we computed, for each algorithm execution, the Hypervolume and IGD+ indicators and compared the results using the Kruskal–Wallis statistical test. Furthermore, we ranked all tested algorithms under three different Friedman rankings to summarize the cross-domain analysis. Our results show that hyper-heuristics have better cross-domain performance than single meta-heuristics, which makes them excellent candidates for solving new multi-objective optimization problems.
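    A minimal sketch of how an online selection hyper-heuristic of this kind can operate, assuming an epsilon-greedy credit-assignment scheme with hypervolume improvement as the reward; the `moeas` objects, their `evolve` method, and the `hypervolume` callable are hypothetical stand-ins, and the paper's actual election-based, reinforcement-learning-based, and function-based mechanisms differ in detail:

```python
import random

# Sketch: an online selection hyper-heuristic choosing among MOEAs as
# low-level heuristics. All interfaces below are illustrative assumptions.

def hyper_heuristic(moeas, population, hypervolume, iterations=100, eps=0.2):
    scores = {m: 0.0 for m in moeas}  # cumulative reward per low-level heuristic
    counts = {m: 1 for m in moeas}
    for _ in range(iterations):
        # Exploration vs. exploitation over the low-level (meta-)heuristics.
        if random.random() < eps:
            chosen = random.choice(moeas)
        else:
            chosen = max(moeas, key=lambda m: scores[m] / counts[m])
        hv_before = hypervolume(population)
        population = chosen.evolve(population)
        # Credit assignment: hypervolume improvement attributed to the chosen MOEA.
        scores[chosen] += hypervolume(population) - hv_before
        counts[chosen] += 1
    return population
```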

    A general deep reinforcement learning hyperheuristic framework for solving combinatorial optimization problems

    Many problem-specific heuristic frameworks have been developed to solve combinatorial optimization problems, but these frameworks do not generalize well to other problem domains. Metaheuristic frameworks aim to be more generalizable than traditional heuristics; however, their performance suffers from poor selection of low-level heuristics (operators) during the search process. An example of heuristic selection in a metaheuristic framework is the adaptive layer of the popular Adaptive Large Neighborhood Search (ALNS) framework. Here, we propose a selection hyperheuristic framework that uses Deep Reinforcement Learning (Deep RL) as an alternative to the adaptive layer of ALNS. Unlike the adaptive layer, which only considers heuristics’ past performance for future selection, a Deep RL agent is able to take into account additional information from the search process, e.g., the difference in objective value between iterations, to make better decisions. This is due to the representation power of Deep Learning methods and the decision-making capability of the Deep RL agent, which can learn to adapt to different problems and instance characteristics. In this paper, by integrating the Deep RL agent into the ALNS framework, we introduce Deep Reinforcement Learning Hyperheuristic (DRLH), a general framework for solving a wide variety of combinatorial optimization problems, and show that our framework is better at selecting low-level heuristics at each step of the search process than ALNS and a Uniform Random Selection (URS). Our experiments also show that while ALNS cannot properly handle a large pool of heuristics, DRLH is not negatively affected by increasing the number of heuristics.
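    A minimal sketch of the classic ALNS adaptive layer that DRLH replaces: operators are drawn by roulette wheel in proportion to their weights, and weights decay toward recent rewards. Names such as `operators` and `reward` are illustrative, not the paper's API; the key limitation visible here is that selection conditions only on past performance, whereas a Deep RL policy can condition on richer search-state features:

```python
import random

# Sketch of ALNS-style adaptive operator selection (roulette wheel + decay).

def select_operator(operators, weights):
    total = sum(weights[op] for op in operators)
    r = random.uniform(0, total)
    acc = 0.0
    for op in operators:
        acc += weights[op]
        if r <= acc:
            return op
    return operators[-1]

def update_weight(weights, op, reward, decay=0.8):
    # Past performance only: a Deep RL agent would instead also observe
    # search-state features such as objective deltas between iterations.
    weights[op] = decay * weights[op] + (1 - decay) * reward
```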

    Simplexity: A Hybrid Framework for Managing System Complexity

    Knowledge management, management of mission critical systems, and complexity management rely on a triangular support connection. Knowledge management provides ways of creating, corroborating, collecting, combining, storing, transferring, and sharing the know-why and know-how for reactively and proactively handling the challenges of mission critical systems. Complexity management, operating on “complexity” as an umbrella term for size, mass, diversity, ambiguity, fuzziness, randomness, risk, change, chaos, instability, and disruption, delivers support to both knowledge and systems management: on the one hand, support for dealing with the complexity of managing knowledge, i.e., furnishing criteria for a common and operationalized terminology, for dealing with mediating and moderating concepts, paradoxes, and controversial validity; and, on the other hand, support for systems managers coping with risks, lack of transparency, ambiguity, fuzziness, pooled and reciprocal interdependencies (e.g., for attaining interoperability), instability (e.g., downtime, oscillations, disruption), and even disasters and catastrophes. This support results from the evident intersection of complexity management and systems management, e.g., in the shape of complex adaptive systems, deploying slack, establishing security standards, and utilizing hybrid concepts (e.g., hybrid clouds, hybrid procedures for project management). The complexity-focused manager of mission critical systems should deploy an ambidextrous strategy of both reducing complexity, e.g., in terms of avoiding risks, and of establishing a potential to handle complexity, i.e., investing in high availability, business continuity, slack, optimal coupling, characteristics of high-reliability organizations, and agile systems. This complexity-focused hybrid approach is labeled “simplexity.” It constitutes a blend of complexity reduction and complexity augmentation, relying on the generic logic of hybrids: the strengths of complexity reduction can compensate for the weaknesses of complexity augmentation and vice versa. The deficiencies of prevalent simplexity models signal that this blended approach requires a sophisticated architecture. In order to provide a sound base for coping with the meta-complexity of both complexity and its management, this architecture comprises interconnected components, domains, and dimensions as building blocks of simplexity as well as paradigms, patterns, and parameters for managing simplexity. The need for a balanced paradigm for complexity management, capable of overcoming not only the prevalent bias of complexity reduction but also the weaknesses of prevalent concepts of simplexity, serves as the starting point of the argumentation in this chapter. To provide a practical guideline to meet this demand, an innovative model of simplexity is conceived. This model creates awareness for differentiating components, dimensions, and domains of complexity management as well as for various species of interconnectedness, such as the aligned upsizing and downsizing of capacities, the relevance of diversity management (e.g., in terms of deviations and errors), and the scope of risk management instruments. Strategies (e.g., heuristics, step-by-step procedures) and tools for managing simplexity-guided projects are outlined.

    Fuzzy A* for optimum Path Planning in a Large Maze

    Traditional A* path planning, while guaranteeing the shortest path under an admissible heuristic, often employs conservative heuristic functions that neglect potential obstacles and map inaccuracies. This can lead to inefficient searches and increased memory usage in complex environments. To address this, machine learning methods have been explored to predict cost functions, reducing memory load while maintaining optimal solutions; however, these require extensive data collection and struggle in novel, intricate environments. We propose the Fuzzy A* algorithm, an enhancement of the classic A* method that incorporates a new determinant variable to adjust heuristic cost calculations. This adjustment modulates the scope of scanned vertices during searches, optimizing memory usage and computational efficiency. Unlike traditional A* heuristics that overlook environmental complexities, Fuzzy A* employs a dynamic heuristic function. This function, leveraging fuzzy logic principles, adapts to varying levels of environmental complexity, allowing a more nuanced estimation of the path cost that considers potential obstructions and route feasibility. This adaptability contrasts with standard machine-learning-based solutions, which, while effective in known environments, often falter in unfamiliar or highly complex settings due to their reliance on pre-existing datasets. Our experimental framework involved 100 maze-solving trials in diverse maze configurations, ranging from simple to highly intricate layouts, to evaluate the effectiveness of Fuzzy A*. We employed metrics such as path length, computational time, and memory usage for a comprehensive assessment. The results show that Fuzzy A* consistently found the shortest paths (99.96% success rate) and reduced memory usage by 67% and 59% compared to Breadth-First Search (BFS) and traditional A*, respectively. These findings underscore the effectiveness of our modified heuristic approach in diverse and challenging environments, highlighting its potential for real-world pathfinding applications.
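    A minimal sketch of an A* variant whose heuristic weight adapts to local environment complexity, in the spirit of the described Fuzzy A*; the fuzzy rule below (local obstacle density mapped to a heuristic inflation factor) is an assumption for illustration, not the authors' exact membership function, and inflating the heuristic trades strict optimality for a narrower search frontier:

```python
import heapq

# Sketch: grid A* with a complexity-adaptive heuristic weight.
# `grid` is a 2D list where 1 = wall and 0 = free cell.

def fuzzy_weight(grid, node, radius=2):
    # Assumed fuzzy rule: denser local obstacles -> larger heuristic weight,
    # which narrows the set of scanned vertices. Weight lies in [1, 2].
    r, c = node
    cells = [(i, j)
             for i in range(r - radius, r + radius + 1)
             for j in range(c - radius, c + radius + 1)
             if 0 <= i < len(grid) and 0 <= j < len(grid[0])]
    density = sum(grid[i][j] for i, j in cells) / len(cells)
    return 1.0 + density

def fuzzy_a_star(grid, start, goal):
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan distance
    open_heap = [(h(start), 0, start)]
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        _, cost, node = heapq.heappop(open_heap)
        if node == goal:  # reconstruct the path back to the start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and cost + 1 < g.get(nxt, float("inf"))):
                g[nxt] = cost + 1
                parent[nxt] = node
                f = cost + 1 + fuzzy_weight(grid, nxt) * h(nxt)
                heapq.heappush(open_heap, (f, cost + 1, nxt))
    return None  # no path exists

maze = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(fuzzy_a_star(maze, (0, 0), (2, 0)))
```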

    Automation and Control

    Advances in automation and control today cover many areas of technology where human input is minimized. This book discusses numerous types and applications of automation and control. Chapters address topics such as building information modeling (BIM)–based automated code compliance checking (ACCC), control algorithms useful for military operations and video games, rescue competitions using unmanned aerial-ground robots, and stochastic control systems.

    Deep Joint Entity Disambiguation with Local Neural Attention

    We propose a novel deep learning model for joint document-level entity disambiguation, which leverages learned neural representations. Key components are entity embeddings, a neural attention mechanism over local context windows, and a differentiable joint inference stage for disambiguation. Our approach thereby combines benefits of deep learning with more traditional approaches such as graphical models and probabilistic mention-entity maps. Extensive experiments show that we are able to obtain competitive or state-of-the-art accuracy at moderate computational costs. Comment: Conference on Empirical Methods in Natural Language Processing (EMNLP) 2017 long paper.
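    A minimal sketch of scoring candidate entities with attention over a local context window, loosely following the described setup; the matrix names, dimensions, and random stand-in weights below are illustrative assumptions, not the authors' parameterization:

```python
import numpy as np

# Sketch: attention over local context words to score candidate entities.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def score_candidates(context_vecs, candidate_vecs, A, B):
    # context_vecs: (n_words, d) embeddings of the local context window
    # candidate_vecs: (n_cands, d) pretrained entity embeddings
    # A, B: (d, d) learned bilinear forms (random stand-ins here)
    relevance = candidate_vecs @ A @ context_vecs.T   # (n_cands, n_words)
    # Each context word weighted by its best relevance to any candidate.
    attn = softmax(relevance.max(axis=0))             # (n_words,)
    context_summary = attn @ context_vecs             # (d,)
    return candidate_vecs @ B @ context_summary       # one score per candidate

rng = np.random.default_rng(0)
d, n_words, n_cands = 8, 5, 3
scores = score_candidates(rng.normal(size=(n_words, d)),
                          rng.normal(size=(n_cands, d)),
                          rng.normal(size=(d, d)),
                          rng.normal(size=(d, d)))
print(scores.argmax())  # index of the highest-scoring candidate entity
```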

    High performance genetic algorithm for land use planning

    This study uses genetic algorithms to formulate and develop land use plans. The restrictions to be imposed and the variables to be optimized are selected based on current local and national legal rules and experts’ criteria. Other considerations can easily be incorporated in this approach. Two optimization criteria are applied: land suitability and the shape-regularity of the resulting land use patches. We consider the existing plots as the minimum units for land use allocation. As the number of affected plots can be large, the algorithm execution time is potentially high. The work thus focuses on implementing and analyzing different parallel paradigms: multi-core parallelism, cluster parallelism, and the combination of both. Some tests were performed that show the suitability of genetic algorithms to land use planning problems. Funding: Xunta de Galicia (2010/06, 2010/28, 08SIN011291P).
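    A minimal sketch of the multi-core paradigm mentioned above: fitness evaluation, typically the costly step in a land-use GA over many plots, is farmed out to worker processes. The `evaluate_plan` function is a hypothetical stand-in for the study's two criteria (land suitability and patch-shape regularity):

```python
from multiprocessing import Pool

# Sketch: parallel fitness evaluation for a GA population, one plan per worker.

def evaluate_plan(plan):
    # Placeholder objectives; real scoring would use suitability maps and
    # patch-shape regularity measures over the plan's land-use allocation.
    suitability = sum(plan)
    regularity = -len(set(plan))
    return suitability + regularity

def parallel_fitness(population, workers=4):
    with Pool(workers) as pool:
        return pool.map(evaluate_plan, population)

if __name__ == "__main__":
    population = [[0, 1, 2, 1], [2, 2, 0, 1], [1, 1, 1, 0]]
    print(parallel_fitness(population))
```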

    A Heuristically Generated Metric Approach to the Solution of Chase Problem

    In this work, heuristic, hyper-heuristic, and metaheuristic approaches are reviewed. Distance metrics are also examined for solving “puzzle problems by searching” in AI. A new viewpoint on the world of metrics is offered by introducing the so-called Heuristically Generated Angular Metric Approach (HAMA). The distance metrics are applied to the “cat and mouse” problem, in which the cat and the mouse make smart moves relative to each other and therefore make more appropriate decisions. The design is built around fuzzy logic control to determine routes between the pursuer and the prey. As the puzzle size increases, the effect of HAMA can be distinguished more clearly in terms of computation time towards a solution; the mouse thus gains more time to perceive the incoming danger, increasing its chances of evading it. ‘Caught and escape percentages vs. number of cats’ curves were produced for three distance metrics and the results evaluated comparatively. Given three termination criteria, two different objective functions can be defined consistently: the cat minimizes the distance travelled to catch the mouse, or the mouse maximizes its percentage of escapes from the cat.
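    A minimal sketch contrasting the standard grid distance metrics such a study would compare with an angle-aware one in the spirit of HAMA; the `angular` formula below is an illustrative assumption, since the abstract does not give the actual HAMA definition:

```python
import math

# Sketch: three candidate distance metrics for pursuer/prey move ranking.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def angular(a, b):
    # Assumed angle-aware metric: blend straight-line distance with the
    # bearing between pursuer and prey, so diagonal pursuit moves are
    # ranked differently from axis-aligned ones.
    dx, dy = b[0] - a[0], b[1] - a[1]
    theta = math.atan2(dy, dx)
    return math.hypot(dx, dy) * (1 + 0.1 * abs(math.sin(2 * theta)))

cat, mouse = (0, 0), (3, 4)
for metric in (manhattan, euclidean, angular):
    print(metric.__name__, round(metric(cat, mouse), 3))
```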