
    On-line Search History-assisted Restart Strategy for Covariance Matrix Adaptation Evolution Strategy

    A restart strategy helps the covariance matrix adaptation evolution strategy (CMA-ES) increase the probability of finding the global optimum, since a single-run CMA-ES is easily trapped in local optima. In this paper, the continuous non-revisiting genetic algorithm (cNrGA) is used to help CMA-ES restart multiple times from different sub-regions of the search space, yielding the CMA-ES with on-line search history-assisted restart strategy (HR-CMA-ES). The entire on-line search history of cNrGA is stored in a binary space partitioning (BSP) tree, which is effective for performing local search. A frequently sampled sub-region is reflected by a deep position in the BSP tree: when leaf nodes lie deeper than a threshold, the corresponding sub-region is considered a region of interest (ROI). In HR-CMA-ES, cNrGA is responsible for global exploration and for suggesting ROIs, within or around which CMA-ES performs exploitation; CMA-ES restarts independently in each suggested ROI. The non-revisiting mechanism of cNrGA prevents the same ROI from being suggested twice. Experimental results on the CEC 2013 and 2017 benchmark suites show that HR-CMA-ES performs better than both CMA-ES and cNrGA, and a positive synergy is observed from the memetic cooperation of the two algorithms.
    Comment: 8 pages, 9 figures
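    The depth-based ROI trigger is easy to picture in code. Below is a minimal sketch of the BSP archive, assuming a midpoint split along each leaf's widest dimension and a hypothetical depth threshold of 6; the paper's exact splitting rule and threshold may differ, so treat both as illustrative.

```python
import random

class BSPNode:
    """Axis-aligned region of the search space; leaves hold one sample."""
    def __init__(self, lo, hi, depth=0):
        self.lo, self.hi, self.depth = list(lo), list(hi), depth
        self.point = None          # sample stored in this leaf, if any
        self.dim = None            # split dimension (internal nodes only)
        self.left = self.right = None

    def insert(self, x):
        """Insert sample x, splitting occupied leaves; return leaf depth."""
        if self.dim is not None:   # internal node: descend toward x
            mid = 0.5 * (self.lo[self.dim] + self.hi[self.dim])
            child = self.left if x[self.dim] <= mid else self.right
            return child.insert(x)
        if self.point is None:     # empty leaf: store the sample
            self.point = x
            return self.depth
        # occupied leaf: split along the widest dimension, then push
        # both the old and the new sample down into the children
        self.dim = max(range(len(x)), key=lambda i: self.hi[i] - self.lo[i])
        mid = 0.5 * (self.lo[self.dim] + self.hi[self.dim])
        left_hi, right_lo = list(self.hi), list(self.lo)
        left_hi[self.dim] = right_lo[self.dim] = mid
        self.left = BSPNode(self.lo, left_hi, self.depth + 1)
        self.right = BSPNode(right_lo, self.hi, self.depth + 1)
        old, self.point = self.point, None
        self.insert(old)
        return self.insert(x)

DEPTH_THRESHOLD = 6                # hypothetical ROI trigger depth

root = BSPNode([0.0, 0.0], [1.0, 1.0])
random.seed(1)
for _ in range(200):               # samples clustered near (0.7, 0.3)
    x = [min(max(random.gauss(m, 0.05), 0.0), 1.0) for m in (0.7, 0.3)]
    if root.insert(x) >= DEPTH_THRESHOLD:
        print("ROI candidate near", [round(v, 3) for v in x])
```

    Because the samples cluster around one point, leaves there quickly exceed the threshold, mimicking how the archived history flags a sub-region for a CMA-ES restart.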

    Continuous non-revisiting genetic algorithm

    The non-revisiting genetic algorithm (NrGA) is extended to handle continuous search spaces. The extended model, the continuous NrGA (cNrGA), employs the same tree-structured archive as NrGA to memorize the evaluated solutions, dividing the search space into non-overlapping partitions according to the distribution of the solutions. cNrGA is a bi-modulus evolutionary algorithm consisting of a genetic algorithm module (GAM) and an adaptive mutation module (AMM). When GAM generates an offspring, the offspring is sent to AMM and mutated according to the density of the solutions stored in the memory archive. A point in a region of high solution density is likely to be close to an optimum, so a near search is suggested; conversely, a far search is recommended for a point in a region of low solution density. Benefiting from the space partitioning scheme, a fast solution-density approximation is obtained, and the adaptive mutation scheme naturally avoids generating out-of-bound solutions. The performance of cNrGA is tested on 14 benchmark functions with dimensions ranging from 2 to 40. It is compared with a real-coded GA, differential evolution, the covariance matrix adaptation evolution strategy, and two improved particle swarm optimization algorithms. The simulation results show that cNrGA outperforms the other algorithms on multi-modal function optimization.
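    To make the near/far search idea concrete, here is a minimal sketch in which the mutation step size is tied to the distance to the nearest archived solution; the paper derives density from BSP partition sizes and avoids out-of-bound offspring by construction, so the nearest-neighbour proxy and the clamping below are simplifying assumptions.

```python
import random

def adaptive_mutate(x, archive, bounds, rng=random):
    """Mutate x with a step scaled by local solution density.

    Density is approximated by the distance to the nearest archived
    solution: a dense neighbourhood yields a small step (near search),
    a sparse one a large step (far search).
    """
    if archive:
        step = min(sum((a - b) ** 2 for a, b in zip(s, x)) ** 0.5
                   for s in archive)
    else:
        step = max(hi - lo for lo, hi in bounds)   # no history yet
    child = [xi + rng.gauss(0.0, step) for xi in x]
    # clamp to the box (a simplification of cNrGA's built-in
    # avoidance of out-of-bound solutions)
    return [min(max(c, lo), hi) for c, (lo, hi) in zip(child, bounds)]

rng = random.Random(0)
bounds = [(-5.0, 5.0)] * 2
archive = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(50)]
print(adaptive_mutate([0.0, 0.0], archive, bounds, rng))
```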

    Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    A hybrid coevolution particle swarm optimization algorithm, called MCPSO-PSH, is proposed; it combines a dynamic multispecies strategy based on K-means clustering with a non-revisit strategy based on a binary space partitioning (BSP) fitness tree. The previous search history, memorized in the BSP fitness tree, effectively suppresses revisits by individuals. The whole population is partitioned into several subspecies, and cooperative coevolution is realized through an information communication mechanism between subspecies, which enhances the global search ability of particles and avoids premature convergence to local optima. To demonstrate the power of the method, the proposed algorithm is compared with state-of-the-art algorithms on 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to the other swarm-based and evolutionary algorithms in terms of solution accuracy and statistical tests.
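    The dynamic multispecies step can be sketched as a plain K-means pass over particle positions; everything else in MCPSO-PSH (the inter-species communication and the BSP-based revisit filter) is omitted, so this shows only the clustering ingredient.

```python
import random

def kmeans_species(positions, k, iters=10, rng=random):
    """Partition particles into k subspecies by K-means on position."""
    centers = [list(c) for c in rng.sample(positions, k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in positions:
            # assign each particle to its nearest species centre
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            groups[j].append(p)
        for i, g in enumerate(groups):   # recompute species centres
            if g:
                centers[i] = [sum(col) / len(g) for col in zip(*g)]
    return groups

rng = random.Random(3)
swarm = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(30)]
for i, species in enumerate(kmeans_species(swarm, 3, rng=rng)):
    print(f"species {i}: {len(species)} particles")
```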

    The role of Walsh structure and ordinal linkage in the optimisation of pseudo-Boolean functions under monotonicity invariance.

    Optimisation heuristics rely on implicit or explicit assumptions about the structure of the black-box fitness function they optimise. A review of the literature shows that an understanding of structure and linkage is helpful to the design and analysis of heuristics. The aim of this thesis is to investigate the role that problem structure plays in heuristic optimisation. Many heuristics use ordinal operators, that is, operators that are invariant under monotonic transformations of the fitness function. In this thesis we develop a classification of pseudo-Boolean functions based on rank-invariance. This approach classes functions which are monotonic transformations of one another as equivalent, and so partitions an infinite set of functions into a finite set of classes. Reasoning about heuristics composed of ordinal operators is, by construction, invariant over these classes. We perform a complete analysis of 2-bit and 3-bit pseudo-Boolean functions. We use Walsh analysis to define concepts of necessary, unnecessary, and conditionally necessary interactions, and of Walsh families; this makes precise some existing ideas in the literature, such as benign interactions. Many algorithms are invariant under the classes we define, which allows us to examine the difficulty of pseudo-Boolean functions in terms of function classes. We analyse a range of ordinal selection operators for an EDA. Using a concept of directed ordinal linkage, we define precedence networks and precedence profiles to represent key algorithmic steps and their interdependency in terms of problem structure; the precedence profiles provide a measure of problem difficulty and of the algorithmic steps needed for optimisation. This work develops insight into the relationship between function structure and problem difficulty, which may be used to direct the development of novel algorithms. Concepts of structure are also used to construct easy and hard problems for a hill-climber.
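    The rank-invariance classification can be made concrete for the 2-bit case. The sketch below assumes that two functions are equivalent exactly when their fitness values induce the same rank pattern over the four points 00, 01, 10, 11; the thesis may apply further reductions on top of this monotonicity quotient, so the printed count illustrates the construction rather than reproducing the thesis's class count.

```python
from itertools import product

def rank_signature(values):
    """Rank pattern of the fitness values over (00, 01, 10, 11).

    Monotonic transformations of a function preserve this pattern,
    so it indexes the function's rank-invariance class.
    """
    levels = sorted(set(values))
    return tuple(levels.index(v) for v in values)

# Values from {0, 1, 2, 3} suffice: a rank pattern over 4 points
# uses at most 4 distinct levels, so every pattern is reachable.
classes = {rank_signature(v) for v in product(range(4), repeat=4)}
print(len(classes), "rank-invariance classes of 2-bit functions")
```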

    Coarse-Grained Barrier Trees of Fitness Landscapes

    Recent literature suggests that local optima in fitness landscapes are clustered, which offers an explanation of why perturbation-based metaheuristics often fail to find the global optimum: they become trapped in a sub-optimal cluster. We introduce a method to extract and visualize the global organization of these clusters in the form of a barrier tree. Barrier trees have been used to visualize the barriers between local-optima basins in fitness landscapes; our method computes a more coarsely grained tree that reveals the barriers between clusters of local optima. The core element is a new variant of the flooding algorithm, applicable to local optima networks, a compressed representation of fitness landscapes. To identify the clusters, we apply a community detection algorithm. A sample of 200 NK fitness landscapes suggests that the depth of the coarse-grained barrier tree is related to search difficulty.
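    The flooding idea itself fits in a few lines. The sketch below merges basins of a toy local optima network in order of increasing barrier height using a union-find structure; the paper's variant floods the clusters produced by community detection rather than raw optima, and the toy edges here are invented for the example.

```python
class DSU:
    """Union-find over basins, used by the flooding-style merge."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def flood(n, edges):
    """Process saddles by increasing barrier; record each basin merge.

    edges: (barrier_height, u, v) tuples between local optima.
    The sequence of merges gives the internal-node structure of a
    barrier tree over the n basins.
    """
    dsu, merges = DSU(n), []
    for h, u, v in sorted(edges):
        if dsu.union(u, v):
            merges.append((h, u, v))
    return merges

# toy network: 4 local optima joined by saddles of varying height
edges = [(0.3, 0, 1), (0.9, 1, 2), (0.5, 2, 3), (1.2, 0, 3)]
for h, u, v in flood(4, edges):
    print(f"basins {u} and {v} merge at barrier {h}")
```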

    A Selectivity based approach to Continuous Pattern Detection in Streaming Graphs

    Cyber security is one of the most significant technical challenges of our time. Detecting adversarial activity and preventing the theft of intellectual property and customer data are high priorities for corporations and government agencies around the world. Cyber defenders need to analyze massive-scale, high-resolution network flows to identify, categorize, and mitigate attacks involving networks spanning institutional and national boundaries. Many cyber attacks can be described as subgraph patterns, prominent examples being insider infiltration (path queries), denial of service (parallel paths) and malicious spread (tree queries). This motivates us to explore subgraph matching on streaming graphs in a continuous setting. The novelty of our work lies in using subgraph distributional statistics collected from the streaming graph to determine the query processing strategy. We introduce a "Lazy Search" algorithm in which the search strategy is decided on a vertex-to-vertex basis, depending on the likelihood of a match in the vertex neighborhood. We also propose a metric named "Relative Selectivity" that is used to select between different query processing strategies. Our experiments on real online news and network traffic streams and a synthetic social network benchmark demonstrate 10-100x speedups over selectivity-agnostic approaches.
    Comment: In the 18th International Conference on Extending Database Technology (EDBT), 2015.
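    As a rough illustration of selectivity-driven strategy choice, the sketch below keeps streaming counts of edge labels and routes a query to an eager or lazy strategy depending on how rare its rarest label is; both the ratio and the threshold are invented for the example and are not the paper's "Relative Selectivity" formula.

```python
from collections import Counter

class SelectivityRouter:
    """Toy per-query strategy choice in the spirit of Lazy Search."""

    def __init__(self, threshold=0.01):
        self.counts = Counter()     # streaming edge-label frequencies
        self.total = 0
        self.threshold = threshold  # hypothetical rarity cut-off

    def observe(self, label):
        """Update label statistics as stream edges arrive."""
        self.counts[label] += 1
        self.total += 1

    def strategy(self, query_labels):
        """Rare query labels promise few matches: expand eagerly.

        Common labels would explode the search, so defer (lazy).
        """
        rarest = min(self.counts.get(l, 0) for l in query_labels)
        ratio = rarest / max(self.total, 1)
        return "eager" if ratio < self.threshold else "lazy"

router = SelectivityRouter()
for label in ["login"] * 950 + ["sudo"] * 45 + ["exfil"] * 5:
    router.observe(label)
print(router.strategy(["login", "sudo"]))   # common pattern -> lazy
print(router.strategy(["sudo", "exfil"]))   # rare pattern   -> eager
```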

    An overview of recent distributed algorithms for learning fuzzy models in Big Data classification

    Nowadays, huge amounts of data are generated, often in very short time intervals and in various formats, by a number of heterogeneous sources such as social networks and media, mobile devices, internet transactions, and networked devices and sensors. These data, identified as Big Data in the literature, are characterized by the popular "Vs" features: Value, Veracity, Variety, Velocity and Volume. In particular, Value focuses on the useful knowledge that may be mined from data. Thus, in recent years, a number of data mining and machine learning algorithms have been proposed to extract knowledge from Big Data. These algorithms have generally been implemented using ad-hoc programming paradigms, such as MapReduce, on specific distributed computing frameworks, such as Apache Hadoop and Apache Spark. In the context of Big Data, fuzzy models currently play a significant role, thanks to their capability of handling vague and imprecise data and their innate interpretability. In this work, we give an overview of the most recent distributed learning algorithms for generating fuzzy classification models for Big Data. In particular, we first present the design and implementation details of these learning algorithms, then compare them in terms of accuracy and interpretability, and finally discuss their scalability.

    Parameter control by the entire search history: Case study of history-driven evolutionary algorithm

    The history-driven evolutionary algorithm (HdEA) is an EA that uses the entire search history to improve search performance. By building an approximate fitness landscape and estimating gradients from the entire history, HdEA performs a parameter-less adaptive mutation. To reduce the number of parameters and thereby make HdEA more robust, this paper proposes a novel adaptive parameter control system. The system is an add-on component to HdEA that uses the whole search history to control the parameters automatically. Its performance is examined on 34 benchmark functions. The results show that the parameter control system gives similar or better performance on 24 of the functions, with the added benefit that two parameters of HdEA are eliminated: they are set and varied automatically by the system. © 2010 IEEE.
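    The history-based gradient estimate can be sketched as follows, under the assumptions that the problem is a minimisation and that a weighted pull toward fitter archived neighbours is a fair stand-in for the landscape-approximation step; HdEA's actual operator is more involved.

```python
import random

def history_descent_direction(x, history, k=5):
    """Estimate a descent direction at x from archived evaluations.

    Takes the k nearest (point, fitness) pairs from the history;
    neighbours fitter than their local mean pull x toward them,
    worse ones push it away.
    """
    near = sorted(history,
                  key=lambda pf: sum((a - b) ** 2
                                     for a, b in zip(pf[0], x)))[:k]
    mean_f = sum(f for _, f in near) / len(near)
    direction = [0.0] * len(x)
    for p, f in near:
        w = mean_f - f             # positive for fitter-than-average points
        for i in range(len(x)):
            direction[i] += w * (p[i] - x[i])
    return direction

def sphere(p):                     # toy fitness: minimum at the origin
    return sum(v * v for v in p)

rng = random.Random(7)
history = []
for _ in range(40):                # pretend these were past evaluations
    p = [rng.uniform(-2.0, 2.0) for _ in range(2)]
    history.append((p, sphere(p)))
print(history_descent_direction([1.5, -1.0], history))
```

    On this toy setup the estimated direction tends to point from x toward the origin (modulo sampling noise), which is the behaviour a history-driven mutation would exploit.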