
    Micro-differential evolution: diversity enhancement and comparative study.

    Evolutionary algorithms (EAs), such as the differential evolution (DE) algorithm, suffer from long computational times, two major reasons being their large population sizes and the cost of fitness evaluation. Micro-EAs employ a very small population, which can converge to a reasonable solution more quickly, but they are vulnerable to premature convergence and carry a high risk of stagnation. One approach to overcoming stagnation is to increase the diversity of the population. This thesis proposes a micro-differential evolution algorithm with a vectorized random mutation factor (MDEVM), which exploits the benefits of a small population while preventing stagnation by diversifying that population. The thesis makes the following contributions to micro-DE (MDE) algorithms: Monte-Carlo-based simulations of the proposed vectorized random mutation factor (VRMF) method; mutation schemes for the DE algorithm with population sizes smaller than four; and comprehensive comparative simulations and analysis of MDE performance across mutation schemes, population sizes, problem types (uni-modal, multi-modal, and composite), problem dimensionalities, and mutation factor ranges, together with population diversity analysis under stagnation and trapping in local optima. The comparative studies are conducted on the 28 benchmark functions of the IEEE Congress on Evolutionary Computation 2013 (CEC-2013), and comprehensive analyses are provided. Experimental results demonstrate the high performance and convergence speed of the proposed MDEVM algorithm over various types of functions.
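    The central mechanism, drawing a separate mutation factor for every dimension instead of one scalar per mutant, can be sketched compactly. Below is a minimal C++ sketch of DE/rand/1 mutation with a vectorized random mutation factor; the F range of (0.1, 1.5) and the choice of mutation scheme are illustrative assumptions, not necessarily the thesis' exact settings.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// DE/rand/1 mutation with a vectorized random mutation factor (VRMF):
// an independent F is drawn for every dimension rather than one scalar
// per mutant, which diversifies the search directions a micro-sized
// population can generate. The F range below is an assumption.
std::vector<double> mutate_vrmf(const std::vector<double>& r1,
                                const std::vector<double>& r2,
                                const std::vector<double>& r3,
                                std::mt19937& rng) {
    std::uniform_real_distribution<double> f(0.1, 1.5);
    std::vector<double> mutant(r1.size());
    for (std::size_t d = 0; d < r1.size(); ++d)
        mutant[d] = r1[d] + f(rng) * (r2[d] - r3[d]);  // fresh F per dimension
    return mutant;
}

int main() {
    std::mt19937 rng(42);
    std::vector<double> a{0.0, 1.0}, b{2.0, 3.0}, c{1.0, 0.5};
    for (double x : mutate_vrmf(a, b, c, rng)) std::printf("%f\n", x);
}
```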

    JPEG steganography with particle swarm optimization accelerated by AVX

    Digital steganography aims to hide secret messages in digital data transmitted over insecure channels. The JPEG format is prevalent in digital communication, and images are often used as cover objects in digital steganography. Optimization methods can improve the properties of images with an embedded secret, but they add computational complexity to their processing. In this work, the AVX instructions available in modern CPUs are used to accelerate the data-parallel operations that are part of image steganography with advanced optimizations.
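    As an illustration of the kind of data-parallel step AVX can accelerate in a PSO loop, the sketch below vectorizes the classic particle velocity update, processing four doubles per instruction. The abstract does not state which operations the paper vectorizes, so the choice of the velocity update, the parameter names, and the assumption that n is a multiple of 4 are all illustrative; compile with -mavx or equivalent.

```cpp
#include <immintrin.h>  // AVX intrinsics

// Sketch: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
// four doubles per AVX instruction. r1 and r2 hold pre-drawn
// uniform random numbers; n is assumed to be a multiple of 4.
void pso_velocity_update_avx(double* v, const double* x,
                             const double* pbest, const double* gbest,
                             const double* r1, const double* r2,
                             double w, double c1, double c2, int n) {
    __m256d vw  = _mm256_set1_pd(w);
    __m256d vc1 = _mm256_set1_pd(c1);
    __m256d vc2 = _mm256_set1_pd(c2);
    for (int i = 0; i < n; i += 4) {
        __m256d xi  = _mm256_loadu_pd(x + i);
        __m256d cog = _mm256_mul_pd(vc1, _mm256_mul_pd(   // cognitive term
            _mm256_loadu_pd(r1 + i),
            _mm256_sub_pd(_mm256_loadu_pd(pbest + i), xi)));
        __m256d soc = _mm256_mul_pd(vc2, _mm256_mul_pd(   // social term
            _mm256_loadu_pd(r2 + i),
            _mm256_sub_pd(_mm256_loadu_pd(gbest + i), xi)));
        __m256d vi = _mm256_add_pd(
            _mm256_mul_pd(vw, _mm256_loadu_pd(v + i)),    // inertia term
            _mm256_add_pd(cog, soc));
        _mm256_storeu_pd(v + i, vi);
    }
}
```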

    Optimal generation scheduling in hydro-power plants with the Coral Reefs Optimization algorithm

    Hydro-power plants produce electrical energy in a sustainable way. A common way of producing this energy is through generation scheduling, a task usually formulated as a Unit Commitment problem. The challenge is to define the amount of energy that each turbine-generator must deliver so that the plant fulfills the requested electrical dispatch commitment while respecting its operational restrictions. Optimal generation scheduling of the turbine-generators in a hydro-power plant can deliver more energy than non-optimized schedules while consuming significantly less water. This work presents an efficient mathematical model for generation scheduling in a real hydro-power plant in Brazil. An optimization method based on different versions of the Coral Reefs Optimization algorithm with Substrate Layers (CRO) is proposed as an effective way to tackle this problem. This approach uses different search operators in a single population to refine the search for an optimal schedule. We show that the solution obtained with the CRO using Gaussian search in exploration produces competitive solutions in terms of energy production. The results show a projected saving of 13.98 billion liters of water per month compared with the non-optimized scheduling.
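    Below is a minimal sketch of one substrate-style search operator of the kind the CRO with substrate layers combines, assuming Gaussian exploration over a vector of per-unit power set-points; the sigma value and the clamping bounds are illustrative placeholders, not the paper's actual settings.

```cpp
#include <random>
#include <vector>

// Gaussian exploration operator: perturb a parent scheduling vector
// (e.g. one set-point per turbine-generator) with zero-mean noise,
// clamping each gene back into its operational limits.
std::vector<double> gaussian_explore(const std::vector<double>& parent,
                                     double sigma, double lo, double hi,
                                     std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, sigma);
    std::vector<double> larva(parent);
    for (double& g : larva) {
        g += noise(rng);
        if (g < lo) g = lo;  // respect lower operational limit
        if (g > hi) g = hi;  // respect upper operational limit
    }
    return larva;
}
```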

    S-Rocket: Selective Random Convolution Kernels for Time Series Classification

    The random convolution kernel transform (Rocket) is a fast, efficient, and novel approach to time series feature extraction that uses a large number of independent, randomly initialized 1-D convolution kernels of different configurations. The output of the convolution of each kernel with a time series is summarized by the proportion of positive values (PPV). The concatenation of the PPVs from all kernels forms the input feature vector to a Ridge regression classifier. Unlike typical deep learning models, the kernels are not trained and there are no weighted or trainable connections between the kernels or between the concatenated features and the classifier. Since the kernels are generated randomly, a portion of them may not contribute positively to the performance of the model. Hence, selecting the most important kernels and pruning the redundant and less important ones is necessary to reduce computational complexity and accelerate Rocket's inference for applications on edge devices. Selecting these kernels is a combinatorial optimization problem. In this paper, we propose a scheme for selecting these kernels while maintaining classification performance. First, the original model is pre-trained at full capacity. Then, a population of binary candidate state vectors is initialized, where each element of a vector represents the active/inactive status of a kernel. A population-based optimization algorithm evolves the population to find the best state vector, which minimizes the number of active kernels while maximizing the accuracy of the classifier. The fitness function is a linear combination of the total number of active kernels and the classification accuracy of the pre-trained classifier with the active kernels. Finally, the kernels selected in the best state vector are used to train the Ridge regression classifier.
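    The selection objective can be made concrete with a small sketch. The C++ fragment below scores one binary state vector as a weighted trade-off between accuracy with the active kernels and the fraction of kernels kept; the weights, the sign convention, and the accuracy callback (standing in for re-evaluating the Ridge classifier on the selected PPV features) are illustrative assumptions, not the paper's exact formulation.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Fitness of one candidate state vector: reward accuracy with the
// active kernels, penalize the fraction of kernels kept. alpha and
// beta are placeholder weights; 'accuracy' is a hypothetical callback
// that evaluates the pre-trained classifier under the given mask.
double state_fitness(const std::vector<bool>& state,
                     const std::function<double(const std::vector<bool>&)>& accuracy,
                     double alpha = 1.0, double beta = 0.5) {
    std::size_t active = 0;
    for (bool s : state) active += s;
    double kept = static_cast<double>(active) / state.size();
    return alpha * accuracy(state) - beta * kept;  // higher is better
}
```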

    Graphics Processing Unit–Enhanced Genetic Algorithms for Solving the Temporal Dynamics of Gene Regulatory Networks

    Understanding the regulation of gene expression is one of the key problems in current biology. A promising method for this purpose is determining the temporal dynamics between known initial and ending network states using simple acting rules. The huge number of rule combinations and the inherently nonlinear nature of the problem make genetic algorithms an excellent candidate for finding optimal solutions. Since this is a computationally intensive problem that needs long runtimes on conventional architectures for realistic network sizes, it is essential to accelerate this task. In this article, we study how to develop efficient parallel implementations of this method for the fine-grained parallel architecture of graphics processing units (GPUs) using the compute unified device architecture (CUDA) platform. An exhaustive and methodical study of various parallel genetic algorithm schemes (master-slave, island, cellular, and hybrid models) and individual selection methods (roulette, elitist) is carried out for this problem. Several procedures that optimize the use of the GPU's resources are presented. We conclude that the implementation that produces the best results, from both the performance and the genetic algorithm fitness perspectives, simulates a few thousand individuals grouped into a few islands using elitist selection. This model combines two powerful factors for discovering the best solutions: finding good individuals in a small number of generations, and introducing genetic diversity via relatively frequent and numerous migrations. As a result, we even found the optimal solution for the analyzed gene regulatory network (GRN). In addition, a comparative study of the performance obtained by the different parallel implementations on the GPU versus a sequential application on a CPU is carried out. In our tests, a multifold speedup was obtained for our optimized parallel implementation of the method on a medium-class GPU over an equivalent sequential single-core implementation running on a recent Intel i7 CPU. This work can provide useful guidance to researchers in biology, medicine, or bioinformatics on how to take advantage of massively parallel devices and GPUs to apply nature-inspired metaheuristic algorithms to real-world applications, such as the method for solving the temporal dynamics of GRNs.
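    To make the winning configuration concrete, the sketch below shows the migration step of an island model in plain host-side C++, assuming a ring topology in which each island's elite replaces the weakest individual of its neighbour; the genome encoding, fitness values, and all parameters are placeholders, and in the article's CUDA setting each island would typically map to a thread block.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Placeholder individual: a rule-combination genome plus its fitness.
struct Individual { std::vector<int> genome; double fitness; };
using Island = std::vector<Individual>;

// Ring migration: every island exports a copy of its best individual,
// which replaces the worst individual of the next island in the ring.
// Elites are collected first so migration uses pre-migration bests.
void migrate_ring(std::vector<Island>& islands) {
    auto by_fitness = [](const Individual& a, const Individual& b) {
        return a.fitness < b.fitness;
    };
    std::vector<Individual> emigrants;
    for (const Island& isl : islands)
        emigrants.push_back(*std::max_element(isl.begin(), isl.end(), by_fitness));
    for (std::size_t i = 0; i < islands.size(); ++i) {
        Island& dst = islands[(i + 1) % islands.size()];
        auto worst = std::min_element(dst.begin(), dst.end(), by_fitness);
        *worst = emigrants[i];  // replace the weakest resident
    }
}
```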