
    Fine-grained parallelization of fitness functions in bioinformatics optimization problems: gene selection for cancer classification and biclustering of gene expression data

    BACKGROUND: Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, like other non-linear techniques, apply a fitness function to each possible solution of a size-limited population, and that step involves higher latencies than the other parts of the algorithm, which is why the execution time of the application depends mainly on the execution time of the fitness function. In addition, fitness functions are usually formulated with floating-point arithmetic. A careful parallelization of these functions using reconfigurable hardware technology will therefore accelerate the computation, especially if they are applied in parallel to several solutions of the population. RESULTS: A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, achieved higher speedups and lower-power computation than usual microprocessors.
    CONCLUSIONS: The results show better performance using reconfigurable hardware technology instead of usual microprocessors, in terms of computing time and power consumption, not only because of the parallelization of the arithmetic operations but also thanks to the concurrent fitness evaluation of several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.
    Funding: Ministerio de Economía y Competitividad and FEDER funds, contract TIN2012-30685 (R&D&i); Gobierno de Extremadura, grant GR15011 for TIC015 groups; CONICYT/FONDECYT/REGULAR/1160455, grant for Ricardo Soto Guzmán; CONICYT/FONDECYT/REGULAR/1140897, grant for Broderick Crawford.
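    The abstract does not reproduce the fitness formulations themselves, so the following is a minimal software sketch, assuming the mean squared residue of Cheng and Church as a representative floating-point biclustering fitness, with std::async standing in for the concurrent per-individual evaluation that the paper performs on reconfigurable hardware. The function names, data layout, and toy data are hypothetical.

```cpp
// A minimal sketch, not the paper's implementation: the mean squared residue
// (Cheng & Church) stands in for the floating-point biclustering fitness, and
// std::async stands in for the concurrent evaluation of several individuals
// that the paper maps onto reconfigurable hardware.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <future>
#include <vector>

struct Bicluster {                       // one individual: a subset of rows and columns
    std::vector<int> rows, cols;
};

// Mean squared residue of a bicluster over the expression matrix `data`.
double msr(const std::vector<std::vector<double>>& data, const Bicluster& b) {
    const double nr = b.rows.size(), nc = b.cols.size();
    std::vector<double> rowMean(b.rows.size(), 0.0), colMean(b.cols.size(), 0.0);
    double mean = 0.0;
    for (std::size_t i = 0; i < b.rows.size(); ++i)
        for (std::size_t j = 0; j < b.cols.size(); ++j) {
            const double v = data[b.rows[i]][b.cols[j]];
            rowMean[i] += v / nc;
            colMean[j] += v / nr;
            mean += v / (nr * nc);
        }
    double sum = 0.0;
    for (std::size_t i = 0; i < b.rows.size(); ++i)
        for (std::size_t j = 0; j < b.cols.size(); ++j) {
            const double r = data[b.rows[i]][b.cols[j]] - rowMean[i] - colMean[j] + mean;
            sum += r * r;
        }
    return sum / (nr * nc);
}

int main() {
    // Toy 4x4 expression matrix and two candidate biclusters (hypothetical data).
    const std::vector<std::vector<double>> data = {
        {1.0, 2.0, 3.0, 4.0}, {2.0, 3.0, 4.0, 5.0},
        {5.0, 1.0, 2.0, 8.0}, {3.0, 4.0, 5.0, 6.0}};
    const Bicluster b1{{0, 1, 3}, {0, 1, 2}}, b2{{0, 2}, {1, 3}};
    const std::vector<Bicluster> population = {b1, b2};

    // Evaluate the fitness of every individual concurrently.
    std::vector<std::future<double>> jobs;
    for (const Bicluster& b : population)
        jobs.push_back(std::async(std::launch::async, msr, std::cref(data), std::cref(b)));
    for (auto& j : jobs) std::printf("MSR fitness: %f\n", j.get());
    return 0;
}
```
    On the hardware side described in the paper, the same idea maps to replicated floating-point pipelines, one per individual; the sketch only illustrates the data flow of concurrent fitness evaluation.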

    Acceleration of particle swarm optimization with AVX instructions

    Parallel implementations of algorithms are usually compared with single-core CPU performance. The advantage of multicore vector processors narrows the performance gap between GPU and CPU computation, as shown in much recent research. With the AVX-512 instruction set, there will be another performance boost for CPU computations. Parallel code running on CPUs is also easier to write and more accessible than GPU code. This article compares the performance of parallel implementations of the particle swarm optimization algorithm. The code was written in C++, and we used various techniques to obtain parallel execution through Advanced Vector Extensions. We present the performance on various benchmark functions and different problem configurations. The article describes and compares the performance boost gained from parallel execution on the CPU, along with the advantages and disadvantages of the parallelization techniques.
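    As a rough illustration of the kind of vectorization the article describes, the sketch below updates one particle's velocity and position eight dimensions at a time with AVX intrinsics; the structure-of-arrays data layout, function name, and parameters are assumptions, not the article's code.

```cpp
// Hedged sketch of an AVX-vectorized PSO velocity/position update, assuming
// particle data stored as contiguous float arrays (structure-of-arrays).
#include <immintrin.h>
#include <cstddef>

// Update one particle's velocity and position, 8 dimensions per AVX register.
// x, v, pbest: per-particle arrays of length dim (dim a multiple of 8 here);
// gbest: swarm-best position; r1, r2: pre-generated uniform random numbers.
void pso_update_avx(float* x, float* v,
                    const float* pbest, const float* gbest,
                    const float* r1, const float* r2,
                    std::size_t dim, float w, float c1, float c2) {
    const __m256 vw  = _mm256_set1_ps(w);
    const __m256 vc1 = _mm256_set1_ps(c1);
    const __m256 vc2 = _mm256_set1_ps(c2);

    for (std::size_t d = 0; d < dim; d += 8) {
        __m256 xd = _mm256_loadu_ps(x + d);
        __m256 vd = _mm256_loadu_ps(v + d);

        // cognitive term: c1 * r1 * (pbest - x)
        __m256 cog = _mm256_mul_ps(_mm256_mul_ps(vc1, _mm256_loadu_ps(r1 + d)),
                                   _mm256_sub_ps(_mm256_loadu_ps(pbest + d), xd));
        // social term: c2 * r2 * (gbest - x)
        __m256 soc = _mm256_mul_ps(_mm256_mul_ps(vc2, _mm256_loadu_ps(r2 + d)),
                                   _mm256_sub_ps(_mm256_loadu_ps(gbest + d), xd));

        vd = _mm256_add_ps(_mm256_mul_ps(vw, vd), _mm256_add_ps(cog, soc));
        xd = _mm256_add_ps(xd, vd);

        _mm256_storeu_ps(v + d, vd);
        _mm256_storeu_ps(x + d, xd);
    }
}
```
    The same update generalizes to AVX-512 by switching to 512-bit registers and the corresponding _mm512_* intrinsics, processing sixteen dimensions per iteration.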

    Nature-inspired algorithms for solving some hard numerical problems

    Optimisation is a branch of mathematics developed to find the optimal solution, among all possible ones, for a given problem. Optimisation techniques are currently applied to engineering, computing, and industrial problems, making optimisation a very active research area and leading to the publication of a large number of methods for solving specific problems to optimality. This dissertation focuses on the adaptation of two nature-inspired algorithms that, based on optimisation techniques, are able to compute approximations to zeros of polynomials and to roots of non-linear equations and systems of non-linear equations. Although many iterative methods for finding all the roots of a given function already exist, they usually require: (a) repeated deflations, which can lead to very inaccurate results due to accumulating rounding errors; (b) good initial approximations to the roots for the algorithm to converge; or (c) the computation of first- or second-order derivatives, which, besides being computationally intensive, is not always possible. These drawbacks motivated the use of Particle Swarm Optimisation (PSO) and Artificial Neural Networks (ANNs) for root-finding, since they are known, respectively, for their ability to explore high-dimensional spaces (not requiring good initial approximations) and for their capability to model complex problems. In addition, both methods need neither repeated deflations nor derivative information. The algorithms are described throughout this document and tested on a suite of hard numerical problems in science and engineering. The results were compared with several results available in the literature and with the well-known Durand–Kerner method, showing that both algorithms are effective at solving the numerical problems considered.
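    For context, the Durand–Kerner baseline mentioned above updates all root approximations simultaneously via z_i ← z_i − p(z_i) / Π_{j≠i}(z_i − z_j), which avoids explicit deflation. A minimal sketch for a monic polynomial follows; the example coefficients are hypothetical and not taken from the dissertation.

```cpp
// Minimal Durand–Kerner sketch for a monic polynomial; illustrative only,
// not the dissertation's implementation.
#include <algorithm>
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

using cd = std::complex<double>;

// Evaluate the monic polynomial z^n + c[n-1] z^{n-1} + ... + c[0] by Horner's rule.
cd eval(const std::vector<cd>& c, cd z) {
    cd p(1.0, 0.0);                                    // leading coefficient (monic)
    for (int k = static_cast<int>(c.size()) - 1; k >= 0; --k) p = p * z + c[k];
    return p;
}

int main() {
    // Example: z^3 - 3z^2 + 3z - 5, coefficients stored from degree 0 upwards.
    const std::vector<cd> c = {{-5.0, 0.0}, {3.0, 0.0}, {-3.0, 0.0}};
    const int n = static_cast<int>(c.size());

    // Usual starting points: powers of a complex number that is not a root of unity.
    std::vector<cd> z(n);
    for (int i = 0; i < n; ++i) z[i] = std::pow(cd(0.4, 0.9), i + 1.0);

    for (int iter = 0; iter < 200; ++iter) {
        double max_step = 0.0;
        for (int i = 0; i < n; ++i) {
            cd denom(1.0, 0.0);
            for (int j = 0; j < n; ++j)
                if (j != i) denom *= z[i] - z[j];
            const cd step = eval(c, z[i]) / denom;     // simultaneous correction
            z[i] -= step;
            max_step = std::max(max_step, std::abs(step));
        }
        if (max_step < 1e-12) break;                   // all approximations stabilized
    }

    for (const cd& r : z) std::printf("%+.6f %+.6fi\n", r.real(), r.imag());
    return 0;
}
```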