
    Exploiting Heterogeneous Parallelism on Hybrid Metaheuristics for Vector Autoregression Models

    In recent years, the huge amount of data available in many disciplines has made mathematical modeling, and more concretely econometric models, a very important technique for explaining those data. One of the most widely used econometric techniques is the Vector Autoregression (VAR) model: a multi-equation model that linearly describes the interactions and behavior of a group of variables by using their past values. Traditionally, Ordinary Least Squares and Maximum Likelihood estimators have been used to estimate VAR models. These techniques are consistent and asymptotically efficient under ideal conditions on the data and the identification problem; otherwise, they yield inconsistent parameter estimates. This paper considers the estimation of a VAR model by minimizing the difference between the dependent variables at a given time and the expression of their own past and the exogenous variables of the model (in this case denoted a VARX model). The solution of this optimization problem is approached through hybrid metaheuristics. The high computational cost due to the huge amount of data makes it necessary to exploit High-Performance Computing to accelerate the methods that obtain the models. The parameterized, parallel implementation of the metaheuristics and the matrix formulation ease the simultaneous exploitation of parallelism for groups of hybrid metaheuristics. Multilevel and heterogeneous parallelism are exploited on multicore CPU plus multi-GPU nodes, with the optimum combination of the different parallelism parameters depending on the particular metaheuristic and the problem it is applied to. This work was supported by the Spanish MICINN and AEI, as well as European Commission FEDER funds, under grants RTI2018-098156-B-C53 and TIN2016-80565-R.
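    The classical estimation baseline the abstract mentions can be illustrated with ordinary least squares on a plain VAR(1) model. This is a minimal sketch under a simple intercept-plus-one-lag specification; the function and variable names are illustrative, not taken from the paper's implementation:

```python
# Least-squares estimation of a VAR(1) model y_t = c + A @ y_{t-1} + e_t.
# Illustrative sketch, not the paper's metaheuristic estimator.
import numpy as np

def estimate_var1(Y):
    """Estimate intercept c and coefficient matrix A by OLS. Y: (T, k)."""
    T, k = Y.shape
    X = np.hstack([np.ones((T - 1, 1)), Y[:-1]])   # regressors: [1, y_{t-1}]
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)  # solve min ||X B - Y[1:]||
    c, A = B[0], B[1:].T                           # split intercept / lag block
    return c, A

# Recover a known VAR(1) process from simulated data.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.1], [0.0, 0.4]])
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = A_true @ y[t - 1] + 0.01 * rng.standard_normal(2)
c_hat, A_hat = estimate_var1(y)
```

    Under the ideal conditions the abstract refers to, the OLS estimate recovers the true coefficients; the metaheuristic approach targets the cases where such estimators become inconsistent.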

    On the role of metaheuristic optimization in bioinformatics

    Metaheuristic algorithms are employed to solve complex and large-scale optimization problems in many different fields, from transportation and smart cities to finance. This paper discusses how metaheuristic algorithms are being applied to different optimization problems in the area of bioinformatics. While the text provides references to many optimization problems in the area, it focuses on those that have attracted the most interest from the optimization community. Among the problems analyzed, the paper discusses in more detail the molecular docking problem, protein structure prediction, phylogenetic inference, and several string problems. In addition, references to other relevant optimization problems are also given, including those related to medical imaging or gene selection for classification. From this analysis, the paper derives insights into research opportunities for the Operations Research and Computer Science communities in the field of bioinformatics.

    Ant Colony Optimization

    Ant Colony Optimization (ACO) is the best example of how studies aimed at understanding and modeling the behavior of ants and other social insects can provide inspiration for the development of computational algorithms for the solution of difficult mathematical problems. Introduced by Marco Dorigo in his PhD thesis (1992) and initially applied to the travelling salesman problem, ACO has since experienced tremendous growth, standing today as an important nature-inspired stochastic metaheuristic for hard optimization problems. This book presents state-of-the-art ACO methods and is divided into two parts: (I) Techniques, which includes parallel implementations, and (II) Applications, which presents recent contributions of ACO to diverse fields such as traffic congestion and control, structural optimization, manufacturing, and genomics.
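    The core ACO loop — probabilistic tour construction biased by pheromone trails and heuristic information, followed by evaporation and deposit — can be sketched as a minimal Ant System for a symmetric TSP. Parameter values here are illustrative defaults, not tuned settings from the book:

```python
# Minimal Ant System for a symmetric TSP; an illustrative sketch only.
import numpy as np

def aco_tsp(dist, n_ants=20, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Return (best_tour, best_length) for the distance matrix `dist`."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                       # pheromone trails
    eta = 1.0 / (dist + np.eye(n))              # heuristic: inverse distance
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        all_tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = list(set(range(n)) - {tour[0]})
            while unvisited:                    # build tour city by city
                i = tour[-1]
                w = tau[i, unvisited] ** alpha * eta[i, unvisited] ** beta
                nxt = rng.choice(unvisited, p=w / w.sum())
                tour.append(int(nxt))
                unvisited.remove(nxt)
            length = sum(dist[tour[t], tour[(t + 1) % n]] for t in range(n))
            all_tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= 1.0 - rho                        # pheromone evaporation
        for tour, length in all_tours:          # deposit on traversed edges
            for t in range(n):
                a, b = tour[t], tour[(t + 1) % n]
                tau[a, b] += 1.0 / length
                tau[b, a] += 1.0 / length
    return best_tour, best_len

# Eight cities on a circle; the optimal tour follows the circle.
pts = np.array([[np.cos(a), np.sin(a)]
                for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, length = aco_tsp(dist)
```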

    Parallelization strategies for the optimization of computational methods in drug discovery

    Drug discovery is a long and costly process involving several stages, most notably the identification of drug candidates: molecules that are potentially active in neutralizing a given protein involved in a disease. This stage is based on optimizing the molecular docking between a receptor and a vast number of drug candidates to determine which candidate achieves the strongest binding. The docking between two bioactive compounds is governed by a set of physical phenomena present in nature that are modeled through a scoring function. These models represent the behavior of molecules in nature, allowing this molecular interaction to be carried over to a simulation on silicon computing platforms. This doctoral thesis proposes accelerating and improving drug-discovery methods through artificial intelligence and parallelism. It proposes a parameterized, parallel metaheuristic scheme that determines the molecular interaction between bioactive compounds. Metaheuristics are algorithmic techniques generally employed to optimize problems of any kind, providing satisfactory solutions. Examples include local searches, which restrict their scope to the nearest neighborhood of solutions, and population-based searches, widely used in the simulation of biological processes, among which evolutionary algorithms and scatter search stand out.
    Parameterized metaheuristic schemes define a set of basic functions (Initialize, End, Select, Combine, Improve, and Include) in order to parameterize the concrete metaheuristic instantiated in each run of the application, thus allowing the optimization not only of the problem to be solved but also of the algorithm used to solve it. The choice of one parameter combination over another is a vital factor in finding a good solution to the problem. Handling this large number of parameters requires a strategy for the new optimization problem that arises: the hyperheuristic, which searches for the best among a set of metaheuristics applied to the same problem. The vast majority of metaheuristic algorithms are, by definition, massively parallel, so implementing them on sequential platforms compromises both their efficiency and their effectiveness. This thesis therefore also adapts the instantiation of the metaheuristic scheme to massively parallel and heterogeneous platforms such as shared-memory processors and graphics cards. Massively parallel GPU techniques with CUDA support perform these computations by making thousands of cores available to the application, able to run in parallel and to share memory among themselves, further reducing memory accesses. Even so, there are molecular compounds of tens of thousands of atoms for which a single GPU may be insufficient, turning it into a bottleneck. This makes it necessary to extend the scheme to multiple GPUs in order to divide the computational load and tackle such compounds with sufficient performance guarantees.
    To improve performance and maximize the parallelization of the application, it is essential to make the most of the resources the machine offers; prior work is therefore carried out to tune the parameters of the chosen parallel option to the execution environment and run with the parameters that best fit the machine. A single node has a limited number of GPUs: good performance can be obtained when simulating one molecule, but in drug discovery there may be millions of drug candidates to simulate. In that case, we scale to a compute cluster. One approach taken by the community to exploit all the resources of a computer cluster, transparently to the user, has been system virtualization. Environments such as VMware or Xen virtualize the entire system rather than only a part of it, which is very unsuitable for high-performance computing, since the restrictions imposed by a shared environment introduce an overhead that cannot be absorbed. Instead of virtualizing the whole system, the alternative is to virtualize only a specific set of resources, such as the GPUs. This is done by a powerful middleware called rCUDA, which enables the simultaneous, remote use of CUDA-capable GPUs. To enable remote GPU acceleration, this system software creates CUDA-compatible virtual devices on machines without local GPUs. In addition, rCUDA reduces algorithmic complexity by avoiding message-passing techniques (MPI), widely used in this kind of computing environment. The algorithmic techniques developed make it possible to choose among the available computing platforms, optimizing the execution environment and balancing the workload with the most suitable configuration parameters.
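    The parameterized scheme described above — one driver loop whose basic functions are swapped to instantiate different metaheuristics — can be sketched as follows. The six function names come from the thesis abstract; the concrete toy instantiation (a small elitist population search minimizing a quadratic) is an illustrative assumption, not the thesis's docking code:

```python
# Sketch of a parameterized metaheuristic scheme: the driver is fixed,
# the basic functions passed in decide which metaheuristic it becomes.
import random

def run_scheme(initialize, end, select, combine, improve, include):
    """Generic driver over the basic functions named in the abstract."""
    population = initialize()
    iteration = 0
    while not end(population, iteration):
        parents = select(population)            # Select
        offspring = combine(parents)            # Combine
        offspring = improve(offspring)          # Improve
        population = include(population, offspring)  # Include
        iteration += 1
    return min(population, key=fitness)

# Toy instantiation: minimize f(x) = (x - 3)^2.
def fitness(x):
    return (x - 3.0) ** 2

rng = random.Random(0)
best = run_scheme(
    initialize=lambda: [rng.uniform(-10, 10) for _ in range(10)],
    end=lambda pop, it: it >= 200,
    select=lambda pop: sorted(pop, key=fitness)[:4],
    combine=lambda ps: [(a + b) / 2 for a in ps for b in ps],
    improve=lambda off: [x + rng.gauss(0, 0.1) for x in off],
    include=lambda pop, off: sorted(pop + off, key=fitness)[:10],
)
```

    Swapping in different Select/Combine/Improve/Include functions yields a local search, an evolutionary algorithm, or a scatter search from the same driver, which is what makes a hyperheuristic layer over the parameter choices possible.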

    Parallelised and vectorised ant colony optimization

    Ant Colony Optimisation (ACO) is a versatile population-based optimisation metaheuristic based on the foraging behaviour of certain species of ant, and is part of the Evolutionary Computation family of algorithms. While ACO generally provides good-quality solutions to the problems it is applied to, two key limitations prevent it from being truly viable on large-scale problems: a high memory requirement that grows quadratically with instance size, and a high execution time. This thesis presents a parallelised and vectorised implementation of ACO using OpenMP and AVX SIMD instructions; while this alone is enough to improve upon the execution time of the algorithm, this implementation also features an alternative memory structure and a novel candidate-set approach, the use of which significantly reduces the memory requirement of ACO. This parallelism is enabled through the use of Max-Min Ant System, an ACO variant that only uses local memory during the solution process and therefore avoids synchronisation issues, and an adaptation of vRoulette, a vector-compatible variant of the common roulette wheel selection method. Through these techniques ACO is also able to find good-quality solutions for the very large Art TSPs, a problem set that has traditionally been infeasible for ACO due to its high memory requirements and execution time. These techniques can also benefit ACO on other problems. Here the Virtual Machine Placement problem, in which Virtual Machines have to be efficiently allocated to Physical Machines in a cloud environment, is used as a benchmark, with significant improvements in execution time.
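    The vectorisation idea behind a vector-compatible roulette wheel can be illustrated in NumPy: many spins are resolved at once with one cumulative-sum and one binary-search pass instead of a per-spin scalar loop. This is a generic vectorised sketch of roulette wheel selection, not the thesis's exact vRoulette formulation:

```python
# Vectorised roulette-wheel selection: resolve many spins in one pass.
import numpy as np

def roulette_many(weights, n_draws, rng):
    """Draw n_draws indices with probability proportional to weights."""
    cdf = np.cumsum(weights, dtype=float)          # running totals of the wheel
    spins = rng.random(n_draws) * cdf[-1]          # one uniform spin per draw
    return np.searchsorted(cdf, spins, side="right")  # locate each spin's slot

rng = np.random.default_rng(0)
weights = np.array([1.0, 3.0, 6.0])                # expect ~10% / 30% / 60%
draws = roulette_many(weights, 100_000, rng)
freq = np.bincount(draws, minlength=3) / len(draws)
```

    The same cumulative-sum/search pattern maps directly onto SIMD lanes, which is what makes selection vector-friendly in the AVX setting the thesis describes.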

    Comparison of High Performance Parallel Implementations of TLBO and Jaya Optimization Methods on Manycore GPU

    The utilization of optimization algorithms within engineering problems has risen sharply in recent years, leading to the proliferation of a large number of new algorithms for solving optimization problems. In addition, the emergence of new parallelization techniques applicable to these algorithms to improve their convergence time has made them a subject of study by many authors. Recently, two optimization algorithms have been developed: Teaching-Learning Based Optimization and Jaya. One of the main advantages of both algorithms over other optimization methods is that neither needs algorithm-specific parameters adjusted to the particular problem to which it is applied. In this paper, the parallel implementations of Teaching-Learning Based Optimization and Jaya are compared. Both algorithms are parallelized using manycore GPU techniques. Different scenarios are created involving functions frequently applied to the evaluation of optimization algorithms. The results make it possible to compare both parallel algorithms with regard to the number of iterations and the time needed to reach a predefined error level. The GPU resource occupation in each case is also analyzed. This work was supported in part by the Spanish Ministry of Economy and Competitiveness under Grant TIN2017-89266-R, in part by FEDER funds (MINECO/FEDER/UE), and in part by the Spanish Ministry of Science, Innovation, and Universities co-financed by FEDER funds under Grant RTI2018-098156-B-C54.
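    The "no algorithm-specific parameters" property is easiest to see in Jaya itself. A minimal sequential sketch, assuming the published Jaya update rule x' = x + r1*(best - |x|) - r2*(worst - |x|); only population size and iteration budget appear, and the GPU-parallel versions the paper compares are not reproduced here:

```python
# Minimal sequential Jaya sketch on a benchmark-style objective.
import numpy as np

def jaya(f, bounds, pop_size=30, n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(n_iters):
        fx = np.apply_along_axis(f, 1, X)
        best, worst = X[fx.argmin()], X[fx.argmax()]
        r1 = rng.random(X.shape)
        r2 = rng.random(X.shape)
        # Move toward the best solution and away from the worst.
        Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                     lo, hi)
        fn = np.apply_along_axis(f, 1, Xn)
        better = fn < fx                       # greedy replacement
        X[better] = Xn[better]
    fx = np.apply_along_axis(f, 1, X)
    return X[fx.argmin()], fx.min()

# Minimize the 2-D sphere function, optimum at the origin.
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
x_best, f_best = jaya(lambda x: float(np.sum(x * x)), (lo, hi))
```

    Each population member updates independently within an iteration, which is why the loop over individuals maps naturally onto GPU threads in the manycore implementations the paper studies.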

    Smart Data Harvesting in Cache-Enabled MANETs: UAVs, Future Position Prediction, and Autonomous Path Planning

    The task of gathering data from nodes within mobile ad-hoc wireless sensor networks presents an enduring challenge. Conventional strategies employ customized routing protocols tailored to these environments, with research focused on refining them for improved efficiency in terms of throughput and energy utilization. However, these elements are interconnected, and enhancements in one often come at the expense of the other. An alternative data-collection approach involves the use of Unmanned Aerial Vehicles (UAVs). In contrast to traditional methods, UAVs collect data directly from mobile nodes, bypassing the need for routing. While existing research predominantly addresses static nodes, this paper proposes an innovative, evolutionary-based path-selection approach for UAV data collection that relies on future position prediction of cache-enabled mobile ad-hoc wireless sensor network nodes, aimed at maximizing node encounters and gathering the most valuable information in a single trip. The proposed technique is evaluated for different movement models and parameter configurations.
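    The two ingredients named above, predicting where mobile nodes will be and planning a visiting order over those predictions, can be sketched simply. The paper uses an evolutionary path-selection method; the linear motion model and greedy ordering below are illustrative stand-ins that only show how predicted positions can drive planning:

```python
# Predicted-position-driven UAV visit planning: an illustrative sketch.
import numpy as np

def predict_positions(pos, vel, t):
    """Linear motion model: where each node is expected to be at time t."""
    return pos + vel * t

def greedy_tour(uav_start, targets):
    """Visit predicted node positions nearest-first from the UAV's start."""
    order, current = [], uav_start
    remaining = list(range(len(targets)))
    while remaining:
        nxt = min(remaining, key=lambda i: np.linalg.norm(targets[i] - current))
        order.append(nxt)
        current = targets[nxt]
        remaining.remove(nxt)
    return order

pos = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # current positions
vel = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])     # estimated velocities
future = predict_positions(pos, vel, t=5.0)              # positions 5 s ahead
order = greedy_tour(np.array([0.0, 0.0]), future)
```

    An evolutionary planner, as in the paper, would instead search over candidate visit orders with an encounter-value objective rather than commit to the nearest-first choice.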