
    Using Hyperheuristics to Improve the Determination of the Kinetic Constants of a Chemical Reaction in Heterogeneous Phase

    The reaction in the human stomach when neutralizing acid with an antacid tablet is simulated, and the evolution over time of the concentration of all chemical species present in the reaction medium is obtained. The values of the kinetic parameters of the chemical reaction can be determined by integrating the equation of the reaction rate. This is a classical optimization problem that can be approached with metaheuristic methods. The use of a parallel, parameterized scheme for metaheuristics facilitates the development of metaheuristics and their application. The unified scheme can also be used to implement hyperheuristics on top of parameterized metaheuristics, thus selecting appropriate values for the metaheuristic parameters and, consequently, the metaheuristic itself. The hyperheuristic approach provides satisfactory values for the metaheuristic parameters and, consequently, satisfactory metaheuristics for the problem of determining the kinetic constants.
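
    As a hedged illustration of the optimization problem described above, the sketch below fits a single kinetic constant by integrating an assumed second-order rate equation and minimizing the misfit to synthetic concentration data with a standard metaheuristic (differential evolution). The rate law, initial concentrations, and data are illustrative assumptions, not the model from the paper.

```python
# Sketch: estimate a kinetic constant k by integrating an assumed rate
# equation and minimizing the residual to (synthetic) data with a
# metaheuristic. All model details here are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

np.random.seed(0)
t_data = np.linspace(0.0, 10.0, 20)          # sampling times
k_true = 0.35                                # "unknown" constant used to fake data

def concentration(k, t):
    # assumed second-order law: d[A]/dt = -k [A][B], [B] consumed stoichiometrically
    def rhs(_, y):
        a, b = y
        r = k * a * b
        return [-r, -r]
    sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.8], t_eval=t)
    return sol.y[0]

a_data = concentration(k_true, t_data) + np.random.normal(0, 0.005, t_data.size)

def misfit(params):
    # sum of squared residuals between integrated model and measurements
    return np.sum((concentration(params[0], t_data) - a_data) ** 2)

result = differential_evolution(misfit, bounds=[(1e-3, 5.0)], seed=1)
print(f"estimated k = {result.x[0]:.3f}")
```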

    METADOCK: A parallel metaheuristic schema for virtual screening methods

    Virtual screening through molecular docking can be translated into an optimization problem, which can be tackled with metaheuristic methods. The interaction between two chemical compounds (typically a protein, enzyme or receptor, and a small molecule, or ligand) is calculated by using highly computationally demanding scoring functions that are computed at several binding spots located throughout the protein surface. This paper introduces METADOCK, a novel molecular docking methodology based on parameterized and parallel metaheuristics and designed to leverage platforms built on heterogeneous architectures. The application decides the optimization technique at runtime through a configuration schema. Our proposed solution finds a good workload balance via dynamic assignment of jobs to heterogeneous resources, which perform independent metaheuristic executions when computing the different molecular interactions required by the scoring functions in use. A cooperative scheduling of jobs optimizes the quality of the solution and the overall performance of the simulation, thus opening a new path for further developments of virtual screening methods on contemporary high-performance heterogeneous platforms.
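
    The following sketch illustrates, under stated assumptions, the general idea of a configuration-driven metaheuristic schema: the optimizer is selected at runtime from a configuration, and independent jobs are dispatched dynamically to parallel workers. The scoring function, parameter names, and worker model are hypothetical and do not reproduce METADOCK's actual API.

```python
# Sketch: the optimization technique is chosen at runtime from a config,
# and independent (ligand, spot) jobs are dispatched to parallel workers.
# Names and the toy scoring function are illustrative assumptions.
import random
from concurrent.futures import ProcessPoolExecutor

def score(pose):
    # stand-in scoring function: quadratic with optimum at 0.3
    return (pose - 0.3) ** 2

def random_search(iters):
    best = min((random.uniform(-1, 1) for _ in range(iters)), key=score)
    return best, score(best)

def local_search(iters):
    x = random.uniform(-1, 1)
    for _ in range(iters):
        cand = x + random.gauss(0, 0.05)
        if score(cand) < score(x):
            x = cand
    return x, score(x)

METAHEURISTICS = {"random": random_search, "local": local_search}

config = {"method": "local", "iterations": 500}   # runtime configuration schema
jobs = range(8)                                   # e.g. 8 (ligand, spot) pairs

def run_job(_):
    return METAHEURISTICS[config["method"]](config["iterations"])

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:           # dynamic assignment of jobs
        for pose, s in pool.map(run_job, jobs):
            print(f"best pose {pose:+.3f} score {s:.5f}")
```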

    Gravitational Search and Harmony Search Algorithms for Solving the Chemical Kinetics Optimization Problems

    The article analyzes the application of global optimization algorithms to the solution of inverse problems of chemical kinetics. Two heuristic algorithms are considered: the gravitational search algorithm and the harmony search algorithm. The article describes the algorithms, as well as their application to the optimization of test functions. These algorithms are then used to search for the kinetic parameters of two chemical processes: propane pre-reforming on a Ni catalyst and catalytic isomerization of the pentane-hexane fraction. For the first process both algorithms found approximately the same solution, while for the second problem the gravitational search algorithm reached a smaller value of the minimized function. It is therefore concluded that on large-scale problems the gravitational search algorithm is preferable to the harmony search algorithm, as it obtains a smaller value of the minimized function in less time. On small-scale problems both algorithms showed approximately the same result, and the calculated data coincided with the experimental data.
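
    For readers unfamiliar with the gravitational search algorithm, the following is a minimal sketch of one possible implementation on a sphere test function; the constants, decay schedule, and population size are illustrative choices, not the tuned settings from the article.

```python
# Sketch of a gravitational search step: agents attract each other with
# forces proportional to fitness-derived masses under a decaying constant G.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, iters = 20, 2, 200
X = rng.uniform(-5, 5, (n_agents, dim))        # agent positions
V = np.zeros_like(X)                           # agent velocities

def fitness(X):
    return np.sum(X ** 2, axis=1)              # sphere test function (minimize)

for t in range(iters):
    f = fitness(X)
    best, worst = f.min(), f.max()
    m = (worst - f) / (worst - best + 1e-12)   # raw masses: best agent -> 1
    M = m / (m.sum() + 1e-12)                  # normalized masses
    G = 100 * np.exp(-20 * t / iters)          # decaying gravitational constant
    A = np.zeros_like(X)
    for i in range(n_agents):
        diff = X - X[i]                        # vectors toward every other agent
        dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
        A[i] = G * np.sum(rng.random((n_agents, 1)) * M[:, None] * diff / dist, axis=0)
    V = rng.random(X.shape) * V + A            # stochastic velocity update
    X = X + V

print("best value found:", fitness(X).min())
```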

    Parallelization Strategies for the Optimization of Computational Methods in Drug Discovery

    Drug discovery is a long and costly process involving several stages, among which the identification of drug candidates stands out: molecules that are potentially active in neutralizing a given protein involved in a disease. This stage relies on optimizing the molecular docking between a receptor and a huge number of drug candidates in order to determine which candidate achieves the strongest binding. The molecular docking between two bioactive compounds is governed by a set of physical phenomena present in nature that are modeled through a scoring function. These models represent the behavior of molecules in nature, making it possible to transfer this molecular interaction to a simulation on silicon computing platforms. This doctoral thesis proposes accelerating and improving drug discovery methods through artificial intelligence and parallelism. A parameterized, parallel metaheuristic scheme is proposed to determine the molecular interaction between bioactive compounds. Metaheuristics are algorithmic techniques generally employed to optimize any kind of problem, providing satisfactory solutions. Examples include local searches, which restrict their scope to the nearest neighborhood of solutions, and population-based searches, widely used in the simulation of biological processes, among which evolutionary algorithms and scatter search stand out, to mention a few. Parameterized metaheuristic schemes define a set of basic functions (Initialize, End, Select, Combine, Improve and Include) that parameterize the specific metaheuristic to be instantiated in each run of the application, thereby allowing optimization not only of the problem to be solved but also of the algorithm used to solve it. Working with one combination of parameters or another is a vital factor in finding a good solution to the problem. Handling this large number of parameters requires a strategy for the new optimization problem that arises. This strategy is the hyperheuristic, which searches for the best among a set of metaheuristics applied to the same problem. The vast majority of metaheuristic algorithms are, by definition, massively parallel, so their implementation on sequential platforms compromises both their efficiency and their effectiveness. This doctoral thesis also adapts the instantiation of the metaheuristic scheme to massively parallel and heterogeneous platforms such as shared-memory processors and graphics cards. Massively parallel techniques on CUDA-enabled GPUs support these computations by making thousands of cores available to the application, capable of running in parallel and able to share memory among themselves, further reducing memory accesses. Even so, there are cellular compounds of tens of thousands of atoms for which a single GPU may be insufficient, turning it into a bottleneck. This makes it necessary to extend the scheme to multi-GPU in order to divide the computational load and handle this type of compound with sufficient performance guarantees.
    To improve performance and maximize the parallelization of the application, it is essential to make the most of the resources the machine offers; for this reason, preliminary work is carried out to adjust the parameters of the chosen parallel option to the execution environment and to work with the parameters best suited to the machine. Within a node we may have a limited number of GPUs, and good performance can be obtained when simulating a single molecule, but in the drug discovery problem there may be millions of drug candidates to simulate. In this case, we scale to a compute cluster. One approach taken by the community to exploit all the resources of a computer cluster, transparently to the user, has been system virtualization. Environments such as VMware or Xen virtualize the whole system rather than only a part of it, which is highly unsuitable for high-performance computing, since the restrictions imposed by a shared environment introduce an overhead that cannot be afforded. Instead of virtualizing the whole system, the alternative is to virtualize only a specific set of resources, such as the GPUs. This is done by a powerful middleware called rCUDA. This software enables the simultaneous, remote use of CUDA-enabled GPUs. To enable remote GPU acceleration, this system software creates CUDA-compatible virtual devices on machines without local GPUs. In addition, rCUDA reduces algorithmic complexity by avoiding message-passing techniques (MPI) that are widely used in this kind of computing environment. The algorithmic techniques to be developed will make it possible to choose among the different available computing platforms, optimizing the execution environment and balancing the workload with the most suitable configuration parameters.
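
    A minimal sketch of the parameterized scheme described above is given below: the six basic functions are passed in as parameters, so a single driver can instantiate different metaheuristics. The toy objective and the concrete function choices are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch of a unified parameterized metaheuristic scheme: the driver is fixed,
# and the six basic functions parameterize which metaheuristic is instantiated.
import random

def objective(x):
    return (x - 2.0) ** 2                       # toy objective to minimize

def unified_metaheuristic(initialize, end, select, combine, improve, include):
    population = initialize()
    generation = 0
    while not end(population, generation):
        parents = select(population)            # choose promising solutions
        offspring = combine(parents)            # recombine them
        offspring = improve(offspring)          # local refinement / mutation
        population = include(population, offspring)
        generation += 1
    return min(population, key=objective)

# One concrete instantiation: a small evolutionary-style search.
best = unified_metaheuristic(
    initialize=lambda: [random.uniform(-10, 10) for _ in range(30)],
    end=lambda pop, g: g >= 100,
    select=lambda pop: sorted(pop, key=objective)[:10],
    combine=lambda ps: [(a + b) / 2 for a in ps for b in ps if a != b],
    improve=lambda off: [x + random.gauss(0, 0.1) for x in off],
    include=lambda pop, off: sorted(pop + off, key=objective)[:30],
)
print(f"best solution ~ {best:.3f}")
```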

    Intelligent simulation of coastal ecosystems

    Doctoral thesis in Informatics Engineering. Faculdade de Engenharia, Universidade do Porto; Faculdade de Ciência e Tecnologia, Universidade Fernando Pessoa. 201

    A Discrete-Continuous Approach to Model Powder Metallurgy Processes


    Deriving Protein Structures Efficiently by Integrating Experimental Data into Biomolecular Simulations

    Proteins are molecular nanomachines in biological cells. They are essential building blocks of all known life forms, from single-celled organisms to humans, and fulfill diverse functions, such as transporting oxygen in the blood or forming hair. Disruptions of their physiological function, however, can cause severe degenerative diseases such as Alzheimer's and Parkinson's. Developing effective therapies for such protein misfolding diseases requires a deep understanding of the molecular structure and dynamics of proteins. Since proteins are too small to be resolved by light microscopy and can only be observed indirectly, experimental structural data are usually ambiguous. This problem can be addressed in silico through physical modeling of biomolecular dynamics. In this field, data-assisted molecular dynamics simulations have become established as a new paradigm for assembling the individual pieces of data into a coherent overall picture of the encoded protein structure, with the structural data incorporated as an integral part of a physics-based model. In this work, I investigate how so-called structure-based models can be used to complement ambiguous structural data and extract the information they contain. These models provide an efficient description of the dynamics arising from the evolutionarily optimized native structure of a protein. With my systematic simulation method XSBM, biological small-angle X-ray scattering data can be interpreted as physical protein structures at minimal computational cost. The performance of such data-assisted methods depends strongly on the simulation parameters used. A major challenge is to weight experimental information and theoretical knowledge appropriately relative to each other. In this work, I show how the corresponding simulation parameter spaces can be explored efficiently with computational intelligence methods and how functional parameters can be selected to optimize the performance of complex physics-based simulation techniques. I present FLAPS, a data-driven metaheuristic optimization method for fully automatic, reproducible parameter searches for biomolecular simulations. FLAPS is an adaptive particle-swarm-based algorithm, inspired by the behavior of natural bird flocks and fish schools, that can solve the general problem of relatively weighting different criteria in multivariate optimization. Alongside massive advances in the use of artificial intelligence for protein structure prediction, performance-optimized data-assisted simulations enable detailed insights into the complex relationship between biomolecular structure, dynamics, and function. Such computational methods can connect the individual puzzle pieces of experimental structural information and thus deepen our understanding of proteins as the fundamental building blocks of life.
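
    As a hedged illustration of the particle-swarm family of algorithms that FLAPS builds on, the sketch below runs a basic particle swarm optimizer over a toy parameter space; FLAPS's adaptive weighting of multiple scoring criteria is not reproduced here, and all constants are illustrative assumptions.

```python
# Sketch: basic particle swarm optimization over a toy parameter space.
# Each particle tracks its personal best; the swarm shares a global best.
import numpy as np

rng = np.random.default_rng(42)
n, dim, iters = 25, 2, 150
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and attraction coefficients

def loss(P):
    # stand-in for a simulation-quality score over parameter vectors
    return np.sum((P - 1.0) ** 2, axis=1)

P = rng.uniform(-5, 5, (n, dim))               # particle positions (parameter sets)
V = np.zeros_like(P)
pbest = P.copy()                               # per-particle best positions
pbest_f = loss(P)
gbest = pbest[np.argmin(pbest_f)]              # swarm-wide best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (pbest - P) + c2 * r2 * (gbest - P)
    P = P + V
    f = loss(P)
    better = f < pbest_f
    pbest[better], pbest_f[better] = P[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("best parameters:", gbest, "loss:", pbest_f.min())
```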

    Computational Optimizations for Machine Learning

    The present book contains the 10 articles finally accepted for publication in the Special Issue “Computational Optimizations for Machine Learning” of the MDPI journal Mathematics, covering a wide range of topics connected to the theory and applications of machine learning, neural networks and artificial intelligence. These topics include, among others, various classes of machine learning, such as supervised, unsupervised and reinforcement learning, deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVM, K-means clustering, Q-learning, temporal difference, deep adversarial networks and more. The book is intended to be interesting and useful to those developing mathematical algorithms and applications in the domain of artificial intelligence and machine learning, as well as to those with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, which has nowadays permeated almost all sectors of human life and activity.

    Latest Advances in Nanoplasmonics and Use of New Tools for Plasmonic Characterization

    Nanoplasmonics is a field that couples light to electrons in metals and can break the diffraction limit for light confinement in subwavelength zones, allowing for strong field enhancements. In the last two decades, there has been a resurgence of this research topic and its applications. This Special Issue therefore presents a collection of articles and reviews by international researchers devoted to recent advances in and insights into this research topic, including plasmonic devices, plasmonic biosensing, plasmonic photocatalysis, plasmonic photovoltaics, surface-enhanced Raman scattering, and surface plasmon resonance spectroscopy.

    A comparative analysis of algorithms for satellite operations scheduling

    Scheduling is employed in everyday life, ranging from meetings to manufacturing and operations, among other activities. One instance of scheduling in a complex real-life setting is space mission operations scheduling, i.e. instructing a satellite to perform fitting tasks during predefined time periods with a varied frequency to achieve its mission goals. Mission operations scheduling is pivotal to the success of any space mission, choreographing every task carefully, accounting for technological and environmental limitations and constraints along with mission goals.
    It remains standard practice to this day to generate operations schedules manually, i.e. to collect requirements from individual stakeholders, collate them into a timeline, compare against feasibility and available satellite resources, and find potential conflicts. Conflict resolution is done by hand, checked by a simulator and uplinked to the satellite weekly. This process is time consuming, bears risks and can be considered sub-optimal.
    A pertinent question arises: can we automate the process of satellite mission operations scheduling? And if we can, what method should be used to generate the schedules? To address this question, a comparison of algorithms was deemed suitable in order to explore their suitability for this particular application.
    The problem of mission operations scheduling was initially studied through the literature and numerous interviews with experts. A framework was developed to approximate a generic Low Earth Orbit satellite, its environment and its mission requirements. Optimisation algorithms were chosen from different categories, such as single-point stochastic without memory (Simulated Annealing, Random Search) and multi-point stochastic with memory (Genetic Algorithm, Ant Colony System, Differential Evolution), and were run both with and without Local Search.
    The algorithmic set was initially tuned using a single 89-minute Low Earth Orbit of a scientific mission to Mars. It was then applied to scheduling operations during one high-altitude Low Earth Orbit (2.4 hrs) of an experimental mission. It was subsequently applied to a realistic test case inspired by the European Space Agency PROBA-2 mission, comprising a 1-day schedule and then a 7-day schedule, equal to a Short Term Plan as defined by the European Space Agency.
    The schedule fitness, corresponding to the Hamming distance between the mission requirements and the generated schedule, is presented along with the execution time of each run. Algorithmic performance is discussed and put at the disposal of mission operations experts for consideration.
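
    As a hedged illustration of the fitness measure described above, the sketch below encodes mission requirements and a candidate schedule as binary task-per-time-slot vectors, scores a schedule by its Hamming distance to the requirements (0 = perfect), and searches with a small simulated-annealing loop; the slot count and cooling schedule are illustrative assumptions.

```python
# Sketch: schedule fitness as the Hamming distance between a binary
# requirement timeline and a candidate schedule, searched by annealing.
import math
import random

slots = 96                                              # e.g. 15-min slots over 1 day
required = [random.randint(0, 1) for _ in range(slots)] # stand-in requirement timeline

def hamming(schedule):
    return sum(a != b for a, b in zip(schedule, required))

schedule = [0] * slots
temp = 10.0
while temp > 0.01:
    flip = random.randrange(slots)                      # propose toggling one slot
    cand = schedule.copy()
    cand[flip] ^= 1
    delta = hamming(cand) - hamming(schedule)
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        schedule = cand                                 # accept improving (and some worse) moves
    temp *= 0.999                                       # geometric cooling

print("final Hamming distance:", hamming(schedule))
```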