
    On the role of metaheuristic optimization in bioinformatics

    Metaheuristic algorithms are employed to solve complex and large-scale optimization problems in many different fields, from transportation and smart cities to finance. This paper discusses how metaheuristic algorithms are being applied to solve different optimization problems in the area of bioinformatics. While the text provides references to many optimization problems in the area, it focuses on those that have attracted the most interest from the optimization community. Among the problems analyzed, the paper discusses in more detail the molecular docking problem, protein structure prediction, phylogenetic inference, and several string problems. In addition, references to other relevant optimization problems are also given, including those related to medical imaging or gene selection for classification. From this analysis, the paper derives insights on research opportunities for the Operations Research and Computer Science communities in the field of bioinformatics.

    Hybrid Meta-heuristic Algorithms for Static and Dynamic Job Scheduling in Grid Computing

    The term 'grid computing' describes an infrastructure that connects geographically distributed computers and heterogeneous platforms owned by multiple organizations, allowing their computational power, storage capabilities and other resources to be selected and shared. Allocating jobs to computational grid resources in an efficient manner is one of the main challenges facing any grid computing system; this allocation is called job scheduling in grid computing. This thesis studies the application of hybrid meta-heuristics to the job scheduling problem in grid computing, which is recognized as one of the most important and challenging issues in grid computing environments. As with job scheduling in traditional computing systems, this allocation is known to be an NP-hard problem. Meta-heuristic approaches such as the Genetic Algorithm (GA), Variable Neighbourhood Search (VNS) and Ant Colony Optimisation (ACO) have all proven their effectiveness in solving different scheduling problems. However, hybridising two or more meta-heuristics shows better performance than applying a stand-alone approach: the new high-level meta-heuristic inherits the best features of the hybridised algorithms, increasing the chances of escaping local minima and hence enhancing the overall performance. In this thesis, the application of VNS to the job scheduling problem in grid computing is introduced. Four new neighbourhood structures, together with a modified local search, are proposed. The proposed VNS is hybridised with two meta-heuristic methods, namely GA and ACO, in loosely and strongly coupled fashions, yielding four new sequential hybrid meta-heuristic algorithms for the problem of static and dynamic single-objective independent batch job scheduling in grid computing. For the static version of the problem, several experiments were carried out to analyse the performance of the proposed schedulers in terms of minimising the makespan using well-known benchmarks. The experiments show that the proposed schedulers achieved impressive results compared to other traditional, heuristic and meta-heuristic approaches selected from the literature. To model the dynamic version of the problem, a simple simulator, which uses the rescheduling technique, is designed, and new problem instances are generated using a well-known methodology to evaluate the performance of the proposed hybrid schedulers. The experimental results show that the use of rescheduling provides significant improvements in terms of the makespan compared to other non-rescheduling approaches.
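
    As a rough illustration of the kind of search such schedulers perform, the following Python sketch applies a basic VNS (random "shake" moves followed by a single-job reassignment local search) to minimise the makespan over an expected-time-to-compute (ETC) matrix. It is a minimal sketch under invented names; the four neighbourhood structures and the GA/ACO couplings of the thesis are not reproduced.

        import random

        def makespan(assign, etc):
            """assign[j] = machine of job j; etc[j][m] = expected time of job j on machine m."""
            loads = [0.0] * len(etc[0])
            for j, m in enumerate(assign):
                loads[m] += etc[j][m]
            return max(loads)

        def shake(assign, k, n_machines):
            """k-th neighbourhood: reassign k randomly chosen jobs to random machines."""
            s = assign[:]
            for j in random.sample(range(len(s)), k):
                s[j] = random.randrange(n_machines)
            return s

        def local_search(assign, etc):
            """First-improvement reassignment of single jobs until no move helps."""
            best, improved = makespan(assign, etc), True
            while improved:
                improved = False
                for j in range(len(assign)):
                    for m in range(len(etc[0])):
                        old = assign[j]
                        assign[j] = m
                        cand = makespan(assign, etc)
                        if cand < best:
                            best, improved = cand, True
                        else:
                            assign[j] = old
            return assign

        def vns(etc, k_max=3, iters=200):
            n_jobs, n_machines = len(etc), len(etc[0])
            best = local_search([random.randrange(n_machines) for _ in range(n_jobs)], etc)
            for _ in range(iters):
                k = 1
                while k <= k_max:
                    cand = local_search(shake(best, k, n_machines), etc)
                    if makespan(cand, etc) < makespan(best, etc):
                        best, k = cand, 1   # improvement: restart from the first neighbourhood
                    else:
                        k += 1              # escalate to a larger perturbation
            return best, makespan(best, etc)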

    Matheuristics: using mathematics for heuristic design

    Matheuristics are heuristic algorithms based on mathematical tools, such as those provided by mathematical programming, that are structurally general enough to be applied to different problems with few adaptations to their abstract structure. The result can be metaheuristic hybrids having components derived from the mathematical model of the problems of interest, but the mathematical techniques themselves can also define general heuristic solution frameworks. In this paper, we focus our attention on mathematical programming and its contributions to developing effective heuristics. We briefly describe the mathematical tools available and then some matheuristic approaches, reporting representative examples from the literature. We also take the opportunity to provide some ideas for possible future development.
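
    As a minimal, self-contained instance of the matheuristic idea, the Python sketch below solves the LP relaxation of a 0/1 knapsack exactly with SciPy and then lets the fractional optimum guide a greedy rounding heuristic. The function name and the rounding rule are our own illustration, not an approach taken from the paper.

        import numpy as np
        from scipy.optimize import linprog

        def lp_guided_knapsack(values, weights, capacity):
            """Solve the LP relaxation, then round greedily in decreasing
            order of the fractional solution values."""
            n = len(values)
            res = linprog(c=-np.asarray(values, dtype=float),   # maximize total value
                          A_ub=[weights], b_ub=[capacity],
                          bounds=[(0, 1)] * n, method="highs")
            order = np.argsort(-res.x)          # most "wanted" items first
            chosen, load = [], 0.0
            for j in order:
                if load + weights[j] <= capacity:
                    chosen.append(int(j))
                    load += weights[j]
            return chosen

        # Example: items with values 10, 7, 5 and weights 4, 3, 2, capacity 6.
        print(lp_guided_knapsack([10, 7, 5], [4, 3, 2], 6.0))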

    Traveling Salesman Problem

    This book is a collection of current research on the application of evolutionary algorithms and other optimization algorithms to solving the Traveling Salesman Problem (TSP). It brings together researchers with applications in Artificial Immune Systems, Genetic Algorithms, Neural Networks and the Differential Evolution Algorithm. Hybrid systems, like Fuzzy Maps, Chaotic Maps and Parallelized TSP, are also presented. Most importantly, this book presents both theoretical and practical applications of the TSP, which will be a vital tool for researchers and graduate students in the fields of applied Mathematics, Computing Science and Engineering.
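
    For readers new to the problem, a minimal example of the local-search component underlying many of these methods is the classical 2-opt improvement step, sketched below in Python; this is generic textbook material rather than any specific chapter's algorithm.

        def tour_length(tour, dist):
            """Total length of a closed tour given a distance matrix."""
            return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

        def two_opt(tour, dist):
            """Reverse tour segments as long as doing so shortens the tour."""
            n, improved = len(tour), True
            while improved:
                improved = False
                for i in range(n - 1):
                    for k in range(i + 2, n):
                        a, b = tour[i], tour[i + 1]
                        c, d = tour[k], tour[(k + 1) % n]
                        if a == d:              # the two edges share a city
                            continue
                        # Gain of replacing edges (a,b),(c,d) with (a,c),(b,d).
                        delta = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                        if delta < -1e-12:
                            tour[i + 1:k + 1] = reversed(tour[i + 1:k + 1])
                            improved = True
            return tour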

    When Evolutionary Computing Meets Astro- and Geoinformatics

    Knowledge discovery from data typically includes solving some type of optimization problem that can be efficiently addressed using algorithms belonging to the class of evolutionary and bio-inspired computation. In this chapter, we give an overview of the various kinds of evolutionary algorithms, such as genetic algorithms, evolution strategies, evolutionary and genetic programming, differential evolution, and coevolutionary algorithms, as well as several other bio-inspired approaches, like swarm intelligence and artificial immune systems. After elaborating on the methodology, we provide numerous examples of applications in astronomy and geoscience and show how these algorithms can be applied within a distributed environment by making use of parallel computing, which is essential when dealing with Big Data.
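
    To make one of the listed methods concrete, here is a minimal differential evolution (DE/rand/1/bin) sketch in Python; the parameter values are common defaults, not those of any application discussed in the chapter.

        import random

        def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200):
            """DE/rand/1/bin: mutate with a scaled difference of two random
            members, cross binomially with the target, keep the better vector."""
            dim = len(bounds)
            pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
            fit = [f(x) for x in pop]
            for _ in range(gens):
                for i in range(pop_size):
                    a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
                    trial, j_rand = pop[i][:], random.randrange(dim)
                    for j in range(dim):
                        if random.random() < CR or j == j_rand:
                            lo, hi = bounds[j]
                            trial[j] = min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]), lo), hi)
                    f_trial = f(trial)
                    if f_trial <= fit[i]:
                        pop[i], fit[i] = trial, f_trial
            best = min(range(pop_size), key=fit.__getitem__)
            return pop[best], fit[best]

        # Example: minimize the 5-dimensional sphere function.
        x_best, f_best = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 5)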

    Novel optimization schemes for service composition in the cloud using learning automata-based matrix factorization

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Service Oriented Computing (SOC) provides a framework for the realization of loosely coupled service-oriented applications (SOA). Web services are central to the concept of SOC. They possess several benefits which are useful to SOA, e.g. encapsulation, loose coupling and reusability. Using web services, an application can embed its functionalities within the business process of other applications. This is made possible through web service composition. Web services are composed to provide more complex functions for a service consumer in the form of a value-added composite service. Currently, research into how web services can be composed to yield QoS (Quality of Service) optimal composite services has gathered significant attention. However, the number of services has risen, thereby increasing the number of possible service combinations and also amplifying the impact of the network on composite service performance. QoS-based service composition in the cloud addresses two important sub-problems: prediction of network performance between web service nodes in the cloud, and QoS-based web service composition. We model the former as a prediction problem, while the latter is modelled as an NP-hard optimization problem due to its complex, constrained and multi-objective nature. This thesis contributes to the prediction problem by presenting a novel learning automata-based non-negative matrix factorization algorithm (LANMF) for estimating the end-to-end network latency of a composition in the cloud. LANMF encodes each web service node as an automaton, which allows it to estimate its network coordinate in such a way that prediction error is minimized. Experiments indicate that LANMF is more accurate than current approaches. The thesis also contributes to the QoS-based service composition problem by proposing four evolutionary algorithms: a network-aware genetic algorithm (INSGA), a K-means based genetic algorithm (KNSGA), a multi-population particle swarm optimization algorithm (NMPSO), and a non-dominated sort fruit fly algorithm (NFOA). The algorithms adopt different evolutionary strategies, coupled with the LANMF method, to search for low-latency and QoS-optimal solutions. They also employ a unique constraint handling method used to penalize solutions that violate user-specified QoS constraints. Experiments demonstrate the efficiency and scalability of the algorithms in a large-scale environment; the algorithms also outperform other evolutionary algorithms in terms of optimality and scalability. In addition, the thesis contributes to QoS-based web service composition in a dynamic environment. This is motivated by the ineffectiveness of the four proposed algorithms in a dynamically changing QoS environment such as a real-world scenario. Hence, we propose a new cellular automata-based genetic algorithm (CellGA) to address the issue. Experimental results show the effectiveness of CellGA in solving QoS-based service composition in a dynamic QoS environment.
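
    As a rough sketch of the factorization step only (not LANMF itself, whose learning-automata update rule is not reproduced here), plain non-negative matrix factorization with masked multiplicative updates can already fill in missing node-to-node latencies:

        import numpy as np

        def nmf_latency(D, rank=8, iters=500, eps=1e-9):
            """Factor a partially observed latency matrix D (np.nan = unknown)
            as D ~ W @ H using Lee and Seung's multiplicative updates,
            restricted to the observed entries; W @ H predicts the rest."""
            mask = ~np.isnan(D)
            X = np.where(mask, D, 0.0)                     # observed latencies only
            n, m = D.shape
            W, H = np.random.rand(n, rank), np.random.rand(rank, m)
            for _ in range(iters):
                WH = np.where(mask, W @ H, 0.0)
                H *= (W.T @ X) / (W.T @ WH + eps)          # update H on observed cells
                WH = np.where(mask, W @ H, 0.0)
                W *= (X @ H.T) / (WH @ H.T + eps)          # update W on observed cells
            return W @ H                                   # dense latency estimate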

    Energy-aware scheduling in heterogeneous computing systems

    In the last decade, grid computing systems emerged as useful providers of the computing power required for solving complex problems. The classic formulation of the scheduling problem in heterogeneous computing systems is NP-hard, thus approximation techniques are required for solving real-world scenarios of this problem. This thesis tackles the problem of scheduling tasks in a heterogeneous computing environment in reduced execution times, considering the schedule length and the total energy consumption as the optimization objectives. An efficient multithreading local search algorithm for solving the multi-objective scheduling problem in heterogeneous computing systems, named ME-MLS, is presented. The proposed method follows a fully multi-objective approach, applying a Pareto-based dominance search that is executed in parallel by using several threads. The experimental analysis demonstrates that the new multithreading algorithm outperforms a set of fast and accurate two-phase deterministic heuristics based on the traditional MinMin. The new ME-MLS method is able to achieve significant improvements in both the makespan and energy consumption objectives in reduced execution times for a large set of testbed instances, while exhibiting very good scalability. ME-MLS was evaluated solving instances comprised of up to 2048 tasks and 64 machines. In order to scale the dimension of the problem instances even further and tackle large-sized problem instances, the Graphics Processing Unit (GPU) architecture is considered. This line of future work has been initially tackled with gPALS: a hybrid CPU/GPU local search algorithm for efficiently tackling a single-objective heterogeneous computing scheduling problem. gPALS shows very promising results, being able to tackle instances of up to 32768 tasks and 1024 machines in reasonable execution times.
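
    The Pareto bookkeeping at the heart of such a bi-objective search can be sketched in a few lines of Python; the (makespan, energy) tuple layout and the function names are illustrative stand-ins, not the ME-MLS implementation.

        def dominates(a, b):
            """a and b are (makespan, energy) pairs; both objectives are minimized."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def update_archive(archive, candidate):
            """Keep only mutually non-dominated points: reject the candidate if
            anything in the archive dominates it, otherwise insert it and
            evict every point it dominates."""
            if any(dominates(kept, candidate) for kept in archive):
                return archive
            return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

        # Example: a faster but more energy-hungry schedule joins the archive.
        front = update_archive([(120.0, 3.5)], (100.0, 4.2))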

    Constrained optimization applied to multiscale integrative modeling

    Multiscale integrative modeling stands at the intersection between experimental and computational techniques to predict the atomistic structures of important macromolecules. In the integrative modeling process, the experimental information is often integrated with energy potentials and macromolecular substructures in order to derive realistic structural models. This heterogeneous information is often combined into a global objective function that quantifies the quality of the structural models and that is minimized through optimization. In order to balance the contributions of the relative terms concurring to the global function, weight constants are assigned to each term through a computationally demanding process. To alleviate this common issue, we propose switching from the traditional paradigm of using a single unconstrained global objective function to a constrained optimization scheme. The work presented in this thesis describes the different applications and methods associated with the development of a general constrained optimization protocol for multiscale integrative modeling. The initial implementation concerned the prediction of symmetric macromolecular assemblies through the incorporation of a recent efficient constrained optimizer nicknamed mViE (memetic Viability Evolution) into our integrative modeling protocol power (parallel optimization workbench to enhance resolution). We tested this new approach through rigorous comparisons against other state-of-the-art integrative modeling methods on a benchmark set of solved symmetric macromolecular assemblies. In this process, we validated the robustness of the constrained optimization method by obtaining native-like structural models. This constrained optimization protocol was then applied to predict the structure of the elusive human Huntingtin protein. Because little structural information was available when the project was initiated, we integrated information from secondary structure prediction and low-resolution experiments, in the form of cryo-electron microscopy maps and crosslinking mass spectrometry data, in order to derive a structural model of Huntingtin. The structure resulting from this integrative modeling approach was used to derive dynamic information about the Huntingtin protein. At a finer level of resolution, the constrained optimization protocol was then applied to dock small molecules inside the binding sites of protein targets. We converted the classical molecular docking problem from an unconstrained single-objective optimization to a constrained one by extracting local and global constraints from pre-computed energy grids. The new approach was tested and validated on standard ligand-receptor benchmark sets widely used by the molecular docking community, and showed results comparable to state-of-the-art molecular docking programs. Altogether, the work presented in this thesis proposes improvements in the field of multiscale integrative modeling which are reflected both in the quality of the models returned by the new constrained optimization protocol and in the simpler way of treating the uncorrelated terms concurring to the global scoring scheme used to estimate the quality of the models.
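
    The paradigm shift can be illustrated on a toy problem: the Python sketch below contrasts a hand-weighted penalty formulation with an explicitly constrained one solved by SciPy's SLSQP. The objective, the clash term and the weight are invented stand-ins, and mViE itself is an evolutionary optimizer, not a gradient-based solver.

        import numpy as np
        from scipy.optimize import minimize

        def data_score(x):
            """Toy 'fit-to-experiment' term to be minimized."""
            return float(np.sum((x - 1.0) ** 2))

        def clash(x):
            """Toy restraint-violation term (zero once neighbours are far enough apart)."""
            return float(np.sum(np.maximum(0.0, 0.5 - np.abs(np.diff(x)))))

        # Traditional paradigm: fold the violation term into one objective
        # with a weight w that must be tuned by hand.
        w = 10.0
        penalized = minimize(lambda x: data_score(x) + w * clash(x), x0=np.zeros(4))

        # Constrained paradigm: require clash(x) <= 0 explicitly; no weight needed.
        # (SLSQP expects inequality constraints of the form fun(x) >= 0.)
        constrained = minimize(data_score, x0=np.zeros(4), method="SLSQP",
                               constraints=[{"type": "ineq", "fun": lambda x: -clash(x)}])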

    Analysis of microarray and next generation sequencing data for classification and biomarker discovery in relation to complex diseases

    This thesis presents an investigation into gene expression profiling, using microarray and next generation sequencing (NGS) datasets, in relation to multi-category diseases such as cancer. It has been established that if the sequence of a gene is mutated, it can result in the unscheduled production of protein, leading to cancer. However, identifying the molecular signature of different cancers amongst thousands of genes is complex. This thesis investigates tools that can aid the study of gene expression to infer useful information towards personalised medicine. For microarray data analysis, this study proposes two new techniques to increase the accuracy of cancer classification. In the first method, a novel optimisation algorithm, COA-GA, was developed by synchronising the Cuckoo Optimisation Algorithm and the Genetic Algorithm for data clustering in a shuffle setup, to choose the most informative genes for classification purposes. Support Vector Machine (SVM) and Multilayer Perceptron (MLP) artificial neural networks are utilised for the classification step. Results suggest this method can significantly increase classification accuracy compared to other methods. An additional method involving a two-stage gene selection process was developed. In this method, a subset of the most informative genes is first selected by the Minimum Redundancy Maximum Relevance (MRMR) method. In the second stage, optimisation algorithms are used in a wrapper setup with SVM to minimise the number of selected genes whilst maximising classification accuracy. A comparative performance assessment suggests that the proposed algorithm significantly outperforms other methods at selecting fewer genes that are highly relevant to the cancer type, while maintaining a high classification accuracy. In the case of NGS, a state-of-the-art pipeline for the analysis of RNA-Seq data is investigated to discover differentially expressed genes and differential exon usage between normal and AIP-positive Drosophila datasets, which were produced in house at Queen Mary, University of London. The functional genomics of differentially expressed genes was examined and found to be relevant to the case study under investigation. Finally, after normalising the RNA-Seq data, machine learning approaches similar to those used for microarray data were successfully implemented for these datasets.
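
    A minimal sketch of the two-stage idea, using scikit-learn, with plain mutual information standing in for MRMR and greedy forward selection standing in for the metaheuristic wrapper (both are simplifications of the thesis's methods):

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def two_stage_selection(X, y, filter_k=50, wrapper_k=10):
            """Stage 1: keep the filter_k genes most relevant to the labels.
            Stage 2: greedily grow a subset by cross-validated SVM accuracy."""
            ranked = np.argsort(-mutual_info_classif(X, y))[:filter_k]
            chosen = []
            while len(chosen) < wrapper_k:
                best_gene, best_acc = None, -1.0
                for g in ranked:
                    if g in chosen:
                        continue
                    acc = cross_val_score(SVC(kernel="linear"),
                                          X[:, chosen + [g]], y, cv=5).mean()
                    if acc > best_acc:
                        best_gene, best_acc = g, acc
                chosen.append(best_gene)
            return chosen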

    Development of a computer-based method for reaction-based de novo design of drug-like compounds

    A new method for computer-based de novo design of drug candidate structures is proposed. DOGS (Design of Genuine Structures) features a ligand-based strategy to suggest new molecular structures. The quality of designed compounds is assessed by a graph kernel method measuring the distance of designed molecules to a known reference ligand. Two graph representations of molecules (molecular graph and reduced graph) are implemented to feature different levels of abstraction from the molecular structure. A fully deterministic construction procedure, explicitly designed to facilitate the synthesizability of proposed structures, is realized: DOGS uses readily available synthesis building blocks and established reaction schemes to assemble new molecules. This approach enables the software to propose not only the final compounds, but also suggestions for synthesis routes to generate them at the bench. The set of synthesis schemes comprises about 83 chemical reactions. Special focus was put on ring closure reactions forming drug-like substructures. The library of building blocks consists of about 25,000 readily available synthesis building blocks. DOGS builds up new structures in a stepwise process. Each virtual synthesis step adds a fragment to the growing molecule until a stop criterion (upper threshold for molecular mass or number of synthesis steps) is fulfilled. In a theoretical evaluation, a set of ~1,800 molecules proposed by DOGS is analyzed for critical properties of de novo designed compounds. The software is able to suggest drug-like molecules (79% violate fewer than two of Lipinski's 'rule of five' criteria). In addition, a trained classifier for drug-likeness assigns a score >0.8 to 51% of the designed molecules (with 1.0 being the top score). Most of the DOGS molecules are also deemed synthesizable by a retro-synthesis descriptor (77% of molecules score in the top 10% of the descriptor's value range). Calculated logP(o/w) values of constructed molecules resemble a unimodal distribution centred close to the mean of the logP(o/w) values calculated for the reference compounds. A structural analysis of selected designs reveals that DOGS is capable of constructing molecules reflecting the overall topological arrangement of pharmacophoric features found in the reference ligands. At the same time, the DOGS designs represent innovative compounds that are structurally distinct from the references. Synthesis routes for these examples are short and seem feasible in most cases. Some reaction steps might need modification by using protecting groups to avoid unwanted side reactions. Plausible bioisosteres for known privileged fragments addressing the S1 pocket of trypsin were proposed by DOGS in a case study. Three of them can be found in known trypsin inhibitors as S1-addressing side chains. The software was also tested in two prospective case studies to design bioactive compounds: DOGS was applied to design ligands for human gamma-secretase and the human histamine receptor subtype 4 (hH4R). Two selected designs for gamma-secretase were readily synthesizable, as suggested by the software, in one-step reactions. Both compounds represent inverse modulators of the target molecule. In the second case study, a ligand candidate selected for hH4R was synthesized exactly following the three-step synthesis plan suggested by DOGS. This compound showed low activity on the target structure. The concept of DOGS is able to deliver synthesizable and bioactive compounds. Suggested synthesis plans of selected compounds were readily pursuable. DOGS can therefore serve as a valuable idea generator for the design of new pharmacologically active compounds.
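
    The stepwise construction loop at the core of such a method can be caricatured in Python as follows; every name, the feature-set representation, and the Jaccard similarity stub are invented stand-ins for DOGS's building blocks, reaction schemes and graph-kernel score.

        from dataclasses import dataclass, field

        @dataclass
        class Fragment:
            name: str
            mass: float
            features: frozenset         # crude proxy for pharmacophoric features

        @dataclass
        class Molecule:
            parts: list = field(default_factory=list)

            @property
            def mass(self):
                return sum(p.mass for p in self.parts)

            @property
            def features(self):
                return frozenset().union(*(p.features for p in self.parts)) if self.parts else frozenset()

        def similarity(mol, reference_features):
            """Stub for the graph-kernel score: Jaccard overlap of feature sets."""
            a, b = mol.features, reference_features
            return len(a & b) / len(a | b) if (a | b) else 0.0

        def grow(building_blocks, reference_features, max_mass=500.0, max_steps=6):
            """Deterministic stepwise growth: each virtual synthesis step attaches
            the block that most improves similarity to the reference, until a
            stop criterion (mass or step count) is reached."""
            mol = Molecule()
            for _ in range(max_steps):
                feasible = [b for b in building_blocks if mol.mass + b.mass <= max_mass]
                if not feasible:
                    break
                best = max(feasible,
                           key=lambda b: similarity(Molecule(mol.parts + [b]), reference_features))
                mol.parts.append(best)
            return mol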