
    The interaction network : a performance measurement and evaluation tool for loosely-coupled distributed systems

    Much of today's computing is done on loosely-coupled distributed systems. Performance issues for such systems usually involve interactive performance, that is, system responsiveness as perceived by the user. The goal of the work described in this thesis has been to develop and implement tools and techniques for the measurement and evaluation of interactive performance in loosely-coupled distributed systems. The author has developed the concept of the interaction network, a directed acyclic graph designed to represent the processing performed by a distributed system in response to a user input. The definition of an interaction network is based on a general model of a loosely-coupled distributed system and a general model of user interactions. The author shows that his distributed system model is a valid abstraction for a wide range of present-day systems.

    Performance monitors for traditional time-sharing systems reported performance information, such as overall resource utilisations and queue lengths, for the system as a whole. Performance problems are now much more difficult to diagnose because systems are much more complex. Recent monitors designed specifically for distributed systems have tended to present performance information for the execution of a distributed program, for example the time spent in each of a program's procedures. In the work described in this thesis, performance information is instead reported for one or more user interactions, where a user interaction is defined to be a single user input and all of the processing performed by the system on receiving that input. A user interaction is quite different from a program execution: a user interaction includes the partial or total execution of one or more programs, and a program execution performs work as part of one or more user interactions.

    Several methods are developed to show how performance information can be obtained from the analysis of interaction networks. One valuable type of performance information is a decomposition of response time into the times spent in each of a set of states, where each state might be defined in terms of the hardware and software resources used. Other performance information can be found from displays of interaction networks. The critical path through an interaction network is defined as the set of activities of which at least one must be shortened if the response time of the interaction is to be reduced; the critical path is used both in response time decompositions and in displays of interaction networks.

    It was thought essential to demonstrate that interaction networks could be recorded for a working operating system. INMON, a prototype monitor based on the interaction network concept, has been constructed to operate in the SunOS environment. INMON consists of a data collection component and a data analysis component; building the data collection component involved, for example, adding 53 probes to the SunOS operating system kernel. To record interaction networks, a high-resolution global timebase is needed, and a clock synchronisation program has been written to provide INMON with such a timebase; it is suggested that this method incorporates a number of improvements over other clock synchronisation methods. Several experiments have been performed to show that INMON can produce very detailed performance information for both individual user interactions and groups of user interactions, with user input made through either character-based or graphical interfaces.
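    The thesis's synchronisation method is only summarised above, but the baseline it improves on is worth seeing. The sketch below shows the textbook NTP-style offset estimate computed from one request/response timestamp exchange; it is an illustrative baseline with assumed variable names, not the thesis's improved algorithm.

        # Baseline NTP-style clock-offset estimation from a single
        # request/response exchange (generic textbook method, not INMON's).
        # t0/t3: client send/receive times; t1/t2: server receive/send times.
        def estimate(t0, t1, t2, t3):
            delay = (t3 - t0) - (t2 - t1)          # round trip minus server hold time
            offset = ((t1 - t0) + (t2 - t3)) / 2   # server clock minus client clock
            return offset, delay

        # Keeping the exchange with the smallest delay bounds the offset error
        # by +/- delay/2, the kind of guarantee a global timebase needs.
        print(estimate(t0=10.000, t1=10.504, t2=10.505, t3=10.011))  # ~(0.499, 0.010)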
    The main conclusion reached in this thesis is that representing the processing component of a user interaction as an interaction network is a very valuable way of approaching the problem of measuring interactive performance in a loosely-coupled distributed system. An interaction network contains a very detailed record of the execution of an interaction and, from this record, a great deal of performance (and other) information can be derived. Construction of INMON has demonstrated that interaction networks can be identified, recorded, and analysed.
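    Because the critical path drives both the response time decompositions and the displays, it is worth noting how directly it falls out of the graph structure. The following is a minimal sketch, assuming each activity carries a duration and the happened-before edges are known; the names and the representation are illustrative, not INMON's.

        # Critical-path extraction over an interaction network modelled as a
        # DAG of activities (illustrative representation, not INMON's).
        from collections import defaultdict
        from graphlib import TopologicalSorter

        def critical_path(durations, edges):
            """durations: {activity: time}; edges: (pred, succ) pairs.
            Returns the chain of activities bounding the response time."""
            preds = defaultdict(list)
            for u, v in edges:
                preds[v].append(u)
            finish, back = {}, {}   # earliest finish time and the pred realising it
            for node in TopologicalSorter({v: preds[v] for v in durations}).static_order():
                best = max(preds[node], key=lambda p: finish[p], default=None)
                finish[node] = durations[node] + (finish[best] if best else 0.0)
                back[node] = best
            node, path = max(finish, key=finish.get), []   # last activity to finish
            while node is not None:
                path.append(node)
                node = back[node]
            return list(reversed(path))

        # A keystroke fans out to a server RPC and a local echo, then a screen update.
        durs = {"input": 1.0, "rpc": 8.0, "echo": 2.0, "update": 1.5}
        deps = [("input", "rpc"), ("input", "echo"), ("rpc", "update"), ("echo", "update")]
        print(critical_path(durs, deps))   # ['input', 'rpc', 'update']

    Shortening "echo" here leaves the response time unchanged; only activities on the returned path can reduce it, which is exactly the property the thesis exploits.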

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    Design of Heuristic Algorithms for Hard Optimization

    This open access book demonstrates all the steps required to design heuristic algorithms for difficult optimization problems. The classic travelling salesman problem is used as a common thread to illustrate all the techniques discussed. This problem is ideal for introducing readers to the subject because it is very intuitive and its solutions can be represented graphically. The book features a wealth of illustrations that allow the concepts to be understood at a glance. The book approaches the main metaheuristics from a new angle, deconstructing them into a few key concepts presented in separate chapters: construction, improvement, decomposition, randomization and learning methods. Each metaheuristic can then be presented in simplified form as a combination of these concepts. This approach avoids giving the impression that metaheuristics is a non-formal discipline, a kind of cloud sculpture. Moreover, it provides concrete applications to the travelling salesman problem, which illustrate in just a few lines of code how to design a new heuristic and remove all the ambiguities left by a general framework. Two chapters reviewing the basics of combinatorial optimization and complexity theory make the book self-contained. As such, even readers with a very limited background in the field will be able to follow all the content.
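    In the spirit of the book's "few lines of code" claim, the sketch below combines two of those key concepts on the travelling salesman problem: a construction method (nearest neighbour) followed by an improvement method (2-opt). It is an illustrative sketch assuming Euclidean city coordinates, not code from the book.

        # Construction (nearest neighbour) plus improvement (2-opt) for the TSP.
        import math

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def nearest_neighbour(cities):
            """Construction: greedily visit the closest unvisited city."""
            tour, rest = [0], set(range(1, len(cities)))
            while rest:
                nxt = min(rest, key=lambda j: dist(cities[tour[-1]], cities[j]))
                tour.append(nxt)
                rest.remove(nxt)
            return tour

        def two_opt(tour, cities):
            """Improvement: reverse a segment whenever doing so shortens the tour."""
            improved = True
            while improved:
                improved = False
                for i in range(1, len(tour) - 1):
                    for j in range(i + 1, len(tour)):
                        a, b = tour[i - 1], tour[i]
                        c, d = tour[j], tour[(j + 1) % len(tour)]
                        if (dist(cities[a], cities[c]) + dist(cities[b], cities[d])
                                < dist(cities[a], cities[b]) + dist(cities[c], cities[d])):
                            tour[i:j + 1] = reversed(tour[i:j + 1])
                            improved = True
            return tour

        cities = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 5)]
        print(two_opt(nearest_neighbour(cities), cities))

    Randomizing the starting city or the segment choices, and learning which edges recur in good tours, would bring in the remaining concepts the book covers.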

    A large-scale computational framework for comparative analyses in population genetics and metagenomics

    Population genetics is the study of the spatial and temporal distribution of genetic variants among individuals. Its purpose is to understand evolution: the change in the frequency of alleles over time. The effects of these alleles are expressed on different levels of biological organization, from molecular complexes to entire organisms; eventually, they affect traits that can influence the survival and reproduction of organisms. Fitness is the probability of transferring alleles to subsequent generations through successful survival and reproduction. Due to differential fitness, phenotypic properties that confer beneficial effects on survival and reproduction may become prevalent in a population. Random mutations introduce new alleles into a population; the underlying changes in DNA sequences can be caused by replication errors, failures in DNA repair processes, or the insertion and deletion of transposable elements. In sexual organisms, genetic recombination randomly mixes the alleles on chromosomes, yielding new combinations of alleles without changing the allele frequencies. On the molecular level, mutations at a set of loci may cause a gain or loss of function resulting in entirely different phenotypes, thereby influencing the survival of an organism. Despite the dominance of neutral mutations, the accumulation of small changes over time may affect fitness and further contribute to evolution.

    The goal of this study is to provide a framework for comparative analyses of large-scale genomic datasets, especially of populations within a species, such as the 1001 Genomes Project of Arabidopsis thaliana, the 1000 Genomes Project of humans, or metagenomics datasets. Algorithms have been developed to provide the following features: 1) denoising and improving the effective coverage of raw genomic datasets (Trowel), 2) performing multiple whole genome alignments (WGAs) and detecting small variations in a population (Kairos), 3) identifying structural variants (SVs) (Apollo), and 4) classifying microorganisms in metagenomics datasets (Poseidon). The algorithms do not furnish any interpretation of raw genomic data but provide analyses as a basis for biological hypotheses.

    With the advances in distributed and parallel computing, many modern bioinformatics algorithms have come to utilize multi-core processing on CPUs or GPUs. This increased computational capacity allows us to solve bigger and more complex problems. However, such hardware advances do not by themselves lead to better utilization of large datasets, and they do not answer biological questions. Smart data structures and algorithms are required to exploit the enhanced computing power and to extract high-quality information. For population genetics, an efficient representation of a pan-genome, together with the relevant formulas, must be established. On top of such a representation, sequence alignments play a pivotal role in solving biological problems: they allow one to calculate allele frequencies, detect rare variants, associate genotypes with phenotypes, and infer the causality of certain diseases. To detect mutations in a population, the conventional alignment method is extended so that multiple genomes are aligned simultaneously. The number of complete genome sequences has steadily increased, but the analysis of such large, complex datasets remains challenging.
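    Trowel is only named above, but the family of techniques it belongs to, k-mer-spectrum error correction, is compact enough to sketch. In the illustration below, a k-mer observed at least SOLID times across the library is trusted, and a base is rewritten only when a single substitution turns every weak k-mer covering it into a solid one. The values of K and SOLID, and all names, are assumptions for the example, not Trowel's actual parameters or interface.

        # K-mer-spectrum error correction (the general idea, not Trowel itself).
        from collections import Counter

        K, SOLID = 5, 3   # illustrative k-mer size and coverage threshold

        def kmer_counts(reads):
            counts = Counter()
            for r in reads:
                for i in range(len(r) - K + 1):
                    counts[r[i:i + K]] += 1
            return counts

        def correct(read, counts):
            """Try single-base substitutions that make all covering k-mers solid."""
            read = list(read)
            for i in range(len(read)):
                spans = [(s, s + K) for s in range(max(0, i - K + 1), i + 1)
                         if s + K <= len(read)]
                if all(counts["".join(read[s:e])] >= SOLID for s, e in spans):
                    continue   # every k-mer covering this base is already solid
                for base in "ACGT":
                    old, read[i] = read[i], base
                    if all(counts["".join(read[s:e])] >= SOLID for s, e in spans):
                        break   # this substitution repairs all covering k-mers
                    read[i] = old
            return "".join(read)

        reads = ["ACGTACGTAC"] * 5 + ["ACGTACCTAC"]   # the last read has one error
        counts = kmer_counts(reads)
        print(correct("ACGTACCTAC", counts))           # -> ACGTACGTAC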
    Next Generation Sequencing (NGS) technology is considered one of the great advances in modern biology and has led to a dramatically more precise and detailed understanding of genomes and their activities. The contiguity and accuracy of sequencing reads have been improving, so that a complete genome sequence of a single cell may become obtainable from a sequencing library in the future. Though chemical and optical engineering are the main drivers of advances in sequencing technology, informatics and computer engineering have significantly influenced the quality of sequences. Genomic sequencing data contain errors in the form of substitutions, insertions, and deletions of nucleotides, and read lengths are far shorter than the genome under study. These problems can be alleviated by means of error correction and genome assembly, leading to more accurate downstream analyses.

    Short read aligners have been the key ingredient for measuring and observing genetic mutations using Illumina sequencing technology, the dominant technology of the last decade. As long reads from newer methods, or assembled contigs, become accessible, mapping schemes that capture long-range context rather than lingering on local matches must be devised. Parameters for short read aligners, such as the number of mismatches and the gap-opening and gap-extension penalties, are not directly applicable to long read alignment. At the other end of the spectrum, whole genome alignment (WGA) algorithms attempt to solve the alignment problem in a much longer context, providing essential data for comparative studies. However, available WGA algorithms are not yet optimized for practical use in population genetics because of their high computational demands. Moreover, too little attention has been paid to defining an ideal data format for applications in comparative genomics.

    To deal with datasets representing a large population of diverse individuals, multiple sequence alignment (MSA) algorithms should be combined with WGA methods, an approach known as multiple whole genome alignment (MWGA). Though several MWGA algorithms have been proposed, their accuracy has not been clearly measured. In fact, the known quality assessment tools have yielded highly fluctuating results depending on the selection of organisms and sequencing profiles. Of even more serious concern, the experiments used to measure the performance of MWGA methods have been described only ambiguously, which has made the multiple alignment results difficult to interpret. Using the precise locations of variants known from simulations, together with standardized statistics, I present a far more comprehensive method to measure the accuracy of an MWGA algorithm.
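    The evaluation idea stated above reduces to standard retrieval statistics once the simulator's planted variants are taken as ground truth. Below is a minimal sketch, assuming variant sites are reduced to (chromosome, position) pairs; it illustrates the principle, not the thesis's actual evaluation pipeline.

        # Simulation-based accuracy measurement for variant calls: planted
        # variants are ground truth, the tool's calls are predictions.
        def score(truth, called, tolerance=0):
            """truth/called: sets of (chrom, pos); a call matches a true site
            on the same chromosome within +/- tolerance bp."""
            def matched(site, pool):
                c, p = site
                return any(c == c2 and abs(p - p2) <= tolerance for c2, p2 in pool)
            tp = sum(matched(s, truth) for s in called)
            precision = tp / len(called) if called else 0.0
            recall = (sum(matched(s, called) for s in truth) / len(truth)
                      if truth else 0.0)
            f1 = (2 * precision * recall / (precision + recall)
                  if precision + recall else 0.0)
            return precision, recall, f1

        truth = {("chr1", 100), ("chr1", 250), ("chr2", 40)}
        called = {("chr1", 101), ("chr1", 250), ("chr2", 900)}
        print(score(truth, called, tolerance=1))   # ~(0.667, 0.667, 0.667)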
    Metagenomics is the study of the genetic composition of a given community, which is often predominantly microbial. It overcomes the limitation of having to culture each organism for genome sequencing and also provides quantitative information on the composition of a community. Though an environmental sample provides more natural genetic material, the complexity of the analyses is greatly increased: the number of species can be very large, and only small portions of each genome may be sampled. I provide an algorithm, Poseidon, that classifies sequencing reads to taxonomy identifiers at species resolution and helps quantify their relative abundances in the samples. The interactions among individual bacteria in a population can result in both conflict and cooperation, so a mixture of diverse bacterial species shows a set of functional adaptations to a particular environment. The composition of species can be changed by distinct biotic or abiotic factors, which may in turn alter the susceptibility of a host to a certain disease. The basic concerns of a metagenomics study are therefore the accurate quantification of species and the deciphering of their functional roles in a given environment.

    In summary, this work presents advanced bioinformatics methods: Trowel, Kairos, Apollo, and Poseidon. Trowel corrects sequencing errors in reads by utilizing high-quality k-mer information. Kairos aligns query sequences against multiple genomes in a population of a single species. Apollo characterizes genome-wide genetic variants, from point mutations to large structural variants, on top of the alignments of Kairos. Poseidon classifies metagenomics datasets to taxonomy identifiers. Though the work does not directly address any specific biological questions, it provides the preliminary material for further downstream analyses.
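    Poseidon's internals are not described above, but the general pattern of k-mer-based taxonomic classification that such tools follow can be sketched: reference k-mers are indexed by taxon, ambiguous k-mers are discarded (a crude stand-in for resolving them to a common ancestor), and each read is assigned the taxon with the most hits. All names, the value of K, and the toy references below are illustrative assumptions, not Poseidon's design.

        # K-mer-based read classification in the general style of metagenomic
        # classifiers (illustrative sketch, not Poseidon's actual algorithm).
        from collections import Counter

        K = 8

        def build_index(references):
            """references: {taxon_id: genome} -> {k-mer: taxon_id}, dropping
            k-mers shared by more than one taxon."""
            index, ambiguous = {}, set()
            for taxon, seq in references.items():
                for i in range(len(seq) - K + 1):
                    kmer = seq[i:i + K]
                    if index.get(kmer, taxon) != taxon:
                        ambiguous.add(kmer)
                    index[kmer] = taxon
            return {k: t for k, t in index.items() if k not in ambiguous}

        def classify(read, index):
            """Assign the taxon with the most k-mer hits, or None."""
            hits = Counter(index[read[i:i + K]]
                           for i in range(len(read) - K + 1)
                           if read[i:i + K] in index)
            return hits.most_common(1)[0][0] if hits else None

        refs = {9606: "ACGTACGTGGCCAATTGGCC", 562: "TTGACCATGCATGCATGGAA"}
        index = build_index(refs)
        print(classify("CATGCATGCATGG", index))   # -> 562

    Counting how many reads land on each taxon then yields the relative abundances mentioned in the abstract.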

    19th SC@RUG 2022 proceedings 2021-2022
