211 research outputs found

    Algorithmen fĂŒr Packprobleme (Algorithms for Packing Problems)

    Packing problems are among the most frequently studied problems in combinatorial optimization. Typically, the task is to pack a set of small objects into a large container. These kinds of problems, though easy to state, are usually hard to solve. An additional challenge arises if the set of objects is not completely known beforehand, meaning that an object has to be packed before the next one becomes available. These problems are called online problems. If the set of objects is completely known, they are called offline problems. In this work, we study two online and one offline packing problem. We present algorithms that compute either an optimal or a provably good solution.

    Maintaining Arrays of Contiguous Objects. The problem of maintaining a set of contiguous objects (blocks) inside an array is closely related to storage allocation. Blocks are inserted into the array, stay there for some (unknown) duration, and are then removed from the array. After inserting a block, the next block becomes available. Blocks can be moved inside the array to create free space for further insertions. Our goals are to minimize the time until the last block is removed from the array (the makespan) and the costs for the block moves. We present inapproximability results, an algorithm that achieves an optimal makespan, and an algorithm that uses only O(1) block moves per insertion and deletion, and we provide computational experiments.

    Online Square Packing. In the classical online strip packing problem, one has to find a non-overlapping placement for a set of objects (squares in our setting) inside a semi-infinite strip, minimizing the height of the occupied area. We study this problem under two additional constraints: each square has to be packed on top of another square or on the bottom of the strip, and there has to be a collision-free path from the top of the strip to the square's final position. We present two algorithms that achieve asymptotic competitive factors of 3.5 and 2.6154, respectively.

    Point Sets with Minimum Average Distance. A grid point is a point in the plane with integer coordinates. We present an algorithm that selects a set of grid points (a town) such that the average L1 distance between all pairs of points is minimized. Moreover, we consider the problem of choosing point sets (cities) inside a given square such that, again, the interior distances are minimized. We present a 5.3827-approximation algorithm for this problem.
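
    To make the last objective concrete: for a chosen set of grid points, the quantity being minimized is the average L1 (Manhattan) distance over all pairs of selected points. The following Python sketch is purely illustrative (the function name, the brute-force evaluation, and the example point sets are assumptions, not taken from the thesis); it merely evaluates the objective and shows that a compact, roughly square town scores better than the same number of points stretched along a line.

        from itertools import combinations

        def average_l1_distance(points):
            """Average Manhattan distance over all unordered pairs of grid points."""
            pairs = list(combinations(points, 2))
            if not pairs:
                return 0.0
            total = sum(abs(x1 - x2) + abs(y1 - y2)
                        for (x1, y1), (x2, y2) in pairs)
            return total / len(pairs)

        # A compact, roughly square "town" of 5 grid points ...
        compact = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]
        # ... beats the same number of points stretched along a line.
        stretched = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
        print(average_l1_distance(compact))    # 1.6
        print(average_l1_distance(stretched))  # 2.0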

    Chromosome Descrambling Order Analysis in ciliates

    Ciliates are a type of unicellular eukaryotic organism that has two types of nuclei within each cell: one is called the macronucleus (MAC) and the other is known as the micronucleus (MIC). During mating, ciliates exchange their MIC, destroy their own MAC, and create a new MAC from the genetic material of their new MIC. The process of developing a new MAC from the exchanged new MIC is known as gene assembly in ciliates, and it consists of a massive amount of DNA excision from the micronucleus and the rearrangement of the remaining DNA sequences. During the gene assembly process, the DNA segments that get eliminated are known as internal eliminated segments (IESs), and the remaining DNA segments, which are rearranged into an order that is correct for creating proteins, are called macronuclear destined segments (MDSs).

    A topic of interest is to predict the correct order in which to descramble a gene or chromosomal segment. A prediction can be made based on the principle of parsimony, whereby the shortest sequence of operations is likely close to the actual sequence of operations that occurred. Interestingly, the order of MDSs in the newly assembled 22,354 Oxytricha trifallax MIC chromosome fragments provides evidence that multiple parallel recombinations occur, where the structure of the chromosomes allows for interleaving between two sections of the developing macronuclear chromosome in a manner that can be captured with a common string operation called the shuffle operation (the shuffle operation on two strings results in a new string by weaving the two together while preserving the order within each string). Thus, we studied four similar systems involving applications of shuffle to see how the minimum number of operations needed to assemble differs between them. Two algorithms, one for each of the first two systems, have been implemented, and both are shown to be optimal. For the third and fourth systems, four and two heuristic algorithms, respectively, have been implemented. The results from these algorithms revealed that, in most cases, the third system gives the minimum number of applications of shuffle to descramble, but whether the best implemented algorithm for the third system is optimal remains an open question. The best implemented algorithm for the third system showed that 96.63% of the scrambled micronuclear chromosome fragments of Oxytricha trifallax can be descrambled by only 1 or 2 applications of shuffle. This small number of steps lends theoretical evidence that some structural component is enforcing an alignment of segments in a shuffle-like fashion, and that parallel recombination is then taking place to enable MDS rearrangement and IES elimination.

    Another problem of interest is to classify segments of the MIC into MDSs and IESs; this is the second topic of the thesis, and it is a matter of determining the right "class label", i.e. MDS or IES, for each nucleotide. To this end, training data of labelled input sequences was used with hidden Markov models (HMMs), a well-known supervised machine learning approach to classification. HMMs of first, second, third, fourth, and fifth order have been implemented. The accuracy of the classification was verified through 10-fold cross validation. Results from this work show that an HMM is likely to fail to accurately classify micronuclear chromosomes without some additional knowledge.
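
    As a concrete illustration of the shuffle operation defined above, the following Python sketch checks whether a string is a shuffle (interleaving) of two given strings using a standard dynamic program. It is illustrative only; the function name and example strings are assumptions, and this is not one of the descrambling algorithms implemented in the thesis.

        def is_shuffle(x, y, z):
            """True if z can be obtained by weaving together x and y while
            preserving the left-to-right order within x and within y."""
            if len(z) != len(x) + len(y):
                return False
            # dp[i][j]: z[:i+j] is a shuffle of x[:i] and y[:j]
            dp = [[False] * (len(y) + 1) for _ in range(len(x) + 1)]
            dp[0][0] = True
            for i in range(len(x) + 1):
                for j in range(len(y) + 1):
                    if i > 0 and dp[i - 1][j] and x[i - 1] == z[i + j - 1]:
                        dp[i][j] = True
                    if j > 0 and dp[i][j - 1] and y[j - 1] == z[i + j - 1]:
                        dp[i][j] = True
            return dp[len(x)][len(y)]

        # Two blocks of segments, "ACE" and "BDF", woven into "ABCDEF":
        print(is_shuffle("ACE", "BDF", "ABCDEF"))  # True
        print(is_shuffle("ACE", "BDF", "AEBCDF"))  # False (order within "ACE" broken)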

    Computational Molecular Biology

    Computational Biology is a fairly new subject that arose in response to the computational problems posed by the analysis and processing of biomolecular sequence and structure data. The field was initiated in the late 1960s and early 1970s, largely by pioneers working in the life sciences. Physicists and mathematicians entered the field in the 1970s and 1980s, while computer science became involved with the new biological problems in the late 1980s. Computational problems have gained further importance in molecular biology through the various genome projects, which produce enormous amounts of data. For this bibliography we focus on those areas of computational molecular biology that involve discrete algorithms or discrete optimization. We thus neglect several other areas of computational molecular biology, such as most of the literature on the protein folding problem, databases for molecular and genetic data, and genetic mapping algorithms, due to the availability of review papers and a bibliography covering them.

    Accelerating Event Stream Processing in On- and Offline Systems

    Due to a growing number of data producers and their ever-increasing data volume, the ability to ingest, analyze, and store potentially never-ending streams of data is a mission-critical task in today's data processing landscape. A widespread form of data streams are event streams, which consist of continuously arriving notifications about real-world phenomena. For example, a temperature sensor naturally generates an event stream by periodically measuring the temperature and reporting it, together with the measurement time, whenever it changes substantially from the previous measurement.

    In this thesis, we consider two kinds of event stream processing: online and offline. Online refers to processing events solely in main memory as soon as they arrive, while offline means processing event data previously persisted to non-volatile storage. Both modes are supported by widely used scale-out general-purpose stream processing engines (SPEs) like Apache Flink or Spark Streaming. However, such engines suffer from two significant deficiencies that severely limit their processing performance. First, for offline processing, they load the entire stream from non-volatile secondary storage and replay all data items into the associated online engine in order of their original arrival. While this naturally ensures unified query semantics for on- and offline processing, the costs for reading the entire stream from non-volatile storage quickly dominate the overall processing costs. Second, modern SPEs focus on scaling out computations across the nodes of a cluster, but use only a fraction of the available resources of individual nodes.

    This thesis tackles these problems with three different approaches. First, we present novel techniques for the offline processing of two important query types (windowed aggregation and sequential pattern matching). Our methods utilize well-understood indexing techniques to reduce the total amount of data read from non-volatile storage. We show that this improves the overall query runtime significantly. In particular, this thesis develops the first index-based algorithms for pattern queries expressed with the Match_Recognize clause, a new and powerful SQL language feature that has received little attention so far. Second, we show how to maximize the resource utilization of single nodes by exploiting the capabilities of modern hardware. To this end, we develop a prototypical shared-memory CPU-GPU-enabled event processing system. The system provides implementations of all major event processing operators (filtering, windowed aggregation, windowed join, and sequential pattern matching). Our experiments reveal that, regarding resource utilization and processing throughput, such a hardware-enabled system is superior to hardware-agnostic general-purpose engines. Finally, we present TPStream, a new operator for pattern matching over temporal intervals. TPStream achieves low processing latency and, in contrast to sequential pattern matching, is easily parallelizable even for unpartitioned input streams. This results in maximized resource utilization, especially for modern CPUs with multiple cores.
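
    To relate the windowed aggregation operator to the temperature-sensor example from the beginning of the abstract, the following Python sketch computes per-window averages over a tumbling (non-overlapping) window. The event layout, window size, and function names are assumptions made for illustration; they are not taken from the thesis or from any particular stream processing engine.

        from collections import namedtuple

        Event = namedtuple("Event", ["timestamp", "temperature"])  # seconds, degrees Celsius

        def tumbling_window_avg(events, window_size):
            """Group timestamp-ordered events into consecutive, non-overlapping
            windows of `window_size` seconds and emit (window_start, average)."""
            current_start, values = None, []
            for ev in events:
                start = (ev.timestamp // window_size) * window_size
                if current_start is None:
                    current_start = start
                if start != current_start:      # window boundary crossed: emit and reset
                    yield current_start, sum(values) / len(values)
                    current_start, values = start, []
                values.append(ev.temperature)
            if values:                          # flush the last (possibly partial) window
                yield current_start, sum(values) / len(values)

        stream = [Event(1, 20.1), Event(4, 20.9), Event(11, 22.4), Event(13, 21.6)]
        print(list(tumbling_window_avg(stream, window_size=10)))
        # [(0, 20.5), (10, 22.0)]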

    Efficient bulk-loading methods for temporal and multidimensional index structures

    Nearly all fields of the natural sciences benefit from state-of-the-art methods for analyzing and processing large amounts of data. These methods require efficient processing of spatial and temporal data, since time and position are important attributes of many data sets. Efficient query processing is enabled in particular by the use of index structures. This work focuses on two index structures: the multiversion B-tree (MVBT) and the R-tree. The first is used for managing temporal data, the second for indexing multidimensional rectangle data. Continuously and rapidly growing data volumes pose a major challenge for computer science. Building and updating indexes with conventional methods (record by record) is no longer efficient. To enable timely and cost-efficient data processing, methods for fast bulk loading of index structures are urgently needed.

    In the first part of this work we address the question of whether a loading method for the MVBT exists that has the same I/O complexity as external sorting. Until now this question had remained unanswered. We develop a new construction method and show that it has the same I/O complexity as external sorting, employing two algorithmic techniques: weight balancing and buffer trees. Our experiments show that this result is not only of theoretical interest.

    In the second part we investigate whether and how statistical information about spatial queries can be exploited to improve the query performance of R-trees. Our new method uses information such as the aspect ratio and side lengths of a representative query rectangle to build a good R-tree with respect to a commonly used cost model. If this information is not available, we optimize R-trees with respect to the sum of the volumes of the leaf nodes' minimum bounding rectangles. Since the problem of building optimal R-trees with respect to this cost measure is NP-hard, we first reduce it to a one-dimensional partitioning problem by sorting the data along optimized space-filling curves, and then solve this problem using dynamic programming. The I/O complexity of the method equals that of external sorting, since its I/O running time is dominated by the sorting step.

    In the last part of this work we apply the developed partitioning methods to the construction of spatial histograms, since these, similar to R-trees, induce a disjoint partitioning of the space. Results of extensive experiments show that the new partitioning techniques yield both R-trees with better query performance and spatial histograms with better estimation quality than competing methods.
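
    The reduction sketched in the second part, sorting the data along a space-filling curve and then cutting the sorted sequence into pages, can be illustrated with the simpler, classical sort-based loading scheme below. This Python sketch uses plain Morton (Z-order) keys, rectangle centers, and a fixed leaf capacity as assumptions; the optimized curves, the cost model, and the dynamic-programming partitioning developed in the thesis are not reproduced here.

        def interleave_bits(x, y, bits=16):
            """Morton (Z-order) key: interleave the bits of the two coordinates."""
            key = 0
            for i in range(bits):
                key |= ((x >> i) & 1) << (2 * i)
                key |= ((y >> i) & 1) << (2 * i + 1)
            return key

        def bulk_load_leaves(rects, capacity):
            """Sort rectangles (x1, y1, x2, y2) by the Morton key of their center
            and cut the sorted sequence into leaf pages of at most `capacity` entries."""
            def center_key(r):
                cx, cy = (r[0] + r[2]) // 2, (r[1] + r[3]) // 2
                return interleave_bits(cx, cy)
            ordered = sorted(rects, key=center_key)
            return [ordered[i:i + capacity] for i in range(0, len(ordered), capacity)]

        rects = [(0, 0, 2, 2), (10, 10, 12, 12), (1, 1, 3, 3), (11, 9, 13, 11)]
        print(bulk_load_leaves(rects, capacity=2))
        # nearby rectangles end up in the same leaf page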

    Evolution of whole genomes through inversions: models and algorithms for duplicates, ancestors, and edit scenarios

    Advances in sequencing technology are yielding DNA sequence data at an alarming rate – a rate reminiscent of Moore's law. Biologists' abilities to analyze this data, however, have not kept pace. On the other hand, the discrete and mechanical nature of the cell life cycle has been tantalizing to computer scientists. Thus in the 1980s, pioneers of the field now called Computational Biology began to uncover a wealth of computer science problems, some confronting modern biologists and some hidden in the annals of the biological literature. In particular, many interesting twists were introduced to classical string matching, sorting, and graph problems.

    One such problem, first posed in 1941 but rediscovered in the early 1980s, is that of sorting by inversions (also called reversals): given two permutations, find the minimum number of inversions required to transform one into the other, where an inversion reverses the order of a subpermutation. Indeed, many genomes have evolved mostly or only through inversions. Thus it becomes possible to trace evolutionary histories by inferring sequences of such inversions that led to today's genomes from a distant common ancestor. But unlike the classic edit distance problem, where string editing was relatively simple, editing permutations in this way has proved to be more complex.

    In this dissertation, we extend the theory so as to make these edit distances more broadly applicable and faster to compute, and work towards more powerful tools that can accurately infer evolutionary histories. In particular, we present work that for the first time considers genomic distances between any pair of genomes, with no limitation on the number of occurrences of a gene. Next we show that there are conditions under which an ancestral genome (or one close to the true ancestor) can be reliably reconstructed. Finally we present new methodology that computes a minimum-length sequence of inversions to transform one permutation into another in, on average, O(n log n) steps, whereas the best worst-case algorithm to compute such a sequence uses O(n√n log n) steps.
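
    For small permutations, the sorting-by-inversions distance described above can be computed by brute force, which makes the problem statement concrete. The Python sketch below performs a breadth-first search over all reversals of contiguous blocks; it is illustrative only and, being exponential, is unrelated to the thesis's O(n log n) average-case method. Function and variable names are assumptions.

        from collections import deque

        def inversion_distance(source, target):
            """Minimum number of inversions (reversals of a contiguous block)
            needed to transform `source` into `target`, found by brute-force BFS."""
            source, target = tuple(source), tuple(target)
            seen = {source}
            queue = deque([(source, 0)])
            while queue:
                perm, dist = queue.popleft()
                if perm == target:
                    return dist
                for i in range(len(perm)):
                    for j in range(i + 2, len(perm) + 1):   # reverse the block perm[i:j]
                        nxt = perm[:i] + perm[i:j][::-1] + perm[j:]
                        if nxt not in seen:
                            seen.add(nxt)
                            queue.append((nxt, dist + 1))
            return None  # target is not a permutation of source

        print(inversion_distance([3, 1, 2, 4], [1, 2, 3, 4]))  # 2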
    • 

    corecore