
    Steps toward accurate reconstructions of phylogenies from gene-order data

    We report on our progress in reconstructing phylogenies from gene-order data. We have developed polynomial-time methods for estimating genomic distances that greatly improve the accuracy of trees obtained using the popular neighbor-joining method; we have also further improved the running time of our GRAPPA software suite through a combination of tighter bounding and better use of the bounds. We present new experimental results (extending those we presented at ISMB'01 and WABI'01) that demonstrate the accuracy and robustness of our distance estimators under a wide range of model conditions. Moreover, using the best of our distance estimators (EDE) in our GRAPPA software suite, along with more sophisticated bounding techniques, produced spectacular improvements over the already huge speedup: whereas our earlier experiments showed a one-million-fold speedup (when run on a 512-processor cluster), our latest experiments demonstrate a one-hundred-million-fold speedup. The combination of these advances enabled us to conduct new phylogenetic analyses of a subset of the Campanulaceae family, confirming various conjectures about the relationships among members of the subset and confirming that inversion can be viewed as the principal mechanism of evolution for their chloroplast genome. We give representative results of the extensive experimentation we conducted on both real and simulated datasets in order to validate and characterize our approaches.
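
    As a rough, hypothetical illustration of the kind of pairwise distance that can feed a neighbor-joining analysis, the sketch below computes the classical breakpoint distance between two signed circular gene orders; it is a simple baseline measure, not the EDE estimator described in the abstract.

        # Breakpoint distance between two signed circular gene orders.
        # A hypothetical baseline measure; the EDE estimator described in the
        # abstract is a different, model-corrected distance.

        def adjacencies(order):
            """Return the set of signed adjacencies of a circular gene order."""
            adj = set()
            n = len(order)
            for i in range(n):
                a, b = order[i], order[(i + 1) % n]
                # Treat an adjacency and its reverse complement as identical.
                adj.add(min((a, b), (-b, -a)))
            return adj

        def breakpoint_distance(order1, order2):
            """Number of adjacencies of order1 that are absent from order2."""
            return len(adjacencies(order1) - adjacencies(order2))

        # Example: two circular genomes on genes 1..5 (sign = strand).
        g1 = [1, 2, 3, 4, 5]
        g2 = [1, -3, -2, 4, 5]              # g1 with the segment (2, 3) inverted
        print(breakpoint_distance(g1, g2))  # 2 breakpoints created by one inversion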

    Rec-DCM-Eigen: Reconstructing a Less Parsimonious but More Accurate Tree in Shorter Time

    Maximum parsimony (MP) methods aim to reconstruct the phylogeny of extant species by finding the most parsimonious evolutionary scenario using the species' genome data. MP methods are considered to be accurate, but they are also computationally expensive, especially for large numbers of species. Several disk-covering methods (DCMs), which decompose the input set of species into multiple overlapping subgroups (or disks), have been proposed to solve the problem in a divide-and-conquer way.

    Models and Algorithms for Sorting Permutations with Tandem Duplication and Random Loss

    A central topic of evolutionary biology is the inference of phylogeny, i.e., the evolutionary history of species. A powerful tool for the inference of such phylogenetic relationships is the arrangement of the genes in mitochondrial genomes. The rationale is that these gene arrangements are subject to different types of mutations in the course of evolution. Hence, a high similarity in the gene arrangement between two species indicates a close evolutionary relation. Metazoan mitochondrial gene arrangements are particularly well suited for such phylogenetic studies as they are available for a wide range of species, their gene content is almost invariant, and they are usually free of duplicates. With these properties, gene arrangements of mitochondrial genomes are modeled by permutations in which each element represents a gene, i.e., a specific genetic sequence. The mutations that shape the gene arrangement of genomes are then represented by operations that rearrange elements in permutations, so-called genome rearrangements, and thereby bridge the gap between evolutionary biology and optimization. Many problems of phylogeny inference can be formulated as challenging combinatorial optimization problems, which makes this research area especially interesting for computer scientists. The most prominent examples of such optimization problems are the sorting problem and the distance problem. While the sorting problem asks for a minimum-length sequence of rearrangements that transforms one given permutation into another given permutation, i.e., it aims for a hypothetical scenario of gene order evolution, the distance problem intends to determine only the length of such a sequence. This minimum length is called the distance and is used as a (dis)similarity measure quantifying evolutionary relatedness.

    Most evolutionary changes occurring in gene arrangements of mitochondrial genomes can be explained by the tandem duplication random loss (TDRL) genome rearrangement model. A TDRL consists of a duplication of a consecutive set of genes in tandem followed by a random loss of one copy of each duplicated gene. In spite of the importance of the TDRL rearrangement in mitochondrial evolution, its combinatorial properties have rarely been studied. In addition, models of genome rearrangements which include all types of rearrangement that are relevant for mitochondrial genomes, i.e., inversions, transpositions, inverse transpositions, and TDRLs, while admitting computational tractability, are rare. Nevertheless, especially for metazoan gene arrangements the TDRL rearrangement should be considered for the reconstruction of phylogeny. Realizing that a better understanding of the TDRL model is indispensable for the study of mitochondrial gene arrangements, the central theme of this thesis is to broaden the horizon of TDRL genome rearrangements with respect to mitochondrial genome evolution. For this purpose, this thesis provides combinatorial properties of the TDRL model and its variants as well as efficient methods for a plausible reconstruction of rearrangement scenarios between gene arrangements. The methods that are proposed consider all types of genome rearrangements that predominantly occur during mitochondrial evolution.

    More precisely, the main points contained in this thesis are as follows. The distance problem and the sorting problem for the TDRL model are further examined with respect to circular permutations, a formal concept that reflects the circular structure of mitochondrial genomes. As a result, a closed formula for the distance is provided. Recently, evidence for a variant of the TDRL rearrangement model in which the duplicated set of genes is additionally inverted has been found. Initiating the algorithmic study of this new rearrangement model on a certain type of permutations, a closed formula solving the distance problem is proposed, as well as a quasilinear-time algorithm that solves the corresponding sorting problem.

    The assumption that only one type of genome rearrangement has occurred during the evolution of certain gene arrangements is most likely unrealistic; e.g., at least three types of rearrangements on top of the TDRL rearrangement have to be considered for the evolution of metazoan mitochondrial genomes. Therefore, three different biologically motivated constraints are taken into account in this thesis in order to produce plausible evolutionary rearrangement scenarios. The first constraint extends the considered set of genome rearrangements to the model that covers all four common types of mitochondrial genome rearrangements. For this 4-type model a sharp lower bound and several close additive upper bounds on the distance are developed. As a byproduct, a polynomial-time approximation algorithm for the corresponding sorting problem is provided that guarantees the computation of pairwise rearrangement scenarios that deviate from a minimum-length scenario by at most two rearrangement operations. The second biologically motivated constraint is the relative frequency of the different types of rearrangements occurring during the evolution. The frequency is modeled by employing a weighting scheme on the 4-type model in which every rearrangement is weighted with respect to its type. The resulting NP-hard sorting problem is then solved by means of a polynomial-size integer linear program. The third biologically motivated constraint is that certain subsets of genes are often found in close proximity in the gene arrangements of many different species. This observation is reflected by demanding that rearrangement scenarios preserve certain groups of genes, which are modeled by common intervals of permutations. In order to solve the sorting problem that considers all three types of biologically motivated constraints, the exact dynamic programming algorithm CREx2 is proposed. CREx2 has a linear runtime for a large class of problem instances. Otherwise, two versions of CREx2 are provided: the first version provides exact solutions but has an exponential runtime in the worst case, and the second version provides approximate solutions efficiently. CREx2 is evaluated in an empirical study on simulated artificial and real biological mitochondrial gene arrangements.
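
    To make the TDRL operation concrete, here is a minimal sketch (with hypothetical helper names, not code from the thesis or CREx2) that applies one tandem-duplication-random-loss step to a gene order: a contiguous block is duplicated in tandem and then, for each duplicated gene, exactly one of the two copies is kept.

        import random

        def tdrl(perm, start, end, keep_first=None, rng=random):
            """Apply one tandem duplication random loss (TDRL) step.

            The block perm[start:end] is duplicated in tandem; afterwards, for each
            duplicated gene exactly one of its two copies is kept.  keep_first may be
            a set of genes whose first copy survives (the rest keep the second copy);
            if it is None the choice is made uniformly at random.
            """
            block = perm[start:end]
            if keep_first is None:
                keep_first = {g for g in block if rng.random() < 0.5}
            first = [g for g in block if g in keep_first]       # survivors of copy 1
            second = [g for g in block if g not in keep_first]  # survivors of copy 2
            return perm[:start] + first + second + perm[end:]

        # Example: duplicate the whole gene order and keep alternating copies.
        print(tdrl([1, 2, 3, 4, 5, 6], 0, 6, keep_first={1, 3, 5}))
        # -> [1, 3, 5, 2, 4, 6]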

    Doctor of Philosophy

    We are living in an age where data are being generated faster than anyone has previously imagined, across a broad range of application domains including customer studies, social media, sensor networks, and the sciences, among many others. In some cases, data are generated in massive quantities, as terabytes or petabytes. There have been numerous emerging challenges when dealing with massive data, including: (1) the explosion in the size of data; (2) data have increasingly complex structures and rich semantics, such as representing temporal data as a piecewise linear representation; (3) uncertain data are becoming a common occurrence for numerous applications, e.g., scientific measurements or observations such as meteorological measurements; and (4) data are becoming increasingly distributed, e.g., data collected and integrated from distributed locations as well as data stored in a distributed file system within a cluster. Due to the massive nature of modern data, it is oftentimes infeasible for computers to efficiently manage and query them exactly. An attractive alternative is to use data summarization techniques to construct data summaries, where even efficiently constructing data summaries is a challenging task given the enormous size of the data. The data summaries we focus on in this thesis include the histogram and the ranking operator. Both data summaries enable us to summarize a massive dataset into a more succinct representation which can then be used to make queries orders of magnitude more efficient while still allowing approximation guarantees on query answers. Our study has focused on the critical task of designing efficient algorithms to summarize, query, and manage massive data.
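
    As a toy illustration of the kind of data summary discussed above (an assumption for exposition, not the dissertation's own constructions), the sketch below builds an equi-width histogram over a numeric dataset and uses it to answer approximate range-count queries.

        import random

        class EquiWidthHistogram:
            """Equi-width histogram summarizing a numeric column for approximate
            range-count queries (a toy summary, far simpler than the structures
            studied in the dissertation)."""

            def __init__(self, values, num_buckets=64):
                self.lo, self.hi = min(values), max(values)
                self.width = (self.hi - self.lo) / num_buckets or 1.0
                self.counts = [0] * num_buckets
                for v in values:
                    i = min(int((v - self.lo) / self.width), num_buckets - 1)
                    self.counts[i] += 1

            def range_count(self, a, b):
                """Approximate |{v : a <= v <= b}| assuming uniformity within buckets."""
                total = 0.0
                for i, c in enumerate(self.counts):
                    b_lo = self.lo + i * self.width
                    b_hi = b_lo + self.width
                    overlap = max(0.0, min(b, b_hi) - max(a, b_lo))
                    total += c * overlap / self.width
                return total

        data = [random.gauss(0, 1) for _ in range(100_000)]
        h = EquiWidthHistogram(data)
        print(round(h.range_count(-1, 1)))  # roughly 68% of the data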

    Matheuristics: survey and synthesis

    In integer programming and combinatorial optimisation, the term matheuristics refers to methods that are heuristic in nature but draw on concepts from the literature on exact methods. We survey the literature on this topic, with a particular emphasis on matheuristics that yield both primal and dual bounds (i.e., upper and lower bounds in the case of a minimisation problem). We also make some comments about possible future developments.
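
    As a minimal, hypothetical illustration of obtaining both a primal and a dual bound, the sketch below treats a 0-1 knapsack maximisation: the LP-relaxation value gives an upper (dual) bound, and truncating its fractional item gives a feasible lower (primal) bound. Real matheuristics typically embed a MIP or LP solver rather than this hand-rolled relaxation.

        def knapsack_bounds(values, weights, capacity):
            """Return (primal, dual) bounds for a 0-1 knapsack maximisation.

            dual   = value of the LP relaxation (Dantzig's greedy fractional solution),
            primal = value of a feasible solution obtained by dropping the
                     fractional item, i.e. a simple rounding heuristic.
            """
            items = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
                           reverse=True)
            remaining, dual, primal = capacity, 0.0, 0.0
            for i in items:
                if weights[i] <= remaining:
                    remaining -= weights[i]
                    dual += values[i]
                    primal += values[i]
                else:
                    dual += values[i] * remaining / weights[i]  # fractional item
                    break
            return primal, dual

        print(knapsack_bounds([60, 100, 120], [10, 20, 30], 50))  # (160.0, 240.0)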

    Faster Pattern Matching under Edit Distance

    We consider the approximate pattern matching problem under the edit distance. Given a text T of length n, a pattern P of length m, and a threshold k, the task is to find the starting positions of all substrings of T that can be transformed to P with at most k edits. More than 20 years ago, Cole and Hariharan [SODA'98, J. Comput.'02] gave an O(n + k^4 · n/m)-time algorithm for this classic problem, and this runtime has not been improved since. Here, we present an algorithm that runs in time O(n + k^{3.5} √(log m log k) · n/m), thus breaking through this long-standing barrier. In the case where n^{1/4+ε} ≤ k ≤ n^{2/5-ε} for some arbitrarily small positive constant ε, our algorithm improves over the state of the art by polynomial factors: it is polynomially faster than both the algorithm of Cole and Hariharan and the classic O(kn)-time algorithm of Landau and Vishkin [STOC'86, J. Algorithms'89]. We observe that the bottleneck case of the alternative O(n + k^4 · n/m)-time algorithm of Charalampopoulos, Kociumaka, and Wellnitz [FOCS'20] is when the text and the pattern are (almost) periodic. Our new algorithm reduces this case to a new dynamic problem (Dynamic Puzzle Matching), which we solve by building on tools developed by Tiskin [SODA'10, Algorithmica'15] for the so-called seaweed monoid of permutation matrices. Our algorithm relies only on a small set of primitive operations on strings and thus also applies to the fully-compressed setting (where text and pattern are given as straight-line programs) and to the dynamic setting (where we maintain a collection of strings under creation, splitting, and concatenation), improving over the state of the art.
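
    For reference, the problem itself can be solved by the classic O(nm)-time dynamic program (Sellers' semi-global alignment), sketched below; it reports ending positions of approximate occurrences, from which starting positions can be recovered by a reverse pass. This is only the textbook baseline, not the faster algorithms discussed in the abstract.

        def approx_matches(text, pattern, k):
            """Report every ending position j in text such that some substring of
            text ending at j can be transformed into pattern with at most k edits
            (classic O(nm) dynamic programming, not the faster algorithms above)."""
            n, m = len(text), len(pattern)
            # prev[i] = edit distance between pattern[:i] and the best substring
            # of text ending at the current position.
            prev = list(range(m + 1))          # column for the empty text prefix
            ends = []
            for j in range(1, n + 1):
                curr = [0] * (m + 1)           # a substring may start anywhere: cost 0
                for i in range(1, m + 1):
                    cost = 0 if pattern[i - 1] == text[j - 1] else 1
                    curr[i] = min(prev[i - 1] + cost,   # substitute / match
                                  prev[i] + 1,          # delete from the text side
                                  curr[i - 1] + 1)      # insert into the text side
                if curr[m] <= k:
                    ends.append(j)
                prev = curr
            return ends

        print(approx_matches("abcdefg", "bxd", 1))  # [4]: "bcd" matches with 1 edit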

    Facility location problems and games

    We concern ourselves with facility location problems and games wherein we must decide upon the optimal placement of facilities. A facility is considered to be any physical location to which customers travel to obtain a service, or from which an agent of the facility travels to customers to deliver a service. We model facilities by points without a capacity limit and assume that customers obtain (or are provided with) their service from the closest facility. Throughout this thesis we consider distance to be measured exclusively using the Manhattan metric, a natural choice in urban settings and also in scenarios arising from clustering for data analysis with heterogeneous dimensions. Additionally, we always model the demand for the facility as continuously and uniformly distributed over some convex polygonal demand region P, and it is only within P that we consider locating our facilities.

    We first consider five facility location problems where n facilities are present in a convex polygon in the rectilinear plane, over which continuous and uniform demand is distributed and within which a convex polygonal barrier is located (removing all demand and preventing all travel within the barrier), and the optimal location for an additional facility is sought. We begin with an in-depth analysis of the representation of the bisectors of two facilities affected by the barrier and how it is affected by the position of the additional facility. Following this, a detailed investigation into the changes in the structure of the Voronoi diagram caused by the movement of this additional facility, which governs the form of the objective function for numerous facility location problems, yields a set of linear constraints for a general convex barrier that partitions the market space into a finite number of regions within which the exact solution can be found in polynomial time. This allows us to formulate an exact polynomial-time algorithm that makes use of a triangular decomposition of the incremental Voronoi diagram and the first-order optimality conditions.

    Following this we study competitive location problems in a continuous setting, in which the first player ("White") places a set of n points in a rectangular domain P of width p and height q, followed by the second player ("Black"), who places the same number of points. Players cannot place points atop one another, nor can they move a point once it has been placed, and after all 2n points have been played each player wins the fraction of the board for which one of their points is closest. The goal for each player in the One-Round Voronoi Game is to score more than half of the area of P, and that of the One-Round Stackelberg Game is to maximise one's total area. Even in the more diverse setting of Manhattan distances, we determine a complete characterisation for the One-Round Voronoi Game wherein White can win only if p/q >= n, otherwise Black wins, and we show each player's winning strategies. For the One-Round Stackelberg Game we explore arrangements of White's points in which the Voronoi cells of individual facilities are equalised with respect to a number of attractive geometric properties such as fairness (equally-sized Voronoi cells) and local optimality (symmetrically balanced Voronoi cell areas), and we explore each player's best strategy under certain conditions.
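
    A minimal Monte Carlo sketch (hypothetical, not part of the thesis) for evaluating a finished One-Round Voronoi Game position: it estimates each player's share of the p-by-q rectangle under the Manhattan metric by sampling demand points uniformly.

        import random

        def manhattan(a, b):
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        def voronoi_game_score(white, black, p, q, samples=200_000, rng=random):
            """Estimate the fraction of the p-by-q rectangle won by White and Black
            under Manhattan distance (ties split evenly), by uniform sampling."""
            w_share = b_share = 0.0
            for _ in range(samples):
                x, y = rng.uniform(0, p), rng.uniform(0, q)
                dw = min(manhattan((x, y), f) for f in white)
                db = min(manhattan((x, y), f) for f in black)
                if dw < db:
                    w_share += 1
                elif db < dw:
                    b_share += 1
                else:
                    w_share += 0.5
                    b_share += 0.5
            return w_share / samples, b_share / samples

        # n = 2 facilities per player on a 4 x 1 board (p/q = 4 >= n).
        white = [(1.0, 0.5), (3.0, 0.5)]
        black = [(0.5, 0.5), (2.0, 0.5)]
        print(voronoi_game_score(white, black, 4, 1))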

    Efficient Data-Driven Robust Policies for Reinforcement Learning

    Applying the reinforcement learning methodology to domains that involve risky decisions, like medicine or robotics, requires high confidence in the performance of a policy before its deployment. Markov Decision Processes (MDPs) have served as a well-established model in reinforcement learning (RL). An MDP model assumes that the exact transition probabilities and rewards are available. However, in most cases, these parameters are unknown and are typically estimated from data, which are inherently prone to errors. Consequently, due to such statistical errors, the resulting computed policy's actual performance is often different from the designer's expectation. In this context, practitioners can either be negligent and ignore parameter uncertainty during decision-making or be pessimistic by planning to be protected against the worst-case scenario. This dissertation focuses on a moderate mindset that strikes a balance between the two contradicting points of view. This objective is also known as the percentile criterion and can be modeled as risk-aversion to epistemic uncertainty. We propose several RL algorithms that efficiently compute reliable policies with limited data, notably improving the policies' performance and alleviating the computational complexity compared to standard risk-averse RL algorithms. Furthermore, we present a fast and robust feature selection method for linear value function approximation, a standard approach to solving reinforcement learning problems with large state spaces. Our experiments show that our technique is faster and more stable than alternative methods.
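
    As a minimal sketch of the pessimistic end of the spectrum described above, the code below runs robust value iteration against a finite set of plausible transition models (worst case over models); the names and the setup are illustrative assumptions, not the dissertation's algorithms.

        import numpy as np

        def robust_value_iteration(P_models, R, gamma=0.95, iters=500):
            """Robust value iteration for a tabular MDP.

            P_models: array (K, S, A, S) of K plausible transition models
                      (e.g. bootstrap estimates from data).
            R:        array (S, A) of rewards.
            Returns the worst-case optimal value function and a greedy robust policy.
            """
            K, S, A, _ = P_models.shape
            V = np.zeros(S)
            for _ in range(iters):
                # Q under each model, then the worst case over models ((s,a)-rectangular).
                Q = R[None, :, :] + gamma * np.einsum('ksat,t->ksa', P_models, V)
                Q_worst = Q.min(axis=0)        # pessimism w.r.t. epistemic uncertainty
                V = Q_worst.max(axis=1)        # greedy improvement
            policy = Q_worst.argmax(axis=1)
            return V, policy

        # Tiny 2-state, 2-action example with three bootstrapped transition models.
        rng = np.random.default_rng(0)
        P = rng.dirichlet(np.ones(2), size=(3, 2, 2))   # shape (3, 2, 2, 2)
        R = np.array([[0.0, 1.0], [1.0, 0.0]])
        print(robust_value_iteration(P, R))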