8 research outputs found

    A unifying framework for seed sensitivity and its application to subset seeds

    We propose a general approach to computing seed sensitivity that can be applied to different definitions of seeds. It treats three components of the seed sensitivity problem separately (a set of target alignments, an associated probability distribution, and a seed model), each specified by a distinct finite automaton. The approach is then applied to a new concept of subset seeds, for which we propose an efficient automaton construction. Experimental results confirm that sensitive subset seeds can be efficiently designed using our approach and then used in similarity search, producing better results than ordinary spaced seeds.

    A unifying framework for seed sensitivity and its application to subset seeds (Extended abstract)

    We propose a general approach to computing seed sensitivity that can be applied to different definitions of seeds. It treats three components of the seed sensitivity problem separately (a set of target alignments, an associated probability distribution, and a seed model), each specified by a distinct finite automaton. The approach is then applied to a new concept of subset seeds, for which we propose an efficient automaton construction. Experimental results confirm that sensitive subset seeds can be efficiently designed using our approach and then used in similarity search, producing better results than ordinary spaced seeds.
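    The two entries above describe the same framework, once as a full paper and once as an extended abstract. To make the quantity being computed concrete: the sensitivity of a seed is the probability that it hits at least one window of a random alignment drawn from the target distribution. Below is a minimal Python sketch under assumed simplifications (a single spaced seed, a Bernoulli match model, and explicit enumeration of the 2^(span-1) suffix states in place of the paper's far more compact automaton construction):

```python
from itertools import product

def spaced_seed_sensitivity(seed, length, p):
    """Probability that the spaced seed (e.g. "1101") hits at least one
    window of a random match/mismatch string of the given length, where
    each position is independently a match with probability p.  States
    are the 2**(span-1) possible suffixes; illustrative only."""
    span = len(seed)
    if length < span:
        return 0.0
    required = [i for i, c in enumerate(seed) if c == '1']

    def hit(window):
        return all(window[i] for i in required)

    # prob[s]: probability that the last span-1 symbols are s and no
    # seed hit has occurred yet; start with the first span-1 symbols.
    prob = {}
    for s in product((0, 1), repeat=span - 1):
        q = 1.0
        for b in s:
            q *= p if b else 1.0 - p
        prob[s] = q

    sensitivity = 0.0
    for _ in range(length - span + 1):
        nxt = dict.fromkeys(prob, 0.0)
        for s, q in prob.items():
            if q == 0.0:
                continue
            for b in (0, 1):
                qb = q * (p if b else 1.0 - p)
                window = s + (b,)
                if hit(window):
                    sensitivity += qb      # first hit occurs here
                else:
                    nxt[window[1:]] += qb  # keep scanning
        prob = nxt
    return sensitivity

# contiguous seed vs. a spaced seed of the same weight
print(spaced_seed_sensitivity("111", 20, 0.7))
print(spaced_seed_sensitivity("1101", 20, 0.7))
```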

    Fine-grained parallelization of similarity search between protein sequences

    This report presents the implementation of a protein sequence comparison algorithm specifically designed to speed up its most time-consuming parts on parallel hardware such as SSE instructions, multicore architectures, or graphics boards. Three programs have been developed: PLAST-P, TPLAST-N, and PLAST-X. They provide results equivalent to those of the NCBI BLAST family programs (BLAST-P, TBLAST-N, and BLAST-X), with a speed-up factor ranging from 5 to 10.
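    The fine-grained parallelism referred to above operates at the level of individual alignment cells. As a loose software analogue (NumPy vectorization standing in for SSE lanes, with made-up substitution scores rather than PLAST's actual kernels), scoring a whole ungapped diagonal with one vectorized gather looks like this:

```python
import numpy as np

AA = "ARNDCQEGHILKMFPSTWYV"
IDX = {a: i for i, a in enumerate(AA)}

# Stand-in substitution scores (real tools use BLOSUM62): +2 on the
# diagonal, -1 elsewhere.  Purely illustrative values.
SCORES = np.full((20, 20), -1, dtype=np.int8)
np.fill_diagonal(SCORES, 2)

def encode(seq):
    return np.fromiter((IDX[c] for c in seq), dtype=np.intp, count=len(seq))

def diagonal_scores(query, subject):
    """Score every position of one ungapped diagonal with a single
    vectorized gather, the software analogue of scoring many cells
    per SSE instruction."""
    n = min(len(query), len(subject))
    return SCORES[encode(query[:n]), encode(subject[:n])]

s = diagonal_scores("MKVLATGQE", "MKILATGQD")
print(s, int(s.sum()))
```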

    A Fast Algorithm for Computing Highly Sensitive Multiple Spaced Seeds

    The main goal of homology search is to find similar segments, or local alignments, between two DNA or protein sequences. Since the dynamic programming algorithm of Smith-Waterman is too slow, heuristic methods have been designed to achieve both efficiency and accuracy. Seed-based methods were made well known by their use in BLAST, the most widely used software program in biological applications. The seed of BLAST trades sensitivity for speed; spaced seeds were introduced in PatternHunter to achieve both. Several seeds are better than one, and near-perfect sensitivity can be obtained while maintaining speed. Therefore, multiple spaced seeds quickly became the state of the art in similarity search, being employed by many software programs. However, the quality of these seeds is crucial, and computing optimal multiple spaced seeds is NP-hard. All but one of the existing heuristic algorithms for computing good seeds are exponential. Our work has two main goals. First, we engineer the only existing polynomial-time heuristic algorithm so that it computes better seeds than any other program while running orders of magnitude faster. Second, we estimate its performance by comparing its seeds with the optimal seeds in a few practical cases. To make the computation feasible, a very fast implementation of the sensitivity function is provided.
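    To ground the notion of multiple-seed sensitivity that the thesis optimizes: a seed set's sensitivity is the probability that at least one seed in the set hits a random alignment under a similarity model. The thesis computes this exactly and fast; the sketch below only estimates it by sampling, using PatternHunter's weight-11 spaced seed and BLAST's contiguous weight-11 seed as examples (model parameters here are assumptions for illustration):

```python
import random

def hits(seed, alignment):
    """True if the spaced seed matches at some offset of the 0/1
    alignment string (1 = matching column, 0 = mismatch)."""
    span = len(seed)
    return any(
        all(alignment[off + i] == '1'
            for i, c in enumerate(seed) if c == '1')
        for off in range(len(alignment) - span + 1))

def estimate_sensitivity(seeds, length=64, p=0.7, trials=20_000):
    """Monte Carlo estimate of multiple-seed sensitivity: the chance
    that at least one seed in the set hits a random alignment whose
    columns match independently with probability p."""
    hit_count = 0
    for _ in range(trials):
        a = ''.join('1' if random.random() < p else '0'
                    for _ in range(length))
        if any(hits(s, a) for s in seeds):
            hit_count += 1
    return hit_count / trials

# PatternHunter's weight-11 spaced seed vs. BLAST's contiguous seed
print(estimate_sensitivity(["111010010100110111"]))
print(estimate_sensitivity(["11111111111"]))
```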

    Improvements in the Accuracy of Pairwise Genomic Alignment

    Pairwise sequence alignment is a fundamental problem in bioinformatics with wide applicability. This thesis presents three new algorithms for this well-studied problem. First, we present a new algorithm, RDA, which aligns sequences in small segments rather than by individual bases. Then, we present two algorithms for aligning long genomic sequences: CAPE, a pairwise global aligner, and FEAST, a pairwise local aligner. RDA produces interesting alignments that can be substantially different in structure from traditional alignments. It is also better than traditional alignment at the task of homology detection. However, its main drawback is a very slow run time, and although it produces alignments with a different structure, it is not clear whether the differences have practical value in genomic research. Our main success comes from our local aligner, FEAST. We describe two main improvements: a new, more descriptive model of evolution, and a new local extension algorithm that considers all possible evolutionary histories rather than only the most likely one. Our new model of evolution provides improved alignment accuracy and substantially improved parameter training. In particular, we produce a new parameter set for aligning human and mouse sequences that properly describes regions of weak similarity and regions of strong similarity. The second result is our new extension algorithm. Depending on heuristic settings, our new algorithm can provide more sensitivity than existing extension algorithms, more specificity, or a combination of the two. By comparing to CAPE, our global aligner, we find that the sensitivity increase provided by our local extension algorithm is so substantial that it outperforms CAPE on sequences with 0.9 or more expected substitutions per site. CAPE itself gives improved sensitivity for sequences with 0.7 or more expected substitutions per site, but at a great run-time cost. FEAST and our local extension algorithm improve on this as well: their run time is only slightly slower than that of existing local alignment algorithms, and asymptotically the same.
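    For contrast with the extension algorithm described above: the standard baseline it improves upon follows only the single best-scoring path. A minimal sketch of that classic BLAST-style ungapped X-drop extension (scoring values are illustrative assumptions, and this is the baseline, not FEAST's algorithm, which weighs all evolutionary histories):

```python
def xdrop_extend(q, s, qi, si, seed_score, X=20, match=2, mismatch=-3):
    """Classic ungapped X-drop extension to the right of a hit at
    (qi, si): follow the single best-scoring path and stop once the
    running score drops more than X below the best seen."""
    best = cur = seed_score
    best_len = 0
    for i in range(min(len(q) - qi, len(s) - si)):
        cur += match if q[qi + i] == s[si + i] else mismatch
        if cur > best:
            best, best_len = cur, i + 1
        elif best - cur > X:
            break
    return best, best_len

print(xdrop_extend("ACGTTGCAACGT", "ACGTTGGAACGT", 0, 0, 0))  # (19, 12)
```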

    Análise e compressão de sequências genómicas (Analysis and Compression of Genomic Sequences)

    Doctoral thesis in Informatics. Genetics is nowadays probably the most inspiring source for the study and advancement of information and coding theory. Efficient compression algorithms are essential to optimise the storage and communication of genomic information. Genomic data compression is a particular case of data compression. The entropy of DNA sequences is high, yet variable: at the intra-genomic level it is higher in coding regions and lower in non-coding regions, while at the inter-genomic level it is higher in prokaryotes and lower in eukaryotes. Entropy reduction rests on coding the repetitive regions of DNA more efficiently. Repetitive regions consist mostly of approximate patterns, whose errors arise from biological processes and DNA dynamics, including mutations, deletions, insertions, and gaps; exact patterns are less relevant and generally appear as numerous adjacent (tandem) repetitions. DNA redundancy also has statistical and probabilistic manifestations. These redundancies are the source of compression gains: large repetitions suit dictionary-based substitutional compression, whereas statistical and probabilistic regularities allow the succession of symbols (bases) to be partially modelled and predicted, with statistical compressors capitalising on that potential. At maximum entropy, DNA costs 2 bits per base to encode. On average, the best available compressors designed for the specificities of DNA reach 1.7 bits/base, a compression rate of only 15%, a figure that demonstrates the inherent difficulty. The work carried out corresponds to a framework for the analysis and compression of DNA sequences, whose main application is DNALight. DNALight is a hybrid solution for genomic data compression based on the cooperation of several complementary methodologies, each aimed at absorbing a different type of redundancy present in nucleotide chains. Compression is not possible without analysis: thorough analysis is what yields the resources that reduce entropy. For the analysis of DNA sequences, innovative algorithms were developed for exact (GRASPm) and approximate (SimSearch) pattern matching, with performance that clearly surpasses the state of the art. These algorithms drive the first phase of DNALight, which exploits the most representative patterns for substitutional compression based on a dictionary of exact and approximate repeats. To maximise pattern capture, the search is exhaustive and multi-level, covering the normal 5'-3' sequence, the natural 3'-5' complement, and the two remaining artificial complements. The second phase of DNALight targets the redundancies missed by the first: compact probabilistic language models are built from the less repetitive regions that reach this phase and constitute the input of this complementary methodology. Competing global and local language models then generate predictions based on their probabilistic assessments. Accurate or approximately accurate predictions allow cheaper encodings, since they sharpen the skew of the coding probability model, to the benefit of the arithmetic coder that closes the process. Decompression is similar to compression, but in reverse. Experimental results place DNALight among the state of the art in DNA sequence compression, consistently, if modestly, surpassing its predecessors. Programa de Desenvolvimento Educativo para Portugal (PRODEP).
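    The second-phase idea above (a probabilistic language model whose predictions drive an arithmetic coder) can be made concrete with a minimal sketch. This is an assumed simplification, not DNALight's actual models: an adaptive order-k context model whose average log-loss is, up to rounding, the rate an arithmetic coder driven by it would achieve.

```python
import math
from collections import defaultdict

def bits_per_base(seq, k=2):
    """Adaptive order-k context model with Laplace smoothing: the
    average -log2 p(next base | k previous bases) approximates the
    rate an arithmetic coder using this model would reach."""
    counts = defaultdict(lambda: {b: 1 for b in "ACGT"})  # Laplace prior
    total_bits = 0.0
    for i in range(len(seq)):
        ctx = seq[max(0, i - k):i]
        dist = counts[ctx]
        p = dist[seq[i]] / sum(dist.values())
        total_bits += -math.log2(p)
        dist[seq[i]] += 1                                 # adapt
    return total_bits / len(seq)

print(bits_per_base("ACGT" * 200))  # highly repetitive -> well under 2
```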

    Efficient homology search for genomic sequence databases

    Genomic search tools can provide valuable insights into the chemical structure, evolutionary origin and biochemical function of genetic material. A homology search algorithm compares a protein or nucleotide query sequence to each entry in a large sequence database and reports alignments with highly similar sequences. The exponential growth of public data banks such as GenBank has necessitated the development of fast, heuristic approaches to homology search. The versatile and popular BLAST algorithm, developed by researchers at the US National Center for Biotechnology Information (NCBI), uses a four-stage heuristic approach to efficiently search large collections for analogous sequences while retaining a high degree of accuracy. Despite an abundance of alternative approaches to homology search, BLAST remains the only method to offer fast, sensitive search of large genomic collections on modern desktop hardware. As a result, the tool has found widespread use, with millions of queries posed each day. A significant investment of computing resources is required to process this large volume of genomic searches, and a cluster of over 200 workstations is employed by the NCBI to handle queries posed through the organisation's website. As the growth of sequence databases continues to outpace improvements in modern hardware, BLAST searches are becoming slower each year, and novel, faster methods for sequence comparison are required. In this thesis we propose new techniques for fast yet accurate homology search that result in significantly faster BLAST searches. First, we describe improvements to the final, gapped alignment stages, where the query and sequences from the collection are aligned to provide a fine-grain measure of similarity. We describe three new methods for aligning sequences that roughly halve the time required to perform this computationally expensive stage. Next, we investigate improvements to the first stage of search, where short regions of similarity between a pair of sequences are identified. We propose a novel deterministic finite automaton data structure that is significantly smaller than the codeword lookup table employed by NCBI-BLAST, resulting in improved cache performance and faster search times. We also discuss fast methods for nucleotide sequence comparison. We describe novel approaches for processing sequences that are compressed using the byte-packed format already utilised by BLAST, where four nucleotide bases from a strand of DNA are stored in a single byte. Rather than decompress sequences to perform pairwise comparisons, our innovations permit sequences to be processed in their compressed form, four bases at a time. Our techniques roughly halve average query evaluation times for nucleotide searches with no effect on the sensitivity of BLAST. Finally, we present a new scheme for managing the high degree of redundancy that is prevalent in genomic collections. Near-duplicate entries in sequence data banks are highly detrimental to retrieval performance; however, existing methods for managing redundancy are both slow, requiring almost ten hours to process the GenBank database, and crude, because they simply purge highly similar sequences to reduce the level of internal redundancy. We describe a new approach for identifying near-duplicate entries that is roughly six times faster than the most successful existing approaches, and a novel approach to managing redundancy that reduces collection size and search times while still providing accurate and comprehensive search results. Our improvements to BLAST have been integrated into our own version of the tool. We find that our innovations more than halve average search times for nucleotide and protein searches and have no significant effect on search accuracy. Given the enormous popularity of BLAST, this represents a very significant advance in computational methods to aid life science research.
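    One of the nucleotide-search innovations above, processing byte-packed sequences four bases at a time, can be sketched briefly. The encoding and lookup table below are illustrative stand-ins, not the thesis's actual layout:

```python
# Four nucleotide bases per byte (2 bits each), compared four at a
# time without unpacking.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq):
    """Pack a DNA string (length a multiple of 4) into bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for c in seq[i:i + 4]:
            b = (b << 2) | CODE[c]
        out.append(b)
    return bytes(out)

# MATCHES[x]: number of equal 2-bit groups given the XOR of two bytes
MATCHES = [sum((x >> s) & 3 == 0 for s in (0, 2, 4, 6)) for x in range(256)]

def packed_identity(a, b):
    """Count identical bases between two packed sequences, one byte
    (four bases) per comparison."""
    return sum(MATCHES[x ^ y] for x, y in zip(a, b))

print(packed_identity(pack("ACGTACGT"), pack("ACGTTGCA")))  # -> 4
```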

    Optimizing multiple seeds for protein homology search

    We present a framework for improving local protein alignment algorithms. Specifically, we discuss how to extend local protein aligners to use a collection of vector seeds or ungapped alignment seeds to reduce noise hits. We model picking a set of seed models as an integer programming problem and give algorithms to choose such a set of seeds. While the problem is NP-hard, and quasi-NP-hard to approximate to within a logarithmic factor, it can be solved easily in practice. A good set of seeds we have chosen allows four to five times fewer false positive hits while preserving essentially identical sensitivity to BLASTP. Index Terms: bioinformatics database applications, similarity measures, biology and genetics.
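    The abstract does not spell out the integer program, and the sketch below is not the paper's formulation (which targets noise hits while preserving sensitivity). It only illustrates, on hypothetical toy data, the textbook greedy max-coverage heuristic that makes such NP-hard selection problems easy in practice:

```python
def greedy_seed_selection(candidates, k):
    """Greedy max-coverage heuristic for picking k seeds.  `candidates`
    maps seed name -> set of sample true alignments it detects; we
    repeatedly take the seed covering the most not-yet-covered
    alignments."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(candidates, key=lambda s: len(candidates[s] - covered))
        if not candidates[best] - covered:
            break                      # nothing left to gain
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

# hypothetical toy data: three seeds and the alignments they detect
cands = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}}
print(greedy_seed_selection(cands, 2))   # picks s1 and s3
```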