17 research outputs found

    Manycore high-performance computing in bioinformatics

    Mining the increasing amount of genomic data requires very efficient tools. Efficiency can be improved with better algorithms, but one can also take advantage of the hardware itself to reduce application runtimes. For several years now, heat-dissipation issues have prevented processors from reaching higher frequencies, and one of the answers for sustaining Moore's Law is parallel processing. Grid environments provide tools for effective implementation of coarse-grained parallelization. Recently, another kind of hardware has attracted interest: multicore processors. Graphics processing units (GPUs) are a first step towards massively multicore processors, putting teraflops of cheap computing power in everyone's personal computer. The CUDA library (released in 2007) and the new OpenCL standard (specified in 2008) make programming such devices very convenient. OpenCL is likely to gain wide industrial support and to become a standard of choice for parallel programming. In all cases, the best speedups are obtained by combining precise algorithmic studies with knowledge of the computing architectures. This is especially true for the memory hierarchy: the algorithms have to find a good balance between using large (but slow) global memories and fast (but small) local memories. In this chapter, we show how these manycore devices enable more efficient bioinformatics applications. We first give some insights into architectures and parallelism. Then we describe recent implementations specifically designed for manycore architectures, including algorithms for sequence alignment and RNA structure prediction. We conclude with some thoughts on the dissemination of these algorithms and implementations: are they available off the shelf for everyone today?

    A GPU accelerated RNA-RNA interaction program

    RNA-RNA interaction (RRI) is important in processes like gene regulation and is known to play roles in diseases including cancer and Alzheimer's. Large RRI computations run for days, weeks or even months, because the algorithms have time and space complexity of, respectively, O(N³M³) and O(N²M²) for sequences of length N and M, and there is a need for high-throughput RRI tools. GPU parallelization of such algorithms is a challenge. We first show that the most computationally expensive part of base pair maximization (BPM) algorithms comprises O(N³) instances of upper banded tropical matrix products. We develop the first GPU library for this, attaining close to theoretical machine peak (TMP). We next optimize the other (fifth-degree polynomial) terms in the computation and develop the first GPU implementation of the complete BPMax algorithm. We attain 12% of GPU TMP, a significant speedup over the original parallel CPU implementation, which attains less than 1% of CPU TMP. We also perform a large-scale study of three small viral RNAs hypothesized to be relevant to COVID-19.
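
    To make the tropical-matrix-product formulation concrete, the following is a minimal, serial Scala sketch of a (max, +) product, assuming that is the operation meant; the upper banding and the GPU parallelization described above are omitted, and all names are illustrative rather than taken from the library.

        // (max, +) "tropical" matrix product: C(i,j) = max over k of A(i,k) + B(k,j).
        // In BPM-style recurrences this replaces the (+, *) of an ordinary product.
        // Dense, serial sketch only: no banding, no GPU.
        object TropicalMatMul {
          val NegInf = Double.NegativeInfinity            // tropical additive identity

          def tropicalProduct(a: Array[Array[Double]],
                              b: Array[Array[Double]]): Array[Array[Double]] = {
            val n = a.length; val kDim = b.length; val m = b(0).length
            Array.tabulate(n, m) { (i, j) =>
              var best = NegInf
              var k = 0
              while (k < kDim) { best = math.max(best, a(i)(k) + b(k)(j)); k += 1 }
              best
            }
          }
        }

    Each output cell (i, j) is independent of the others, which is what makes this product a good fit for massive GPU parallelism.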

    A Comparative Taxonomy of Parallel Algorithms for RNA Secondary Structure Prediction

    RNA molecules have been discovered to play crucial roles in numerous biological and medical procedures and processes, and RNA structure determination has become a major problem in biology. Recently, computer scientists have empowered biologists with predicted RNA secondary structures that ease the understanding of RNA functions and roles. Predicting RNA secondary structure is an NP-hard problem in the general pseudoknotted case, and the prediction process is time-consuming; as a result, an alternative approach such as using parallel architectures is a desirable option. The main goal of this paper is an intensive investigation of the parallel methods used in the literature to address the demanding computational requirements of RNA secondary structure prediction. We then introduce a new taxonomy for parallel RNA folding methods and, based on this proposed taxonomy, perform a systematic comparison of the existing methods.
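
    As background for the folding methods surveyed, here is a minimal Scala sketch of the classic Nussinov base-pair-maximization recurrence for the nested (pseudoknot-free) case that many parallel folding methods build on; it is illustrative only and not taken from any of the surveyed tools.

        // Nussinov recurrence: dp(i)(j) = max number of base pairs in seq(i..j),
        // taking the best of leaving i unpaired or pairing i with some k in (i, j].
        object Nussinov {
          private def canPair(a: Char, b: Char): Boolean =
            Set("AU", "UA", "CG", "GC", "GU", "UG").contains(s"$a$b")

          def maxPairs(seq: String): Int = {
            val n = seq.length
            if (n == 0) return 0
            val dp = Array.ofDim[Int](n, n)
            for (span <- 1 until n; i <- 0 until n - span) {
              val j = i + span
              var best = dp(i + 1)(j)                     // i left unpaired
              for (k <- i + 1 to j if canPair(seq(i), seq(k))) {
                val inside = if (k > i + 1) dp(i + 1)(k - 1) else 0
                val right  = if (k < j) dp(k + 1)(j) else 0
                best = math.max(best, 1 + inside + right)
              }
              dp(i)(j) = best
            }
            dp(0)(n - 1)
          }
        }

    Cells with the same span length are mutually independent, which is the wavefront parallelism that many of the surveyed methods exploit.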

    Staged parser combinators for efficient data processing

    Parsers are ubiquitous in computing, and many applications depend on their performance for decoding data efficiently. Parser combinators are an intuitive tool for writing parsers: tight integration with the host language enables grammar specifications to be interleaved with processing of parse results. Unfortunately, parser combinators are typically slow due to the high overhead of the host-language abstraction mechanisms that enable composition. We present a technique for eliminating such overhead. We use staging, a form of runtime code generation, to dissociate input parsing from parser composition, and we eliminate intermediate data structures and computations associated with parser composition at staging time. A key challenge is to maintain support for input-dependent grammars, which have no clear stage distinction. Our approach applies to top-down recursive-descent parsers as well as bottom-up nondeterministic parsers, with key applications in dynamic programming on sequences, where we auto-generate code for parallel hardware. We achieve performance comparable to specialized, hand-written parsers.
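
    For readers unfamiliar with the combinator style the paper starts from, here is a minimal, unstaged Scala sketch; the closures it allocates on every composition are precisely the overhead that staging removes by generating flat code, and all names are illustrative rather than taken from the paper's library.

        // Unstaged parser combinators: a parser maps an input position to an
        // optional (result, next position). `seq` and `map` build closures at
        // every composition, which is the overhead staging eliminates.
        object TinyParsers {
          type Parser[A] = (String, Int) => Option[(A, Int)]

          def char(c: Char): Parser[Char] =
            (in, pos) => if (pos < in.length && in(pos) == c) Some((c, pos + 1)) else None

          def seq[A, B](p: Parser[A], q: Parser[B]): Parser[(A, B)] =
            (in, pos) => p(in, pos).flatMap { case (a, p1) =>
              q(in, p1).map { case (b, p2) => ((a, b), p2) }
            }

          def map[A, B](p: Parser[A])(f: A => B): Parser[B] =
            (in, pos) => p(in, pos).map { case (a, p1) => (f(a), p1) }

          val ab: Parser[String] = map(seq(char('a'), char('b'))) { case (x, y) => s"$x$y" }

          def main(args: Array[String]): Unit =
            println(ab("abc", 0))                         // Some((ab,2))
        }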

    DynaProg for Scala

    Dynamic programming is an algorithmic technique for solving problems that follow Bellman's principle: optimal solutions depend on optimal sub-problem solutions. The core idea behind dynamic programming is to memoize intermediate results in matrices to avoid repeated computations. Solving a dynamic programming problem consists of two phases: filling one or more matrices with intermediate solutions for sub-problems, and recomposing how the final result was constructed (backtracking). In textbooks, problems are usually described in terms of recurrence relations between matrix elements. Expressing dynamic programming problems as recursive formulae involving matrix indices can be difficult and error prone, and the notation does not capture the essence of the underlying problem (for example, aligning two sequences). Moreover, writing a correct and efficient parallel implementation requires different competencies and often a significant amount of time. In this project, we present DynaProg, a language embedded in Scala (DSL) for addressing dynamic programming problems on heterogeneous platforms. DynaProg allows the programmer to write concise programs based on ADP [1], using a pair of a parsing grammar and an algebra; these programs can then be executed on either CPU or GPU. We evaluate the performance of our implementation against existing work and against our own hand-optimized baseline implementations for both the CPU and GPU versions. Experimental results show that plain Scala has a large overhead and is recommended only for small sequences (≤1024), whereas the generated GPU version is comparable with existing implementations: matrix chain multiplication has the same performance as our hand-optimized version (142% of the execution time of [2]) for a sequence of 4096 matrices, Smith-Waterman is twice as slow as [3] on a pair of sequences of 6144 elements, and RNA folding is on par with [4] (95% of its running time) for sequences of 4096 elements. [1] Robert Giegerich and Carsten Meyer. Algebraic Dynamic Programming. [2] Chao-Chin Wu, Jenn-Yang Ke, Heshan Lin and Wu-chun Feng. Optimizing dynamic programming on graphics processing units via adaptive thread-level parallelism. [3] Edans Flavius de O. Sandes and Alba Cristina M. A. de Melo. Smith-Waterman alignment of huge sequences with GPU in linear space. [4] Guillaume Rizk and Dominique Lavenier. GPU accelerated RNA folding algorithm.
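
    As a concrete instance of the textbook recurrence style discussed above, here is the matrix chain multiplication recurrence (one of the benchmarks) in plain Scala; this is a sketch of the problem itself, not DynaProg's grammar/algebra API.

        // Matrix chain multiplication: dims(i) x dims(i+1) is the shape of matrix i.
        // cost(i,j) = min over k of cost(i,k) + cost(k+1,j) + dims(i)*dims(k+1)*dims(j+1).
        object MatrixChain {
          def minCost(dims: Array[Int]): Long = {
            val n = dims.length - 1                       // number of matrices
            if (n <= 1) return 0L
            val dp = Array.ofDim[Long](n, n)
            for (len <- 2 to n; i <- 0 to n - len) {
              val j = i + len - 1
              dp(i)(j) = (i until j).map { k =>
                dp(i)(k) + dp(k + 1)(j) + dims(i).toLong * dims(k + 1) * dims(j + 1)
              }.min
            }
            dp(0)(n - 1)
          }
        }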

    Primary and secondary alignment of biological sequences on high-performance architectures

    Doctoral thesis, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2017. Supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). Primary multiple sequence alignment is a very important problem in Molecular Biology, since it detects similarities and differences in a set of sequences. This problem has been proven NP-hard and, for this reason, heuristic algorithms are usually used to solve it. Nevertheless, obtaining the optimal solution is highly desirable, and there are indeed some exact algorithms that solve this problem for a reduced number of sequences. RNA sequences are different from DNA: they do not form a double helix, their nucleotides can form base pairs, and the sequence can fold on itself. It is known in Molecular Biology that the function of RNA is related to its spatial structure, not to the composition of its nucleotides. Calculating the secondary structure (2D) of RNA sequences also demands a large amount of computational resources, even for a small number of sequences. High-performance computing (HPC) platforms are therefore very important for obtaining results in feasible time. This thesis investigates primary multiple sequence alignment and secondary pairwise alignment, using high-performance architectures to obtain results in reasonable time. For optimal primary multiple sequence alignment, we propose PA-Star, a multithreaded strategy based on the A-Star algorithm that uses a locality-sensitive hash to distribute the workload among the threads. To cope with the algorithm's high memory usage, PA-Star uses both RAM and disk. For structural (2D) alignment of RNA sequences, we propose Foldalign 2.5, a multithreaded heuristic strategy that reduces the complexity of the exact Sankoff algorithm and can obtain the structural alignment of large sequences in reduced time. Finally, we propose CUDA-Sankoff, which obtains the optimal structural alignment of two RNA sequences on a GPU (Graphics Processing Unit).

    Specialising Parsers for Queries

    Many software systems consist of data processing components that analyse large datasets to gather information and learn from them. Often, only part of the data is relevant for analysis, so data processing systems contain an initial preprocessing step that filters out the unwanted information. While efficient data analysis techniques and methodologies are accessible to non-expert programmers, data preprocessing tends to be forgotten, or worse, ignored, despite the real performance gains that efficient preprocessing makes possible. Implementations of the data preprocessing step traditionally have to trade modularity for performance: to achieve the former, one separates the parsing of raw data from the filtering of it, which leads to slow programs because intermediate objects are created during execution. The efficient version is a low-level implementation that interleaves parsing and querying. In this dissertation we demonstrate a principled and practical technique for converting the modular, maintainable program into its interleaved, efficient counterpart. Key to achieving this objective is the removal, or deforestation, of intermediate objects in a program execution. We first show that by encoding data types using Böhm-Berarducci encodings (often referred to as Church encodings), and combining these with partial evaluation for function composition, we achieve deforestation. This allows us to implement optimisations themselves as libraries, with minimal dependence on an underlying optimising compiler. Next we illustrate the applicability of this approach to parsing and preprocessing queries. The approach is general enough to cover top-down and bottom-up parsing techniques, and deforestation of pipelines of operations on lists and streams. We finally present a set of transformation rules that, given a parser for a nested data format and a query on the structure, produces a parser specialised for the query. As a result we preserve the modularity of writing parsers and queries separately while also minimising resource usage. These transformation rules combine deforested implementations of both libraries to yield an efficient, interleaved result.
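
    The deforestation idea can be sketched in a few lines of Scala, assuming nothing about the dissertation's actual library: a list is encoded as its fold (its Böhm-Berarducci encoding), so composing map and filter fuses them and no intermediate list is ever allocated.

        // Böhm-Berarducci encoding: the list IS its fold function. Composing
        // map/filter on this representation builds no intermediate lists.
        object Deforest {
          trait FList[A] { def fold[B](cons: (A, B) => B, nil: B): B }

          def fromList[A](xs: List[A]): FList[A] = new FList[A] {
            def fold[B](cons: (A, B) => B, nil: B): B = xs.foldRight(nil)(cons)
          }

          def map[A, B](xs: FList[A])(f: A => B): FList[B] = new FList[B] {
            def fold[C](cons: (B, C) => C, nil: C): C =
              xs.fold[C]((a, c) => cons(f(a), c), nil)
          }

          def filter[A](xs: FList[A])(p: A => Boolean): FList[A] = new FList[A] {
            def fold[B](cons: (A, B) => B, nil: B): B =
              xs.fold[B]((a, b) => if (p(a)) cons(a, b) else b, nil)
          }

          def main(args: Array[String]): Unit = {
            val pipeline = filter(map(fromList(List(1, 2, 3, 4)))(_ * 10))(_ > 15)
            println(pipeline.fold[List[Int]](_ :: _, Nil))  // List(20, 30, 40)
          }
        }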

    Algorithmic and Technical Improvements for Next Generation Drug Design Software Tools

    The pharmaceutical industry is actively looking for new ways of boosting the efficiency and effectiveness of its R&D programmes. The extensive use of computational modeling tools in the drug discovery pipeline (DDP) is having a positive impact on research performance, since in silico experiments are usually faster and cheaper than their real counterparts. The lead identification step is a very sensitive point in the DDP. In this context, virtual high-throughput screening (VHTS) techniques work as a filtering mechanism that benefits the following stages by reducing the number of compounds to be tested experimentally. Unfortunately, the simplifications applied in VHTS docking software make it prone to generate false positives and negatives. These errors spread across the rest of the DDP stages and have a negative impact in terms of financial and time costs. In the Electronic and Atomic Protein Modelling group (Barcelona Supercomputing Center, Life Sciences department), we have developed the Protein Energy Landscape Exploration (PELE) software. PELE has proven to be a good alternative for exploring the conformational space of proteins and performing ligand-protein docking simulations. In this thesis we discuss how to turn PELE into a faster and more efficient tool by improving its technical and algorithmic features, so that it can eventually be used in VHTS protocols. We have also addressed the difficulties of analyzing the extensive data associated with massive simulation production. First, we have rewritten the software using C++ and modern software engineering techniques. As a consequence, our code base is now well organized and tested, and PELE has become a piece of software that is easier to modify, understand, and extend, as well as more robust and reliable. Rewriting the code has helped us overcome some of its previous technical limitations, such as restrictions on the size of the systems, and has allowed us to extend PELE with new solvent models, force fields, and types of biomolecules. Moreover, the rewrite has made it possible to adapt the code to take advantage of new parallel architectures and accelerators, obtaining promising speedup results. Second, we have improved the way PELE handles protein flexibility by implementing an internal coordinate Normal Mode Analysis (icNMA) method. This method produces more energetically favorable perturbations than the previous Anisotropic Network Model (ANM) based strategy, which has allowed us to eliminate the now-unneeded relaxation phase of PELE. As a consequence, the overall computational performance of the sampling is significantly improved (~5-7x). The new internal-coordinates-based methodology captures the flexibility of the backbone better than the old method and is in closer agreement with molecular dynamics than the ANM-based method.

    Graphics Processing Unit Accelerated Coarse-Grained Protein-Protein Docking

    Graphics processing unit (GPU) architectures are increasingly used for general-purpose computing, providing the means to migrate algorithms from the SISD paradigm, synonymous with CPU architectures, to the SIMD paradigm. Generally programmable commodity multi-core hardware can yield significant speed-ups for migrated code. Because of their computational complexity, molecular simulations in particular stand to benefit from GPU acceleration. Coarse-grained molecular models provide reduced complexity compared to the traditional, computationally expensive, all-atom models. However, while coarse-grained models are much less computationally expensive than the all-atom approach, the pairwise energy calculations required at each iteration of the algorithm remain a computational bottleneck for a serial implementation. In this work, we describe a GPU implementation of the Kim-Hummer coarse-grained model for protein docking simulations, using a Replica Exchange Monte Carlo (REMC) method. Our highly parallel implementation vastly increases the size and time scales accessible to molecular simulation. We describe in detail the complex process of migrating the algorithm to a GPU, as well as the effect of various GPU approaches and optimisations on algorithm speed-up. Our benchmarking and profiling show that the GPU implementation scales very favourably compared to a CPU implementation. Small reference simulations benefit from a modest speedup of between 4 and 10 times, while large simulations, containing many thousands of residues, benefit from asynchronous GPU acceleration to a far greater degree and exhibit speed-ups of up to 1400 times. We demonstrate the utility of our system on some model problems. We investigate the effects of macromolecular crowding, using a repulsive crowder model, and find our results to agree with those predicted by scaled particle theory. We also perform initial studies of the simulation of viral capsid assembly, demonstrating the crude assembly of capsid pieces into a small fragment. This is the first implementation of REMC docking on a GPU, and the resulting speed-ups alter the tractability of large-scale simulations: simulations that would otherwise require months or years can be performed in days or weeks using a GPU.
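
    To illustrate the REMC scheme named above, here is a minimal Scala sketch of the exchange step between two temperature replicas; the acceptance rule is the standard replica-exchange criterion, and the names and structure are illustrative rather than the authors' code.

        import scala.util.Random

        // Replica exchange: replicas at inverse temperatures beta_a, beta_b swap
        // configurations with probability min(1, exp((beta_a - beta_b)*(E_a - E_b))),
        // preserving detailed balance across the temperature ladder.
        object ReplicaExchange {
          final case class Replica(beta: Double, var energy: Double)

          def trySwap(a: Replica, b: Replica, rng: Random): Boolean = {
            val accept = math.min(1.0, math.exp((a.beta - b.beta) * (a.energy - b.energy)))
            if (rng.nextDouble() < accept) {
              val t = a.energy; a.energy = b.energy; b.energy = t   // swap states
              true
            } else false
          }
        }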