12 research outputs found

    On compile time Knuth-Morris-Pratt precomputation

    Many keyword pattern matching algorithms use precomputation subroutines to produce lookup tables, which in turn are used to improve performance during the search phase. If the keywords to be matched are known at compile time, the precomputation subroutines can be implemented so that they are evaluated at compile time rather than at run time, which provides a performance boost to run-time operations. We have started an investigation into the use of metaprogramming techniques to implement such compile-time evaluation, initially for the Knuth-Morris-Pratt (KMP) algorithm. We present an initial experimental comparison of the performance of the traditional KMP algorithm with that of an optimised version that uses compile-time precomputation. During implementation and benchmarking, it was discovered that C++ is not well suited to metaprogramming over strings, while the related D language is. We therefore ported our implementation to the latter and performed the benchmarking with that version. We discuss the design of the benchmarks, the experience of implementing them in C++ and D, and the results of the D benchmarks. The results show that, under certain circumstances, compile-time precomputation may significantly improve the performance of the KMP algorithm.
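
    The paper's optimised implementation was ported to D for its compile-time evaluation support. As a hypothetical illustration of the same idea (not the paper's code), the C++17 constexpr sketch below bakes the KMP failure table into the binary when the keyword is a compile-time constant; it relies on constexpr facilities rather than the template metaprogramming the paper found lacking for strings.

        // Minimal sketch, not the paper's implementation: compute the KMP failure
        // (longest proper prefix that is also a suffix) table at compile time for a
        // keyword known at compile time, so no table construction happens at run time.
        #include <array>
        #include <cstddef>
        #include <string_view>

        template <std::size_t N>
        constexpr std::array<std::size_t, N> kmp_table(std::string_view pattern) {
            std::array<std::size_t, N> fail{};   // fail[0] == 0 by value-initialisation
            std::size_t k = 0;
            for (std::size_t i = 1; i < N; ++i) {
                while (k > 0 && pattern[i] != pattern[k]) k = fail[k - 1];
                if (pattern[i] == pattern[k]) ++k;
                fail[i] = k;
            }
            return fail;
        }

        // The keyword is a compile-time constant, so the table is too.
        constexpr std::string_view keyword = "ababcab";
        constexpr auto table = kmp_table<keyword.size()>(keyword);
        static_assert(table[4] == 0 && table[6] == 2);  // spot checks evaluated by the compiler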

    Efficient estimation of evolutionary distances

    The advent of high-throughput sequencers has led to a dramatic increase in the size of available genomic data. Standard methods, which have worked well for many years, are not suitable for the analysis of big data sets, due to their reliance on a time-consuming alignment step. In this thesis, a new alignment-free approach for phylogeny reconstruction is introduced. The corresponding program, andi, is orders of magnitude faster than classical approaches and also superior to comparable alignment-free methods. The central data structure in andi is the enhanced suffix array, which is used to find long exact matches between sequences. In this thesis, various approaches to the construction of enhanced suffix arrays, including novel ones, are evaluated with respect to performance. Additionally, a new parallel algorithm for the computation of suffix arrays is introduced.
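
    As a hedged aside (not code from the thesis or from andi), a suffix array is simply the list of suffix start positions of a text sorted into lexicographic order; the naive construction below only illustrates the data structure, whereas andi uses far more efficient construction algorithms and augments the array with extra tables (the "enhanced" suffix array) to locate long exact matches quickly.

        // Naive suffix array construction: sort suffix start positions lexicographically.
        // Illustrative only; practical tools use O(n log n) or O(n) construction.
        #include <algorithm>
        #include <cstddef>
        #include <cstdio>
        #include <numeric>
        #include <string>
        #include <vector>

        std::vector<std::size_t> naive_suffix_array(const std::string& text) {
            std::vector<std::size_t> sa(text.size());
            std::iota(sa.begin(), sa.end(), 0);           // positions 0 .. n-1
            std::sort(sa.begin(), sa.end(), [&](std::size_t a, std::size_t b) {
                // Compare the suffixes starting at positions a and b.
                return text.compare(a, std::string::npos, text, b, std::string::npos) < 0;
            });
            return sa;
        }

        int main() {
            for (std::size_t p : naive_suffix_array("ACGTACGA"))
                std::printf("%zu ", p);                   // suffix start positions in sorted order
        }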

    Expression Refinement

    This thesis presents a refinement calculus for expressions. The aim of refinement calculi is to make programming a mathematical activity, and thereby improve the correctness of programs. To achieve this, a refinement calculus provides a formal language and a set of rules that allow transformations of the language terms. To produce a correct program using a refinement calculus, the programmer writes a possibly non-algorithmic or inefficient term that nevertheless obviously describes the intended program. This term is the specification, and it is transformed into an efficient program by syntactic transformation, using the rules of the calculus. This transformation is refinement.
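
    As a generic worked example of the idea (the notation follows common refinement-calculus conventions and is not taken from the thesis): an expression that merely specifies a value can be refined in small, meaning-preserving steps into one that computes it, where \sqsubseteq reads "is refined by".

        % "some x such that x^2 = 16 and x >= 0" is a specification; each step
        % below preserves its meaning while moving towards an executable expression.
        \[
          (\mathit{some}\; x \mid x^2 = 16 \,\wedge\, x \ge 0)
          \;\sqsubseteq\;
          (\mathit{some}\; x \mid x = 4)
          \;\sqsubseteq\;
          4
        \]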

    Transformation of functional programs for identification of parallel skeletons

    Hardware is becoming increasingly parallel. Thus, it is essential to identify and exploit the inherent parallelism in a given program to effectively utilise the computing power available. However, parallel programming is tedious and error-prone when done by hand, and is very difficult for a compiler to do automatically to the desired level. One possible approach is to use transformation techniques to automatically identify and explicitly specify parallel computations in a given program using parallelisable algorithmic skeletons. Existing methods for the systematic derivation of parallel programs or for parallel skeleton identification allow automation, but they place constraints on the programs to which they are applicable, require manual derivation of operators with specific properties for parallel execution, or allow the use of inefficient intermediate data structures in the parallel programs. In this thesis, we present a program transformation method that addresses these issues and has the following attributes: (1) it reduces the number of inefficient data structures used in the parallel program; (2) it transforms a program into a form that is more suited to identifying parallel skeletons; (3) it automatically identifies skeletons that can be efficiently executed using their parallel implementations. Our transformation method does not place restrictions on the program to be parallelised, and allows automatic verification of skeleton operator properties for parallel execution. To evaluate the performance of our transformation method, we use a set of benchmark programs; the parallel version of each program produced by our method is compared with other versions of the program, including parallel versions derived by hand. Consequently, we have been able to evaluate the strengths and weaknesses of the proposed transformation method. The results demonstrate improvements in the efficiency of the parallel programs produced in some examples, and also highlight the role of some intermediate data structures required for parallelisation in other examples.
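
    As a hedged, generic illustration of the skeleton idea (in C++ rather than the functional setting of the thesis, and not the thesis's transformation method): once a computation is expressed as a map/reduce skeleton with an associative combining operator, an off-the-shelf parallel implementation of that skeleton can execute it without hand-written parallel code.

        // Sum of squares expressed as a transform-reduce skeleton (C++17). Because the
        // map step is independent per element and + is associative, the library may
        // execute the skeleton in parallel under the std::execution::par policy.
        #include <execution>
        #include <functional>
        #include <numeric>
        #include <vector>

        long long sum_of_squares(const std::vector<int>& xs) {
            return std::transform_reduce(std::execution::par,
                                         xs.begin(), xs.end(),
                                         0LL,                                 // identity of +
                                         std::plus<>{},                       // associative reduce operator
                                         [](int x) { return 1LL * x * x; });  // independent map step
        }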

    Sixth Biennial Report : August 2001 - May 2003
