
    Modular design of data-parallel graph algorithms

    Amorphous Data Parallelism has proven to be a suitable vehicle for implementing concurrent graph algorithms effectively on multi-core architectures. In view of the growing complexity of graph algorithms for information analysis, there is a need to facilitate modular design techniques in the context of Amorphous Data Parallelism. In this paper, we investigate what it takes to formulate algorithms possessing Amorphous Data Parallelism in a modular fashion, enabling a large degree of code re-use. Using betweenness centrality, a widely used algorithm in the analysis of social networks, we demonstrate that a single optimisation technique can suffice to enable a modular programming style without losing the efficiency of a tailor-made monolithic implementation.
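    The case-study algorithm, betweenness centrality, can be sketched in its standard sequential form (Brandes' algorithm). The Python below is our illustration of that baseline, not the paper's amorphous-data-parallel formulation:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted betweenness centrality.
    adj: dict mapping each node to a list of neighbours."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors
        dist = {s: 0}
        sigma = {v: 0.0 for v in adj}
        sigma[s] = 1.0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # accumulate dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

    On an undirected graph given as a symmetric adjacency dict, each node's score counts both traversal directions; halve the scores if the conventional undirected definition is wanted.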

    FPGAs for Domain Experts


    Improving performance of the BGS Geomagnetic Field Modelling Code

    This poster outlines the speed increase in the inversion part of the BGS global modelling software after rewriting some of it in the SAC language.

    Same Difference: Detecting Collusion by Finding Unusual Shared Elements

    Pam Green, Peter Lane, Austen Rainer, Sven-Bodo Scholz, Steve Bennett, ‘Same Difference: Detecting Collusion by Finding Unusual Shared Elements’, paper presented at the 5th International Plagiarism Conference, Sage Gateshead, Newcastle, UK, 17-18 July 2012.

    Many academic staff will recognise that unusual shared elements in student submissions trigger suspicion of inappropriate collusion. These elements may be odd phrases, strange constructs, peculiar layout, or spelling mistakes. In this paper we review twenty-nine approaches to source-code plagiarism detection, showing that the majority focus on overall file similarity rather than on unusual shared elements, and that none directly measure these elements. We describe an approach to detecting similarity between files which focuses on these unusual similarities. The approach is token-based, and therefore largely language independent, and is tested on a set of student assignments, each consisting of a mix of programming languages. We also introduce a technique for visualising one document in relation to another in the context of the group. This visualisation separates code unique to the document, code shared by just the two files, code shared by small groups, and uninteresting areas of the file. Peer reviewed.
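    As a rough illustration of measuring unusual shared elements (not the paper's actual tokeniser or scoring), one can collect the token n-grams that two submissions share with each other but with no other member of the group; the names and the choice of n=3 below are ours:

```python
def shared_unusual_ngrams(docs, a, b, n=3):
    """Return token n-grams that documents a and b share with each other
    but with no other document in the group -- a crude proxy for
    'unusual shared elements'.  docs: dict of name -> text."""
    def ngrams(tokens):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    grams = {name: ngrams(text.split()) for name, text in docs.items()}
    shared = grams[a] & grams[b]
    # n-grams appearing anywhere else in the group are not "unusual"
    others = set().union(*(g for name, g in grams.items() if name not in (a, b)))
    return shared - others
```

    An n-gram such as a shared misspelled identifier survives the filter, while boilerplate common to the whole group is discarded.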

    Efficient and Correct Stencil Computation via Pattern Matching and Static Typing

    Stencil computations, involving operations over the elements of an array, are a common programming pattern in scientific computing, games, and image processing. As a programming pattern, stencil computations are highly regular and amenable to optimisation and parallelisation. However, general-purpose languages obscure this regular pattern from the compiler, and even the programmer, preventing optimisation and obfuscating (in)correctness. This paper furthers our work on the Ypnos domain-specific language for stencil computations embedded in Haskell. Ypnos allows declarative, abstract specification of stencil computations, exposing the structure of a problem to the compiler and to the programmer via specialised syntax. In this paper we show the decidable safety guarantee that well-formed, well-typed Ypnos programs cannot index outside of array boundaries. Thus indexing in Ypnos is safe and run-time bounds checking can be eliminated. Program information is encoded as types, using the advanced type-system features of the Glasgow Haskell Compiler, with the safe-indexing invariant enforced at compile time via type checking.
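    A stencil computation of the kind Ypnos targets can be sketched in plain Python; the explicit run-time bounds check below is precisely what a safe-indexing guarantee would let the compiler eliminate (this sketch and its names are ours, not Ypnos syntax):

```python
def apply_stencil(grid, stencil, default=0.0):
    """Apply a stencil (dict of relative offset -> weight) to a 1-D list.
    Out-of-range neighbours read `default` -- a run-time boundary check
    that a statically bounds-safe DSL could discharge at compile time."""
    n = len(grid)
    def at(i):
        return grid[i] if 0 <= i < n else default
    return [sum(w * at(i + off) for off, w in stencil.items())
            for i in range(n)]

# three-point moving average as a stencil
avg3 = {-1: 1 / 3, 0: 1 / 3, 1: 1 / 3}
```

    Two-dimensional stencils (e.g. image blurs) follow the same shape with offset pairs as keys.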

    PS-NET - a predictable typed coordination language for stream processing in resource-constrained environments

    Copyright International Academy, Research and Industry Association.

    Stream processing is a well-suited application pattern for embedded computing. This holds true even more so when it comes to multi-core systems, where concurrency plays an important role. With the latest trend towards more dynamic and heterogeneous systems, there seems to be a shift from purely synchronous systems towards more asynchronous ones. The downside of this shift is an increase in programming complexity due to the more subtle concurrency issues. Several special-purpose streaming languages have been proposed to help the programmer cope with these concurrency issues. In this paper, we take a different approach. Rather than proposing a full-blown programming language, we propose a coordination language named PS-Net. Its purpose is to coordinate existing resource-bound building blocks by means of asynchronous streaming. Within this paper we introduce code annotations and synchronisation patterns that result in a flexible but still resource-boundable coordination language. Using the example of a ray-tracing application, we demonstrate the applicability of PS-Net for expressing the coordination of rather dynamic computations in a resource-bound way.
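    The coordination style described, asynchronous streaming between resource-bound components, can be approximated with bounded queues; this Python sketch is our illustration of the general pattern, not PS-Net's annotation syntax:

```python
import queue
import threading

def component(fn, inbox, outbox):
    """Run fn over items arriving on inbox, pushing results to outbox.
    Bounded queues give the back-pressured, resource-bound hand-off that
    asynchronous stream coordination relies on (sketch is ours)."""
    def run():
        while True:
            item = inbox.get()
            if item is None:          # end-of-stream marker
                outbox.put(None)
                return
            outbox.put(fn(item))
    t = threading.Thread(target=run)
    t.start()
    return t

# a two-stage pipeline: double, then increment
q1, q2, q3 = queue.Queue(2), queue.Queue(2), queue.Queue(2)
component(lambda x: x * 2, q1, q2)
component(lambda x: x + 1, q2, q3)
```

    A producer feeding q1 blocks once two items are pending, so memory use stays bounded regardless of how fast each stage runs.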

    Analysing Ferret XML reports to estimate the density of copied code

    This document explains a method for identifying dense blocks of copied text in pairs of files. The files are compared using Ferret, a copy-detection tool which computes a similarity score based on trigrams. This similarity score cannot determine the arrangement of copied text in a file; two files with the same similarity to another file may have different distributions of matched trigrams. For example, in one file the matched trigrams may form a single large block, while in the other they are scattered throughout the file. However, Ferret produces an XML report which relates matched and unmatched trigrams back to the original text. This report can be analysed to find identical or densely copied blocks in the files. We address the problems of defining and locating the blocks, and of representing the blocks found as a meaningful feature vector, regardless of copy pattern. We provide a step-by-step example to explain our method for finding dense blocks. A set of artificial files, built to mimic different copy patterns, is used to explore a set of features which profile the dense blocks in a file. A range of density parameters is used to construct features which show that the copy patterns in the artificial files can be separated.
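    One simple way to realise the dense-block idea is a sliding-window density scan over a per-trigram match sequence; the window size and density threshold below are illustrative choices of ours, not the report's parameters:

```python
def dense_blocks(matched, window=5, threshold=0.6):
    """Given a 0/1 sequence (1 = this trigram also occurs in the other
    file), return (start, end) index ranges covered by consecutive
    windows whose match density reaches the threshold."""
    blocks, start = [], None
    for i in range(len(matched) - window + 1):
        density = sum(matched[i:i + window]) / window
        if density >= threshold and start is None:
            start = i                      # a dense run begins
        elif density < threshold and start is not None:
            blocks.append((start, i + window - 2))
            start = None                   # the run has ended
    if start is not None:
        blocks.append((start, len(matched) - 1))
    return blocks
```

    Sweeping the threshold over a range of values yields the kind of density-profile features the report uses to separate copy patterns.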

    Unscrambling code clones for one-to-one matching of duplicated code

    Code clone detection tools find sections of code that are similar. Different tools use different representations of the code and different matching algorithms. This diversity makes clone detection tools attractive for other code-matching tasks, particularly where code has been edited or rearranged. However, the tools report every match found. In some applications we are interested in one-to-one matching, meaning that each section of copied code in one file is matched to just one section of code in the other file. In this report we explore ways in which clones reported by the detection tools can inflate the amount of matching code. We also explain, with the aid of a worked example, our method for unscrambling the output from clone detection tools to approximate a one-to-one matching of the code in one file to that in another.
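    A minimal sketch of one-to-one matching, assuming a greedy keep-the-largest-pair rule (our simplification for illustration, not necessarily the report's method):

```python
def one_to_one(clone_pairs):
    """Greedily keep the largest clone pairs so that each source region
    is matched at most once.  A pair is ((a_start, a_end), (b_start,
    b_end)) giving line ranges in the two files."""
    def overlaps(r, used):
        # two closed intervals intersect iff each starts before the other ends
        return any(r[0] <= e and s <= r[1] for s, e in used)
    kept, used_a, used_b = [], [], []
    for ra, rb in sorted(clone_pairs,
                         key=lambda p: p[0][1] - p[0][0], reverse=True):
        if not overlaps(ra, used_a) and not overlaps(rb, used_b):
            kept.append((ra, rb))
            used_a.append(ra)
            used_b.append(rb)
    return kept
```

    Any clone that re-matches an already-claimed region is dropped, so the total matched code can no longer be inflated by overlapping reports.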