
    Extended Rate, more GFUN

    We present a software package that guesses formulae for sequences of, for example, rational numbers or rational functions, given the first few terms. We implement an algorithm due to Bernhard Beckermann and George Labahn, together with some enhancements to render our package efficient. Thus we extend and complement Christian Krattenthaler's program Rate, the guessing parts of Bruno Salvy and Paul Zimmermann's GFUN, the univariate case of Manuel Kauers' Guess.m, and Manuel Kauers' and Christoph Koutschan's qGeneratingFunctions.m. Comment: 26 pages
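
    To make the guessing task concrete, here is a minimal sketch for one special case: fitting a linear recurrence with constant coefficients to the first few terms of a sequence. This is not the Beckermann-Labahn algorithm the package implements (that computes Hermite-Pade approximants); it is only a naive illustration, and the function name is hypothetical.

        import sympy as sp

        def guess_recurrence(terms, order):
            """Guess rational constants c1..c_order such that
            a(n) = c1*a(n-1) + ... + c_order*a(n-order)
            holds for every window of the given terms; return the
            coefficients, or None if no such recurrence fits."""
            cs = sp.symbols(f"c1:{order + 1}")
            eqs = [sp.Eq(sp.Integer(terms[n]),
                         sum(c * terms[n - k - 1] for k, c in enumerate(cs)))
                   for n in range(order, len(terms))]
            sol = sp.solve(eqs, cs, dict=True)  # [] if the system is inconsistent
            return sol[0] if sol else None

        # Fibonacci numbers: expect a(n) = a(n-1) + a(n-2).
        print(guess_recurrence([1, 1, 2, 3, 5, 8, 13, 21], order=2))
        # {c1: 1, c2: 1}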

    Managing the Change of Cultural Resistance

    The review of numerous Australian and international transport and health safety cases has highlighted the detrimental effect of cultural resistance when engineers and regulators seek to improve transport safety. This paper defines culture and cultural resistance. It reviews a number of cases and provides an overview of the effect of cultural resistance, demonstrating some common characteristics of these cases. A limited number of risk management disciplines are reviewed as they apply to the problem, demonstrating how expertise in these fields can be advantageous to the engineer and regulator. The paper provides the reader with a number of resolution strategies for managing cultural change by reducing resistance using practical methods. This paper has specific relevance to transport safety initiatives in Australia. It is an extract of a full research paper, "Making the Kingfisher Archipelago a Safer Place", Smith, D.B., 2005, available from the author upon request.

    A Large-Scale Comparison of Historical Text Normalization Systems

    There is no consensus on the state-of-the-art approach to historical text normalization. Many techniques have been proposed, including rule-based methods, distance metrics, character-based statistical machine translation, and neural encoder-decoder models, but studies have used different datasets and different evaluation methods, and have come to different conclusions. This paper presents the largest study of historical text normalization done so far. We critically survey the existing literature and report experiments on eight languages, comparing systems spanning all categories of proposed normalization techniques, analysing the effect of training data quantity, and using different evaluation methods. The datasets and scripts are made publicly available. Comment: Accepted at NAACL 2019
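
    To make one of the surveyed categories concrete, here is a minimal sketch of a distance-metric normalizer: each historical token is mapped to the nearest entry of a modern lexicon under Levenshtein edit distance. The tiny lexicon and spellings are invented for illustration; real systems use far larger lexicons and tuned distance weights.

        def levenshtein(a: str, b: str) -> int:
            """Dynamic-programming edit distance between two strings."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                  # deletion
                                   cur[j - 1] + 1,               # insertion
                                   prev[j - 1] + (ca != cb)))    # substitution
                prev = cur
            return prev[-1]

        def normalize(token: str, lexicon: list[str]) -> str:
            """Map a historical spelling to its closest modern form."""
            return min(lexicon, key=lambda w: levenshtein(token, w))

        modern = ["over", "again", "said", "through"]   # toy lexicon
        print(normalize("ouer", modern))    # over
        print(normalize("agayne", modern))  # again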

    Three Dimensional Pseudo-Spectral Compressible Magnetohydrodynamic GPU Code for Astrophysical Plasma Simulation

    This paper presents benchmarking and scaling studies of a GPU-accelerated three-dimensional compressible magnetohydrodynamic code. The code is developed with the aim of explaining large- and intermediate-scale magnetic field generation in the cosmos, as well as in nuclear fusion reactors, in the light of the theory given by Eugene Newman Parker. Spatial derivatives are computed with a pseudo-spectral method and the time solvers are explicit. GPU acceleration is achieved with minimal code changes through OpenACC parallelisation and use of the NVIDIA CUDA Fast Fourier Transform library (cuFFT). NVIDIA's unified memory is leveraged to enable over-subscription of the GPU device memory for seamless out-of-core processing of large grids. Our experimental results indicate that the GPU-accelerated code achieves up to two orders of magnitude speedup over a corresponding OpenMP-parallel, FFTW-based code on an NVIDIA Tesla P100 GPU. For large grids that require out-of-core processing on the GPU, we see a 7x speedup over the OpenMP, FFTW-based code on the Tesla P100 GPU. We also present performance analysis of the GPU-accelerated code on different GPU architectures: Kepler, Pascal and Volta.
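
    The heart of any pseudo-spectral method is computing spatial derivatives in Fourier space: multiply each mode by i*k, then transform back. A one-dimensional NumPy sketch of that step follows; the paper's code does this in 3-D with cuFFT on the GPU, so this CPU toy is only meant to show the idea.

        import numpy as np

        def spectral_derivative(f, length):
            """d/dx of a periodic sample f on [0, length): multiply the
            Fourier coefficients by i*k, then transform back."""
            n = len(f)
            k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
            return np.fft.ifft(1j * k * np.fft.fft(f)).real

        # d/dx sin(x) = cos(x), accurate to machine precision.
        x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        df = spectral_derivative(np.sin(x), 2 * np.pi)
        print(np.max(np.abs(df - np.cos(x))))  # ~1e-14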

    HiggsBounds: Confronting Arbitrary Higgs Sectors with Exclusion Bounds from LEP and the Tevatron

    HiggsBounds is a computer code that tests theoretical predictions of models with arbitrary Higgs sectors against the exclusion bounds obtained from the Higgs searches at LEP and the Tevatron. The included experimental information comprises exclusion bounds at 95% C.L. on topological cross sections. In order to determine which search topology has the highest exclusion power, the program also includes, for each topology, information from the experiments on the expected exclusion bound, i.e. the bound that would have been observed in the case of a pure background distribution. Using the predictions of the desired model provided by the user as input, HiggsBounds determines the most sensitive channel and tests whether the considered parameter point is excluded at the 95% C.L. HiggsBounds is available as a Fortran 77 and Fortran 90 code. The code can be invoked as a command-line version, a subroutine version and an online version. Examples of exclusion bounds obtained with HiggsBounds are discussed for the Standard Model, for a model with a fourth generation of quarks and leptons, and for the Minimal Supersymmetric Standard Model with and without CP violation. The experimental information on the exclusion bounds currently implemented in HiggsBounds will be updated as new results from the Higgs searches become available. Comment: 64 pages, 15 tables, 8 figures; three typos which made it to the published version corrected; the code (currently version 3.0.0beta including LHC Higgs search results) is available via: http://projects.hepforge.org/higgsbounds
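
    The channel-selection procedure described above can be sketched as follows. The dictionaries, field names and numbers are hypothetical, but the logic mirrors the abstract: rank channels by predicted signal relative to the expected limit, then compare only the most sensitive channel against the observed limit, so the overall statement keeps its 95% C.L. interpretation.

        def check_point(channels):
            """Pick the search topology with the highest expected
            sensitivity, then test that one channel against the
            observed 95% C.L. limit."""
            best = max(channels,
                       key=lambda c: c["predicted"] / c["expected_limit"])
            return best["name"], best["predicted"] > best["observed_limit"]

        # Hypothetical topological cross sections, in arbitrary units.
        channels = [
            {"name": "e+e- -> hZ, h -> bb (LEP)",
             "predicted": 0.9, "expected_limit": 0.5, "observed_limit": 0.8},
            {"name": "h -> W+W- (Tevatron)",
             "predicted": 0.4, "expected_limit": 0.6, "observed_limit": 0.3},
        ]
        print(check_point(channels))  # ('e+e- -> hZ, h -> bb (LEP)', True)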

    An investigation of pulsar searching techniques with the Fast Folding Algorithm

    Here we present an in-depth study of the behaviour of the Fast Folding Algorithm, an alternative pulsar searching technique to the Fast Fourier Transform. Weaknesses in the Fast Fourier Transform, including a susceptibility to red noise, leave it insensitive to pulsars with long rotational periods (P > 1 s). This sensitivity gap has the potential to bias our understanding of the period distribution of the pulsar population. The Fast Folding Algorithm, a time-domain based pulsar searching technique, has the potential to overcome some of these biases. Modern distributed-computing frameworks now allow this algorithm to be applied to all-sky blind pulsar surveys for the first time. However, many aspects of the behaviour of this search technique remain poorly understood, including its responsiveness to variations in pulse shape and to the presence of red noise. Using ffancy, a custom CPU-based implementation of the Fast Folding Algorithm, we have conducted an in-depth study of its behaviour, both in an idealised white-noise regime and on observational data from the HTRU-S Low Latitude pulsar survey, including a comparison with the behaviour of the Fast Fourier Transform. We both confirm and expand upon earlier studies demonstrating that the Fast Folding Algorithm can outperform the Fast Fourier Transform under ideal white-noise conditions, and we demonstrate a significant improvement in sensitivity to long-period pulsars in real observational data through the use of the Fast Folding Algorithm. Comment: 19 pages, 15 figures, 3 tables
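
    A minimal sketch of the time-domain folding idea: fold the series at each trial period and score how far the resulting pulse profile departs from flat. This naive search redoes the full fold at every trial, whereas the actual Fast Folding Algorithm reuses partial sums to cut the cost dramatically; the toy signal below is invented.

        import numpy as np

        def fold(series, dt, period, nbins=32):
            """Average samples into phase bins at a trial period."""
            phase = (np.arange(len(series)) * dt) % period / period
            bins = (phase * nbins).astype(int)
            sums = np.bincount(bins, weights=series, minlength=nbins)
            counts = np.bincount(bins, minlength=nbins)
            return sums / np.maximum(counts, 1)

        def best_period(series, dt, trials):
            """At the true period the pulse adds coherently, so its
            folded profile has the largest deviation from flat."""
            return max(trials, key=lambda p: fold(series, dt, p).std())

        # Toy data: a weak 1.5 s pulsar (5% duty cycle) in white noise.
        rng = np.random.default_rng(0)
        dt, p_true = 0.001, 1.5
        t = np.arange(200_000) * dt
        data = 0.3 * ((t % p_true) / p_true < 0.05) + rng.normal(size=t.size)
        print(best_period(data, dt, np.linspace(1.3, 1.7, 401)))  # ~1.5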

    A study of general practitioners' perspectives on electronic medical records systems in NHS Scotland

    Background: Primary care doctors in NHSScotland have been using electronic medical records within their practices routinely for many years. The Scottish Health Executive eHealth strategy (2008-2011) has recently brought radical changes to the primary care computing landscape in Scotland: an information system (GPASS), which was provided free of charge by NHSScotland to a majority of GP practices, has now been replaced by systems from two approved commercial providers. The transition to new electronic medical records had to be completed nationally across all health boards by March 2012. Methods: We carried out 25 in-depth semi-structured interviews with primary care doctors to elucidate GPs' perspectives on their practice information systems and to collect more general information on management processes in the patient surgical pathway in NHSScotland. We undertook a thematic analysis of interviewees' responses, using Normalisation Process Theory as the underpinning conceptual framework. Results: The majority of GPs interviewed considered electronic medical records an integral and essential element of their work during the consultation, playing a key role in facilitating integrated care and continuity of care for patients and making clinical information more accessible. However, GPs expressed a number of reservations about various system functionalities, for example in relation to usability, system navigation and information visualisation. Conclusion: Our study highlights that while electronic information systems are perceived as having important benefits, there remains substantial scope to improve GPs' interaction and overall satisfaction with these systems. Iterative user-centred improvements combined with additional training in the use of technology would promote an increased understanding, familiarity and command of the range of functionalities of electronic medical records among primary care doctors.

    Similarity of Source Code in the Presence of Pervasive Modifications

    Source code analysis to detect code cloning, code plagiarism, and code reuse suffers from the problem of pervasive code modifications, i.e. transformations that may have a global effect. We compare 30 similarity detection techniques and tools against pervasive code modifications. We evaluate the tools using two experimental scenarios for Java source code: (1) pervasive modifications created with tools for source code and bytecode obfuscation, and (2) source code normalisation through compilation and decompilation using different decompilers. Our experimental results show that highly specialised source code similarity detection techniques and tools can perform better than more general, textual similarity measures. Our study strongly validates the use of compilation/decompilation as a normalisation technique: its use reduced false classifications to zero for six of the tools. This broad, thorough study is the largest in existence and potentially an invaluable guide for future users of similarity detection in source code.
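
    For contrast with the specialised tools, a baseline textual measure of the kind the study compares against is easy to state: token-set Jaccard similarity. The sketch below (hypothetical Java fragments, deliberately crude tokenizer) shows why such measures collapse under a pervasive modification as simple as consistent identifier renaming.

        import re

        def tokens(src: str) -> set[str]:
            """Crude lexer: identifiers, keywords and integer literals."""
            return set(re.findall(r"[A-Za-z_]\w*|\d+", src))

        def jaccard(a: str, b: str) -> float:
            """Token-set Jaccard similarity: |A & B| / |A | B|."""
            ta, tb = tokens(a), tokens(b)
            return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

        original = "int total = 0; for (int i = 0; i < n; i++) total += data[i];"
        renamed  = "int acc = 0; for (int j = 0; j < len; j++) acc += items[j];"
        print(jaccard(original, original))  # 1.0
        print(jaccard(original, renamed))   # ~0.27 after renaming alone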