
    Links between the personalities, styles and performance in computer programming

    Get PDF
    There are recurring patterns in how programmers manipulate source code. For example, modifying source code before acquiring knowledge of how the code works is a depth-first style, whereas reading and understanding the code before modifying it is a breadth-first style. To the best of our knowledge, there is no study on the influence of personality on these styles. The objective of this study is to understand the influence of personality on programming styles. We conducted a correlational study with 65 programmers at the University of Stuttgart. Academic achievement, programming experience, attitude towards programming and five personality factors were measured via a self-assessment survey. Programming styles were either reported in the survey or mined from the software repositories. Performance in programming was composed of the programmers' bug-proneness, mined from software repositories, the grades they obtained in a software project course, and their estimate of their own programming ability. Statistical analysis showed that Openness to Experience has a positive association with the breadth-first style and Conscientiousness has a positive association with the depth-first style. We also found that, in addition to having more programming experience and better academic achievement, working depth-first and saving coarse-grained revisions improve performance in programming. Comment: 27 pages, 6 figures
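
    A minimal sketch of the kind of correlational analysis described above, assuming hypothetical per-programmer measures (openness, conscientiousness, breadth_first, depth_first); the variable names and data are illustrative, not the study's actual measures:

        # Illustrative sketch of a correlational analysis between personality
        # factors and programming-style measures (hypothetical data and names).
        import pandas as pd
        from scipy.stats import spearmanr

        # Hypothetical survey/repository data: one row per programmer.
        df = pd.DataFrame({
            "openness":          [3.8, 4.2, 2.9, 4.5, 3.1],
            "conscientiousness": [4.0, 3.2, 4.6, 2.8, 3.9],
            "breadth_first":     [0.7, 0.8, 0.3, 0.9, 0.4],  # fraction of breadth-first sessions
            "depth_first":       [0.3, 0.2, 0.7, 0.1, 0.6],
        })

        # Rank correlation is a reasonable default for ordinal survey scales.
        for trait, style in [("openness", "breadth_first"),
                             ("conscientiousness", "depth_first")]:
            rho, p = spearmanr(df[trait], df[style])
            print(f"{trait} vs {style}: rho={rho:.2f}, p={p:.3f}")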

    SAGA: A DSL for Story Management

    Full text link
    Video game development is currently a very labour-intensive endeavour. Furthermore, it involves multi-disciplinary teams of artistic content creators and programmers, whose typical working patterns are not easily meshed. SAGA is our first effort at augmenting the productivity of such teams. Already convinced of the benefits of DSLs, we set out to analyze the domains present in games in order to find out which would be most amenable to the DSL approach. Based on previous work, we thus sought those sub-parts that already had a partially established vocabulary and at the same time could be well modeled using classical computer science structures. We settled on the 'story' aspect of video games as the best candidate domain, which can be modeled using state-transition systems. As we are working with a specific company as the ultimate customer for this work, an additional requirement was that our DSL should produce code that can be used within a pre-existing framework. We developed a full system (SAGA) comprised of a parser for a human-friendly language for 'story events', an internal representation of design patterns for implementing object-oriented state-transition systems, an instantiator for these patterns for a specific 'story', and three renderers (for C++, C# and Java) for the instantiated abstract code. Comment: In Proceedings DSL 2011, arXiv:1109.032
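
    The abstract does not show SAGA's syntax or output, but the underlying idea of a story modeled as a state-transition system can be sketched as follows (Python, with invented story events; the real system emits C++, C# and Java from a dedicated DSL):

        # Illustrative story-as-state-machine sketch (not SAGA's actual DSL or output).
        class Story:
            def __init__(self, start, transitions):
                self.state = start
                # transitions: {(state, event): next_state}
                self.transitions = transitions

            def fire(self, event):
                key = (self.state, event)
                if key in self.transitions:
                    self.state = self.transitions[key]
                return self.state

        # Hypothetical 'story events' wiring a tiny quest.
        story = Story("village", {
            ("village", "meet_elder"):     "quest_given",
            ("quest_given", "find_sword"): "armed",
            ("armed", "defeat_dragon"):    "ending",
        })
        for ev in ["meet_elder", "find_sword", "defeat_dragon"]:
            print(ev, "->", story.fire(ev))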

    Model Choice and Diagnostics for Linear Mixed-Effects Models Using Statistics on Street Corners

    Full text link
    The complexity of linear mixed-effects (LME) models means that traditional diagnostics are rendered less effective. This is due to a breakdown of asymptotic results, boundary issues, and visible patterns in residual plots that are introduced by the model fitting process. Some of these issues are well known and adjustments have been proposed. Working with LME models typically requires that the analyst keeps track of all the special circumstances that may arise. In this paper we illustrate a simpler but generally applicable approach to diagnosing LME models. We explain how to use new visual inference methods for these purposes. The approach provides a unified framework for diagnosing LME fits and for model selection. We illustrate the use of this approach on several commonly available data sets. A large-scale Amazon Mechanical Turk study was used to validate the methods. R code is provided for the analyses. Comment: 52 pages, 15 figures, 3 tables
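
    A rough sketch of the lineup idea behind visual inference for LME diagnostics, here using Python and statsmodels on simulated grouped data (the paper's own procedure and R code may differ): the observed residual plot is hidden among null plots generated from the fitted model, and if a viewer can single it out, the model is suspect.

        # Rough sketch of a residual-plot lineup for an LME fit (illustrative only).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(1)

        # Hypothetical grouped data with a random intercept per group.
        n_groups, n_per = 10, 20
        g = np.repeat(np.arange(n_groups), n_per)
        x = rng.normal(size=n_groups * n_per)
        y = 1.0 + 0.5 * x + rng.normal(size=n_groups)[g] + rng.normal(scale=0.8, size=g.size)
        data = pd.DataFrame({"y": y, "x": x, "g": g})

        fit = smf.mixedlm("y ~ x", data, groups=data["g"]).fit()
        sigma = np.sqrt(fit.scale)

        # Lineup: hide the observed residual plot among null plots whose
        # residuals are drawn from the fitted model's error distribution.
        n_panels = 9
        true_panel = rng.integers(n_panels)
        fig, axes = plt.subplots(3, 3, figsize=(9, 9))
        for i, ax in enumerate(axes.ravel()):
            resid = fit.resid if i == true_panel else rng.normal(scale=sigma, size=len(data))
            ax.scatter(fit.fittedvalues, resid, s=8)
            ax.set_title(f"panel {i + 1}")
        plt.tight_layout()
        plt.show()
        print("Observed data is in panel", true_panel + 1)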

    Automatic pipelining and vectorization of scientific code for FPGAs

    Get PDF
    There is a large body of legacy scientific code in use today that could benefit from execution on accelerator devices like GPUs and FPGAs. Translating such legacy code into device-specific parallel code by hand requires significant effort and is a major obstacle to wider FPGA adoption. We are developing an automated optimizing compiler, TyTra, to overcome this obstacle. The TyTra flow aims to compile legacy Fortran code automatically for FPGA-based acceleration, while applying suitable optimizations. We present the flow with a focus on two key optimizations, automatic pipelining and vectorization. Our compiler frontend extracts patterns from legacy Fortran code that can be pipelined and vectorized. The backend first creates fine- and coarse-grained pipelines and then automatically vectorizes both the memory access and the datapath based on a cost model, generating a working OpenCL-HDL hybrid solution for FPGA targets on the Amazon cloud. Our results show up to 4.2× performance improvement over baseline OpenCL code.
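
    TyTra's actual cost model is not described in this abstract; the toy sketch below only illustrates the general idea of choosing a vectorization width by balancing datapath and memory-access cost (all numbers and formulas are invented):

        # Toy cost model for choosing a vectorization width (illustrative only,
        # not TyTra's actual model). Cost is the max of compute and memory time,
        # since a pipelined design overlaps the two.
        def cycles(n_elems, vec_width, cycles_per_elem, mem_words_per_elem, mem_bw_words_per_cycle):
            compute = n_elems * cycles_per_elem / vec_width
            memory = n_elems * mem_words_per_elem / mem_bw_words_per_cycle
            return max(compute, memory)

        candidates = [1, 2, 4, 8, 16, 32]
        best = min(candidates, key=lambda w: cycles(1_000_000, w, 4, 3, 8))
        for w in candidates:
            print(f"width {w:2d}: {cycles(1_000_000, w, 4, 3, 8):,.0f} cycles")
        print("chosen width:", best)  # wider vectors stop helping once memory-bound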

    Neural Dynamics of Phonetic Trading Relations for Variable-Rate CV Syllables

    Full text link
    The perception of CV syllables exhibits a trading relationship between voice onset time (VOT) of a consonant and duration of a vowel. Percepts of [ba] and [wa] can, for example, depend on the durations of the consonant and vowel segments, with an increase in the duration of the subsequent vowel switching the percept of the preceding consonant from [w] to [b]. A neural model, called PHONET, is proposed to account for these findings. In the model, C and V inputs are filtered by parallel auditory streams that respond preferentially to transient and sustained properties of the acoustic signal, as in vision. These streams are represented by working memories that adjust their processing rates to cope with variable acoustic input rates. More rapid transient inputs can cause greater activation of the transient stream which, in turn, can automatically gain-control the processing rate in the sustained stream. An invariant percept obtains when the relative activations of C and V representations in the two streams remain unchanged. The trading relation may be simulated as a result of how different experimental manipulations affect this ratio. It is suggested that the brain can use the duration of a subsequent vowel to make the [b]/[w] distinction because the speech code is a resonant event that emerges between working memory activation patterns and the nodes that categorize them. Advanced Research Projects Agency (90-0083); Air Force Office of Scientific Research (F19620-92-J-0225); Pacific Sierra Research Corporation (91-6075-2
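
    The following is a toy numerical illustration of the activation-ratio idea described above, not the PHONET model itself; the functional forms, gains and thresholds are invented purely to show how lengthening the vowel can flip the percept from [w] to [b]:

        # Toy illustration of the C/V activation-ratio idea (not the PHONET model).
        # A faster consonant transient boosts gain on the sustained (vowel) stream,
        # so the C/V activation ratio, rather than raw durations, drives the percept.
        def percept(vot_ms, vowel_ms, threshold=0.5):
            transient = 1.0 / vot_ms              # shorter VOT -> stronger transient response
            gain = 1.0 + transient                # transient stream gain-controls the sustained stream
            sustained = gain * vowel_ms / 100.0   # sustained response grows with vowel duration
            ratio = (transient * 20.0) / sustained
            return ("[b]" if ratio < threshold else "[w]"), round(ratio, 2)

        # Lengthening the vowel lowers the ratio and flips [w] toward [b].
        for vowel in (80, 160, 320):
            print(vowel, "ms vowel:", percept(vot_ms=30, vowel_ms=vowel))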

    Supporting the Maintenance of Identifier Names: A Holistic Approach to High-Quality Automated Identifier Naming

    Get PDF
    A considerable part of source code consists of identifier names: unique lexical tokens that provide information about entities, and entity interactions, within the code. Identifier names provide human-readable descriptions of classes, functions, variables, etc. Poor or ambiguous identifier names (i.e., names that do not correctly describe the code behavior they are associated with) lead developers to spend more time working to understand the code's behavior. Bad naming can also have detrimental effects on tools that rely on natural-language clues, degrading the quality of their output and making them unreliable. Additionally, misinterpretations of the code caused by poor names can result in the injection of quality issues into the system under maintenance. Thus, improved identifier naming leads to more effective developers, higher-quality software, and higher-quality software analysis tools. In this dissertation, I establish several novel concepts that help measure and improve the quality of identifiers. The output of this dissertation work is a set of identifier name appraisal and quality tools that integrate into the developer workflow. Through a sequence of empirical studies, I have formulated a series of heuristics and linguistic patterns to evaluate the quality of identifier names in code and provide naming-structure recommendations. I envision, and am working towards, supporting developers in integrating the contributions discussed in this dissertation into their development workflow, to significantly improve the process of crafting and maintaining high-quality identifier names in source code.
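
    The dissertation's own heuristics and linguistic patterns are not spelled out in this abstract; the sketch below only illustrates the flavor of simple rule-based identifier appraisal (the rules, dictionary and thresholds are made up):

        # Illustrative identifier-quality heuristics (invented rules, not the
        # dissertation's actual heuristics or linguistic patterns).
        import re

        DICTIONARY = {"get", "set", "count", "user", "name", "total", "index", "list", "buffer"}

        def split_identifier(name):
            # Split camelCase, PascalCase and snake_case into lowercase words.
            parts = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ").split()
            return [p.lower() for p in parts]

        def appraise(name):
            words = split_identifier(name)
            issues = []
            if len(name) <= 2:
                issues.append("too short to be descriptive")
            if any(w not in DICTIONARY and len(w) <= 3 for w in words):
                issues.append("possible cryptic abbreviation")
            if len(words) > 4:
                issues.append("overly long; consider simplifying")
            return issues or ["no obvious issues"]

        for ident in ["usrCnt", "totalUserCount", "x", "getUserNameFromRemoteBackupList"]:
            print(ident, "->", appraise(ident))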

    Parallelisation of algorithms

    Get PDF
    Most numerical software involves performing an extremely large volume of algebraic computations. This is both costly and time-consuming with respect to computer resources and, for large problems, supercomputer power is often required for results to be obtained in a reasonable amount of time. One method whereby both the cost and time can be reduced is to use the principle "Many hands make light work": that is, allow several computers to operate simultaneously on the code, working towards a common goal, and hopefully obtaining the required results in a fraction of the time and cost normally used. This can be achieved by modifying the costly, time-consuming code, breaking it up into separate individual code segments which may be executed concurrently on different processors. This is termed parallelisation of code. This document describes communication between sequential processes, protocols, message routing and parallelisation of algorithms. In particular, it deals with these aspects with reference to the Transputer as developed by INMOS and includes two parallelisation examples, namely parallelisation of code to study airflow and of code to determine far-field patterns of antennas. This document also reports on practical experiences with programming in parallel.
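
    As a present-day counterpart to the Transputer-based examples, the sketch below shows the same idea, splitting a costly loop into independent segments executed concurrently, using Python's multiprocessing (the workload itself is invented for illustration):

        # Illustrative 'many hands make light work' sketch: split a large
        # numerical job into independent chunks and run them concurrently.
        from multiprocessing import Pool
        import math

        def partial_sum(bounds):
            lo, hi = bounds
            # Stand-in for a costly algebraic computation over one code segment.
            return sum(math.sin(i) * math.cos(i) for i in range(lo, hi))

        if __name__ == "__main__":
            n, workers = 4_000_000, 4
            step = n // workers
            chunks = [(i * step, (i + 1) * step) for i in range(workers)]
            with Pool(workers) as pool:
                total = sum(pool.map(partial_sum, chunks))
            print("result:", total)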

    Software refactoring and rewriting: from the perspective of code transformations

    Full text link
    To refactor already working code while keeping reliability, compatibility and perhaps security, we can borrow ideas from micropass/nanopass compilers. By treating the procedure of software refactoring as composing code transformations, and compressing repetitive transformations with automation tools, we can often obtain representations of refactoring processes short enough that their correctness can be analysed manually. Unlike in compilers, in refactoring we usually only need to consider the codebase in question, so regular text processing can be extensively used, fully exploiting patterns only present in that codebase. Aside from the direct application of code transformations from compilers, many other kinds of equivalence properties may also be exploited. In this paper, two refactoring projects are given as the main examples, where a 10-100 times simplification has been achieved with the application of a few kinds of useful transformations. Comment: 10 pages, 4 figures
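
    A minimal sketch of this framing, refactoring as a composition of small, repeatable text transformations applied across a codebase; the specific rewrites and file layout are invented examples, not the paper's projects:

        # Illustrative sketch: refactoring as composed code transformations.
        import re
        from pathlib import Path

        def rename_api(src):
            # e.g. migrate a renamed logging call across the whole codebase
            return re.sub(r"\bold_log\(", "logger.info(", src)

        def drop_redundant_cast(src):
            # e.g. int(int(x)) -> int((x))
            return re.sub(r"\bint\(int\(", "int((", src)

        TRANSFORMS = [rename_api, drop_redundant_cast]

        def refactor(path):
            for f in Path(path).rglob("*.py"):
                src = f.read_text()
                for t in TRANSFORMS:        # compose the passes, micropass-style
                    src = t(src)
                f.write_text(src)

        # refactor("my_project/")  # apply every pass over the codebase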