13 research outputs found

    Detailed Overview of Software Smells

    This document provides an overview of the literature on software smells, covering various dimensions of smells along with their corresponding references.

    An Investigation into the Imposed Cognitive Load of Static & Dynamic Type Systems on Programmers

    Static and dynamic type systems have long been a point of contention in the programming language wars. Yet, for many years, arguments on either side were drawn from personal experience rather than empirical evidence. A challenge for researchers is that the usability of language constructs is difficult to quantify, especially since usability can be interpreted in many ways. By one definition, language usability can be measured in terms of the level of cognitive load imposed on a developer. This can be done through questionnaires, but user responses are ultimately subject to bias. In recent years, eye-tracking has been shown to be an effective means of measuring cognitive load via direct physiological measures. Towards the goal of measuring type system usability, we present a user study in which participants completed programming tasks in Java and Groovy. This thesis explored the use of the Index of Cognitive Activity (ICA) as a cognitive load measurement tool and considered novices and experts separately in the analysis. We found ICA to be an ineffective means of measuring type system usability, and we cannot say conclusively whether it can be applied to programming tasks in general. Despite this, our results contradict previous studies: we found that the type system did not affect success rate, task completion time, or perceived task difficulty.
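
    The study's stimuli were written in Java (static) and Groovy (dynamic). As a rough TypeScript sketch of the static-versus-dynamic contrast it probes (not the study's materials), the fragment below shows the same function with full static type information and with typing effectively disabled via any:

        // Statically typed: the mistake is caught before the code runs.
        function doubleTyped(n: number): number {
          return n * 2;
        }
        // doubleTyped("3"); // rejected at compile time: string is not a number

        // Dynamic style, approximated with `any`: the same mistake
        // type-checks and only misbehaves at runtime ("ab" * 2 is NaN).
        function doubleUntyped(n: any): any {
          return n * 2;
        }

        console.log(doubleTyped(3), doubleUntyped("ab")); // 6 NaN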

    A Study of Bug Resolution Characteristics in Popular Programming Languages

    Understanding Formal Specifications through Good Examples

    Formal specifications of software applications are hard to understand, even for domain experts. Because a formal specification is abstract, reading it does not immediately convey the expected behaviour of the software. Carefully chosen examples of the software’s behaviour, on the other hand, are concrete and easy to understand, but poorly chosen examples are more confusing than helpful. In order to understand formal specifications, software developers need good examples. We have created a method that automatically derives a suite of good examples from a formal specification. Each example is judged by our method to illustrate one feature of the specification. The generated examples give users a good understanding of the behaviour of the software. We evaluated our method by measuring how well students understood an API when given different sets of examples; the students given our examples showed significantly better understanding.
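
    As a loose illustration of this idea (not the paper's method or tooling; all names are hypothetical), the TypeScript sketch below treats a small specification as a predicate and pairs it with one hand-picked example that isolates a single feature of the spec:

        // A tiny "specification" for a meeting-room booking.
        interface Booking {
          start: number; // inclusive hour of day
          end: number;   // exclusive hour of day
        }

        // Spec: a booking is valid when non-empty and within 0..24.
        function validBooking(b: Booking): boolean {
          return b.start < b.end && b.start >= 0 && b.end <= 24;
        }

        // A good example isolates one feature: this one shows that an
        // empty interval (start === end) is rejected by the spec.
        const emptyInterval: Booking = { start: 9, end: 9 };
        console.log(validBooking(emptyInterval)); // false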

    To Type or Not to Type: Quantifying Detectable Bugs in JavaScript

    JavaScript is growing explosively and is now used in large, mature projects even outside the web domain. JavaScript is also a dynamically typed language for which static type systems, notably Facebook's Flow and Microsoft's TypeScript, have been written. What benefits do these static type systems provide? Leveraging JavaScript project histories, we select a fixed bug and check out the code just prior to the fix. We manually add type annotations to the buggy code and test whether Flow and TypeScript report an error on it, thereby possibly prompting a developer to fix the bug before its public release. We then report the proportion of bugs on which these type systems report an error. Evaluating static type systems against public bugs, which have survived testing and review, is conservative: it understates their effectiveness at detecting bugs during private development, not to mention their other benefits, such as facilitating code search/completion and serving as documentation. Despite this uneven playing field, our central finding is that both static type systems find an important percentage of public bugs: both Flow 0.30 and TypeScript 2.0 successfully detect 15% of them.
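
    The hypothetical TypeScript fragment below (not drawn from the study's bug corpus) illustrates the kind of public bug such annotations can surface: a misspelled property that plain JavaScript exposes only at runtime:

        interface User {
          name: string;
          email: string;
        }

        function greeting(user: User): string {
          // A typo such as `user.fullName` here would be rejected at compile
          // time ("Property 'fullName' does not exist on type 'User'"),
          // whereas untyped JavaScript would silently print "Hello, undefined".
          return "Hello, " + user.name;
        }

        console.log(greeting({ name: "Ada", email: "ada@example.com" }));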

    Enabling changeability with typescript and microservices architecture in web applications

    Changeability is a non-functional property of software that affects the length of its lifecycle. In this work, the microservices architectural pattern and TypeScript are studied through a literature review, focusing on how they enable the changeability of a web application. The modularity of software is a key factor in enabling changeability, and both the microservices architectural pattern and the programming language TypeScript can improve the changeability of web applications through modularity. Microservices architecture is a well-suited architectural pattern for web applications, as it allows for the creation of modular service components that can be modified and added to the system individually. TypeScript is a programming language that adds a type system and class-based object-oriented programming to JavaScript, offering an array of features that enable modularity. Through a discussion of the relationships between the changeability of web applications and three key characteristics (scalability, robustness, and security), this work demonstrates the importance of designing for change to ensure that web applications remain maintainable, extensible, restructurable, and portable over time. Combined, the microservices architecture and TypeScript can enhance the modularity, and thus the changeability, of web applications.
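
    As a minimal sketch of this modularity argument (hypothetical names, not drawn from the reviewed literature), a TypeScript interface can serve as the contract for a service component that can be replaced or extended individually:

        // The contract consumers depend on; implementations stay swappable.
        interface OrderService {
          createOrder(customerId: string, items: string[]): string;
        }

        // One concrete module; a variant backed by a different store could
        // replace it without changes to code that uses OrderService.
        class InMemoryOrderService implements OrderService {
          private orders = new Map<string, string[]>();

          createOrder(customerId: string, items: string[]): string {
            const id = `${customerId}-${this.orders.size + 1}`;
            this.orders.set(id, items);
            return id;
          }
        }

        const service: OrderService = new InMemoryOrderService();
        console.log(service.createOrder("c42", ["book", "pen"])); // "c42-1"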

    OppropBERT: An Extensible Graph Neural Network and BERT-style Reinforcement Learning-based Type Inference System

    Built-in type systems for statically typed programming languages (e.g., Java) can only prevent rudimentary and domain-specific errors at compile time. They do not check for type errors in other domains, e.g., to prevent null pointer exceptions or enforce an owner-as-modifier encapsulation discipline. Optional-property (Opprop) or pluggable type systems, e.g., the Checker Framework, provide a framework in which users can specify type rules and guarantee that a particular property holds with the help of a type checker. However, manually inserting these user-defined type annotations into new and existing large projects requires substantial human effort. Inference systems like Checker Framework Inference provide a constraint-based whole-program inference framework. However, to develop such a system, the developer must thoroughly understand the underlying framework (the Checker Framework) and the accompanying helper methods, which is time-consuming. Furthermore, these frameworks make expensive calls to SAT and SMT solvers, which increases the runtime overhead during inference. Developers write test cases to ensure their framework covers all possible type rules and works as expected. Our core idea is to leverage only these manually written test cases to create a deep learning model that learns the type rules implicitly using a data-driven approach. We present a novel model, OppropBERT, which takes as input the raw code along with its control flow graphs to predict an error heatmap or a type annotation. The pre-trained BERT-style Transformer model helps encode the code tokens without specifying the programming language's grammar, including its type rules, while a Graph Convolutional Network with a custom masked loss function better captures the control flow graphs. If a sound type checker is already provided and the developer wants to create an inference framework, the model can be refined further using a Proximal Policy Optimization (PPO)-based reinforcement learning (RL) technique. The RL agent enables the model to use a more extensive set of publicly available code (not written by the developer) to create training data artificially. The RL feedback loop reduces the effort of manually creating additional test cases by leveraging the feedback from the type checker to better predict annotations. Extensive and comprehensive experiments establish the efficacy of OppropBERT for nullness-error prediction and annotation prediction tasks, comparing it against state-of-the-art tools such as SpotBugs, Eclipse, IntelliJ, and the Checker Framework on publicly available Java projects. We also demonstrate the capability of zero- and few-shot transfer learning to a new type system. Furthermore, to illustrate the model's extensibility, we evaluate it on predicting type annotations in TypeScript and errors in Python, comparing it against state-of-the-art models (e.g., BERT, CodeBERT, GraphCodeBERT) on standard benchmark datasets (e.g., ManyTypes4TS).
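
    The nullness property above is enforced for Java by pluggable checkers; as a rough analogue (assuming a TypeScript compiler with strictNullChecks enabled, not any tool from this work), the sketch below shows the same style of compile-time nullness checking:

        // Under strictNullChecks, using `email` without the null guard
        // (e.g., calling email.split directly) is a compile-time error.
        function emailDomain(email: string | null): string {
          if (email === null) {
            return "unknown";
          }
          // The check above narrows `email` from `string | null` to `string`.
          return email.split("@")[1] ?? "unknown";
        }

        console.log(emailDomain("ada@example.com")); // "example.com"
        console.log(emailDomain(null));               // "unknown"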

    An Empirical Study on The Impact of C++ Lambdas And Programmer Experience

    Lambda functions have become prevalent in mainstream programming languages, as they are increasingly introduced to widely used object-oriented programming languages such as Java and C++. Some in the scientific literature argue that this feature increases programmer productivity, improves readability, and makes parallel programming easier. Others are less convinced, citing concerns that the use of lambdas makes debugging harder. This thesis describes the design, execution, and results of an experiment testing the impact of using lambda functions, compared to the iterator design pattern, for iteration tasks, as a first step in evaluating these claims. The approach is a randomized controlled trial that focuses on the percentage of tasks completed, the number of compiler errors, the percentage of time spent fixing such errors, and the amount of time taken to complete programming tasks correctly. The overall goal is to investigate whether lambda functions affect developers' ability to complete tasks or the amount of time taken to complete them. Additionally, it is tested whether developers introduce more errors while using lambda functions and whether fixing those errors takes them more time. Lastly, the impact of experience level on productivity is evaluated by comparing the performance of participants from different levels of experience with one another. Participants were assigned one of five levels based on their progress in the computer science major: freshman, sophomore, junior, senior, and professional, with the professional level assigned to individuals out of school with five or more years of professional experience. Results show that the impact of using lambdas, as opposed to iterators, on the number of tasks completed was significant. The impacts on time to completion, the number of errors introduced during the experiment, and the time spent fixing errors were also found to be significant, as was the difference between the levels of experience. The percentage of time spent fixing compilation errors was 56.37% for the lambda group and 44.2% for the control group, with 3.5% of the variance explained by the group difference; 45.7% of the variance in the sample was explained by differences in level of education. This study therefore suggests that earlier findings that student results are comparable with those of professional developers should be interpreted carefully.
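
    The experiment's tasks were written in C++; the TypeScript sketch below merely illustrates the contrast being tested, an explicit iterator-style loop versus an inline lambda passed to a higher-order method:

        const prices: number[] = [12.5, 3.0, 7.25];

        // Iterator-style: traversal and accumulation state are explicit.
        let total = 0;
        for (const p of prices) {
          total += p;
        }

        // Lambda-style: the accumulation logic is an inline arrow function.
        const totalViaLambda = prices.reduce((sum, p) => sum + p, 0);

        console.log(total, totalViaLambda); // 22.75 22.75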