
    At Ease with Your Warnings: The Principles of the Salutogenesis Model Applied to Automatic Static Analysis

    The results of an automatic static analysis run can be overwhelming, especially for beginners. The flood of information and the resulting need for many decisions is mentally tiring and can cause stress symptoms. There are several models in health care designed to fight stress. One of these is the salutogenesis model created by Aaron Antonovsky. In this paper, we present an idea on how to transfer this model into a triage and recommendation model for static analysis tools and give an example of how this can be implemented in FindBugs, a static analysis tool for Java. (Comment: 5 pages, 4 figures)
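
    The abstract leaves the triage model unspecified; the sketch below is only a hypothetical illustration of two salutogenesis principles applied to analysis output: manageability (show a small batch of warnings per run) and comprehensibility (attach an explanation to each warning). The Warning record, the nextBatch method, and the batch size are all invented here, not the authors' actual FindBugs extension.

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical salutogenesis-style triage for static analysis warnings;
    // not the authors' actual FindBugs extension.
    public class SalutogenicTriage {

        // Simplified stand-in for a warning emitted by a tool like FindBugs.
        record Warning(String category, int rank, String explanation) {}

        // Manageability: present only a small, fixed-size batch per run,
        // most severe warnings first (lower rank value = more severe).
        static List<Warning> nextBatch(List<Warning> all, int batchSize) {
            return all.stream()
                      .sorted(Comparator.comparingInt(Warning::rank))
                      .limit(batchSize)
                      .toList();
        }

        public static void main(String[] args) {
            List<Warning> warnings = List.of(
                new Warning("SECURITY", 3, "SQL built by concatenation; use a prepared statement."),
                new Warning("STYLE", 15, "Unused local variable; safe to remove."),
                new Warning("CORRECTNESS", 1, "Possible null dereference; add a guard."));
            // Comprehensibility: every shown warning carries a short explanation.
            nextBatch(warnings, 2).forEach(w ->
                System.out.println(w.rank() + " " + w.category() + ": " + w.explanation()));
        }
    }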

    Get started imminently: Using tutorials to accelerate learning in automated static analysis

    Static analysis can be a valuable quality assurance technique, as it can find problems by analysing the source code of a system without executing it. Getting used to a static analysis tool, however, can easily take several hours or even days. In particular, understanding the warnings issued by the tool and rooting out the false positives is time-consuming. This lowers the benefits of static analysis and demotivates developers from using it. Games solve this problem by offering a tutorial. These tutorials are integrated into the setting of the game and teach its basic mechanics; often it is possible to repeat topics or pick topics of interest. We transfer this pattern to static analysis, lowering the initial barrier to using it and spreading an understanding of software quality to more people. In this paper we propose a research strategy that starts with a piloting period, in which we will gather information about the questions static analysis users have and hone our answers to these questions. These results will be integrated into the prototype. We will then evaluate our work by comparing the fix times of users of the original tool with those of users of our tool.
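
    The tutorial mechanics are left open in the abstract; purely as an assumed sketch, a tutorial mode could walk a new user through one curated topic at a time and, as game tutorials do, let the user repeat a topic of interest. Every class, step, and message below is invented for illustration.

    import java.util.List;
    import java.util.Scanner;

    // Invented tutorial loop for a static analysis tool; the steps and their
    // wording are illustrative only.
    public class AnalysisTutorial {

        record Step(String topic, String lesson) {}

        public static void main(String[] args) {
            List<Step> steps = List.of(
                new Step("Reading a warning", "Each warning names a rule, a file, and a line."),
                new Step("False positives", "Not every warning is a real defect; mark the ones you reject."),
                new Step("Fixing", "Apply the suggested change, then re-run the analysis."));

            Scanner in = new Scanner(System.in);
            for (Step s : steps) {
                System.out.println("Topic: " + s.topic());
                System.out.println(s.lesson());
                // As in game tutorials, topics can be repeated on demand.
                System.out.print("Repeat this topic? (y/n) ");
                if (in.nextLine().trim().equalsIgnoreCase("y")) {
                    System.out.println(s.lesson());
                }
            }
        }
    }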

    Assessing iterative practical software engineering courses with play money

    Changing our practical software engineering course from the previous waterfall model to a more agile and iterative approach created more severe assessment challenges. To cope with them, we added an assessment concept based on play money. The concept not only includes weekly expenses to simulate real running costs but also investments, which correspond to the assessment results of the submissions. This concept simulates a startup-like working environment and its financing in a university course. Our early evaluation shows that the combination of the iterative approach and the play money investments is motivating for many students. At this point we think that the combined approach has advantages from both the supervisors' and the students' points of view. We have planned further evaluations to better understand all of its effects.
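
    The abstract gives no concrete bookkeeping; as a minimal sketch under assumed numbers, a team's play-money balance could evolve from a starting budget, fixed weekly expenses simulating running costs, and investments awarded for weekly submissions. All figures below are invented.

    // Invented bookkeeping for the play-money concept; starting budget,
    // weekly expenses, and investment amounts are illustrative figures.
    public class PlayMoneyLedger {

        public static void main(String[] args) {
            long balance = 100_000;            // assumed starting budget
            final long weeklyExpenses = 5_000; // simulates real running costs
            // Assumed investments, one per weekly submission; a better
            // assessment result corresponds to a higher investment.
            long[] investments = {8_000, 2_000, 12_000, 6_000};

            for (int week = 0; week < investments.length; week++) {
                balance += investments[week] - weeklyExpenses;
                System.out.printf("Week %d: balance %,d%n", week + 1, balance);
            }
        }
    }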

    How are functionally similar code clones syntactically different? An empirical study and a benchmark

    Background. Today, redundancy in source code, so-called "clones" caused by copy&paste, can be found reliably using clone detection tools. Redundancy can also arise independently, however, not caused by copy&paste. At present, it is not clear how clones that are only functionally similar (FSCs) differ from clones created by copy&paste. Our aim is to understand and categorise the syntactical differences in FSCs that distinguish them from copy&paste clones in a way that helps clone detection research. Methods. We conducted an experiment using known functionally similar programs in Java and C from coding contests. We analysed syntactic similarity with traditional detection tools and explored whether concolic clone detection can go beyond syntax. We ran all tools on 2,800 programs and manually categorised the differences in a random sample of 70 program pairs. Results. We found no FSCs in which complete files were syntactically similar. We could detect syntactic similarity in a part of the files in fewer than 16% of the program pairs. Concolic detection found one of the FSCs. The differences between program pairs fell into the categories algorithm, data structure, OO design, I/O, and libraries. We selected 58 pairs for an openly accessible benchmark representing these categories. Discussion. The majority of differences between functionally similar clones are beyond the capabilities of current clone detection approaches. Yet our benchmark can help to drive further clone detection research.
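
    As a loose illustration of what token-level syntactic similarity means (the study itself used established clone detectors, not this code), the toy sketch below computes a Jaccard similarity over the token sets of two snippets; two functionally similar but syntactically different programs score low on such a measure.

    import java.util.HashSet;
    import java.util.Set;

    // Toy token-level similarity in the spirit of syntactic clone detection;
    // not one of the detectors evaluated in the study.
    public class TokenSimilarity {

        // Crude tokenizer: splits on anything that is not part of an identifier.
        static Set<String> tokens(String code) {
            Set<String> result = new HashSet<>();
            for (String t : code.split("\\W+")) {
                if (!t.isEmpty()) result.add(t);
            }
            return result;
        }

        // Jaccard similarity of the two token sets: |A ∩ B| / |A ∪ B|.
        static double jaccard(String a, String b) {
            Set<String> ta = tokens(a), tb = tokens(b);
            Set<String> union = new HashSet<>(ta);
            union.addAll(tb);
            ta.retainAll(tb);
            return union.isEmpty() ? 0.0 : (double) ta.size() / union.size();
        }

        public static void main(String[] args) {
            // Two functionally similar snippets with different syntax.
            String loop = "int s = 0; for (int x : xs) s += x;";
            String stream = "int s = Arrays.stream(xs).sum();";
            System.out.printf("similarity = %.2f%n", jaccard(loop, stream));
        }
    }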

    Introductory Practical Software Engineering course - Documentation

    Documentation material for the Introductory Practical Software Engineering course of the Software Engineering Group at the University of Stuttgart.

    Slides for Towards the Assessment of Stress and Emotional Responses of a Salutogenesis-Enhanced Software Tool Using Psychophysiological Measurements

    Semotion'17 slides for "Towards the Assessment of Stress and Emotional Responses of a Salutogenesis-Enhanced Software Tool Using Psychophysiological Measurements".

    Abstract: Software development is intellectual, based on collaboration, and performed in a highly demanding economic market. As such, it is dominated by time pressure, stress, and emotional trauma. While studies of affect are emerging in software engineering research, stress has yet to find its place in the literature, despite being highly related to affect. In this paper, we study stress coping with the affect-laden framework of Salutogenesis, a validated psychological framework for enhancing mental health through a feeling of coherence. We propose a controlled experiment for testing our hypotheses that a static analysis tool enhanced with the Salutogenesis model will bring 1) a higher number of fixed quality issues, 2) reduced cognitive load, 3) a reduction of overall stress, and 4) positive affect induction effects to developers. The experiment will make use of validated physiological measurements of stress, as proxied by cortisol and alpha-amylase levels in saliva samples, a psychometrically validated measurement of mood and affect disposition, and stress inductors such as a cognitive load task. Our hypotheses, if empirically supported, will lead to the creation of environments, methods, and tools that alleviate stress among developers while enhancing affect on the job and task performance.

    Detection of Functionally Similar Code Clones: Data, Analysis Software, Benchmark

    We analysed 2,800 programs in Java and C that we knew to be functionally similar. We checked whether existing clone detection tools are able to find these functional similarities and classified the non-detected differences. We make all data used, the analysis software, and the resulting benchmark available here.