
    Lecturers’ and Students’ Experiences with an Automated Programming Assessment System

    Assessment of source code in university education has become an integral part of grading students and providing them with valuable feedback on their developed software solutions. At the same time, lecturers have to deal with a rapidly growing number of students from heterogeneous fields of study, a shortage of lecturers, a highly dynamic set of learning objectives and technologies, and the need for more targeted student support. To meet these challenges, using an automated programming assessment system (APAS) to support traditional teaching is a promising solution. This paper examines this trend by analyzing the experiences of lecturers and students at various universities with an APAS and its impact over the course of a semester. To this end, we conducted a total of 30 expert interviews with end users, 15 lecturers and 15 students, from four different universities within the same country. The results discuss the experiences of lecturers and students and highlight challenges that should be addressed in future research.
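
    As a rough illustration of what such a system automates at its core (not the specific APAS studied in the paper), the following sketch runs instructor-provided unit tests against a student submission and turns the outcome into a score. The module name `student_submission` and the tested function are hypothetical.

```python
# Minimal sketch of automated assessment: run instructor tests against a
# student submission and report a score. Module and function names are
# illustrative only, not taken from the system studied in the paper.
import importlib
import unittest

student = importlib.import_module("student_submission")   # hypothetical module


class TestSubmission(unittest.TestCase):
    def test_sum_of_squares(self):
        self.assertEqual(student.sum_of_squares([1, 2, 3]), 14)

    def test_empty_input(self):
        self.assertEqual(student.sum_of_squares([]), 0)


if __name__ == "__main__":
    result = unittest.main(exit=False).result
    passed = result.testsRun - len(result.failures) - len(result.errors)
    print(f"Score: {passed}/{result.testsRun} tests passed")
```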

    SoHist: A Tool for Managing Technical Debt through Retro Perspective Code Analysis

    Technical debt is often the result of short-run decisions made during code development, which can lead to long-term maintenance costs and risks. Hence, evaluating the progression of a project and understanding related code quality aspects is essential. Fortunately, the prioritization of technical debt can be expedited with code analysis tools such as the established SonarQube. However, we experienced some limitations with this tool and received requirements from industry that it did not yet address. Through this experience report and an analysis of scientific papers, this work contributes: (1) a reassessment of technical debt within industry, (2) a discussion of the benefits of employing SonarQube as well as its limitations for evaluating and prioritizing technical debt, (3) a novel tool named SoHist that addresses these limitations and offers additional features for the assessment and prioritization of technical debt, and (4) an illustration of the tool's usage in two industrial settings within the ITEA3 SmartDelta project.
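
    To make the kind of measurement concrete that such retrospective analyses build on, here is a minimal sketch of pulling SonarQube's accumulated technical-debt measure for a project via its web API. The server URL, project key, and token are placeholders; SoHist's own interface is not shown here.

```python
# Minimal sketch: query a SonarQube server for the technical-debt measure
# (metric key "sqale_index", reported in minutes of remediation effort).
# Server URL, project key, and token are placeholders, not values from the paper.
import requests

SONAR_URL = "https://sonarqube.example.org"   # hypothetical server
PROJECT_KEY = "my-project"                    # hypothetical project key
TOKEN = "..."                                 # user token for authentication

resp = requests.get(
    f"{SONAR_URL}/api/measures/component",
    params={"component": PROJECT_KEY, "metricKeys": "sqale_index,code_smells"},
    auth=(TOKEN, ""),                         # SonarQube accepts the token as the username
    timeout=30,
)
resp.raise_for_status()
for measure in resp.json()["component"]["measures"]:
    print(measure["metric"], measure["value"])
```

    Repeating such a query for each analyzed revision yields the per-commit progression of technical debt over a project's history, which is the retrospective view the tool targets.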

    Security Testing: A Survey

    Identifying vulnerabilities and ensuring security functionality through security testing is a widely applied measure to evaluate and improve the security of software. Due to the openness of modern software-based systems, applying appropriate security testing techniques is of growing importance and essential to perform effective and efficient security testing. Therefore, an overview of current security testing techniques is of high value, both for researchers to evaluate and refine the techniques and for practitioners to apply and disseminate them. This chapter fulfills this need and provides an overview of recent security testing techniques. For this purpose, it first summarizes the required background on testing and security engineering. Then, basics and recent developments of security testing techniques applied during the secure software development lifecycle, i.e., model-based security testing, code-based testing and static analysis, penetration testing and dynamic analysis, as well as security regression testing, are discussed. Finally, the security testing techniques are illustrated by applying them to an example three-tiered web-based business application.
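
    As a small example in the spirit of the dynamic analysis and security regression testing techniques surveyed, the following sketch probes a login endpoint with a classic SQL-injection payload and asserts that it is rejected. The URL and response contract are hypothetical, not taken from the chapter's example application.

```python
# Tiny dynamic security (regression) test: send a classic SQL-injection
# payload to a login endpoint and assert that authentication fails.
# The URL and expected status codes are hypothetical.
import requests

BASE_URL = "https://app.example.org"          # hypothetical web application


def test_login_rejects_sql_injection():
    payload = {"username": "admin' OR '1'='1", "password": "x"}
    resp = requests.post(f"{BASE_URL}/login", data=payload, timeout=10)
    # A vulnerable endpoint might return 200 and a session cookie;
    # the expected behaviour is an authentication failure.
    assert resp.status_code in (400, 401, 403)
    assert "session" not in resp.cookies
```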

    Fieldwork Monitoring in Practice: Insights from 17 Large-scale Social Science Surveys in Germany

    This study provides a synopsis of current fieldwork monitoring practices of large-scale surveys in Germany. Based on the results of a standardized questionnaire, the study summarizes the fieldwork monitoring indicators used and the fieldwork measures carried out by 17 large-scale social science surveys in Germany. Our descriptive results reveal that there is a common set of fieldwork indicators and measures on which the studied surveys rely. However, they also uncover the need for additional design-specific indicators. Finally, the study underlines the importance of close cooperation between survey representatives and fieldwork agencies to optimize fieldwork monitoring processes in the German survey context. The article concludes with implications for fieldwork practice.

    Using Double Machine Learning to Understand Nonresponse in the Recruitment of a Mixed-Mode Online Panel

    Survey scientists increasingly face the problem of high dimensionality in their research, as digitization makes it much easier to construct high-dimensional (or "big") data sets through tools such as online surveys and mobile applications. Machine learning methods are able to handle such data, and they have been successfully applied to solve predictive problems. However, in many situations, survey statisticians want to learn about causal relationships in order to draw conclusions and transfer the findings of one survey to another. Standard machine learning methods provide biased estimates of such relationships. We introduce the double machine learning approach into survey statistics, which gives approximately unbiased estimators of the parameters of interest, and show how it can be used to analyze survey nonresponse in a high-dimensional panel setting. The double machine learning approach here assumes unconfoundedness of variables as its identification strategy. In high-dimensional settings, where the number of potential confounders to include in the model is too large, the double machine learning approach secures valid inference by selecting the relevant confounding variables.
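
    As a rough illustration of the partialling-out idea behind double machine learning (a partially linear model with cross-fitted nuisance estimates, not the authors' exact implementation or data), the following sketch uses simulated data and scikit-learn learners chosen purely for demonstration.

```python
# Minimal sketch of "partialling out" in double machine learning for a
# partially linear model  Y = theta*D + g(X) + e,  D = m(X) + v.
# Simulated data and learners are illustrative; the paper's survey-nonresponse
# setting is not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n, p = 2000, 50                                     # many potential confounders
X = rng.normal(size=(n, p))
D = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)    # "treatment" depends on confounders
Y = 0.7 * D + np.sin(X[:, 0]) + rng.normal(size=n)  # true effect theta = 0.7

# Cross-fitted nuisance predictions guard against overfitting bias.
m_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, D, cv=5)
g_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, Y, cv=5)

# Regress the residualised outcome on the residualised treatment
# (Neyman-orthogonal score), yielding an approximately unbiased estimate.
v, u = D - m_hat, Y - g_hat
theta_hat = np.sum(v * u) / np.sum(v * v)
se = np.sqrt(np.mean((u - theta_hat * v) ** 2 * v ** 2)) / (np.sqrt(n) * np.mean(v * v))
print(f"theta_hat = {theta_hat:.3f} +/- {1.96 * se:.3f}")
```

    Because the nuisance functions are estimated on held-out folds and only their residuals enter the final regression, regularization bias from the machine learners does not propagate into the estimate of the parameter of interest.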

    Comparison of the FMEA and STPA safety analysis methods–a case study

    As our society becomes more and more dependent on IT systems, failures of these systems can harm more and more people and organizations. Diligently performed risk and hazard analysis helps to minimize the potential harm of IT system failures to society and increases the probability of their undisturbed operation. Risk and hazard analysis is an important activity in the development and operation of critical software-intensive systems, but their increasing complexity and size place additional demands on the effectiveness of risk and hazard analysis methods. This paper presents a qualitative comparison of two hazard analysis methods, failure mode and effects analysis (FMEA) and system-theoretic process analysis (STPA), using a case study research methodology. Both methods were applied to the same forward collision avoidance system to compare their effectiveness and to investigate the main differences between them. Furthermore, this study also evaluates the analysis process of both methods using qualitative criteria derived from the technology acceptance model (TAM). The results of the FMEA analysis were compared to the results of the STPA analysis of the same system, which were presented in a previous study. The comparison shows that FMEA and STPA deliver similar analysis results.
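
    As a reminder of how FMEA typically prioritizes its findings, the following sketch computes the classic risk priority number (RPN = severity x occurrence x detection, each rated 1-10) for a few failure modes. The entries are invented for illustration and are not taken from the forward collision avoidance case study.

```python
# Classic FMEA prioritisation: risk priority number RPN = S * O * D.
# The failure modes and ratings below are made up for illustration.
failure_modes = [
    {"mode": "radar sensor returns stale distance", "S": 9, "O": 3, "D": 4},
    {"mode": "brake command message dropped", "S": 10, "O": 2, "D": 5},
    {"mode": "false positive obstacle detection", "S": 4, "O": 6, "D": 3},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Highest RPN first: these failure modes would be addressed first.
for fm in sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True):
    print(f'{fm["RPN"]:>4}  {fm["mode"]}')
```

    STPA, by contrast, derives unsafe control actions from a control-structure model rather than rating individual component failures, which is one of the methodological differences the paper examines.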

    Do We Preach What We Practice? Investigating the Practical Relevance of Requirements Engineering Syllabi - The IREB Case

    Nowadays, there exists a plethora of different educational syllabi for Requirements Engineering (RE), all aiming to incorporate practically relevant educational units (EUs). Many of these syllabi are based, in one way or another, on the syllabi provided by the International Requirements Engineering Board (IREB), a non-profit organisation devoted to standardised certification programmes for RE. IREB syllabi are developed by RE experts and are thus based on the assumption that they address topics of practical relevance. However, little is known about the extent to which practitioners actually perceive those contents as useful. We have started a study to investigate the relevance of the EUs included in the IREB Foundation Level certification programme. In the first phase, reported in this paper, we surveyed practitioners mainly from the DACH countries (Germany, Austria, and Switzerland) who participated in the IREB certification. Later phases will widen the scope both by including other countries and by not requiring IREB-certified participants. The results shall foster a critical reflection on the practical relevance of the EUs built upon the de facto standard syllabus of IREB.